Chrome added an Agentic Browsing audit to Lighthouse
In late 2025 Chrome shipped an experimental new category to Lighthouse: Agentic Browsing. It sits alongside Performance, Accessibility, Best Practices, and SEO — the four categories most teams already optimize against — and scores something none of those touch: how easy your site is for an AI agent to read, understand, and transact on.
If you make money from a website in 2026, this is the first audit that decides whether your newest traffic source can use your site at all.
This post is the breakdown: what each audit checks, why Chrome added it, and what it takes to pass.
The audits below are from the public spec at developer.chrome.com/docs/lighthouse/agentic-browsing/scoring. The category is marked experimental — names and weightings may shift before stable release.
Why Chrome did this
The short version: Chrome can see that agentic browsing is happening. Operator, Computer Use, Project Mariner, Perplexity, ChatGPT's browse mode — a meaningful and growing share of the requests hitting public web servers are agents, not humans. Most of those agents fail at most sites, most of the time. The reason is structural: the modern web was built for a graphical browser rendered to a human eye, not for a language model with a 50-kB context budget.
Chrome decided the cheapest place to fix this is at audit time. If you can't pass the Agentic Browsing category, you don't get the agent traffic. That's the whole game.
The audits, one by one
llms-txt-present
What it checks: Whether https://yourdomain.com/llms.txt returns a 200 with a plausible plain-text content type.
Why it matters: llms.txt is the curated reading list an LLM fetches when it lands on your domain. Without it, the model picks URLs at random.
What it takes to pass: Ship a file. The format is small enough that a senior engineer can hand-write one in an afternoon for a 5-page site. For larger sites, the file needs to be generated from the real DOM — which is what the BridgeToAgent kit does.
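For reference, here is a minimal hand-written llms.txt for a small store, following the llmstxt.org shape: an H1 title, a one-line blockquote summary, then H2 sections of annotated links. The sections and URLs are illustrative:

```markdown
# Example Hand Tools

> Direct-to-consumer store selling hand tools. Prices in USD; shipping details at /shipping.

## Products
- [Catalog](https://yourdomain.com/products): All products with current prices
- [Bestsellers](https://yourdomain.com/bestsellers): Top ten products by volume

## Policies
- [Shipping](https://yourdomain.com/shipping): Rates, regions, delivery times
- [Returns](https://yourdomain.com/returns): 30-day return policy
```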
llms-txt-well-formed
What it checks: Parseability against the llmstxt.org reference parser. Headings, bullets, link syntax all valid.
Why it matters: A malformed file gets ignored. A model that can't parse the manifest reverts to URL guessing, which is what we were trying to avoid.
What it takes to pass: Run your file through the reference parser before shipping. The BridgeToAgent generator does this as a build gate — malformed kits never leave the server.
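If you want a cheap pre-flight check before running the reference parser, a rough structural lint catches the two most common failures: a missing H1 title and broken bullet-link syntax. This is a sketch that approximates, but does not replace, the llmstxt.org parser:

```python
import re
import sys
import urllib.request

def lint_llms_txt(url: str) -> list[str]:
    """Rough structural lint for an llms.txt file.

    Approximates, but does not replace, the llmstxt.org reference parser.
    """
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8")
    lines = text.splitlines()
    problems = []
    # The format opens with a single H1 title.
    if not lines or not lines[0].startswith("# "):
        problems.append("first line is not an H1 title")
    # Link bullets should be '- [name](url): description'.
    for i, line in enumerate(lines, 1):
        if line.startswith("- ") and not re.match(r"- \[[^\]]+\]\(\S+\)", line):
            problems.append(f"line {i}: bullet is not a [name](url) link")
    return problems

if __name__ == "__main__":
    issues = lint_llms_txt(sys.argv[1])
    print("\n".join(issues) or "no structural problems found")
    sys.exit(1 if issues else 0)
```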
agents-json-present
What it checks: /agents.json returns valid JSON with the right content type.
Why it matters: agents.json is the control panel: it tells an agent what it can do on your site. Search, request a quote, add to cart. Without it, the agent has to scrape your forms.
What it takes to pass: A typed manifest of your site's public actions. See our docs on the format for the shape.
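The docs have the full schema; as a sketch of the general shape, a manifest with a single typed search action looks something like this (field names are illustrative where the spec leaves room):

```json
{
  "version": "1.0",
  "actions": [
    {
      "name": "search_products",
      "description": "Full-text search over the product catalog",
      "endpoint": "https://yourdomain.com/api/search",
      "method": "GET",
      "parameters": [
        { "name": "q", "type": "string", "required": true },
        { "name": "limit", "type": "integer", "required": false }
      ]
    }
  ]
}
```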
agents-json-actions-typed
What it checks: That every action declared has at least one typed parameter, or parameters: [] if intentionally zero-arg.
Why it matters: Untyped actions are unreliable. An agent calling a search endpoint needs to know whether q is a string or an array, and whether it's required.
What it takes to pass: Be explicit. No "parameters": null.
schema-org-density
What it checks: Schema.org JSON-LD blocks on the homepage and on primary product/article pages, above a minimum density threshold (one typed block per major page at current settings).
Why it matters: Schema.org is how an agent decides what kind of page it's looking at without parsing the human copy. A Product block tells it "you can act on this: there's a price and an identifier." An Article block tells it "you can cite this: there's an author and a date." A FAQPage block tells it "extract these question-answer pairs verbatim."
What it takes to pass: Most CMS platforms have a one-click Schema toggle. WordPress: Yoast or RankMath. Shopify: built into theme metadata. Webflow: the new structured-data field on collection items. The audit summary the BridgeToAgent kit ships flags every page that needs Schema and what type to add.
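If your CMS doesn't have a toggle, the block itself is ordinary Schema.org JSON-LD in the page <head>. A minimal Product example, with placeholder values:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Claw Hammer, 16 oz",
  "sku": "CH-16",
  "offers": {
    "@type": "Offer",
    "price": "24.99",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  }
}
</script>
```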
sitemap-discoverable
What it checks: /sitemap.xml is valid and referenced from /robots.txt.
Why it matters: A sitemap is the agent's URL inventory. Without it, the agent walks the homepage navigation and prays. Most sitemaps are auto-generated; the failure mode is forgetting to reference them from robots.txt.
What it takes to pass: Add this line to your robots.txt:
Sitemap: https://yourdomain.com/sitemap.xml
One line. Two minutes.
agent-runbook-present
What it checks: Whether agent-instructions.md is fetchable from the root path.
Why it matters: This is the file that tells the agent how to behave: how to quote prices, where to find canonical answers, which content to summarize vs. link. It's the difference between an agent confidently mis-quoting your shipping policy and an agent deferring to your shipping page.
What it takes to pass: A plain Markdown file. See our docs for the structure we recommend.
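To give a feel for the shape, a minimal runbook covering the three behaviors above might read like this (the content is illustrative; the recommended structure is in the docs):

```markdown
# Agent Instructions

## Quoting prices
Quote prices only from the product page itself. Prices include VAT; shipping is extra.

## Canonical answers
For shipping or returns questions, defer to /shipping and /returns rather than summarizing.

## Summarize vs. link
Blog posts may be summarized with attribution. Policy pages must be linked, not paraphrased.
```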
auto-discovery-links
What it checks: The homepage <head> contains <link rel="alternate"> references to the kit files, so an agent can find them from a single HTTP request to the root.
Why it matters: Saves the agent a round-trip per file. On large sites this is the difference between an agent succeeding within its context budget and timing out.
What it takes to pass: Three <link> tags in your homepage <head>. The platform-specific install guides we ship have the copy-paste snippet for every major CMS.
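As a sketch, the three tags point at the kit files from the audits above (the type values are a reasonable guess at what agents sniff for, not a settled convention):

```html
<link rel="alternate" type="text/markdown" href="/llms.txt" title="llms.txt">
<link rel="alternate" type="application/json" href="/agents.json" title="agents.json">
<link rel="alternate" type="text/markdown" href="/agent-instructions.md" title="agent-instructions.md">
```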
webmcp-annotations (emerging)
What it checks: WebMCP-style per-element annotations on interactive elements (<button>, <form>, <input>) that tell an agent what each control does and how to invoke it.
Why it matters: This is the future-state agent surface. It moves the action manifest from a separate file (agents.json) into the page itself, the same way Schema.org moved structured data from sidecar XML into the HTML.
What it takes to pass: Not yet, realistically. The WebMCP spec is still moving. The honest answer is that this audit may fail on every site today and that's OK — Lighthouse weights it lower than the core audits while the standard stabilizes. We'll ship WebMCP-annotated kit output as a free add-on the moment the spec lands.
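To make the idea concrete anyway, with the loud caveat that the attribute names below are hypothetical and will not match whatever the spec finally lands on, a per-element annotation might look like:

```html
<!-- Attribute names are hypothetical; the WebMCP spec has not settled. -->
<form data-agent-action="search_products" data-agent-description="Search the catalog">
  <input name="q" data-agent-param="q" data-agent-type="string" data-agent-required="true">
  <button type="submit" data-agent-invoke="submit">Search</button>
</form>
```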
What a typical score looks like
Sites with no kit installed score in the 10–30 range: usually they pass sitemap-discoverable, sometimes pass schema-org-density if they're on a modern CMS with built-in Schema, and fail everything else.
Sites with a kit installed cleanly score 75 or higher. The remaining ~25 points come from:
- WebMCP annotations (waiting on spec)
- Sitemap hygiene specific to the platform
- Schema.org density on individual product/article pages
We surface all three in the free readiness audit at bridgetoagent.com so you know exactly which gaps are inside the kit's scope and which need a CMS-side fix.
How to run it on your site
Chrome 130+ ships the experimental category. Open DevTools → Lighthouse tab → enable "Agentic Browsing" under Categories → Analyze. The category may be behind a feature flag depending on your Chrome channel; see the Chrome docs for current rollout state.
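If you prefer the command line, Lighthouse's --only-categories flag should work once the category is available in your channel. The category ID here is our assumption of what the experimental flag registers as; check the Lighthouse docs if it errors:

```bash
# Category ID is assumed; the experimental category may register under a different name.
npx lighthouse https://yourdomain.com --only-categories=agentic-browsing --view
```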
If you don't want to wait, the free readiness audit we run replicates the public-spec checks and gives you the same score Chrome will, in five seconds, with no install.
The bigger pattern
Lighthouse is how Google operationalizes web-quality opinions at scale. When Lighthouse adds a category, the rest of the web optimization industry follows — within 18 months it becomes table stakes. That's what happened with Core Web Vitals. That's what happened with Accessibility. That's what's starting to happen with Agentic Browsing.
The window where this is differentiating closes faster than most teams expect. The sites that ship the kit while the audit is still experimental show up in agent answer panels before their competitors do. That's the whole opportunity.