Scanner
Test your site the way agents see it
Seven weighted dimensions. A single score. Clear next steps.
Dimensions
What we measure
Weights reflect how strongly each signal predicts successful agent interaction.
- Discoverability (15%): Can agents find the machine-facing surfaces? llms.txt, agents.json, structured data, sitemap.
- Legibility (20%): Can agents extract canonical business facts without interpreting marketing copy?
- Freshness (10%): Are timestamps, version numbers, and provenance signals present and current?
- Trust (15%): Are there source-of-truth declarations, policy boundaries, and provenance chains?
- Comparison readiness (15%): Are product and offer attributes structured for cross-site comparison?
- Action readiness (15%): Are there declared action endpoints? Can an agent identify the next valid step?
- Standards compliance (10%): Conformance with agents.json, llms.txt, reasoning.json, WebMCP, and schema.org.
Benchmark
Ask the same questions agents ask
1. Can an agent identify what the site sells without guessing?
2. Can it retrieve price, availability, and policy details cleanly?
3. Can it explain the offer's differentiation using structured facts?
4. Can it move from understanding to a valid next action?
Sample aggregate score
01. Discoverability: 34%
02. Legibility: 52%
03. Freshness: 28%
04. Trust: 38%
05. Comparison readiness: 41%
06. Action readiness: 19%
07. Standards compliance: 11%
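For illustration, the published weights combine the sample per-dimension scores into the single aggregate. A minimal Python sketch of that weighted sum; the dictionary keys and function name are our own labels, not the Scanner's actual API:

```python
# Published dimension weights (they sum to 1.0, i.e. 100%).
WEIGHTS = {
    "discoverability": 0.15,
    "legibility": 0.20,
    "freshness": 0.10,
    "trust": 0.15,
    "comparison_readiness": 0.15,
    "action_readiness": 0.15,
    "standards_compliance": 0.10,
}

# Sample per-dimension scores from the scan above (0-100 scale).
SAMPLE = {
    "discoverability": 34,
    "legibility": 52,
    "freshness": 28,
    "trust": 38,
    "comparison_readiness": 41,
    "action_readiness": 19,
    "standards_compliance": 11,
}

def aggregate(scores: dict[str, float]) -> float:
    """Weighted sum of per-dimension scores, yielding one 0-100 aggregate."""
    return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)

print(round(aggregate(SAMPLE), 1))  # the sample site aggregates to roughly 34
```

Because the weights sum to 100%, the aggregate stays on the same 0–100 scale as the individual dimensions.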
Dimension detail
Hover a row to see what we measure across 2,400+ scans.