Scanner

Test your site the way agents see it

Seven weighted dimensions. A single score. Clear next steps.

Dimensions

What we measure

Weights reflect how strongly each signal predicts successful agent interaction.

  • Discoverability (15%)

    Can agents find the machine-facing surfaces? llms.txt, agents.json, structured data, sitemap.

  • Legibility (20%)

    Can agents extract canonical business facts without interpreting marketing copy?

  • Freshness (10%)

    Are timestamps, version numbers, and provenance signals present and current?

  • Trust (15%)

    Are there source-of-truth declarations, policy boundaries, and provenance chains?

  • Comparison readiness (15%)

    Are product and offer attributes structured for cross-site comparison?

  • Action readiness (15%)

    Are there declared action endpoints? Can an agent identify the next valid step?

  • Standards compliance (10%)

    Conformance with agents.json, llms.txt, reasoning.json, WebMCP, schema.org.

Sample aggregate score: 31 / 100

Benchmark

Ask the same questions agents ask

  • 1. Can an agent identify what the site sells without guessing?
  • 2. Can it retrieve price, availability, and policy details cleanly?
  • 3. Can it explain the offer's differentiation using structured facts?
  • 4. Can it move from understanding to a valid next action?
  • 01. Discoverability: 34%
  • 02. Legibility: 52%
  • 03. Freshness: 28%
  • 04. Trust: 38%
  • 05. Comparison readiness: 41%
  • 06. Action readiness: 19%
  • 07. Standards compliance: 11%

Dimension detail

Hover a row to see what we measure across 2,400+ scans.