
AI Search Has No Rankings: How to Measure Visibility Without Results Pages

  • Aug 11, 2025
  • 5 min read

Updated: Dec 6, 2025




Why You Can’t Track AI Search Like Google (and What to Do Instead)



Traditional search tracking is built on a simple promise: type in a query, see results, and measure where you rank.


AI search doesn’t work like that.


Tools like ChatGPT, Gemini, and Perplexity don’t display fixed results. They generate answers that can change with every run, every model update, and every user context. That’s why the idea of “AI rank tracking” is misleading — you can’t track AI visibility the same way you track SEO rankings.


But that doesn’t mean AI visibility shouldn’t be tracked at all.


It means the questions — and the metrics — need to change.




Why Traditional Rank Tracking Breaks in AI Search



SEO rank tracking relies on three core assumptions:


  • Deterministic results – the same query produces broadly similar SERPs

  • Fixed positions – rankings are ordered and measurable

  • Known demand – keyword volumes guide priorities



AI breaks all three.


AI responses are probabilistic, not deterministic. The same prompt can surface different brands, citations, or formats each time. There are no fixed positions — brands appear in passing, in changing order, often without clear attribution. And prompt-level demand data is largely invisible, locked inside AI company servers.


It gets even messier:


  • Different models return different answers

  • Even the same model can vary internally

  • Personalisation changes outputs based on context, location, or history



This is why treating AI prompts like keywords doesn’t work.


Tracking whether your brand appears for one specific prompt tells you very little. A better question is:


Across thousands of relevant prompts, how often does AI associate my brand with this topic or category?

That shift in thinking is what makes AI visibility measurable.




From Rankings to Probabilities: How AI Tracking Actually Works



Instead of tracking individual prompts, the goal is to measure aggregate visibility.


This is the core idea behind Ahrefs’ Brand Radar: analysing millions of AI prompts and responses to understand how often a brand is connected to a topic — directionally, not absolutely.


You’re no longer asking, “Did we rank?”

You’re asking, “How much of this conversation do we own?”


Think of it less like search rankings and more like polling. Individual answers vary, but large sample sizes reveal reliable trends.
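The polling analogy can be made concrete with a toy simulation. In this sketch (plain Python; the 60% "true" association rate is an invented figure, not sourced data), each AI response mentions the brand with a fixed underlying probability. Small samples swing wildly from run to run, while a large sample settles near the underlying trend:

```python
import random

random.seed(42)

# Hypothetical underlying probability that any one AI response
# mentions the brand. Invented for illustration.
TRUE_RATE = 0.60

def mention_rate(n_prompts: int) -> float:
    """Simulate n_prompts responses and return the observed mention rate."""
    mentions = sum(random.random() < TRUE_RATE for _ in range(n_prompts))
    return mentions / n_prompts

# A handful of prompts gives a very different rate on each run...
small_samples = [mention_rate(5) for _ in range(3)]

# ...but thousands of prompts converge on the underlying trend.
large_sample = mention_rate(10_000)

print(small_samples)           # noisy: values jump around
print(round(large_sample, 2))  # close to 0.60
```

This is the whole argument in miniature: no single response is trustworthy, but the aggregate is.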




The Prompt Volume Problem (and How to Work Around It)



One of the biggest challenges with AI tracking is that we don’t know what people are asking at scale.


Search engines publish keyword volumes. AI companies don’t.


To get around this, Brand Radar seeds its AI prompts using real-world search data — including keyword databases and “People Also Ask” queries — paired with known search volume.


These prompts are still synthetic, but they reflect genuine demand.


The goal isn’t to tell you whether you appeared for one question. It’s to show how visible your brand is across entire topics.


If you already have strong visibility across a topic, tracking every individual prompt within it adds little value. The probability is already clear.




Why Aggregation Matters More Than Accuracy



AI outputs are noisy.


Run the same prompt three times and you might:


  • Be mentioned once

  • Be omitted once

  • Be replaced by a competitor once



Individually, those results are meaningless.


But aggregate thousands of prompts and the randomness smooths out. Patterns emerge. Maybe your brand appears in ~60% of relevant AI responses. That’s actionable insight.


This is why small prompt samples don’t work. Many AI tools cap tracking at 50–100 queries — far too small to understand true Share of Voice.
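The gap between a 50-prompt sample and a large one can be quantified with the standard binomial-proportion margin of error. A minimal sketch, assuming an observed mention rate of 60% (an illustrative figure):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for an observed proportion p over n prompts,
    using the standard binomial-proportion approximation."""
    return z * math.sqrt(p * (1 - p) / n)

observed = 0.60  # illustrative mention rate

for n in (50, 100, 5_000):
    moe = margin_of_error(observed, n)
    print(f"n={n:>5}: {observed:.0%} ± {moe:.1%}")

# n=   50: 60% ± 13.6%
# n=  100: 60% ± 9.6%
# n= 5000: 60% ± 1.4%
```

At 50 prompts, a "60% Share of Voice" reading could plausibly be anywhere from the mid-40s to the mid-70s; at thousands of prompts, the estimate is tight enough to act on.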


Large-scale aggregation is what turns chaos into signal.




AI Is Part of a Bigger Discovery System



AI visibility doesn’t exist in isolation.


When you combine AI data with social, search, and web visibility, you can see how brands actually break through.


Take trend-driven products, where visibility often flows in this order:


  1. Social platforms spark interest

  2. Search demand follows

  3. Web mentions increase

  4. AI assistants start surfacing the brand



By tracking this across channels, you can understand how long it takes for a brand or trend to enter AI conversations — and what activities accelerate that process.


AI is just one layer of discovery, but it’s increasingly the final one before a decision is made.




Measuring AI Share of Voice (Even With Variance)



Variance doesn’t prevent comparison.


If one brand appears in ~60% of AI responses and another appears in ~40%, the direction is clear — even if exact numbers fluctuate.


Tracking AI Share of Voice over time shows:


  • Whether you’re gaining or losing ground

  • How you compare to competitors

  • Whether your efforts are shifting visibility in the right direction



The key is consistency: same competitors, same topic sets, same measurement approach.


You’re not chasing precision. You’re tracking momentum.




Topic-Level Ownership Beats Query-Level Obsession



A handful of prompts can’t tell you whether AI truly associates your brand with a category.


But hundreds of variations can.


Instead of asking:


  • “Do we show up for this query?”



Ask:


  • “Across all prompts about this topic, how often are we mentioned?”



This approach reveals:


  • Subtopics you clearly “own”

  • Broader markets where you’re invisible

  • Gaps competitors are filling that you aren’t



Those gaps — the unknown unknowns — are often the most valuable insights AI tracking provides.
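As a rough illustration of topic-level measurement, the sketch below groups tracked responses by subtopic and computes how often one brand appears in each. All subtopic and brand names are invented; a real dataset would span hundreds of prompt variations per subtopic:

```python
from collections import defaultdict

# Hypothetical tracking data: (subtopic, brands mentioned in one AI response).
responses = [
    ("keyword research",  {"OurBrand", "CompetitorA"}),
    ("keyword research",  {"OurBrand"}),
    ("backlink analysis", {"CompetitorA", "CompetitorB"}),
    ("backlink analysis", {"CompetitorA"}),
    ("rank tracking",     {"OurBrand", "CompetitorB"}),
]

def topic_mention_rates(brand: str) -> dict[str, float]:
    """Share of responses per subtopic that mention `brand`."""
    totals, hits = defaultdict(int), defaultdict(int)
    for topic, brands in responses:
        totals[topic] += 1
        hits[topic] += brand in brands
    return {t: hits[t] / totals[t] for t in totals}

rates = topic_mention_rates("OurBrand")

# Sorting low-to-high surfaces the gaps first: subtopics where the
# brand is invisible while competitors are being recommended.
for topic, rate in sorted(rates.items(), key=lambda kv: kv[1]):
    print(f"{topic}: {rate:.0%}")
```

In this toy data, "backlink analysis" at 0% is exactly the kind of unknown unknown the section describes: a subtopic the brand never sees because it never thought to check.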




What AI Responses Reveal About Your Brand Positioning



Even imperfect data can teach you something important: how AI frames your brand.


Across many responses, patterns emerge:


  • Are you positioned as the budget option?

  • Are competitors framed as enterprise or premium?

  • Do you get recommended for simplicity, while others are known for depth?



These narratives matter. They shape how users interpret your relevance at the exact moment of discovery.


And they can lag behind reality.


If your product or strategy has evolved, AI may still describe the “old you” — until enough consistent signals shift the narrative. Tracking those changes shows whether your repositioning is actually sticking.




AI Visibility Is No Longer Optional



Organic clicks are declining fast.


When AI Overviews appear in Google, click-through rates for top results drop by roughly a third. And more discovery is moving to AI assistants for product comparisons and recommendations.


If your brand isn’t present in AI answers, you’re missing customers at decision time.


AI tracking doesn’t need to be perfect to be valuable. It just needs to show whether you’re part of the conversation — or being left out of it.




Micro vs Macro AI Tracking: You Need Both



The most useful AI tracking combines two levels:



Micro tracking



Focus on high-stakes prompts:


  • Branded questions

  • Competitor comparisons

  • Bottom-of-funnel purchase queries



Even with variability, these are worth monitoring closely.



Macro tracking



Zoom out to understand:


  • Topic-level ownership

  • Market-wide Share of Voice

  • Competitive positioning trends



Macro tracking is where strategy lives. Micro tracking is where execution is protected.


Together, they answer two critical questions:


  • Are we visible where it matters most?

  • Are we building long-term dominance in our market?





Final Thoughts



You’ll never track AI search the way you track Google rankings — and that’s fine.


AI visibility is a compass, not a ruler. It tells you whether you’re moving in the right direction, not your exact coordinates.


The real risk isn’t imperfect data. It’s ignoring AI visibility while competitors establish themselves as the default answers.


Start tracking now. Treat the data as directional. And use it to shape your content, PR, and positioning before the window closes.

 
 
 
