Bing Webmaster Tools now includes an AI Performance report in public preview.
It shows when pages from a site are cited as sources inside AI search across Microsoft Copilot, Bing AI summaries, and some partner integrations.
You get total citations, average cited pages, a list of grounding queries, page-level citation counts, and a trend line, giving clearer insight into the content that's being used to support AI answers.

To be clear, this isn’t another dashboard to check every day and worry over.
Instead, marketers can use this information to research, plan, and improve content to get picked up by AI – and I’m going to show you how.
Copilot Might Be Small – But It’s Still Relevant
Microsoft Copilot, while small compared to ChatGPT, is still responsible for traffic. It has around 150 million monthly active users, and Microsoft has analyzed 37.5 million Copilot conversations to understand how people use AI answers.
Then you have to remember – a lot of B2B discovery happens on work devices where Bing is the default search engine. Even if a buyer ends up on Google later, their first exposure to content can easily happen in a Bing Copilot answer.
Although the data in this dashboard covers Microsoft properties only (plus unspecified partners), it can still serve as an estimate for wider AI search optimization.
The Useful Part Is the Grounding Queries
Most reporting tools tell you what happened after a click or an impression. This report shows the wording that caused AI to retrieve content in the first place.
If you’re already optimizing for GEO/AEO, you know how important this is.
Grounding queries show:
- The phrases AI associates with a site
- The way people frame questions and comparisons
- The topics a site is being pulled into even if the exact terms are not being targeted
So let’s get into how you can actually use this information.
Start by Sorting for What AI Already Pulls
Open the report and sort the Pages table by citations. You are looking for the small set of URLs doing most of the work.
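If you export the Pages table, the concentration check can be sketched in a few lines of Python. The URLs and citation counts below are made-up sample data, not real report output:

```python
# Hypothetical rows from the report's Pages table: (url, citations).
pages = [
    ("/blog/geo-guide", 42),
    ("/compare/tools", 31),
    ("/glossary/grounding", 9),
    ("/about", 2),
]

pages.sort(key=lambda p: p[1], reverse=True)
total = sum(c for _, c in pages)

# Find the smallest set of URLs accounting for ~80% of citations --
# the "small set doing most of the work".
running, top_pages = 0, []
for url, citations in pages:
    running += citations
    top_pages.append(url)
    if running >= 0.8 * total:
        break
```

In this sample, two URLs carry over 80% of citations, which is the typical shape you should expect to find.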

Treat these results as starting points to expand.
Now click one of those URLs and identify what type of content it is and why it’s working:
- What topic is it about?
- What format is it in (guide, definition, comparison, FAQ)?
- What parts are “answer-shaped” (clear headings, tight sections, lists, tables)?
A page with consistent citations is telling you two things:
- The topic is common enough that AI keeps needing an answer
- The page is structured clearly enough that retrieval works
Both are rare. Most content fails the second part.
Use a Single Grounding Query to Build a Brief
In the Grounding Query table, a term might look like:
Profound vs Semrush reviews

That’s not a keyword to stuff into a post. It is a decision being made by the market: people want a comparison, and they want proof from reviews, not vendor claims.
If a query like that shows up for a site, do three things:
1: Check what page is getting cited.
If the cited page isn’t explicitly a comparison, there’s a gap. AI is using the site anyway, but it isn’t getting a clean answer. That usually caps how often the site gets pulled in.
2: Create or rebuild the comparison page so it answers the query directly.
The outline writes itself:
- What each tool is for and who it fits
- Where they are similar
- Where they differ in use cases
- Pricing and packaging differences, if they can be stated plainly
- Setup and time to value
- Strengths and weak spots in reviews, summarized without fluff
- A short recommendation section that maps to team type and maturity
3: Make the retrieval easy.
Put the conclusion near the top. Use subheads that match the language in the query. Add a simple comparison table. Add an FAQ block with the common follow-ups seen across grounding queries, like best for agencies, best for enterprise, best for reporting, and best for competitive research.
Then link that page from relevant product, category, and use case pages so the AI can find it from multiple paths.
That’s what to do with a single query and a citation count: package the answer the AI is already trying to build.
Build a Citation Keyword Set and a Missing Section List
After pulling the top grounding queries, split them into two working lists.
Citation keyword set: queries where the site is cited repeatedly. These are topics AI already connects to the site. This list tells you what to deepen.
Missing section list: queries where the site is cited, but the page doesn’t answer the question cleanly. This shows up when the query is clearly a question or comparison, but the cited page is a guide or a category page. This list tells you what to fix first.
Instead of spending time inventing new topics, teams can use this report to prioritize topics where AI is already willing to use the site.
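One way to automate the split is a crude heuristic pass over the exported queries: questions and "vs" comparisons expect a direct answer, so if the cited page isn't an answer-shaped format, the query lands on the missing section list. The sample queries, page types, and the `wants_direct_answer` heuristic are all illustrative assumptions, not part of the report:

```python
# Hypothetical grounding-query rows: (query, cited_page_type).
queries = [
    ("profound vs semrush reviews", "guide"),
    ("what is a grounding query", "glossary"),
    ("geo optimization checklist", "guide"),
]

QUESTION_WORDS = ("what", "how", "why", "which", "who", "when")

def wants_direct_answer(q: str) -> bool:
    """A question or comparison query expects a direct answer."""
    return q.startswith(QUESTION_WORDS) or " vs " in q

citation_keywords, missing_sections = [], []
for query, page_type in queries:
    # A comparison/question query cited by a generic guide signals a gap.
    if wants_direct_answer(query) and page_type not in ("comparison", "glossary", "faq"):
        missing_sections.append(query)
    else:
        citation_keywords.append(query)
```

Here the "vs reviews" query cited by a guide page gets flagged as a missing section, which matches the manual check described above.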
Turn One Cited URL Into a Small Cluster
When a single page gets the most citations, there’s a concentration risk. If that URL slips, visibility disappears.
The simple move is to build a small cluster around the top cited page:
- A definitions or glossary page that covers key terms used in grounding queries
- One or two comparison pages tied to the same theme
- An implementation page that shows steps, requirements, and mistakes
- A short FAQ page that answers the repeated questions the AI keeps seeing
This gives AI multiple stable entry points for the same topic area, with internal links that reinforce what each page is about.
If Pages Are Indexed but Not Cited, Fix Clarity Before Anything Else
Bing isn’t subtle about what gets cited: clear structure, direct answers, evidence, and recency all matter.
When a page is eligible but not getting cited:
Move the answer higher. If a reader has to scroll to find the point, AI often can’t extract it easily.
Make subheads match the language seen in grounding queries. If the query is phrased as a question, include that question as a subhead and answer it directly underneath.
Use simple formatting where it improves extraction. Tables, short lists, and FAQs make content usable as a source – as long as they add value, not as a GEO hack.
Update out-of-date sections. If a page references old versions, outdated pricing ranges, or old process steps, it becomes less reliable as a citation even if it still ranks.
Use the Trend Line to Confirm Improvements
Pick one page. Update it with structural changes, missing sections, and new information. Submit it with IndexNow if possible. Then watch citations over the following days and weeks.
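IndexNow itself is a simple documented protocol: a POST to an endpoint with your host, a verification key, and the updated URLs. A minimal sketch using only the standard library follows; `example.com`, the key value, and the URL are placeholders, and the key file must actually be hosted on the site for the submission to verify:

```python
import json
import urllib.request

# IndexNow submission, per the protocol documented at indexnow.org.
# Host, key, and URL below are placeholders -- substitute your own.
payload = {
    "host": "example.com",
    "key": "0123456789abcdef0123456789abcdef",
    "urlList": ["https://example.com/compare/profound-vs-semrush"],
}

req = urllib.request.Request(
    "https://api.indexnow.org/indexnow",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json; charset=utf-8"},
    method="POST",
)

# Uncomment to actually submit; a 200/202 response means the URLs
# were accepted for processing.
# urllib.request.urlopen(req)
```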
If citations don’t move, that’s still a result. Either the topic isn’t showing up often in AI answers, or the page is still not being retrieved cleanly. Both outcomes change what to do next.
- If the topic isn’t showing up often in AI answers: stop trying to brute-force that page. Go back to Grounding Queries and find where demand is, then build/expand pages around those queries.
- If the topic does show up but your page still isn’t being retrieved cleanly: this is a page problem. Rewrite the page to make the answer easier to extract: add an ‘answer-first’ summary, tighter H2s that match grounding query wording, a comparison table or bullets, and a short FAQ section. Then re-submit and re-check.
Either way, the next step is clear: switch topics if demand is low, or fix structure if retrieval is failing.
Make It a Weekly Routine
The report becomes useful when it feeds a repeatable planning loop.
Each week:
Pull the top cited pages and the top grounding queries. Cluster the queries into a few themes. Then make decisions:
- Two new pages for themes the site shows up around, but doesn’t own
- Two refreshes for pages that are cited inconsistently or drifting down
- One consolidation where multiple pages are being cited for the same theme, and should be merged into a stronger page
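The weekly clustering step can start as something very rough before it deserves a real tool. The sketch below groups queries by their first two meaningful tokens; the queries, the stopword list, and the `theme_key` heuristic are all illustrative assumptions:

```python
from collections import defaultdict

# Hypothetical top grounding queries pulled for the week.
queries = [
    "profound vs semrush reviews",
    "semrush pricing for agencies",
    "what is generative engine optimization",
    "generative engine optimization checklist",
]

STOPWORDS = {"vs", "for", "what", "is", "a", "the"}

def theme_key(query: str) -> str:
    """Crude theme label: the first two non-stopword tokens."""
    tokens = [t for t in query.lower().split() if t not in STOPWORDS]
    return " ".join(tokens[:2])

themes = defaultdict(list)
for q in queries:
    themes[theme_key(q)].append(q)
```

Even this crude grouping surfaces the repeated themes (here, two queries fall under the same theme), which is enough to drive the new-page / refresh / consolidation decisions above.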
This is enough to turn citation data into an editorial system.
What Changes Now
Stop treating AI search visibility as a mystery channel that can’t be influenced.
- Use grounding queries as the backlog.
- Expand what is already being cited.
- Fix pages that are being pulled in for queries they don’t answer well.
- Build small clusters around citation-heavy pages so everything isn’t dependent on one URL.
That’s the value of this release. It shows what AI is already trying to use a site for, and it gives enough detail to turn that into content work that is hard to argue with.


