Introduction to Microsoft’s New Framework
Microsoft’s Bing team has published a framework describing how indexing requirements change when the goal is to support AI answers rather than to rank search results. The framework identifies five measurement areas where, according to the company, the two systems diverge, and it names "abstention" as a design choice for AI-powered retrieval. This article summarizes the framework’s key points and what they imply for the future of search.
Understanding Traditional Search and Grounding Indexing
The post argues that traditional search indexing and grounding indexing share the same foundation but serve different goals. Traditional search asks "which pages should a user visit?" while the grounding layer asks "what information can an AI system responsibly use to construct a response?" This difference in goals leads to different measurement requirements.
Five Categories of Measurement Requirements
Microsoft identifies five categories where the measurement requirements differ:
- Factual Fidelity: In traditional search, some ranking mismatch is tolerable because a user can click through and evaluate. In grounding, breaking content into retrievable chunks can distort page substance in ways that never appear in any ranking signal.
- Source Attribution Quality: Attribution is helpful in traditional search but is a "core signal" in grounding. Not all indexed content matters equally as evidence for an AI answer.
- Freshness: Stale content in search is a ranking problem, but in grounding, a stale fact produces a misleading response.
- Coverage of High-Value Facts: A missed document in search is recoverable because alternative results exist. In grounding, the index must ensure that the specific facts and sources that people are likely to ask about are actually available and groundable.
- Contradictions: Traditional search can surface one source above another and let the user decide. A grounding system can’t do that because an AI system that silently arbitrates between contradictory sources may confidently assert the wrong thing.
Abstention and Iterative Retrieval
The post also covers two design differences between the systems:
- Abstention: Declining to answer is a valid outcome when support is missing, stale, or conflicting. Traditional search doesn’t need to make this judgment because it presents options for a human to evaluate.
- Iterative Retrieval: Traditional search is typically a single interaction where a query goes in and ranked results come out. Grounding systems may need to ask follow-up questions, refine retrieval based on intermediate results, and combine evidence from multiple sources. Errors in early retrieval steps can compound through subsequent reasoning steps in ways that no human reviewer would catch in real-time.
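The iterative-retrieval loop described above can be sketched in a few lines. This is an illustrative skeleton only: `search`, `refine`, and `sufficient` are assumed callbacks standing in for a retrieval backend, a query-rewriting step, and an evidence-sufficiency check, none of which correspond to a published Microsoft API.

```python
# Hypothetical sketch of iterative retrieval: refine the query based on
# intermediate results, combine evidence across passes, and abstain when
# support never becomes sufficient.
def iterative_retrieve(question, search, refine, sufficient, max_steps=3):
    """search(query) -> list of snippets; refine(question, evidence) -> new
    query; sufficient(evidence) -> bool. All three are assumed callbacks."""
    evidence, query = [], question
    for _ in range(max_steps):
        evidence.extend(search(query))       # gather results for this pass
        if sufficient(evidence):
            return evidence                  # grounded: evidence combined across steps
        query = refine(question, evidence)   # follow-up retrieval pass
    return None                              # abstain: support stayed insufficient
```

The loop also shows why early errors compound: a bad intermediate result feeds `refine`, which steers every later retrieval pass off course.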
Context of the Framework
This blog post comes after a series of moves by Microsoft to build out its grounding tooling and give publishers visibility into it. In February, Microsoft launched the AI Performance dashboard in Bing Webmaster Tools, giving sites their first page-level citation data for AI-generated answers. The company also rewrote the Bing Webmaster Guidelines to include GEO as a named optimization category and added grounding query-to-page mapping to the dashboard. At SEO Week in April, Madhavan previewed four additional features for the dashboard, including Citation Share and grounding query intent labels.
Why This Matters
This framework clarifies what Microsoft says its systems need from the index for AI answers. Microsoft states that grounding relies on the same crawling, quality, and web understanding as search, but grounded answers require accurate, fresh, attributable, and consistent evidence. Stale facts, weak sources, and contradictions pose risks when content is used for answers.
Looking Ahead
The post offers insight into why some content is easier for AI to cite. If the Citation Share and intent-label features previewed at SEO Week ship, they could help test whether the measurement priorities described here show up in actual publisher data.
Conclusion
Microsoft’s framework gives a concrete account of how traditional search indexing and grounding indexing differ, and of the challenges those differences create for AI-powered search. As Microsoft continues to build out its indexing and grounding capabilities, the open question is how these measurement priorities will shape the way we interact with search engines and the quality of the answers we receive.