
Google Improves RAG

Introduction to AI Search and Assistants

Google researchers have introduced a new method to improve AI search and assistants by enhancing Retrieval-Augmented Generation (RAG) models. This method helps RAG models recognize when retrieved information lacks sufficient context to answer a query, which can lead to more reliable and accurate AI-generated responses.

The Problem with Current RAG Models

LLMs commonly used in RAG systems, such as Gemini and GPT, often attempt to answer questions even when the retrieved data contains insufficient context. This can result in hallucinations (confidently stated but incorrect answers) rather than abstention. The researchers found that these models usually provide correct answers when given sufficient context, but they also answer correctly 35-62% of the time even when the context is insufficient.

Defining Sufficient Context

The researchers define sufficient context to mean that the retrieved information contains all the details necessary to derive a correct answer. The classification does not require the answer to be verified; it only assesses whether the retrieved information provides a reasonable foundation for answering the query. Insufficient context, by contrast, means the retrieved information is incomplete, misleading, or missing critical details needed to construct an answer.

Sufficient Context Autorater

The Sufficient Context Autorater is an LLM-based system that classifies query-context pairs as having sufficient or insufficient context. The best-performing autorater model, Gemini 1.5 Pro (1-shot), achieved 93% accuracy, outperforming other models and methods.
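
A minimal sketch of what such an autorater could look like is shown below, assuming a generic LLM client. The prompt wording, the call_llm() stand-in, and the has_sufficient_context() helper are illustrative assumptions, not the researchers' actual implementation; the paper's autorater used Gemini 1.5 Pro with a 1-shot prompt.

```python
# Hypothetical sketch of an LLM-based "sufficient context" autorater.
# The prompt wording and the call_llm() stand-in are assumptions for
# illustration, not the implementation described in the paper.

AUTORATER_PROMPT = """You are given a question and a retrieved context.
Decide whether the context contains all the details needed to derive a
correct answer to the question. Reply with exactly one word:
SUFFICIENT or INSUFFICIENT.

Example:
Question: When was the Eiffel Tower completed?
Context: The Eiffel Tower was completed in March 1889 for the World's Fair.
Label: SUFFICIENT

Question: {question}
Context: {context}
Label:"""


def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call (e.g. a Gemini API client).
    It always returns INSUFFICIENT here so the sketch runs without
    credentials; swap in your provider's client in practice."""
    return "INSUFFICIENT"


def has_sufficient_context(question: str, context: str) -> bool:
    """Classify a query-context pair as sufficient or insufficient."""
    label = call_llm(AUTORATER_PROMPT.format(question=question, context=context))
    return label.strip().upper().startswith("SUFFICIENT")


if __name__ == "__main__":
    q = "When was the Eiffel Tower completed?"
    ctx = "The Eiffel Tower is a wrought-iron lattice tower in Paris."
    print(has_sufficient_context(q, ctx))  # False with the stub above
```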

Reducing Hallucinations with Selective Generation

The researchers discovered that RAG-based LLMs were able to answer questions correctly 35-62% of the time even when the retrieved data had insufficient context. They used this finding to create a Selective Generation method that combines confidence scores with sufficient-context signals to decide when to generate an answer and when to abstain. This balances letting the LLM answer when there is high confidence the answer is correct against abstaining when the context is insufficient.
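
As a rough sketch of that trade-off, the snippet below combines a model confidence score with the sufficient-context signal and abstains whenever the combined score falls below a threshold. The weights, the threshold, and the should_answer() / selective_generate() names are assumptions for illustration, not the paper's tuned mechanism.

```python
# Illustrative sketch of selective generation: answer only when the
# combination of model confidence and the sufficient-context signal
# clears a threshold; otherwise abstain. The weights and threshold are
# made-up values for demonstration, not tuned parameters from the paper.

def should_answer(confidence: float, sufficient_context: bool,
                  threshold: float = 0.7) -> bool:
    """Combine the model's confidence with the autorater signal into a
    single score and compare it against an abstention threshold."""
    score = 0.6 * confidence + 0.4 * (1.0 if sufficient_context else 0.0)
    return score >= threshold


def selective_generate(draft_answer: str, confidence: float,
                       sufficient_context: bool) -> str:
    """Return the drafted answer if the decision rule allows it,
    otherwise abstain with a fallback message."""
    if should_answer(confidence, sufficient_context):
        return draft_answer
    return "I don't have enough information to answer that reliably."


if __name__ == "__main__":
    print(selective_generate("March 1889", confidence=0.9, sufficient_context=True))
    print(selective_generate("March 1889", confidence=0.5, sufficient_context=False))
```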

How Selective Generation Works

The researchers describe how Selective Generation works: "…we use these signals to train a simple linear model to predict hallucinations, and then use it to set coverage-accuracy trade-off thresholds. This mechanism differs from other strategies for improving abstention in two key ways. First, because it operates independently from generation, it mitigates unintended downstream effects…Second, it offers a controllable mechanism for tuning abstention, which allows for different operating settings in differing applications, such as strict accuracy compliance in medical domains or maximal coverage on creative generation tasks."
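
A hedged sketch of the quoted mechanism could look like the following: fit a simple linear model (logistic regression here) that predicts hallucination risk from two signals, model confidence and the sufficient-context label, then choose an abstention threshold for a target coverage. The toy training data and the 80% coverage target are assumptions for illustration, not values from the paper.

```python
# Sketch of the quoted mechanism: a simple linear model predicts
# hallucinations from [confidence, sufficient_context], then a threshold
# is set to trade coverage for accuracy. All data here is toy data.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [confidence, sufficient_context] -> hallucinated (1) or not (0)
X = np.array([[0.9, 1], [0.8, 1], [0.7, 0], [0.4, 0], [0.3, 1], [0.2, 0]])
y = np.array([0, 0, 1, 1, 1, 1])

clf = LogisticRegression().fit(X, y)


def hallucination_risk(confidence: float, sufficient_context: bool) -> float:
    """Predicted probability that the generated answer is a hallucination."""
    return float(clf.predict_proba([[confidence, int(sufficient_context)]])[0, 1])


# Choose an abstention threshold for a target coverage: answer the 80% of
# queries with the lowest predicted risk and abstain on the rest. In a
# stricter domain (e.g. medical), the coverage target would be set lower.
risks = clf.predict_proba(X)[:, 1]
threshold = np.quantile(risks, 0.8)


def decide(confidence: float, sufficient_context: bool) -> str:
    """Answer if the predicted hallucination risk is below the threshold."""
    if hallucination_risk(confidence, sufficient_context) <= threshold:
        return "answer"
    return "abstain"
```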

Takeaways

The research paper does not state that AI will always prioritize well-structured pages; rather, context sufficiency is one factor that influences AI-generated responses. Confidence scores also factor into abstention decisions. Pages with complete, well-structured information are more likely to provide sufficient context, but other factors, such as how well the AI selects and ranks relevant information, also matter.

Characteristics of Pages with Insufficient Context

Pages with insufficient context may lack the details needed to answer a query, or they may be misleading, incomplete, or contradictory, or require prior knowledge the reader is assumed to already have. The information needed for a complete answer may also be scattered across different sections rather than presented together in one place.

Relation to Google’s Quality Raters Guidelines

Google’s Quality Raters Guidelines (QRG) contain concepts similar to context sufficiency. For example, the QRG defines low-quality pages as those that do not achieve their purpose well because they fail to provide the necessary background, details, or relevant information for the topic. The guidelines also describe low-quality pages as those dominated by off-topic and unhelpful content, or by “filler” and meaningless content.

Conclusion

The research paper introduces a new method to improve AI search and assistants by enhancing RAG models’ ability to recognize when retrieved information lacks sufficient context. This method can lead to more reliable and accurate AI-generated responses. While the paper does not state that AI will always prioritize well-structured pages, it highlights the importance of context sufficiency in AI-generated responses. By understanding the characteristics of pages with insufficient context and the relation to Google’s Quality Raters Guidelines, publishers can create content that is more useful for AI-generated answers.
