Introduction to the Next Generation of Search
Google’s approach to search has undergone significant changes over the years. Early on, the engine ranked content largely on keyword matching, extended by link-based signals such as PageRank and the anchor text of links. The introduction of the Knowledge Graph in 2012 marked a shift towards ranking answers around real-world entities, moving from “strings to things.” Today that evolution continues with what Google described in 2012 as “the next generation of search,” which taps the collective intelligence of the web to understand the world more the way humans do.
Understanding Long-Form Answers in Search
The current search paradigm surfaces long-form answers that address three or more additional questions beyond the user’s initial query. This shatters the traditional SEO model of optimizing one page for one keyword and one search result, because query fan-out splinters a single query into many. Google’s Danny Sullivan and John Mueller have offered guidance on what SEOs should focus on in this new landscape. What they avoid discussing, however, is the most consequential aspect of AI search: the impact of query fan-out on referrals, where a handful of pages rank across many of the generated sub-queries, leaving less referral traffic for everyone else.
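To make the mechanics concrete, here is a minimal conceptual sketch of query fan-out. It is not Google’s implementation; the expansion list, the retrieval function, and the toy index below are all hypothetical placeholders, used only to show why a few pages that rank across many sub-queries can dominate the synthesized answer while other sites lose the referral.

```python
from collections import Counter

def fan_out(query: str) -> list[str]:
    """Hypothetical expansion of one user query into several sub-queries.
    In a real AI search system these would be generated by a language model."""
    return [
        query,
        f"what to wear with {query}",
        f"best {query} for fall",
        f"how to choose a {query}",
    ]

def retrieve_top_pages(sub_query: str, index: dict[str, list[str]]) -> list[str]:
    """Stand-in for a ranking call: returns whatever URLs the toy index holds."""
    return index.get(sub_query, [])

def cited_sources(query: str, index: dict[str, list[str]]) -> Counter:
    """Count how often each page is selected across all fanned-out sub-queries.
    Pages that rank for many sub-queries absorb most of the visibility."""
    counts: Counter = Counter()
    for sub_query in fan_out(query):
        counts.update(retrieve_top_pages(sub_query, index))
    return counts

if __name__ == "__main__":
    # Toy index with invented URLs; in practice this would be a full web index.
    toy_index = {
        "sweatshirt": ["site-a.example/styling", "site-b.example/guide"],
        "what to wear with sweatshirt": ["site-a.example/styling"],
        "best sweatshirt for fall": ["site-a.example/styling", "site-c.example/list"],
        "how to choose a sweatshirt": ["site-a.example/styling"],
    }
    print(cited_sources("sweatshirt", toy_index))
    # site-a ranks for every sub-query, so it dominates the long-form answer
    # and the remaining sites see little or no referral traffic.
```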
Guidance on Writing for Long-Form Answers
Given the shift towards long-form answers, there’s a misconception that creating content in "bite-sized chunks" is beneficial for AI understanding. However, this approach is arbitrary and ignores the fact that properly structured web pages are already divided into logical sections through headings and HTML elements. Danny Sullivan warns against breaking content into chunks solely for search engines, emphasizing that content should be crafted for humans, as systems designed to understand human-structured content will improve over time to reward human-centric writing.
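To illustrate that point, here is a minimal sketch, using only Python’s standard html.parser, of how ordinary heading structure already yields machine-readable sections. The markup, the SectionSplitter class, and the section names are invented for the example; the takeaway is simply that a properly structured page needs no artificial chunking.

```python
from html.parser import HTMLParser

class SectionSplitter(HTMLParser):
    """Split a page into logical sections using its existing <h2> headings."""

    def __init__(self) -> None:
        super().__init__()
        self.sections: list[dict[str, str]] = []
        self._in_heading = False

    def handle_starttag(self, tag, attrs):
        if tag == "h2":
            self._in_heading = True
            self.sections.append({"heading": "", "body": ""})

    def handle_endtag(self, tag):
        if tag == "h2":
            self._in_heading = False

    def handle_data(self, data):
        if not self.sections:
            return  # ignore text before the first heading
        key = "heading" if self._in_heading else "body"
        self.sections[-1][key] += data

# Hypothetical page markup: headings already divide the content into sections.
html_page = """
<article>
  <h2>Introduction to the Next Generation of Search</h2>
  <p>Google's approach to search has changed over the years.</p>
  <h2>Understanding Long-Form Answers in Search</h2>
  <p>The current paradigm surfaces long-form answers.</p>
</article>
"""

splitter = SectionSplitter()
splitter.feed(html_page)
for section in splitter.sections:
    print(section["heading"].strip(), "->", section["body"].strip())
```

The headings written for human readers are the same boundaries a machine can use, which is the point of writing for people rather than chunking for search engines.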
The Issue of Garbage AI Search Results
A more pressing concern is the quality of results in Google’s AI mode, which often prioritizes low-quality, non-authoritative sites over expert publications. The problem is compounded by expert content being tucked under the “More” tab, forcing users to click through to find reputable sources. Promoting garbage results over high-quality content drives down traffic for subject-matter experts and drains the joy of discovery out of Google Search.
Examples of Poor Search Results
A search for how to style a sweatshirt, for instance, surfaces an abandoned Medium blog, a LinkedIn article, and a sneaker retailer’s website, none of which are authoritative or expert sources. Meanwhile, high-quality results from reputable publications like GQ or the New York Times sit hidden under the “More” tab. This is not an isolated case; it reflects a broader pattern of Google’s current ranking favoring low-quality content.
Conclusion
The shift in Google’s search paradigm towards long-form answers and AI-driven results has significant implications for SEOs and publishers. While guidance from Google’s Danny Sullivan and John Mueller emphasizes the importance of crafting content for humans, the more critical issue of garbage search results and the hiding of expert content under the "More" tab needs urgent attention. Google’s AI mode, in its current state, promotes low-quality sites and lacks the expertise and authority that users seek. A reset in Google’s approach, potentially by reintroducing the original search algorithm with AI features accessible under a "More" tab, could help restore the joy of discovery and promote high-quality, expert content. Ultimately, the focus should be on enhancing the user experience and ensuring that search results are both relevant and authoritative.