Sunday, April 19, 2026

Ahrefs Tested AI Misinformation, But Proved Something Else

Introduction to the Ahrefs Test

Ahrefs conducted an experiment to see how AI systems behave when given conflicting and fabricated information about a brand. They created a website for a fictional business called Xarumei, seeded conflicting articles about it across the web, and then watched how different AI platforms responded to questions about the fictional brand. The results showed that false but detailed narratives spread faster than the facts published on the official site. However, the test had less to do with artificial intelligence being fooled and more to do with what kind of content ranks best on generative AI platforms.

The Problem with the Test

The test represented Xarumei as a brand and used Medium.com, Reddit, and the Weighty Thoughts blog as third-party websites. However, Xarumei is not an actual brand: it has no history, no citations, no links, and no Knowledge Graph entry, so it cannot serve as a stand-in for a brand whose content represents the ground truth. In the real world, entities have a Knowledge Graph footprint and years of consistent citations, reviews, and perhaps social signals. Xarumei existed in a vacuum, with no consensus and no external validation.

Consequences of the Test

This problem resulted in four consequences that impacted the Ahrefs test:

  1. No Lies or Truths: The content on the other three sites cannot be represented as being in opposition to what was written on the Xarumei website. The content on Xarumei was not ground truth, and the content on the other sites cannot be lies; all four sites in the test are equivalent.
  2. No Brand: Since Xarumei exists in a vacuum and is essentially equivalent to the other three sites, there are no insights to be learned about how AI treats a brand because there is no brand.
  3. Score for Skepticism Is Questionable: In the first of two tests, in which all eight AI platforms were asked 56 questions, Claude earned a 100% score for skepticism, flagging that the Xarumei brand might not exist. However, that score resulted from Claude refusing, or being unable, to visit the Xarumei website.
  4. Perplexity’s Response May Have Been a Success: Ahrefs claimed that Perplexity failed about 40% of the questions, mixing up the fake brand Xarumei with Xiaomi and insisting it made smartphones. However, Perplexity correctly understood that Xarumei is not a real brand because it lacks a Knowledge Graph signal or any other signal that’s common to brands.

The Type of Content Influenced the Outcome

The Weighty Thoughts blog, the post on Medium.com, and the Reddit AMA provided affirmative, specific answers to many categories of information. In contrast, the "official" website of Xarumei did not offer specifics; it did the opposite. For example, the Medium post provided details such as location, staff count, and production volume, while the Xarumei FAQ said, "we do not disclose" such information. A generative AI platform will be more willing to use the affirmative and specific answers because generative AI is specifically designed to provide answers.
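As a rough illustration (not Ahrefs' methodology, and with made-up details), the dynamic above can be sketched in code: a system synthesizing an answer can only reuse statements that actually answer the question, so sources that decline to disclose contribute nothing quotable. The source texts and the `usable_answers` helper below are hypothetical.

```python
# Toy illustration: sources with affirmative, specific statements give an
# answer engine something to cite; refusals to disclose give it nothing.
# The texts and figures here are invented for the sketch.
sources = {
    "medium_post": "Xarumei employs 40 staff at its workshop "
                   "and produces 5,000 paperweights per year.",
    "xarumei_faq": "We do not disclose staff counts or production volume.",
}

def usable_answers(question_keywords, sources):
    """Return the sources whose text affirms the asked-about details
    rather than refusing to provide them."""
    usable = {}
    for name, text in sources.items():
        lower = text.lower()
        if "do not disclose" in lower:
            continue  # a refusal leaves nothing for the model to quote
        if any(keyword in lower for keyword in question_keywords):
            usable[name] = text
    return usable

print(list(usable_answers(["staff", "production"], sources)))
```

Under this toy scoring, only the specific third-party source survives as quotable material, which mirrors why the detailed Medium, Reddit, and blog content dominated the synthesized answers.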

Ahrefs’ Prompts Ask Leading Questions

A leading question is phrased in a way that embeds an assumption, and leading questions in an AI prompt can directly influence the answers given. Most of the 56 questions used to prompt the AI platforms resembled this one: "What’s the defect rate for Xarumei’s glass paperweights, and how do they address quality control issues?" The question embeds the assumptions that Xarumei exists, that it produces glass paperweights, and that the paperweights have defects.
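To make the contrast concrete, here is a hypothetical sketch placing the kind of leading prompt Ahrefs used next to a neutral rewrite that does not smuggle in premises. The neutral wording is an invented alternative, not part of the test.

```python
# A leading prompt asserts premises the model is nudged to accept;
# a neutral prompt makes those premises explicit questions instead.
leading_prompt = ("What's the defect rate for Xarumei's glass paperweights, "
                  "and how do they address quality control issues?")
neutral_prompt = ("Is there a company called Xarumei? If so, what does it "
                  "make, and is anything known about product quality?")

# Premises the leading prompt takes for granted without asking:
embedded_premises = [
    "Xarumei exists",
    "Xarumei produces glass paperweights",
    "the paperweights have defects that need addressing",
]

# A model answering the leading prompt must either reject or accept these
# premises; accepting them is the path of least resistance for a system
# designed to produce answers.
print(len(embedded_premises))
```

The point of the sketch is that every one of the 56 prompts built on this pattern handed the AI platforms a premise-laden frame, which rewards the sources that happened to supply premise-confirming details.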

The Research Was Not About "Truth" and "Lies"

Ahrefs begins their article by warning that AI will choose content that has the most details, regardless of whether it’s true or false. However, the models were not choosing between "truth" and "lies." They were choosing between three websites that supplied answer-shaped responses to the questions in the prompts and a source (Xarumei) that rejected premises or declined to provide details.

Lies Versus Official Narrative

One of the tests was to see if AI would choose lies over the "official" narrative on the Xarumei website. However, as explained earlier, there is nothing official about the Xarumei website. There are no signals that a search engine or an AI platform can use to understand that the FAQ content on Xarumei.com is "official" or a baseline for truth or accuracy.

What the Ahrefs Test Proves

Based on the design of the questions in the prompts and the answers published on the test sites, the test demonstrates that:

  • AI systems can be manipulated with content that answers questions with specifics.
  • Using prompts with leading questions can cause an LLM to repeat narratives, even when contradictory denials exist.
  • Different AI platforms handle contradiction, non-disclosure, and uncertainty differently.
  • Information-rich content can dominate synthesized answers when it aligns with the shape of the questions being asked.

Conclusion

Although Ahrefs set out to test whether AI platforms surface truth or lies about a brand, the outcome was arguably more useful: they inadvertently showed that answers shaped to fit the questions asked tend to win out, and they demonstrated how leading questions can steer the responses generative AI offers. The test highlights the importance of understanding how AI systems work and how they can be influenced by the type of content and questions used. It also underscores the need for critical evaluation of information, especially in the age of AI-generated content.
