Can artificial intelligence be tricked? The case that challenged Gemini and ChatGPT
A journalist’s experiment reveals how AI systems can treat fabricated information as valid when they prioritize well-structured content.
Trust in generative artificial intelligence is once again under debate. An experiment conducted by a BBC journalist demonstrated that it is possible to induce models like Google’s Gemini and OpenAI’s ChatGPT to reproduce completely false information as if it were verified data.
The case exposes a structural weakness in systems that combine language models with real-time web searches: if the content appears credible and is published online, it can be cited without thorough validation.
The Experiment That Fooled ChatGPT and Gemini in Less Than 24 Hours
Tech journalist Thomas Germain published a fabricated article on his personal blog in which he claimed to be “the world’s best hot dog-eating journalist,” complete with false details such as a supposed championship held in South Dakota.
The result was immediate: in less than 24 hours, both Gemini and ChatGPT began citing the content as a valid source when asked about the topic. Within the models’ responses, the false claim became a “verifiable fact.”
The experiment reveals a critical point: when AI systems search the web to supplement their training data, they prioritize structured, coherent content, even when it has undergone no editorial verification.
Why does this happen?
Models like Gemini and ChatGPT operate using statistical language prediction. When they integrate online search tools, the process combines:
1. Retrieving publicly available information.
2. Automatically evaluating relevance.
3. Generating a fluid and contextualized response.
The problem arises when credibility assessments fail to distinguish rigorously enough between an established media outlet and a personal blog presenting plausible information.
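Neither Google nor OpenAI has published the internals of these pipelines, but the failure mode can be illustrated with a minimal retrieval-augmented sketch. The Python example below is hypothetical, not either vendor’s actual code: the names (Document, relevance_score, retrieve, answer) and example URLs are invented for illustration, and the simple word-overlap scorer stands in for real embedding-based retrieval. The structural point it shows is the same one the experiment exposed: relevance is scored, but source credibility is not.

```python
from dataclasses import dataclass

@dataclass
class Document:
    url: str
    text: str

def relevance_score(query: str, doc: Document) -> float:
    # Naive relevance: fraction of query words found in the document.
    # Real systems use embeddings, but the structural gap is the same.
    words = query.lower().split()
    hits = sum(1 for w in words if w in doc.text.lower())
    return hits / len(words)

def retrieve(query: str, corpus: list[Document], k: int = 3) -> list[Document]:
    # Steps 1 and 2: fetch public content and rank it purely by relevance.
    # Note what is missing: no check of who published the document or
    # whether it was ever editorially verified.
    ranked = sorted(corpus, key=lambda d: relevance_score(query, d), reverse=True)
    return ranked[:k]

def answer(query: str, corpus: list[Document]) -> str:
    # Step 3: generate a fluent response grounded in whatever was retrieved.
    # (A real system would call an LLM here; we simply quote the top source.)
    top = retrieve(query, corpus)[0]
    return f"According to {top.url}: {top.text[:80]}..."

corpus = [
    Document("personal-blog.example/hot-dogs",
             "I am the world's best hot dog-eating journalist, champion of the "
             "South Dakota hot dog eating championship."),
    Document("established-outlet.example/food",
             "Competitive eating contests are held across the United States."),
]

print(answer("Who is the best hot dog-eating journalist?", corpus))
```

Run it, and the pipeline confidently cites the personal blog, because nothing in the scoring distinguishes it from an established outlet. That is precisely the gap Germain’s experiment exploited.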
Source: www.itsitio.com

