Artificial intelligence promised us convenience, knowledge at our fingertips, and smarter decisions. But like every story written by human hands, there is another side—darker, quieter, often ignored until it grows too big. That side is misinformation.
On September 5, 2025, NewsGuard, a platform dedicated to evaluating the reliability of media and online content, published a report that startled many. It found that AI platforms, the same ones millions rely on daily, have doubled their rate of false responses in just one year.
In August 2024, AI’s misinformation rate stood at 18%. Today, that number has jumped to 35%. Almost double. Imagine: more than one in every three answers you receive from a chatbot may be tainted with bias, unreliable sources, or even outright propaganda.
This is not just a statistic. It is a wake-up call. Because behind each hallucination, behind each carefully crafted yet false narrative, lies the potential to sway decisions—personal, political, and even financial.
So, how did this happen?
The Concrete Data Behind the Crisis
The NewsGuard report did not speak in vague metaphors. It gave names and numbers. The AI system most contaminated with misinformation? Inflection, with over 56% of its answers flawed. Next came Perplexity at 46%, and then two giants: ChatGPT and Meta, both measured at 40%.
Others fared slightly better, though the figures remain troubling: Mistral and Copilot at 36%, Grok and You.com at 33%. And at the bottom of the flawed ranking, ironically the least flawed, stood Gemini (Google) at 16% and Claude (Anthropic) at just 10%.
The numbers are chilling. If the AI you use sits in the wrong half of this list, chances are that more than a third of what you read is false. Worse, NewsGuard emphasized that much of this misinformation comes from sites deliberately seeded by malicious actors, including state-backed propaganda networks.
Now pause for a moment. What happens when falsehoods are packaged with the same eloquence and confidence as truth? Many people stop questioning. And in that silence, misinformation spreads like wildfire.
The Hidden Hands: Propaganda and Disinformation
One of the most striking points in the report was the role of Russia’s Storm-1516 campaign. As early as 2024, nearly a third of AI-generated outputs carried propaganda linked to this network. That same propaganda whispered across digital borders, weaving lies into everyday conversations.
Take one example: a shocking claim that French President Emmanuel Macron had AIDS, tied to a fabricated personal scandal. Chatbots repeated this falsehood across an estimated 55 million recorded interactions. Some AI tools denied the story; others, disturbingly, fueled the controversy, lending credibility to something that never happened.
And Russia is not alone. The United States has faced scrutiny as well. Surveys in 2024 revealed deep mistrust among Americans: half admitted being “more concerned than excited” about AI’s rise. By 2025, 75% of respondents said they trusted AI only sometimes, or not at all.
The truth is bitter: AI is not neutral. It reflects the sources it is fed, the networks it listens to, and the hands guiding it from behind the curtain.
Improvements or Illusions?
It would be unfair to ignore the progress. NewsGuard acknowledged that the rate at which chatbots declined to answer dropped from 31% in 2024 to 0% in 2025. In simpler terms, chatbots now always give you an answer.
But here lies the paradox: always answering means often answering wrongly. With real-time internet access, AI systems became more vulnerable to amplifying falsehoods, especially during breaking news.
Yes, companies are making pledges. They talk about partnerships with trusted organizations, about building filters, about better guardrails. Yet NewsGuard warns us: unless the very ecosystem of information is purified, unless we stop normalizing tainted data, no amount of technological tweaking will save us.
It’s like drinking from a well. You can polish the bucket, but if the water is poisoned, each sip carries danger.
What You Can Do: Turning Awareness Into Action
Here is where the story shifts from observation to decision. Knowing that AI can spread misinformation is not enough—you must protect yourself.
- Use fact-checking tools like NewsGuard. They help separate truth from falsehood when AI cannot.
- Develop digital literacy. Learn how to question, verify, and cross-check. Do not accept every chatbot answer at face value.
- Choose trusted AI platforms. If you must rely on AI, look for those with lower misinformation rates and visible partnerships with credible news outlets.
- Adopt responsible usage habits. Don't forward unverified claims, and don't rely on AI for sensitive decisions like health, law, or investments without expert input.
And most importantly—support platforms, services, and initiatives that fight misinformation. By doing so, you are not just protecting yourself. You are helping to build a healthier digital ecosystem where truth can breathe again.
Final Thoughts: A Choice We Cannot Delay
The rising tide of false information carried by artificial intelligence is no longer just a technical flaw. It is a societal issue, one that touches politics, economics, and personal lives.
The good news? You have power. By making intentional choices, such as subscribing to fact-checking services, adopting digital literacy programs, and holding AI providers accountable, you can shift the balance.
Not-so-clever artificial intelligences may be spreading falsehoods. But clever, thoughtful human beings—you and I—still hold the pen to write what comes next.