A recent study by the European Broadcasting Union (EBU) and the BBC revealed that leading AI assistants inaccurately represent news content in almost 50% of their responses. The international research, which analyzed 3,000 responses across 14 languages, evaluated AI assistants like OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity for their accuracy, sourcing, and ability to differentiate between opinion and fact.
The findings showed that 45% of the AI responses examined contained at least one significant issue, while 81% exhibited some form of problem. The stakes are considerable: according to the Reuters Institute's Digital News Report 2025, 7% of online news consumers, and 15% of those under 25, already use AI assistants to get their news.
Google, OpenAI, and Microsoft have expressed their commitment to improving their platforms. Google invites user feedback on Gemini to improve the experience, while OpenAI and Microsoft say they are actively working to reduce hallucinations, instances where AI models generate incorrect information. Perplexity claims a 93.9% factuality accuracy rate in one of its research modes.
The study also found that a third of AI assistant responses contained serious sourcing errors. Gemini fared worst, with significant sourcing issues in 72% of its responses, well above the other assistants. Accuracy problems, including outdated information, appeared in 20% of responses across all AI assistants.
Notable examples of misrepresentation included Gemini providing incorrect information about law changes on disposable vapes and ChatGPT reporting Pope Francis as the current Pope several months after his death. The research involved 22 public-service media organizations from 18 countries, underscoring the potential threat AI assistants pose to public trust in news content.
The EBU emphasized the need for AI companies to enhance the accuracy of their responses to news queries and promote accountability, drawing parallels to news organizations’ procedures for error identification and correction. Upholding accountability for AI assistants is crucial to maintaining public trust and democratic participation, according to EBU media director Jean Philip De Tender.
