Largest study of its kind shows AI assistants misrepresent news content 45% of the time – regardless of language or territory
What: AI assistants misrepresent news content in 45% of cases, undermining trust in both technology and the brands associated with it.
Why it is important: This issue mirrors retail’s own challenges with AI, where transparency and responsible implementation are essential to maintaining consumer trust.
A major international study led by the BBC and the European Broadcasting Union (EBU) has revealed that AI assistants misrepresent news content in 45% of cases, regardless of language or territory. The research, which evaluated over 3,000 responses from leading AI tools, found systemic issues with accuracy, sourcing, and the distinction between fact and opinion. Despite these shortcomings, a significant portion of the public, especially younger audiences, continues to trust AI-generated news summaries, often assuming machine-generated content is inherently accurate. Yet when errors do surface, they sharply erode trust, not only in the AI tools themselves but also in the news brands cited within their summaries. The study also found that accountability is widely distributed: audiences hold AI providers, regulators, and news organizations alike responsible for mistakes. These findings highlight the reputational risks for any brand associated with AI-generated content and reinforce the need for robust oversight, transparency, and responsible implementation as AI becomes more deeply embedded in information delivery.
IADS Notes: The BBC and EBU’s findings on AI-generated news errors closely mirror trends in retail, where trust, transparency, and responsible AI implementation are increasingly critical. As detailed in Retail Dive’s “Retailers face trust challenges as generative AI becomes more integrated” (Nov 2024), consumer trust hinges on clear disclosure and value-added use cases. Harvard Business Review’s “Consumers don’t want AI to seem human” (Jan 2025) emphasizes the importance of transparency and human oversight, while Inside Retail’s “Retail’s AI psychosis: The industry must not outsource its brain” (Sep 2025) warns against over-reliance on automation at the expense of human insight. The business imperative for responsible AI is underscored in Harvard Business Review’s “How responsible AI protects the bottom line” (Mar 2025), which links privacy and auditability to adoption and satisfaction. Finally, Forbes’ “AI-powered shopping growing dramatically, Adobe reports” (Mar 2025) demonstrates that as AI becomes mainstream, accuracy and responsible implementation are essential for sustaining trust and driving business value.
Full report: Audience use and perceptions of AI assistants for news
