BBC & European Broadcasting Union. (2025). News Integrity in AI Assistants TOOLKIT. European Broadcasting Union. https://www.ebu.ch/files/live/sites/ebu/files/Publications/MIS/open/EBU-MIS-BBC_News_Integrity_in_AI_Assistants_Toolkit_2025.pdf
An "AI Assistant" seems to mean any generative AI tool. This toolkit addresses the questions What makes a good AI assistant response to a news question? & What are the problems that need to be fixed? It describes "four key components of a good AI assistant
response" "1. Accuracy: is the information provided by the AI assistant correct?" "2. Providing context: is the AI assistant providing all relevant and necessary information?" "3. Distinguishing opinion from fact: is the AI assistant clear whether the information it is providing is fact or opinion?" "4. Sourcing: is the AI assistant clear and accurate about where the information it provides comes from?"
It also gives many examples of how things go wrong, drawing on results from the report referenced below. The toolkit is aimed at tech companies, media companies and their stakeholders.
I found this via: Archer, P., & De Tender, J.P. (2025, October). News Integrity in AI Assistants: An international PSM study. European Broadcasting Union. https://www.ebu.ch/files/live/sites/ebu/files/Publications/MIS/open/EBU-MIS-BBC_News_Integrity_in_AI_Assistants_Report_2025.pdf (PSM = Public Service Media). Thanks to the MILA newsletter, which highlighted that report.
Photo by Sheila Webber: part of the "Dear Library" exhibition at the National Library of Scotland, December 2025. Visitors are asked to categorise a variety of European initiatives.
Curating information literacy stories from around the world since 2005 - - - Stories identified, chosen and written by humans!
Monday, December 22, 2025
News Integrity in AI Assistants