A major study reveals that AI news assistants frequently provide inaccurate or poorly sourced information, with 81% of responses flawed.
A major study by the European Broadcasting Union and the BBC finds that leading AI assistants such as ChatGPT, Copilot, Gemini, and Perplexity frequently deliver inaccurate or misleading news, with 45% of responses containing major issues and 81% showing some flaw.
Analyzing 3,000 answers across 14 languages, researchers found widespread problems including false facts, outdated information, and poor sourcing, with Gemini showing the highest rate of attribution errors at 72%.
The study, involving 22 public-service media outlets from 18 countries, highlights growing concerns as younger users increasingly rely on AI for news, raising risks to public trust and democratic engagement.
While some companies acknowledge ongoing challenges, the report calls for greater accountability and improvements in AI accuracy and sourcing.