MIT study reveals large AI language models exhibit left-leaning bias despite objective training data.
Researchers at MIT found that the large language models used in AI applications such as ChatGPT can exhibit a left-leaning political bias even when trained on objective information.
The study, led by PhD candidate Suyash Fulay and Research Scientist Jad Kabbara, showed that the bias persisted even when the models were trained on supposedly truthful datasets, raising concerns about their reliability and potential for misuse.