AI autocomplete tools shift users' views on politics and social issues, even with bias warnings, a new study finds.
AI autocomplete tools can shift users’ opinions on social and political issues—even when warned about bias, a Cornell Tech study published March 11 in Science Advances found.
In two experiments with over 2,500 participants, biased AI suggestions on topics like the death penalty and voting rights led to measurable attitude changes, even when users didn’t accept the suggested text.
The effect persisted despite warnings and occurred across the political spectrum, with real-time AI feedback proving more influential than static arguments.
Researchers say the widespread use of AI in writing tasks raises concerns about subtle, unnoticed influence on beliefs and public discourse.