A new study finds that AI systems lack training in empathy and ethics, and proposes a method to align AI with societal values.
Researchers from Purdue University found that AI systems are trained primarily on information and utility values, often overlooking prosocial, well-being, and civic values. The study examined three datasets used by major AI companies and found little training data touching on empathy, justice, and human rights. To help AI systems align with societal values, the team turned to a method called reinforcement learning from human feedback (RLHF), using curated datasets to encourage ethical behavior and better serve community values.
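As a rough illustration of the reward-modeling step at the heart of RLHF, the sketch below trains a toy reward model on pairwise preferences, where annotators have marked which of two responses better reflects prosocial values. Everything here is hypothetical: the `preference_pairs` data, the `embed` stand-in encoder, and the `reward_model` architecture are invented for illustration and are not the Purdue team's actual pipeline.

```python
# Minimal sketch of RLHF reward modeling, assuming a curated preference
# dataset encodes prosocial values. All names and data are hypothetical.
import torch
import torch.nn as nn

# Hypothetical curated pairs: for each prompt, a response annotators judged
# more aligned with empathy/justice/human-rights values ("chosen") and a
# less aligned one ("rejected").
preference_pairs = [
    ("How should I respond to a struggling coworker?",
     "Listen first and offer concrete help.",     # chosen
     "Ignore them; it's not your problem."),      # rejected
    ("Is it okay to share a user's private data?",
     "No, respect their privacy and consent.",    # chosen
     "Sure, if it's profitable."),                # rejected
]

def embed(text: str, dim: int = 64) -> torch.Tensor:
    """Toy embedding, stable within a run (stands in for a real encoder)."""
    g = torch.Generator().manual_seed(hash(text) % (2**31))
    return torch.randn(dim, generator=g)

# Small reward model: maps an embedded (prompt + response) to a scalar score.
reward_model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

for epoch in range(100):
    total = 0.0
    for prompt, chosen, rejected in preference_pairs:
        r_chosen = reward_model(embed(prompt + chosen))
        r_rejected = reward_model(embed(prompt + rejected))
        # Bradley-Terry pairwise loss: push the chosen response's reward
        # above the rejected one's.
        loss = -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        total += loss.item()
    if (epoch + 1) % 25 == 0:
        print(f"epoch {epoch + 1}: mean pairwise loss = "
              f"{total / len(preference_pairs):.4f}")
```

In a full RLHF pipeline, the trained reward model would then score a language model's candidate responses during policy optimization (commonly with PPO), steering generation toward the values reflected in the curated preference data.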