Anthropic abandoned its safety-first AI development policy under pressure from the U.S. Defense Department and from competitors.
Anthropic has abandoned its strict Responsible Scaling Policy, no longer committing to halt AI development over safety concerns if competitors are advancing faster.
The change, driven by competitive pressure and a lack of federal regulation, replaces binding safeguards with a nonbinding "Frontier Safety Roadmap" of public goals.
The shift follows escalating demands from the U.S. Defense Department, which threatened to invoke the Defense Production Act and cut a $200 million contract unless Anthropic relaxed its restrictions.
While the company maintains it won’t allow AI use in autonomous weapons or mass surveillance, the move marks a significant pivot from its original safety-first stance amid growing industry pressure to prioritize speed and national security interests.