California enacts first-in-the-nation AI safety law requiring transparency, incident reporting, and strict risk limits for powerful AI systems.
California Governor Gavin Newsom has signed Senate Bill 53, the first-in-the-nation law requiring companies developing powerful AI systems to publicly disclose safety protocols, report critical incidents within 15 days, and comply with strict risk thresholds.
The law targets frontier AI models capable of causing catastrophic harm, defined as $1 billion in damage or more than 50 injuries or deaths, and imposes fines of $1 million per violation.
It also includes whistleblower protections, a public research cloud, and exemptions for startups intended to promote innovation.
The legislation, shaped by expert input and bipartisan support, aims to balance AI advancement with public safety amid stalled federal action.