Zico Kolter, a Carnegie Mellon professor, leads OpenAI’s safety committee, which can halt AI releases to ensure safety during its transition to a for-profit entity.
Zico Kolter, a 42-year-old computer science professor at Carnegie Mellon University, leads OpenAI’s Safety and Security Committee, a four-member panel with the power to halt AI releases deemed unsafe.
Appointed more than a year ago, Kolter saw his role become a key requirement for OpenAI’s 2025 transition into a for-profit public benefit corporation, a condition mandated by California and Delaware regulators.
The agreements ensure that safety decisions take precedence over financial goals, granting Kolter full rights to observe the for-profit board’s meetings and to access safety data.
The committee addresses risks including the misuse of AI in weapons development, cyberattacks, and harm to mental health.
While Kolter declined to confirm if the panel has ever blocked a release, he emphasized the growing threat landscape of advanced AI.
His leadership reflects heightened scrutiny as OpenAI navigates its shift from nonprofit mission to commercial enterprise.