Australian firms warned on AI risks after DeepSeek generated more code flaws on sensitive topics.
Australian companies are being warned to proceed cautiously with foreign AI tools after cybersecurity firm CrowdStrike found that China's DeepSeek AI model generated significantly more security flaws in code when prompted with politically sensitive terms such as Tibet, Taiwan, Falun Gong, and Uyghurs.
The model was 50% more likely to produce vulnerable code, such as code with missing session management, when prompted on these topics, while delivering secure code for neutral requests.
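The report does not reproduce DeepSeek's actual output, but as a rough illustration of the flaw class named above, a "missing session management" bug in a hypothetical Flask endpoint might look like the first handler below, which serves account data without ever checking for an authenticated session:

```python
# Illustrative sketch only; the endpoint names and data are invented,
# not taken from CrowdStrike's findings or DeepSeek's output.
from flask import Flask, session, jsonify, abort

app = Flask(__name__)
app.secret_key = "change-me"  # required for Flask's signed session cookies

# Vulnerable pattern: no session check, so anyone can fetch the data.
@app.route("/profile-insecure")
def profile_insecure():
    return jsonify({"email": "user@example.com"})

# Safer pattern: reject requests that lack an authenticated session.
@app.route("/profile")
def profile():
    if "user_id" not in session:
        abort(401)  # no valid session -> unauthorized
    return jsonify({"email": "user@example.com"})
```

The difference is a single guard clause, which is exactly the kind of omission that can pass casual review while leaving an endpoint open to unauthenticated access.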
In some cases the model refused to respond at all, suggesting a possible built-in "kill switch."
These findings, the first of their kind, raise concerns about ideological influences compromising AI safety and reliability.
The research comes as Australia prepares to launch its AI Safety Institute and as global scrutiny of AI governance intensifies.