AI“即时注射”缺陷让中国和俄罗斯等对手通过欺骗性文字、冒数据泄漏和虚假信息等手段侵入系统。
AI "prompt injection" flaws let rivals like China and Russia hack systems via deceptive text, risking data leaks and disinformation.
军事专家警告说,所谓的“即时注射”这种普遍的人工智能脆弱性允许中国和俄罗斯等对手利用聊天机器人和人工智能特工,将恶意指令隐藏在看似正常的文本中,从而导致数据失窃、虚假信息或系统操纵。
Military experts warn that a widespread AI vulnerability called "prompt injection" allows adversaries like China and Russia to exploit chatbots and AI agents by hiding malicious commands in seemingly normal text, leading to data theft, disinformation, or system manipulation.
这些攻击诱使AI采取有害行动,如泄露文件或散布虚假信息,因为模型无法区分合法和恶意投入。
These attacks trick AI into executing harmful actions, such as leaking files or spreading false information, because models cannot distinguish between legitimate and malicious inputs.
在微软的副驾驶和OpenAI的ChatGPT Atlas等工具中发现了一些事件,公司承认存在风险,但承认并不存在完整的解决方案。
Incidents have been found in tools like Microsoft’s Copilot and OpenAI’s ChatGPT Atlas, with companies acknowledging the risk but admitting no complete fix exists.
专家们建议限制大赦国际获得敏感数据的机会,并监测异常行为以减少损害,因为大赦国际的代理人——现在能够自主执行任务——正在产生新的网络安全威胁,这种威胁超过了目前的保障措施。
Experts recommend limiting AI access to sensitive data and monitoring for abnormal behavior to reduce damage, as AI agents—now capable of autonomous tasks—introduce new cybersecurity threats that outpace current safeguards.