Google detects over 100,000 attempts to steal its AI chatbot's knowledge through probing attacks.
Google reports more than 100,000 automated prompts used in a large-scale attempt to reverse-engineer its AI chatbot Gemini, part of a growing wave of "distillation" or "model extraction" attacks.
The efforts, believed to be driven by private companies and researchers, aim to steal proprietary knowledge by probing how the AI reasons through problems.
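The mechanics of such an extraction attack can be illustrated with a toy sketch. Everything here is hypothetical and greatly simplified: the "teacher" stands in for a black-box chatbot API the attacker can only query, and the hidden linear scorer is a stand-in for proprietary model behavior. The attacker logs many automated probes and their responses, then trains a "student" to mimic them.

```python
import random

# Hypothetical "proprietary" model: a hidden linear scorer the attacker
# cannot inspect, only query (stand-in for a black-box chatbot API).
SECRET_W = [2.0, -1.0, 0.5]

def teacher(x):
    # Black-box endpoint: returns a score for a 3-feature input.
    return sum(w * xi for w, xi in zip(SECRET_W, x))

random.seed(0)

# Step 1: send many automated probes and log input/output pairs.
probes = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(1000)]
labels = [teacher(x) for x in probes]

# Step 2: "distill" a student model that mimics the logged behavior,
# using only the query log -- never SECRET_W itself.
student = [0.0, 0.0, 0.0]
lr = 0.1
for _ in range(50):  # SGD epochs over the logged pairs
    for x, y in zip(probes, labels):
        pred = sum(w * xi for w, xi in zip(student, x))
        err = pred - y
        student = [w - lr * err * xi for w, xi in zip(student, x)]

print([round(w, 2) for w in student])  # converges toward SECRET_W
```

The point of the sketch is that nothing in the training loop touches the secret weights; enough probe/response pairs alone let the student replicate the teacher's behavior, which is why rate-limiting and query monitoring are the typical defenses.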
While Google has defenses in place, publicly accessible models remain vulnerable, and experts warn that similar threats could spread to smaller firms running custom AI systems trained on sensitive data.
The attacks appear to be global in origin, but the perpetrators have not been identified.