Google忽略了Gemini AI中隐藏的缺陷, 让黑客通过隐形文字、冒数据风险和错误信息输入恶意命令。
Google ignores a hidden flaw in Gemini AI that lets hackers inject malicious commands via invisible text, risking data and misinformation.
谷歌的Gemini AI(Google’s Gemini AI)最近揭露了“ASCII走私”缺陷, 让攻击者用隐形统一代码字符隐藏文字中的恶意指令,
A newly revealed "ASCII smuggling" flaw in Google’s Gemini AI lets attackers hide malicious commands in text using invisible Unicode characters, tricking the AI into generating false summaries or altering meeting details without user awareness.
与ChatGPT和Copilot等竞争性AI模型不同,Gemini无法检测或阻止这些输入.
Unlike competing AI models such as ChatGPT and Copilot, Gemini fails to detect or block these inputs.
尽管Gmail、Docs和日历内依赖Gemini的用户面临风险, 但Google拒绝解决这个问题, 将其贴上“社会工程”问题标签,
Google has refused to fix the issue, labeling it a "social engineering" problem rather than a security flaw, despite the risk to users relying on Gemini within Gmail, Docs, and Calendar.
批评者警告这一决定使企业易受数据泄漏和错误信息的影响,特别是因为AI系统日益使敏感任务自动化。
Critics warn the decision leaves enterprises vulnerable to data leaks and misinformation, especially as AI systems increasingly automate sensitive tasks.