AI deepfakes falsely implicated an ICE officer and misidentified victims in a Jan 2026 Minneapolis shooting, spreading rapidly online despite no real footage.
AI-generated deepfakes of the victim and shooter in a January 2026 fatal shooting in Minneapolis spread rapidly on social media, falsely revealing the ICE officer’s identity, digitally altering images of the victim, and misidentifying unrelated individuals.
Despite no authentic footage showing the officer’s face, tools like Elon Musk’s Grok were used to create hyper-realistic but fabricated images, including explicit and dehumanizing depictions.
Misinformation falsely linked the officer to a man named Steve Grove and misrepresented other individuals, while a clip of Florida Gov. Ron DeSantis was falsely presented as commentary on the incident.
Experts confirmed that the officer is Jonathan Ross and that the victim was Renee Nicole Good.
The incident highlights the dangers of AI-fueled disinformation in breaking news, eroding public trust and distorting reality.