Australian researchers have created Silverer, a tool that poisons images to block AI abuse; it is now being tested with the AFP.
Australian researchers have developed a prototype tool called Silverer that uses data poisoning to protect personal images, aiming to fight AI-generated child abuse material and deepfake scams.
Created by the AFP and Monash University’s AiLECS Lab, the tool subtly alters photos before they are shared online, embedding invisible patterns that confuse AI models.
When criminals try to use these images to train an AI model, the results come out blurry or distorted.
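The article does not disclose how Silverer's perturbations are constructed. As a purely illustrative sketch of the general idea behind imperceptible image perturbation, the hypothetical `poison_image` function below shifts each 8-bit pixel by only a few intensity levels, too small for a person to notice, while embedding a consistent pseudorandom pattern. This is not the AFP/Monash algorithm, only a minimal demonstration of the concept.

```python
import numpy as np

def poison_image(pixels: np.ndarray, strength: int = 3, seed: int = 0) -> np.ndarray:
    """Illustrative sketch (not Silverer's actual method): embed a
    low-amplitude pseudorandom pattern into an 8-bit image array.

    Each pixel shifts by at most `strength` levels out of 255, so the
    change is imperceptible to a viewer, yet the pattern is consistent
    enough that a model trained on many such images can absorb it as
    spurious signal.
    """
    rng = np.random.default_rng(seed)
    # Pseudorandom perturbation in [-strength, strength].
    pattern = rng.integers(-strength, strength + 1, size=pixels.shape)
    # Widen to int16 to avoid uint8 overflow, then clip back to valid range.
    poisoned = pixels.astype(np.int16) + pattern
    return np.clip(poisoned, 0, 255).astype(np.uint8)

# Example: a flat 4x4 grayscale "photo" at mid-gray.
image = np.full((4, 4), 128, dtype=np.uint8)
poisoned = poison_image(image)
# Per-pixel change stays within the strength bound.
max_shift = int(np.abs(poisoned.astype(int) - image.astype(int)).max())
```

Real poisoning tools use far more sophisticated, model-aware perturbations; the point of the sketch is only that the change to each pixel can be kept below human perceptual thresholds.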
Currently in testing with the AFP, the tool aims to create barriers to malicious AI use and may eventually be made available to the public to help protect digital identities.