AI-generated child sexual abuse images, including deepfakes, have surged globally, affecting 1.2 million children, prompting UNICEF to urge stronger laws and tech safeguards.
A surge in AI-generated sexualized images of children, including deepfakes created via "nudification," has alarmed UNICEF, which reports at least 1.2 million children affected globally in the past year.
The organization warns that such content, even when synthetic, constitutes child sexual abuse material (CSAM) and causes real psychological harm. It urges governments to expand laws to criminalize AI-generated CSAM and calls on tech developers to implement safety-by-design measures.
Despite temporary bans on platforms such as X after its AI chatbot Grok enabled such content, users bypassed the restrictions, highlighting the need for targeted regulation.
Singapore's new Online Safety (Relief and Accountability) Act empowers a commission to issue takedown orders and support civil suits, while experts stress that reducing societal tolerance of such material is key to protecting victims.