LAION removes 2,000 child abuse imagery links from AI image-generator database.
AI researchers at LAION have removed more than 2,000 links to suspected child sexual abuse imagery from their database, which has been used to train popular AI image generators such as Stable Diffusion and Midjourney. In response to earlier findings that the database contained links contributing to the production of photorealistic deepfakes depicting children, LAION worked with watchdog groups and anti-abuse organizations to clean up the dataset and release a revised version for future AI research. The next step, however, is to withdraw tainted models that are still able to produce child abuse imagery.