Meta expands teen safety on Instagram with AI tools to detect self-harm, bullying, and explicit content.
Meta has expanded its teen safety initiatives on Instagram, introducing new AI-powered tools to detect and address harmful content.
The company says the updates will better identify risks such as self-harm, bullying, and explicit material, and will provide automated alerts and support resources.
These changes are part of a broader effort to improve online safety for teens, following increased scrutiny over social media’s impact on youth mental health.