Whistleblowers say TikTok and Meta boosted engagement over safety in 2026, spreading harmful content despite evidence of user harm.
Whistleblowers allege TikTok and Meta prioritized user engagement over safety in 2026, allowing harmful content like hate speech, violence, and extremism to spread, especially on Instagram Reels and TikTok’s algorithm-driven feeds.
Internal reports show leadership pressured teams to relax content restrictions to compete, despite evidence of increased harm.
Users, including teens, reported being radicalized by algorithmic recommendations, while safety teams were under-resourced and ignored.
Both companies deny wrongdoing, citing new safety tools, but insiders say financial incentives still outweigh user wellbeing.