A Chicago judge warns that immigration agents' use of AI in use-of-force reports risks inaccuracies and erodes trust.
A federal judge in Chicago has raised alarms over immigration agents using AI tools like ChatGPT to draft use-of-force reports, citing discrepancies between AI-generated narratives and body camera footage.
In a footnote, Judge Sara Ellis questioned the reliability of reports drafted from minimal input, warning that the practice risks inaccuracies, undermines legal standards, and erodes public trust.
Experts say AI can distort officer perspectives, create misleading accounts, and expose sensitive data if public platforms are used.
The Department of Homeland Security has not commented, and no clear federal policy governs agents' use of AI in drafting such reports.
Some states now require labeling of AI-generated content, but widespread safeguards remain absent.