Since early 2024, hackers have used AI to boost phishing, scams, and influence operations, targeting Taiwan, U.S. schools, and critics of the Chinese Communist Party, but no new threats have emerged.
OpenAI’s latest threat report reveals that since early 2024, malicious actors, including scammers and state-backed groups, have increasingly used AI tools such as ChatGPT, Claude, and DeepSeek to enhance existing cybercrime tactics like phishing, scam content creation, and influence operations, rather than to develop new ones.
The company has disrupted more than 40 malicious networks, with attacks targeting Taiwan’s semiconductor sector, U.S. academic institutions, and critics of the Chinese Communist Party, often using AI to automate messages, build fake investment sites, and fabricate financial advisor personas.
While AI improves the efficiency and scale of these operations, no fundamentally new threats have emerged.
Some actors are adapting to evade detection, for example by removing stylistic markers such as em dashes from generated text.
Notably, AI is now used three times more often to detect scams than to create them, highlighting its dual role in both enabling and combating cybercrime.
OpenAI’s safeguards continue to block clearly malicious requests.