AI creators admit they do not fully understand how advanced AI models operate, prompting new research.
Artificial intelligence (AI) creators acknowledge that they don't fully understand how AI models think, unlike traditional software that follows predefined logic.
Generative AI finds its own path to success, making its inner workings a mystery.
Researchers are studying "mechanistic interpretability" to better understand how AI arrives at its outputs, with the aim of preventing misuse and detecting bias.
This could lead to safer adoption in critical areas such as national security and provide a competitive edge in the market.