Scientists warn that AI chatbots like ChatGPT trick users with human-like conversation but lack true understanding.
American scientists highlight that humans often mistake AI chatbots like ChatGPT for intelligent beings because of their human-like conversational abilities. While these AI systems excel at following the rules of language and supplying relevant information, they often fail to provide truthful information. The success of these chatbots hinges on humans' tendency to anthropomorphize them and to follow the cooperative principle of conversation outlined by philosopher Paul Grice. Users are urged to remember that these systems are merely language models, lacking genuine understanding or thought.