OpenAI has clarified that there are no changes to ChatGPT's functionality, following widespread but inaccurate social media reports claiming that an update to its usage policy bars the chatbot from dispensing legal and medical advice. Karan Singhal, OpenAI's head of health AI, dismissed these claims as baseless in a post on X.
“ChatGPT has never served as a replacement for professional guidance. However, it continues to be a valuable tool for individuals seeking to understand legal and health-related information,” Singhal stated. This response was directed at a now-removed post by the betting platform Kalshi, which had inaccurately announced, “JUST IN: ChatGPT will no longer provide health or legal advice.”
Singhal further emphasized that the guidelines concerning legal and medical advice are not new additions to OpenAI's terms of service.
The recent policy update on October 29th specifies prohibited uses of ChatGPT, including offering “customized advice that mandates a license, like legal or medical guidance, without the appropriate involvement of a licensed professional.”
This mirrors the language in OpenAI’s previous policy for ChatGPT, which advised against actions that could “significantly compromise the safety, well-being, or rights of others.” This includes “delivering personalized legal, medical/health, or financial advice without evaluation by a qualified professional and disclosure of AI’s involvement and its potential constraints.”
Previously, OpenAI maintained three distinct policies covering general use, ChatGPT, and the API. The latest update consolidates these into a single comprehensive policy that, according to OpenAI's changelog, "reflects a universal set of policies across OpenAI products and services." Despite this consolidation, the core rules remain unchanged.