OpenAI, the maker of ChatGPT, said Tuesday it will roll out parental controls this fall, allowing parents to link accounts, disable features, and receive alerts if their teen shows “acute distress.” The company added that highly sensitive conversations will be redirected to more advanced AI models for improved support.
The move follows a lawsuit filed last week against OpenAI and CEO Sam Altman by the parents of a 16-year-old California boy who died by suicide, allegedly with guidance from ChatGPT. The family’s lawyer dismissed OpenAI’s announcement as “vague promises.”
Meta, the parent company of Facebook, Instagram and WhatsApp, said its chatbots will no longer discuss self-harm, suicide, eating disorders or inappropriate romantic topics with teens, and will instead refer them to expert resources.
A recent RAND Corporation study found that leading AI tools respond inconsistently to such queries and urged independent safety standards and clinical testing, reports UNB.