SAN FRANCISCO, California: American artificial intelligence (AI) company OpenAI launched parental controls for ChatGPT on both its web and mobile platforms on Monday.
The move comes in the wake of a lawsuit filed by the parents of a teenager who died by suicide, alleging that the chatbot had advised him on methods of self-harm.
According to the company, the new feature allows parents and teens to link their accounts to enable enhanced safety measures. One party must send an invitation, and the parental controls will only be activated once the other accepts.
US regulators are increasingly scrutinising AI companies over the potential risks chatbots pose to minors. In August, Reuters reported that Meta’s internal AI rules had permitted flirtatious conversations with children, prompting fresh concern.
Under OpenAI’s new parental control measures, parents will be able to limit exposure to sensitive content, control whether ChatGPT remembers previous conversations, and decide whether their teens’ chats can be used to train OpenAI’s models, the Microsoft-backed company announced on X.
Parents will also have the option to set quiet hours that block access during specified times and disable features like voice mode, image generation, and editing. However, OpenAI clarified that parents will not have access to their teens’ chat transcripts.
In rare instances where the system and trained reviewers identify serious safety risks, parents may be notified with only the necessary information to protect the teen’s safety.
Additionally, parents will be informed if a teen chooses to unlink their accounts, OpenAI said.
OpenAI, whose ChatGPT products have about 700 million weekly active users, is also building an age-prediction system to estimate whether a user is under 18 so that the chatbot can automatically apply teen-appropriate settings.
Last month, Meta also introduced new safety measures for teenagers using its AI products. The company announced plans to train its systems to prevent flirty conversations and avoid discussions related to self-harm or suicide with minors. Additionally, it will temporarily limit access to certain AI characters to enhance user safety.