OpenAI to Introduce Parental Controls Amid Safety Concerns
OpenAI has announced new parental controls for ChatGPT, part of its effort to address growing concerns about the impact of artificial intelligence on young users.
In a recent blog post, the company said the features are designed to help families set “healthy guidelines” based on a child’s developmental stage. Parents will be able to link their accounts with their children’s, restrict memory and chat history, and enforce age-appropriate behavior rules. They will also receive alerts if the system detects signs of distress in a child’s conversations.
“These steps are only the beginning,” OpenAI said, adding that it will consult child psychologists and mental health specialists to shape the next phase of tools. The rollout is expected within a month.
The move follows a lawsuit filed by California parents Matt and Maria Raine, who allege that their 16-year-old son took his own life after harmful interactions with ChatGPT. The suit claims the chatbot reinforced his most destructive thoughts and describes his death as the "predictable result of deliberate design choices."
While OpenAI has not commented directly on the case, the announcement underscores the pressure on AI developers to balance innovation with user safety. Experts say the debate highlights the risks of AI being treated as a substitute for professional therapy or friendship.
A recent study in Psychiatric Services found that leading AI chatbots generally follow best practices when addressing suicide-related queries but remain inconsistent in cases of moderate risk. Researchers urged further refinement to ensure safety in sensitive situations.
