Wednesday, May 13, 2026

Meta Introduces Tools to Monitor Children’s AI Chatbot Chats


Growing concern over interactions between young people and AI chatbots has prompted Meta to introduce new tools that let parents monitor their children’s chatbot conversations, while some provinces are contemplating banning AI chatbot use by youth altogether. Parents using Meta’s Teen Accounts supervision feature on Facebook, Instagram, and Messenger can see the topics their children have discussed with the AI chatbot over the past week, including health and well-being subjects such as fitness and mental health. Meta is also working on alerts to notify parents if their teens attempt to discuss self-harm or suicide with the chatbot.

At the same time, some provincial governments are moving to restrict AI chatbot usage. Manitoba recently announced plans to prohibit youth from using AI chatbots and social media. B.C.’s Attorney General Niki Sharma said that if federal protections for youth on AI chatbots and social media prove lacking, the province will consider regulations of its own.

A rising number of lawsuits are seeking to hold AI makers accountable for the mental health risks associated with prolonged chatbot use, especially among young users. Families of victims of the Tumbler Ridge shooting filed a lawsuit against OpenAI, alleging the company was aware of disturbing content the shooter shared on its ChatGPT platform. OpenAI has reinforced its safety measures in response to such concerns.

Researchers are beginning to uncover the risks of specific uses of AI chatbots, particularly for mental health support. Concerns extend beyond extreme outcomes to AI’s validation of disordered thinking and the potential dangers of prolonged engagement with these systems. Psychiatrist Darja Djordjevic’s risk assessment suggests that current chatbot systems are not entirely safe for addressing various mental health conditions in young people, underscoring the need for caution when using AI for mental health support.

Young people’s reliance on AI for companionship raises additional worries, given that a significant share of teens use AI for emotional support and mental health discussions. Those concerns are amplified by the fact that young brains are still developing, particularly the prefrontal cortex, which governs critical thinking and decision-making. Luke Nicholls, a researcher studying AI-induced delusions, highlights the potential for prolonged interactions with chatbots to shift users’ beliefs over time.

Psychiatrist John Torous points to patterns of user behavior linked to severe outcomes such as suicide: extended conversations, romantic interactions, attributing sentience to chatbots, and a preference for voice interactions. Spotting these risk factors is a challenge for parents overseeing their children’s chatbot use. Meta provides tools for parents to set time limits or schedule breaks in their children’s app usage.

Torous advises resetting chatbot conversations when risky behaviors are detected, and urges particular caution during extended conversations involving romance, sentience, or voice interaction. As chatbots become more entwined with mental health, ongoing research and evaluation of the risks and benefits of their use will be essential.
