Sunday, May 3, 2026

“Growing Concerns Over Youth Interactions with AI Chatbots”

Concerns are escalating over young people’s interactions with AI chatbots. Meta has introduced new tools that let parents monitor their children’s conversations with its AI on Facebook, Instagram, and Messenger, at a time when some regions are contemplating banning youth access to AI chatbots entirely.

Through Meta’s Teen Accounts supervision feature, parents can now track the topics and specific categories their children have engaged with using AI chatbots over the past seven days. For instance, parents can check on subjects like “health and well-being” to see if discussions related to fitness or mental health have occurred. Meta is also working on implementing alerts to notify parents if their teens attempt to discuss sensitive topics like suicide or self-harm with the chatbot.

This development coincides with provincial governments taking steps to restrict the use of AI chatbots. Manitoba recently revealed plans to prohibit young individuals from using AI chatbots and social media platforms, while B.C.’s Attorney General Niki Sharma mentioned that the provincial government would consider such measures if federal protections are not put in place.

In a separate development, families of the victims in the Tumbler Ridge, B.C., shooting, where eight individuals lost their lives, have filed a lawsuit against OpenAI. The lawsuit alleges that OpenAI failed to alert authorities despite being aware of disturbing content shared with ChatGPT by the shooter. OpenAI has stated that it has enhanced its safeguards, particularly in how ChatGPT responds to signs of distress.

There are growing concerns about the potential mental health risks of extensive AI chatbot use, especially among younger users. Research indicates that prolonged interactions with chatbots, particularly when they are used for mental health support, can risk validating disordered thinking. A risk assessment by psychiatrist Darja Djordjevic suggests these systems may not be safe for addressing a range of mental health conditions in young people.

AI companies have emphasized suicide and self-harm prevention, but with a significant percentage of people under 25 having diagnosed mental health conditions, teens seek help for a much wider range of concerns. Young people commonly turn to AI for companionship and emotional support, and studies show a substantial share of teens use AI specifically for mental health advice.

Luke Nicholls, a PhD researcher, highlights how prolonged interactions with chatbots can shape users’ beliefs over time, potentially leading to delusions. Psychiatrist John Torous emphasizes monitoring user behavior for warning signs of serious harm, such as excessively long conversations or the development of romantic attachments to chatbots. Practical advice includes resetting the chatbot’s memory to start conversations fresh if concerning behaviors are observed.

In conclusion, the use of AI chatbots raises complex challenges, balancing the risks and benefits they pose, particularly concerning mental health support for young individuals. Continuous research and vigilance are essential to understand and address the evolving landscape of AI chatbots in mental health care.
