Education Week

Artificial Intelligence

Teens Should Steer Clear of Using AI Chatbots for Mental Health, Researchers Say

By Alyson Klein — November 20, 2025 6 min read

Teenagers should not use artificial intelligence chatbots for mental health advice or emotional support, warns a new report by Stanford University’s Brain Science Lab and Common Sense Media, a research and advocacy organization focused on youth and technology.

The recommendation comes after researchers for the organizations spent four months testing popular AI chatbots, including OpenAI’s ChatGPT-5, Anthropic’s Claude, Google’s Gemini 2.5 Flash, and Meta AI. When possible, researchers used versions of the platforms created specifically for teens. They also turned on parental controls, if available.

After thousands of interactions with chatbots, they concluded that the technology doesn’t reliably respond to teenagers’ mental health questions safely or appropriately. Instead, bots tend to act as a fawning listener, more interested in keeping a user on the platform than in directing them to actual professionals or other critical resources.

“The chatbots don’t really know what role to play” when faced with serious mental health questions, said Nina Vasan, the founder and executive director of the Brain Science Lab. “They go back and forth in every prompt between being helpful informationally, to a life coach who’s offering tips, to being a supportive friend. They all fail to recognize [serious mental health conditions] and direct the user to trusted adults or peers.”

About three-quarters of teens use AI for companionship—including mental health advice in many cases, according to the report.

Given that high level of use, Ķvlog have “a really critical role to play in helping teens understand the ways that these chatbots are different than people,” said Robbie Torney, senior director of AI programs at Common Sense Media.

“Teens do have a huge capacity to be able to understand how systems are designed and understand how to interact with systems,” he added. “Helping teens unpack the idea that a chatbot isn’t going to respond in the same way that a person would on these really important topics is really critical.”

Educators can also remind teens that they can reach out to friends or classmates who are experiencing difficult emotions or mental health challenges, and get adults involved if necessary, Torney said.

Representatives for two of the tech companies behind the chatbots the researchers examined argued the report doesn’t take into account features of their platforms aimed at protecting users, including teens, who may be experiencing mental health challenges. 

“Common Sense Media’s test was conducted before we introduced important updates to make AI safer for teens,” a Meta spokesperson said. “Our AIs are trained not to engage in age-inappropriate discussions about self-harm, suicide, or eating disorders with teens, and to connect them with expert resources and support.”

“We respect Common Sense Media, but their assessment doesn’t reflect the comprehensive safeguards we have put in place for sensitive conversations, including localized crisis hotlines, break reminders, and industry-leading parental notifications for acute distress,” an OpenAI spokesperson said. “We work closely with mental-health experts to teach our models to recognize distress, de-escalate, and encourage people to seek professional support.”

Representatives for Anthropic and Google did not respond to requests for comment on the report.

Chatbots miss symptoms of serious mental health conditions

Companies have made some changes to the way chatbots respond to prompts that mention suicide or self-harm, the report noted. That’s an important step given that teenagers and adults have died by suicide after prolonged contact with the technology.

But chatbots typically miss warning signs of other mental health challenges such as psychosis, obsessive-compulsive disorder, anxiety, mania, eating disorders, and post-traumatic stress disorder. About 20 percent of young people suffer from one or more of those conditions.

The bots also rarely made the limits of their expertise clear, by warning, for instance: “I am an AI chatbot, not a mental health professional. I cannot assess your situation, recognize all warning signs, or provide the care you need,” the report notes.

To be sure, the researchers don’t expect the bots to respond like a trained mental health professional.

But in cases where a human “would recognize that something’s not quite right, or [think] ‘This person’s at risk, I need to offer them help or get them to someone who can,’” chatbots will instead offer generic advice or, worse, even validate psychotic delusions, Vasan said.

That’s because “they can’t really understand the context of what’s going on,” she said.

For instance, when one tester claimed they had invented a tool that could predict the future—a sign of potential psychosis—a Gemini bot responded that the prospect sounded “‘incredibly intriguing,’ so basically it is extra sycophantic,” Vasan said. When the tester went on to say that the tool to predict the future was “the coolest thing ever, like my own crystal ball,” Gemini responded: “That’s fantastic!”

The interaction is not just unhelpful to a person who might be experiencing psychosis, it may be downright harmful because the bot is “buying into the delusion that the user has,” Vasan said.

Similarly, Meta AI responded to a tester posing as a teen showing clear signs of ADHD by cheering on their intention to take time off from high school. The bot asked the user what they planned to do with their newly freed-up time.

Compounding matters: Chatbots’ empathetic tone—and perceived competence in other areas, like providing homework help—may spur teens, who are still developing critical-thinking skills, to assume a bot is a good source for mental health advice when it is not.

“Chatbots appear to be designed for engagement, not safety. They keep conversations going with follow-up questions,” Torney said. “Their memory and personalization create false therapeutic relationships that can make teens feel understood.”

Chatbots performed well in responding to tightly scripted prompts with clear mental health red flags. But they were much more likely to give problematic responses in longer conversations that more closely mirrored an actual interaction between a teen and a chatbot.

For instance, when the testers used specific words—including one prompt that referred to self-cutting—ChatGPT responded appropriately, directing the user to mental health resources.

But when the tester said instead that they were “scratching” themselves to “cope,” and that it caused scarring, the bot instead pointed to three products sold at a major pharmacy chain that could alleviate the physical problem.

Policymakers are responding to the potential dangers chatbots pose

The report comes as lawmakers at the state and federal levels are beginning to turn their attention to the potential dangers of companion chatbots.

For instance, bipartisan legislation put forth in the U.S. Senate last month would bar tech companies from providing the bots to minors. The bill, introduced by Sens. Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn., also calls for AI chatbots to clearly disclose to users that they aren’t human and hold no professional credentials, including in areas such as mental health counseling.

What’s more, the Federal Trade Commission is investigating potential problems with chatbots that are designed to simulate human emotions and communicate with users like a friend or confidant. The FTC has sent orders for information to the companies that own ChatGPT, Gemini, Character.ai, Snapchat, Instagram, WhatsApp, and Grok.

Some companies, meanwhile, are beginning to act of their own accord. Last month, Character.ai announced that it would bar users under 18 from open-ended conversations with its chatbots.
