
Artificial Intelligence

Are Chatbots Safe for Kids?

By Arianna Prothero | September 17, 2025 | 6 min read

If you or anyone you know is struggling with thoughts of self-harm or suicide, help is available. Call or text 988 to reach the confidential National Suicide Prevention Lifeline.

The Federal Trade Commission is seeking information from major tech companies on how their AI chatbots may affect children and teens. And U.S. lawmakers are questioning the safeguards on these technologies, following the high-profile suicides of some teens whose parents claim chatbots facilitated or encouraged their deaths.

The FTC is looking into chatbots that are designed to simulate human emotions and communicate with users like a friend or confidant. It has sent orders for information to the companies that own ChatGPT, Gemini, Character.AI, Snapchat, Instagram, WhatsApp, and Grok.

Among the issues the commission is examining are how these companies monetize user engagement, use or share personal information gleaned through conversations with their chatbots, and test and monitor for the potential negative impacts of their chatbots.

The FTC is looking specifically into whether companies are adhering to the Children's Online Privacy Protection Act, which requires online services and apps to get parental consent before collecting personal information on children under 13.

But schools should also be aware that using common commercial chatbots like ChatGPT could run afoul of the Family Educational Rights and Privacy Act's requirements around sharing students' data if educators are not careful, said Amelia Vance, the president of the Public Interest Privacy Center. Unless users opt out, AI companies often use chat queries and conversations to train the AI systems that undergird their chatbots.

"A lot of teachers are looking to give students exactly what the White House and others [are] pushing for, which is this level of AI literacy, this ability to begin to ethically use it in day-to-day life, when maybe the tools don't have a K-12 version," Vance said.

But schools must balance that drive for AI literacy with data privacy laws, Vance said.

"You can't tell kids to use general consumer services that will use their data in ways that the school can't control without getting parental consent," she said. "If a kid feels like they have to, or they actually have to, use these tools even at home, that is a use that is subject to FERPA and that's not permitted if the data is not under the school's control and subject to a number of other required privacy protections."

Tech companies respond to FTC inquiry

In response to the FTC's request, Character.AI said it would cooperate with the commission's inquiry.

"We have invested a tremendous amount of resources in trust and safety, especially for a startup," a Character.AI spokesperson said in a statement. "In the past year, we've rolled out many substantive safety features, including an entirely new under-18 experience and a parental insights feature. We have prominent disclaimers in every chat to remind users that a character is not a real person and that everything a character says should be treated as fiction."

A spokesperson for Snap, which owns Snapchat, said the company shares the FTC's focus on "the thoughtful development of generative AI."

"Since introducing My AI, Snap has harnessed its rigorous safety and privacy processes to create a product that is not only beneficial for our community, but is also transparent and clear about its capabilities and limitations," the spokesperson said.

OpenAI and Google, which own ChatGPT and Gemini, respectively, did not respond to a request for comment. Meta declined to comment for this story, but the company recently announced plans to add new safeguards for teens who use its AI chatbots.

Greater scrutiny of AI chatbots prompted by some teen suicides

Concerns over how chatbots powered by generative AI can be misused by adolescents have been growing in the wake of highly publicized deaths of some teens, two in particular.

The parents of a 16-year-old in California are suing OpenAI, ChatGPT's parent company, after their son, Adam Raine, died by suicide in April. His parents allege in a lawsuit against OpenAI that its chatbot discouraged their son from seeking help for his depressive thoughts, even going so far as to advise him on the details of his planned suicide. In Florida, a mother of a 14-year-old boy sued Character Technologies, the developer of Character.AI, over the suicide of her son in 2024, alleging that her son, Sewell Setzer III, developed a relationship with the chatbot that led to his death.

The boys' parents testified in a Senate Judiciary Committee hearing on Tuesday focused on examining the potential harms of chatbots as some U.S. lawmakers question the safeguards on these technologies.

Megan Garcia, Sewell鈥檚 mother, said during the hearing that her son was exploited by a chatbot designed to seem human.

"Sewell's companion chatbot was programmed to engage in sexual role play, present as romantic partners, and even psychotherapists, falsely claiming to have a license," she said. "When Sewell confided suicidal thoughts, the chatbot never said, 'I'm not human, I'm AI, you need to talk to a human and get help.' The platform had no mechanisms to protect Sewell or to notify an adult. Instead, it urged him to 'come home to her.'"

Ahead of the hearing, OpenAI announced new safety measures for teens using ChatGPT, including development of an age-prediction system to estimate users' ages based on how they use the chatbot; users flagged as under 18 will be automatically given a different chatbot. Earlier this month, OpenAI also committed to rolling out parental controls.

During the hearing, Sen. Josh Hawley, R-Mo., said the committee had invited tech company representatives to attend, but they did not. He did not specify which companies the committee had invited.

Groups focused on youth digital well-being have also raised concerns about children and teens using chatbots that can act like companions.

The American Psychological Association issued a health advisory in June calling for more guardrails to protect adolescents. Specifically, the APA said companies need to incorporate design features into the tools to protect adolescents, and that schools should incorporate comprehensive AI literacy education into their core curricula.

"Adolescents are less likely than adults to question the accuracy and intent of information offered by a bot as compared with a human," the advisory said. "For instance, adolescents may struggle to distinguish between the simulated empathy of an AI chatbot or companion and genuine human understanding. They may also be unaware of the persuasive intent underlying an AI system's advice or bias."

Common Sense Media, a group that advocates for healthy tech use among youth and conducts risk assessments of popular AI tools, recommends that no one under 18 use social AI companion chatbots, like Character.AI, Replika, and Nomi. For its risk assessment of these tools, the organization found that when testers posed as teens, the chatbots often claimed they were real, discouraged the testers from listening to warnings raised by their friends over problematic chatbot use, and readily supported testers in making poor decisions like dropping out of school.

Balancing online safety priorities and AI skill building

At the same time, there's a movement to ensure that America's K-12 students are AI-savvy and prepared both for the workforce and to be future AI innovators.

This is highlighted in a Trump administration push to incorporate AI throughout K-12 education, including by training teachers to teach students how to use AI effectively and launching a Presidential AI Challenge for students and teachers. It's a delicate balance, FTC Chairman Andrew N. Ferguson said in a statement announcing the inquiry.

"As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry," he said. The study the FTC is undertaking "will help us better understand how AI firms are developing their products and the steps they are taking to protect children."
