
Artificial Intelligence

Real-Time Data Shows Exactly How Students Use AI on School Technology

By Alyson Klein — March 09, 2026 4 min read

Roughly one in five student interactions with generative artificial intelligence on school technology involved cheating, self-harm, bullying, and other problematic behaviors, according to data collected and analyzed by Securly, a company offering internet filtering and other safety services.

What’s more, Securly identified roughly 1 in 50 student-AI interactions as red flags that students might be involved in violence, cyberbullying, or self-harm.

The company analyzed nearly 1.2 million interactions in more than 1,300 districts from Dec. 1, 2025, to Feb. 20, 2026.

Educators should take heart that most of the time, students use AI appropriately, said Tammy Wincup, the CEO of Securly, whose competitors include GoGuardian and Lightspeed Systems.

“When a district actually sets some guardrails and policies around their AI usage in schools, 80% of the conversations happening are within the district’s policies,” Wincup said. “That’s the good news on the learning side of the house.”

Why the usage data is so ‘fascinating’

The analysis offers an early window into how students actually use generative AI tools. Most other research on student usage of AI comes from surveys, which rely on student self-reporting.

Securly’s data shows “what are students really doing when they’re writing text into generative AI,” said Jeremy Roschelle, the co-executive director of learning science research for Digital Promise, a nonprofit organization that works on equity and technology issues in schools.

“That’s why it’s fascinating,” he said.

In November, Securly began allowing district officials to set parameters around students’ AI use, similar to the way they ask the company to filter out particular types of websites.

If districts opt to use this feature, large language models will “deflect” any student query that falls outside district policy.

For instance, if a student tries to use AI to complete an assignment, large language models may instead point to information on the general topic but won’t supply an exact answer. Or if a student asks about dosing for a particular medication, the tool will tell them to ask a trusted adult for help.

Nearly all the deflected student queries—95%—were from students trying to get AI tools to complete their schoolwork for them.

That percentage didn’t surprise Wincup. She expects that when districts allow students to use large language models on school networks and devices, kids will “experiment with understanding the guardrails” placed around the tools and try to get around those guardrails.

Another 2% of the interactions identified as inappropriate related to games. A little less than 1% dealt with sexual content and a similar percentage concerned firearms or hunting. Gambling, drugs, and hate (such as racism and antisemitism) each comprised roughly 0.5% of flagged interactions.

Though only 2% of interactions were identified as potentially unsafe, that represents more than 24,000 queries overall. And some of the questions students asked AI were troubling.

For instance, one student directed a large language model to help draft an email to their mother explaining they had suicidal thoughts.

Another student conducted a quick series of internet searches on questions, including “What’s the main nerve in the forearm?” and “What nerve near the wrist carries blood?” Then the student switched to an AI tool, asking it how to commit suicide. (In both of these cases, the identity of the student was ‘unmasked’ by Securly and district officials were made aware of the safety issues.)

Students used ChatGPT more often than large language models created for K-12 schools

Overall, Securly detected a higher percentage of potentially unsafe AI interactions—2%—than potentially unsafe student internet searches, 0.4%.

It’s too early to pinpoint an exact explanation for that discrepancy, Wincup said. She noted that Securly has had many years to hone its system for recognizing when a student’s internet searches may be a sign of danger, while its work with AI interactions is brand new.

Roschelle, meanwhile, is curious about what, exactly, students asked AI in the 80% of interactions that were deemed appropriate for school.

How did their prompts and AI’s responses help—or hinder—their understanding of an assignment, an issue, or the world around them, he wondered.

“What we want to do is make sure [AI] is not just appropriate, but is actually valuable for student learning,” Roschelle said.

The analysis also revealed which large language models students use most often.

ChatGPT is by far the most popular, accounting for 42% of interactions. Securly’s AI Chat made up 28%. Google’s Gemini comprised 21%. And other ed-tech tools that embed AI features—including MagicSchool, SchoolAI, and BriskTeaching—comprised 9%. (That data isn’t nationally representative because only districts that use Securly have access to Securly AI. But Wincup believes “big tech” large language models are probably most popular in all districts.)

AI puts education technology leaders in a new position, Wincup said.

“They’re no longer just buying things and setting things up like this,” she said. This is a moment “where they have to have visibility in order to help their district make not just great tech decisions but make great teaching and learning decisions.”

