When a bag of Doritos triggers a police response, something in our approach to school safety has gone terribly wrong. Last month in Baltimore, an artificial intelligence system mistook a bag of chips for a gun, leading officers to confront a student at gunpoint, handcuff him, and force him to his knees. Only after the arrest did they notice the crumpled chip bag.
This was a moment that could have ended in tragedy. Many Black families would have been reminded of Trayvon Martin, who was killed while holding a bag of Skittles, or, closer to home in Baltimore, Freddie Gray, who died in police custody.
This incident isn’t an anomaly or a glitch. It’s a symptom of a growing, and troubling, trend: schools becoming test sites for unproven AI technology.
From safety plans to surveillance systems
I began my career as a teacher.
I have seen how quickly “safety initiatives” can turn into surveillance. I have witnessed students body-slammed, arrested, and suspended for behavior that should have been met with support. Over time, metal detectors, school resource officers, and now AI-based “threat detection” tools have transformed too many school hallways into something resembling a security checkpoint, not a place of learning.
Security companies promise that AI can spot weapons before violence erupts, and police are expanding their use of these detection tools. But these programs often fail to work as advertised. They are trained on the same biased data that have produced well-known policing injustices; they absorb our societal biases and build them into the underlying algorithms. The result is systems that sanitize bias by translating it into “just numbers.”
When one digs further into the research behind many of these “unbiased” systems, often billed as fail-safes, their credibility starts to crumble. Gun-detection tools are prone to false positives, especially in varied lighting or crowded spaces. That should alarm anyone who has set foot in a crowded school hallway during a class change, with students darting left and right.
Meanwhile, facial-recognition systems misidentify people with darker skin at far higher rates. Again, this should be concerning from both an equity and a safety standpoint.
It is difficult to challenge these disparities when we treat the biased algorithms that produce them as “neutral.” That was the case in Baltimore, where the developer insisted the system “functioned as intended.” If pointing guns at a child holding a snack counts as success, then we must question these systems.
The opportunity cost of tech-first safety
District leaders are under extraordinary pressure from rising concerns about gun violence, community calls for safer schools, and grants that make new security technology appealing. But we should ask: Why are unproven surveillance tools being piloted in schools before they are tested anywhere else?
We should be especially wary of these tools being piloted in schools serving Black and Latino students. It continues a long history of perpetuating the myth that children of color are more violent and require enhanced security measures to keep them from becoming “superpredators.”
Schools are supposed to be places where young people learn, feel supported, and make mistakes without fear of criminalization. But every dollar spent on an AI camera system is a dollar not spent on school counselors or mental health support. Those are school safety solutions with real evidence behind them.
Educators must ask: Why are government grant dollars being eagerly thrown at companies pitching AI counselors, AI therapists, and now AI security systems before we hire more human school counselors and therapists?
Technology cannot build relationships with students. It cannot mediate conflict on the playground. And it cannot heal historical trauma or repair trust. At the end of the day, school staff will always know their students’ quirks and personalities, and they will build trust far more quickly than any AI system.
A safer path forward
School leadership should treat AI safety tools, and the hype that surrounds them, with a healthy dose of skepticism. Ideally, these systems should be a last resort, and communities should instead prioritize alternatives like increasing school counselors and support staff. Further, any AI security system requires public transparency and community input. There must also be real-time human oversight to review life-threatening alerts, assess bias and civil rights implications, and follow clear protocols to prevent unnecessary escalation. In short, communities must demand accountability before being sold unproven technology.
This isn’t a story about a malfunctioning camera, a single bad decision, or one school. It’s about the moral cost of turning the schoolyard into a marketplace for security companies and treating students, especially students of color, not as learners but as threats.
Schools keep children safe not through algorithms but through adults who know them, support them, and believe in their potential.
When technology becomes a substitute for human empathy and accountability, we are not protecting students. We are abandoning them.