AI policies give staff and students clarity about when it’s appropriate to use the technology and guide how schools communicate with families and the broader community about its use. Policies also need to demystify artificial intelligence.
But as it is, nearly half of teachers, principals, and district leaders say their school or district does not have an AI policy, according to a survey by the EdWeek Research Center.
Another 16% said their school or district’s current policy does not establish meaningful guardrails about how to use the technology for instructional purposes. Only two states—Ohio and Tennessee—now require school districts to have comprehensive AI policies, according to Education Week’s AI policy tracker.
So, where should education leaders start when crafting a policy that is both practical and flexible enough to evolve with this fast-changing technology?
Education Week spoke with district leaders at the forefront of drafting AI policies, as well as a national expert, a teacher, and a principal for a recent special report on AI in schools. Following are five best practices gleaned from their insights.
1. Seek community input before crafting AI policies
A lot of AI policies are about student use, but many teachers, principals, and district leaders are also using the tools. Questions and concerns about educators’ use of the technology need to be addressed, too.
Tracey Metcalfe Rowley, the senior director of educational technology and online learning for the Tucson Unified school district in Arizona, told Education Week that her district formed a task force of 40 people to develop its AI policy. The task force included a wide variety of district employees, from teachers and principals to people working in human resources and transportation.
One school that’s built AI guidance with significant input from teachers is La Vista High School in Fullerton, Calif. La Vista math teacher Al Rabanera said it is important for district AI policies to prioritize teachers’ perspectives over those of tech-company executives, because teachers better understand what’s best for students.
2. Build maximum flexibility into any AI policy
AI guidance must be clear about how it recommends educators and students use artificial intelligence in teaching and learning, but it must be flexible enough to adapt to technological advances, which are happening fast in the AI world.
There are a couple of key ways to strike that balance. Tucson Unified, for example, has supplemental guidelines along with its AI policy that are easier to update on the fly than the board-approved policy. While the policy is succinct and focused on responsible and ethical AI use for students, parents, and staff, the guidelines cover the details of daily teacher and student use.
The district has also committed to updating its AI policy on an annual basis to ensure it remains relevant.
In the Arlington public schools in Virginia, the district has opted not to have a policy but rather a continuously updated framework that’s published on the district’s website, said Jacqueline Firster, the supervisor of digital learning and innovation. The district has streamlined the updating process: Department leaders can submit change requests to the district’s AI steering committee, whose members are all deputized to update the website.
3. Don’t forget appropriate use and data-protection guidelines
There are a couple of boxes every AI policy should check, said Bree Dusseault, the principal and managing director at the Center on Reinventing Public Education at Arizona State University. To begin with, a policy should define appropriate AI use for students, specifically what kind of use crosses into plagiarism or cheating. Even if this information isn’t detailed in the policy, it should be clearly communicated through some form of official guidance for both educators and students.
A policy should also outline rules for protecting student data. If a district isn’t dictating which AI tools staff can use, the policy should include guidelines for choosing AI-enabled tools appropriate for school use.
4. AI policies should address the downsides of the technology
An AI policy should include language about the potential for AI to generate biased responses. Generative AI technology is trained on massive amounts of data that often include biased or inaccurate information, and that can bleed into the technology’s outputs. That means everyone in a school building should be on guard for this issue.
For example, if you ask an AI image generator for pictures of doctors, it might produce only images of white men, suggesting that only white men are doctors. AI companies are working to address this problem and have made strides, but it remains an ongoing challenge. A recent risk-assessment report from Common Sense Media found subtler, harder-to-detect examples of bias in AI outputs. And some research has found that AI plagiarism detectors are more likely to incorrectly flag essays written by English learners as AI-generated.
Another big equity issue is access to AI tools. If some students can use AI to complete homework assignments while others can’t, fairness becomes a serious problem.
5. Pair the policy with professional development
AI policies will have limited value if students and staff don’t know how to incorporate the technology into their work. The Tucson district, for example, offers basic professional development for employees just starting to experiment with the technology, as well as more specialized training for specific positions.
Training is also important to ensure that teachers and other school and district employees are not accidentally doing something wrong or even illegal when using AI, such as pasting identifiable student information into a generative AI chatbot.