I’m sure that I am not the only school district human resources director who receives emails from staff that were obviously drafted using generative AI. It got me thinking: When a situation involves complex local, state, and federal laws, how does AI know enough about statutes across the nation to generate a solid response for a particular personnel situation?
The answer? It doesn’t!
I tested the theory myself, asking ChatGPT to draft the kind of email a teacher might send to an HR director: one raising concerns about stress and class size.
AI can offer efficiency and convenience. But when it comes to sensitive topics, particularly emails involving human resources, special education, or other complex subjects, Ķvlog should exercise caution.
I’ll admit that the ChatGPT response, at surface level, is well written. It uses some fluffy language coupled with a few impressive letter/number/squiggle combinations indicating state or federal statutes. This eye-catching email would at first cause an HR director like me to pause and prepare to seek advice from my legal team. But reading more closely, I’d have some concerns:
- Reasonable accommodations are changes (to job duties, the work environment, and the like) that would allow a qualified individual with a disability to perform the essential functions of a job. But stress alone is generally not considered a disability under the Americans with Disabilities Act.
- Some states have rules regarding class size, while others don’t.
- Is the HR director really the right person to field class-size questions?
AI writing tools may generate text that sounds polished, but they do not fully grasp context, tone, or the nuances of workplace dynamics. A teacher requesting medical-leave accommodations under the Americans with Disabilities Act, for example, needs language that is accurate and in line with both personal needs and legal standards. An AI-generated draft could overlook required references to the law or reflect unintended bias baked into the chatbot’s training data, leading to inaccurate or incomplete information. And more often than not, the draft fails to reflect the educator’s individual voice.
Authenticity is more important than you might think. For some HR folks, AI-generated emails can be a trigger: “This person didn’t take the time to actually write an email. Why should I take a lot of time to respond?” And such emails can raise red flags for anyone: “Who helped this person write the email?” “Who else is involved in the situation that I don’t know about?”
Here are several risks associated with overreliance on AI:
- Loss of authenticity. Professional email matters often benefit from a personal voice that conveys sincerity, professionalism, and trustworthiness. Administrators and HR staff can usually tell when a message feels overly scripted or detached.
- Confidentiality concerns. Entering sensitive personal or workplace details into an AI system presents concerns about privacy. Some platforms have safeguards, but others do not. Educators must assume that anything entered could be stored, analyzed, used in other prompts, or even admitted as evidence in a courtroom.
- Legal and policy accuracy. AI tools are not always accurate. They can generate outdated or incomplete information about laws or district policies. AI tools might cite federal statutes but overlook specific state or local regulations that also govern HR or other practices within the district. Inaccurate references could weaken an educator’s request or inadvertently create liabilities, such as allegations of unfair practices.
- Tone and interpretation problems. AI cannot fully read the interpersonal dynamics of a situation. An email about a performance-improvement plan, for example, must strike a balance between professionalism and self-advocacy. If AI produces language that is too defensive, overly formal, or misaligned with workplace culture, it could inadvertently damage relationships with supervisors or HR personnel.
Am I advocating a full stop when it comes to AI? Well, no. I’m advocating for Ķvlog to follow a few best practices:
- Use AI for structure, not substance. AI can help organize thoughts or provide a starting point for your work.
- Avoid entering sensitive information. Protect confidentiality by keeping details about medical conditions, legal claims, or personnel issues out of AI systems. Replace specific information with placeholders if using AI to generate an outline.
- Verify legal references. If citing federal or state law in an HR-related email, Ķvlog should cross-check the information with reliable sources. District legal counsel, professional organizations, or official government websites provide more accurate guidance than AI alone.
- Prioritize human review. Before sending, Ķvlog should reread the message from the perspective of the recipient. Asking a trusted mentor or colleague to review a sensitive email is often a good idea.
- Be authentic. Educators should always revise any AI-generated draft in their own voice.
AI can save time and reduce stress when you are drafting complex communications, but it should never replace your judgment. The stakes of communication in sensitive realms are too high to outsource fully to a tool that cannot account for human nuance, legal obligations, or organizational culture.