Bloom’s Taxonomy Needs an Update for the AI Age

How to reimagine the classic framework of learning objectives
By Jeffrey Schoonover — April 10, 2026 5 min read

Since ChatGPT’s public release in November 2022, the rapid advancement of generative artificial intelligence has been reshaping the landscape of teaching and learning. Tools that instantly generate text, images, and other products have created an environment where thinking and creativity can be easily outsourced to machines, often leaving educators questioning the authenticity of student work and asking which cognitive skills will be most important to us as learners and doers. Teachers are right to be concerned about our future as creative, independent thinkers and problem solvers.

Bloom’s Taxonomy has long been a tool educators could use to identify levels of cognitive demand in the classroom. Originally developed in 1956 and revised in 2002, the framework provides educators with shared language for curriculum and assessment design. It organizes learning from lower-order to higher-order thinking skills, starting with foundational skills like remembering and understanding and progressing through the increasingly complex ones of applying, analyzing, evaluating, and creating.

[Diagram: Bloom’s Taxonomy reimagined for the generative AI age]

However, generative AI’s ever-growing presence raises important questions for educators: Does this hierarchical framework still reflect the mental skills teachers should be cultivating in their students? And should we perhaps abandon Bloom’s framework altogether?

Before generative AI, creation—synthesizing ideas from one’s own knowledge and experiences into a final product—was designated the pinnacle of cognitive complexity. Now, a human author needs only an effective prompt to almost instantly create text, images, video, code, or data analysis. Creation occurs early in the process rather than as a culminating step.

In fact, the traditional model of moving from lower-order thinking skills to higher-order ones does not align with how today’s learners interact with generative AI. Students move flexibly back and forth among the levels as they reflect on what they have produced so far and generate new iterations through additional prompting.

In generative AI environments, the most challenging cognitive tasks are deciding what to ask, how to structure questions, when to trust or question the outputs, and how to integrate AI-generated content into original work. This kind of thinking involves planning (designing clear prompts, setting constraints, and anticipating errors), monitoring (checking outputs for accuracy, bias, and relevance), and evaluating (critiquing outputs and revising prompts). When the human-machine collaboration is done well, students remain active decisionmakers in their learning, balancing human reasoning and AI assistance to produce meaningful outcomes.


That orchestration shares similarities with traditional revision and collaborative work, in which learners have always moved fluidly among creating, evaluating, and refining, revealing that Bloom’s hierarchical climb was never the complete picture of how learning actually works. AI assistance introduces unique challenges, however. The speed, scale, and black-box nature of AI-generated content require students to manage a collaborator that can instantly produce polished work without revealing its reasoning, making metacognitive oversight both more essential and more difficult than in human collaboration.

In this new world, the skills of remembering and understanding become continuous prerequisites. Learners repeatedly draw on factual and conceptual knowledge to check facts and integrate information throughout the cycles of creation and evaluation.

Rather than a pyramid, a better way to show the relationships among the cognitive skills in a generative AI context is a vertical helix. This spiral represents continuous cycles of judgment, revision, and synthesis as learners develop expertise in both content and human-AI collaboration. Learners cycle repeatedly through the stages, each iteration adding complexity and precision.

To see how this works in practice, consider the following classroom scenario. A 7th grade student researching the Underground Railroad needs to write an argument about why Harriet Tubman should be featured in a new museum exhibit. He starts by reviewing his notes and primary sources from class (remember/understand) about Tubman’s life, the dangers she faced, and the impact she had on others. He writes an initial outline and draft. He then crafts a prompt, asking the assistant to review his work while citing the assignment’s requirements: “Review this draft argument for why Harriet Tubman deserves to be featured in a museum exhibit about the Underground Railroad. The assignment requires three reasons supported by historical facts. Does my draft meet these requirements? What historical details could I add?”

The AI produces feedback and suggestions (create), but when the student analyzes the output (evaluate/analyze), he notices (remember/understand) that the AI included factual errors about the number of enslaved people Tubman helped free and the reward offered for her capture. He revises his prompt to, “Help me strengthen my three reasons with specific facts about the number of trips Tubman made, the number of people she freed, and the actual reward amount. Help me strengthen my use of persuasive voice if necessary.”

The student applies the feedback to a revised draft, including accurate details of Tubman’s role in freeing enslaved people. He weaves together his own arguments, the AI’s factual corrections and suggested improvements, direct quotes from Tubman that he found independently, and his personal reflection.


As generative AI becomes increasingly central to students’ futures, educators must balance helping students develop strong foundational skills independent of AI with preparing them to work effectively alongside these tools. This requires intentional pedagogical strategies that call for students to first build competence without AI, then progress to strategic human-AI collaboration in which they evaluate, question, refine, and integrate AI assistance into their own reasoning and original work.

Bloom’s Taxonomy still offers educators a framework for thinking about cognitive demand, but the model must evolve to reflect the realities of learning in a generative AI environment. The answer is not to abandon Bloom altogether but to reimagine it to emphasize iterative learning cycles of judgment, critique, and synthesis.

Teachers embracing this reality must design tasks that make thinking visible by requiring students to evaluate outputs, identify errors or biases, refine prompts, and synthesize AI assistance with their own reasoning. When we equip students with both traditional competencies and AI literacy, we prepare them not as passive consumers of technology but as skilled directors of this human-machine collaboration, which is exactly what their future requires.

