Since ChatGPT’s public release in November 2022, the rapid advancement of generative artificial intelligence has been reshaping the landscape of teaching and learning. Tools that instantly generate text, images, and other products have created an environment where thinking and creativity can be easily outsourced to machines, often leaving Ķvlog questioning the authenticity of student work and asking which cognitive skills will be most important to us as learners and doers. Teachers are right to be concerned about our future as creative, independent thinkers and problem solvers.
Bloom’s Taxonomy has long been a tool Ķvlog could use to identify levels of cognitive demand in the classroom. Originally developed in 1956 and revised in 2002, the framework provides Ķvlog with shared language for curriculum and assessment design. It organizes learning from lower-order to higher-order thinking skills, starting with foundational skills like remembering and understanding and progressing through the increasingly complex ones of applying, analyzing, evaluating, and creating.
However, generative AI’s ever-growing presence raises important questions for Ķvlog: Does this hierarchical framework still reflect the mental skills teachers should be cultivating in their students? And should we perhaps abandon Bloom’s framework altogether?
Before generative AI, creation—synthesizing ideas from one’s own knowledge and experiences into a final product—was designated the pinnacle of cognitive complexity. Now, a human author needs only an effective prompt to almost instantly create text, images, video, code, or data analysis. Creation occurs early in the process rather than as a culminating step.
In fact, the traditional model of moving from the lower-order thinking skills to the higher-order ones does not align with how today’s learners interact with generative AI. Students flexibly move back and forth among the levels as they reflect on what they have so far and generate new iterations through additional prompting.
In generative AI environments, the most challenging cognitive tasks are deciding what to ask, how to structure questions, when to trust or question the outputs, and how to integrate AI-generated content into original work. This kind of thinking involves planning (designing clear prompts, setting constraints, and anticipating errors), monitoring (checking outputs for accuracy, bias, and relevance), and evaluating (critiquing outputs and revising prompts). When the human-machine collaboration is done well, students remain active decisionmakers in their learning, balancing human reasoning and AI assistance to produce meaningful outcomes.
That orchestration shares similarities with traditional revision and collaborative work, for which learners have always moved fluidly among creating, evaluating, and refining, revealing that Bloom’s hierarchical climb was never the complete picture of how learning actually works. AI assistance introduces unique challenges, however. The speed, scale, and black-box nature of AI-generated content require students to manage a collaborator that can instantly produce polished work without revealing its reasoning, making the metacognitive oversight both more essential and more difficult than in human collaboration.
In this new world, the skills of remembering and understanding become continuous prerequisites. Learners repeatedly draw on factual and conceptual knowledge to check facts and integrate information throughout the cycles of creation and evaluation.
Rather than a pyramid, a better way to show the relationships among the cognitive skills in a generative AI context is a vertical helix. This spiral represents continuous cycles of judgment, revision, and synthesis as learners develop expertise in both content and human-AI collaboration. Learners cycle repeatedly through the stages, each iteration adding complexity and precision.
To see how this works in practice, consider the following classroom scenario. A 7th grade student researching the Underground Railroad needs to write an argument about why Harriet Tubman should be featured in a new museum exhibit. He starts by reviewing his notes and primary sources from class (remember/understand) about Tubman’s life, the dangers she faced, and the impact she had on others. He writes an initial outline and draft. He then crafts a prompt, asking the assistant to review his work while citing the assignment’s requirements: “Review this draft argument for why Harriet Tubman deserves to be featured in a museum exhibit about the Underground Railroad. The assignment requires three reasons supported by historical facts. Does my draft meet these requirements? What historical details could I add?”
The AI produces feedback and suggestions (create), but when the student analyzes the output (evaluate/analyze), he notices (remember/understand) that the AI included factual errors about the number of enslaved people Tubman helped free and the reward offered for her capture. He revises his prompt to, “Help me strengthen my three reasons with specific facts about the number of trips Tubman made, the number of people she freed, and the actual award amount. Help me strengthen my use of persuasive voice if necessary.”
The student applies the feedback to a revised draft, including accurate details of Tubman’s role in freeing enslaved people. He weaves together his own arguments, the AI’s factual corrections and suggested improvements, direct quotes from Tubman the student found independently, and his personal reflection.
As generative AI becomes increasingly central to students’ futures, Ķvlog must balance helping students develop strong foundational skills independent of AI while preparing them to work effectively alongside these tools. This requires intentional pedagogical strategies that call for students to first build competence without AI, then progress to strategic human-AI collaboration in which they evaluate, question, refine, and integrate AI assistance into their own reasoning and original work.
Bloom’s Taxonomy still offers Ķvlog a framework for thinking about cognitive demand, but the model can better reflect the realities of learning in a generative AI environment. The answer is not to abandon Bloom altogether but to reimagine it to emphasize iterative learning cycles of judgment, critique, and synthesis.
Teachers embracing this reality must design tasks that make thinking visible by requiring students to evaluate outputs, identify errors or biases, refine prompts, and synthesize AI assistance with their own reasoning. When we equip students with both traditional competencies and AI literacy, we prepare them not as passive consumers of technology but as skilled directors of this human-machine collaboration, which is exactly what their future requires.