Last week, I sat in an education conference listening to a keynote speaker who was absolutely unequivocal about it: students must learn prompt engineering or they will be left behind. The speaker was passionate, convincing even, about how this was the essential skill for the next generation. And as I sat there, I found myself thinking: really? Is this truly the skill we should be racing to embed in every curriculum?
Lately, I keep hearing that prompt engineering, the ability to write clever and precise instructions for AI, is the new super skill every young person needs to master. The idea is that those who can “talk to the machine” will be the ones who thrive in the age of generative AI.
And I get it. For now, it is true. Anyone who spends time with AI knows that the way you ask matters. A well-structured prompt can turn an average response into something remarkable. I have seen entire professional development sessions focused on how to write the perfect prompt.
But I keep wondering if this is really a future skill or simply a transitional one.
We have been here before. About a decade ago, coding was the next great literacy. We were told that all students needed to learn to code or they would be left behind. And while understanding logic, pattern recognition and computational thinking remains valuable, few would now argue that every student must become a programmer. The tools evolved. The interfaces changed. Knowing how to code shifted from a universal requirement to an optional asset.
I suspect the same will happen with prompting. The models are already becoming much more forgiving. Early versions of AI required carefully worded instructions and detailed context. But each new generation of large language models has become better at interpreting vague or natural language. They are now more context aware, more visual and better aligned with human intent. The need for carefully engineered prompts is already beginning to fade.
Even the interfaces are changing. Most people will not type directly into chatbots in the future. They will use AI features inside tools such as Google Docs, Canva or Notion that quietly handle the prompting behind the scenes. The software will translate our natural requests such as “summarize this,” “improve the tone,” or “make it more visual” into optimized prompts automatically. Just as we no longer type code to open a file, we will not need to craft perfect prompts to get great AI output.
There may be a split happening here. For most of us, prompting will become invisible, handled by the interface layer. But specialized roles might still require deep prompt engineering expertise for critical systems or highly creative work where nuance matters. It could mirror how we still have systems programmers even though most people never write a line of code.
Modern AI systems are also being trained on millions of examples of strong instructions and responses. They have learned the meta-skill of interpreting intent. Clear and simple language now produces excellent results.
So if the technical part of prompting is becoming less necessary, what remains essential? The human part. Knowing what to ask. Evaluating whether the answer is right. Recognizing when a response is insightful, biased, or incomplete. The real differentiator will be judgment, not phrasing. The skill will not be in writing prompts but in thinking critically about what those prompts produce.
There is something deeper here too. The enduring skill might be what we could call AI collaboration literacy—the ability to iterate with AI, to recognize when you are not getting what you need, and to adjust your approach, not just your words. It is less about engineering the perfect prompt and more about developing a productive working relationship with these tools.
It reminds me of the evolution from coding to clicking. Early computer users had to memorize complex commands. Now, we all navigate computers intuitively. Prompt engineering feels like today’s command line, a temporary bridge to a more natural future.
So yes, teaching students to think like prompt engineers has value. It helps them be clear, curious and reflective. But perhaps the goal is not to create great prompters. It is to create great thinkers who can:
- Articulate clear goals and constraints
- Recognize the difference between excellent and mediocre output
- Maintain healthy skepticism and verification habits
- Understand when AI is the right tool versus when another approach works better
- Iterate and refine their collaboration with AI systems
These capabilities feel more durable regardless of how the interfaces evolve.
Maybe I am wrong. Maybe prompt engineering will become a lasting communication skill. But before we rush to build it into every curriculum, it is worth asking whether we are chasing a moving target, and whether we should focus instead on the deeper cognitive skills that will matter no matter how we end up talking to machines.
As always, I share these ideas not because I have the answers but because I am still thinking them through. I would love to hear how others are thinking about this from where they sit.
The image at the top of this post was generated through AI. Various AI tools were used as feedback helpers (for our students this post would be a Yellow assignment – see link to explanation chart) as I edited and refined my thinking.

I think you’re on the right track with the idea of creating “great thinkers” and teaching how to use the tools we need to deal with all aspects of knowledge, and life. Critical thinking, the ability to analyze no matter what you’re doing, to make something work for YOU (what? No italics available?), is crucial to good education. Whether it is in the field of dealing with AI, or our own writing, or auto mechanics, or sports, we need to be able to figure out how to do our best to get the best results. Ask any kid in the auto shop how to make an engine run sweetly, and it all revolves around analysis, whether it’s using the diagnostic machines or listening for exactly the right noise.
I was recently chatting with one of my new choir members who teaches college English. Their department doesn’t allow the use of AI for student writing (no surprise there), but she teaches the skills needed to differentiate mediocre results from superlative ones when using the tools. I passed your last post on to her.
You mention the ability to use “prompts.” This is, I think, also part of analytic thinking, and it leads to better research. Some of us are better at finding information on the internet because we have a more creative, or better analyzed, ability to request information, which is key to delving into the myriad sources of knowledge available to us.
Critical thinking, or “figuring out how to make something work better,” resonates with young people. After I did my soapbox speech on the ability to analyze, I could get my students to learn grammar, even as far as parsing sentences, because they knew it was a valuable tool. I did tell them I didn’t expect them to remember all the rules when they were full adults, but that it would help them use language to their advantage in communicating in any way. When they complained about doing a character analysis in a book, I pointed out how it might save one of their romantic relationships, and doing the analyzing on a book character wouldn’t get them hurt.
Thanks for another thought-provoking post!
Wendy Vitter
Hi Wendy – Great stuff here. Engines, essays, relationships… your examples capture that beautifully.
I really like your point about prompts being an extension of that same analytic thinking. Whether we’re searching the internet, diagnosing a problem, or asking an AI for help, it all comes back to being curious, intentional, and clear about what we’re trying to learn.
And the story about your colleague teaching students how to judge AI-generated work — that feels like where so much of the future lies. Less about banning or blindly adopting tools, and more about helping learners build the discernment to understand quality, nuance, and truth.
20 years later and still connected. Lots of love for Team Riverside!
This connected so much to some thinking I did today while leading my hybrid Dungeons and Dragons group (3 online students, 5 in person, ranging from grade 2 to 9). Prompt engineering is exactly what these learners are practicing when they encounter “me” and what I generate. Role Playing Games (RPGs) are a great way to humanize the practice of prompt engineering! https://technolandy.com/2025/11/19/day-54-of-2025-26-a-new-media-for-language-arts-and-practice-for-chrkennedy-prompt-engineering-and-a-shoutout-to-brennanleemulligan/
Thanks Ian for sharing this. I love the connection you make to your hybrid Dungeons and Dragons group. It is such a great example of how learners already practice the core elements of prompting. Every time they interact with you they are figuring out what they want, how to ask for it through their actions and dialogue, and how to interpret the results.
Role playing games make that whole process visible. Intention, interaction, and outcome are right there in front of them. It is a very human way to build the same habits of mind we talk about with AI.
I really appreciate you adding this example.