Last week, I sat in an education conference listening to a keynote speaker who was absolutely unequivocal about it: students must learn prompt engineering or they will be left behind. The speaker was passionate, convincing even, about how this was the essential skill for the next generation. And as I sat there, I found myself thinking: really? Is this truly the skill we should be racing to embed in every curriculum?
Lately, I keep hearing that prompt engineering, the ability to write clever and precise instructions for AI, is the new super skill every young person needs to master. The idea is that those who can “talk to the machine” will be the ones who thrive in the age of generative AI.
And I get it. For now, it is true. Anyone who spends time with AI knows that the way you ask matters. A well-structured prompt can turn an average response into something remarkable. I have seen entire professional development sessions focused on how to write the perfect prompt.
But I keep wondering if this is really a future skill or simply a transitional one.
We have been here before. About a decade ago, coding was the next great literacy. We were told that all students needed to learn to code or they would be left behind. And while understanding logic, pattern recognition and computational thinking remains valuable, few would now argue that every student must become a programmer. The tools evolved. The interfaces changed. Knowing how to code shifted from a universal requirement to an optional asset.
I suspect the same will happen with prompting. The models are already becoming much more forgiving. Early versions of AI required carefully worded instructions and detailed context. But each new generation of large language models has become better at interpreting vague or natural language. They are now more context-aware, more visual and better aligned with human intent. The need for carefully engineered prompts is already beginning to fade.
Even the interfaces are changing. Most people will not type directly into chatbots in the future. They will use AI features inside tools such as Google Docs, Canva or Notion that quietly handle the prompting behind the scenes. The software will translate our natural requests such as “summarize this,” “improve the tone,” or “make it more visual” into optimized prompts automatically. Just as we no longer type commands to open a file, we will not need to craft perfect prompts to get great AI output.
There may be a split happening here. For most of us, prompting will become invisible, handled by the interface layer. But specialized roles might still require deep prompt engineering expertise for critical systems or highly creative work where nuance matters. It could mirror how we still have systems programmers even though most people never write a line of code.
Modern AI systems are also being trained on millions of examples of strong instructions and responses. They have learned the meta-skill of interpreting intent. Clear and simple language now produces excellent results.
So if the technical part of prompting is becoming less necessary, what remains essential? The human part. Knowing what to ask. Evaluating whether the answer is right. Recognizing when a response is insightful, biased, or incomplete. The real differentiator will be judgment, not phrasing. The skill will not be in writing prompts but in thinking critically about what those prompts produce.
There is something deeper here too. The enduring skill might be what we could call AI collaboration literacy—the ability to iterate with AI, to recognize when you are not getting what you need, and to adjust your approach, not just your words. It is less about engineering the perfect prompt and more about developing a productive working relationship with these tools.
It reminds me of the evolution from coding to clicking. Early computer users had to memorize complex commands. Now, we all navigate computers intuitively. Prompt engineering feels like today’s command line, a temporary bridge to a more natural future.
So yes, teaching students to think like prompt engineers has value. It helps them be clear, curious and reflective. But perhaps the goal is not to create great prompters. It is to create great thinkers who can:
- Articulate clear goals and constraints
- Recognize the difference between excellent and mediocre output
- Maintain healthy skepticism and verification habits
- Understand when AI is the right tool versus when another approach works better
- Iterate and refine their collaboration with AI systems
These capabilities feel durable regardless of how the interfaces evolve.
Maybe I am wrong. Maybe prompt engineering will become a lasting communication skill. But before we rush to build it into every curriculum, it is worth asking whether we are chasing a moving target, and whether we should focus instead on the deeper cognitive skills that will matter no matter how we end up talking to machines.
As always, I share these ideas not because I have the answers but because I am still thinking them through. I would love to hear how others are thinking about this from where they sit.
The image at the top of this post was generated through AI. Various AI tools were used as feedback helpers (for our students this post would be a Yellow assignment – see link to explanation chart) as I edited and refined my thinking.