Archive for November, 2025

Last week, I sat in an education conference listening to a keynote speaker who was absolutely unequivocal about it: students must learn prompt engineering or they will be left behind. The speaker was passionate, convincing even, about how this was the essential skill for the next generation. And as I sat there, I found myself thinking: really? Is this truly the skill we should be racing to embed in every curriculum?

Lately, I keep hearing that prompt engineering, the ability to write clever and precise instructions for AI, is the new super skill every young person needs to master. The idea is that those who can “talk to the machine” will be the ones who thrive in the age of generative AI.

And I get it. For now, it is true. Anyone who spends time with AI knows that the way you ask matters. A well-structured prompt can turn an average response into something remarkable. I have seen entire professional development sessions focused on how to write the perfect prompt.

But I keep wondering if this is really a future skill or simply a transitional one.

We have been here before. About a decade ago, coding was the next great literacy. We were told that all students needed to learn to code or they would be left behind. And while understanding logic, pattern recognition and computational thinking remains valuable, few would now argue that every student must become a programmer. The tools evolved. The interfaces changed. Knowing how to code shifted from a universal requirement to an optional asset.

I suspect the same will happen with prompting. The models are already becoming much more forgiving. Early versions of AI required carefully worded instructions and detailed context. But each new generation of large language models has become better at interpreting vague or natural language. They are now more context aware, more visual and better aligned with human intent. The need for carefully engineered prompts is already beginning to fade.

Even the interfaces are changing. Most people will not type directly into chatbots in the future. They will use AI features inside tools such as Google Docs, Canva or Notion that quietly handle the prompting behind the scenes. The software will translate our natural requests such as “summarize this,” “improve the tone,” or “make it more visual” into optimized prompts automatically. Just as we no longer type code to open a file, we will not need to craft perfect prompts to get great AI output.
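
To make the idea concrete, here is a minimal sketch, in Python, of what that hidden translation layer might look like. Everything in it (the TEMPLATES table, the build_prompt function, the wording of the expanded prompts) is hypothetical and for illustration only; real tools would do something far more sophisticated before anything reaches a model.

    # A minimal sketch of the "interface layer" idea: the user clicks a button
    # labelled "Improve the tone," and the software quietly expands that plain
    # request into a fuller, structured prompt before sending it to a model.
    # All names here (TEMPLATES, build_prompt) are hypothetical illustrations.

    TEMPLATES = {
        "summarize this": (
            "Summarize the following text in three sentences, keeping the "
            "author's key points and original terminology:\n\n{text}"
        ),
        "improve the tone": (
            "Rewrite the following text so it sounds warm and professional. "
            "Preserve the meaning and approximate length:\n\n{text}"
        ),
        "make it more visual": (
            "Suggest headings, bullet points and places for images that would "
            "make the following text easier to scan:\n\n{text}"
        ),
    }

    def build_prompt(user_request: str, text: str) -> str:
        """Translate a plain-language request into a structured prompt."""
        template = TEMPLATES.get(user_request.lower().strip())
        if template is None:
            # Unknown request: pass it through unchanged.
            return f"{user_request}:\n\n{text}"
        return template.format(text=text)

    if __name__ == "__main__":
        draft = "Our meeting is moved. Be there at 9."
        print(build_prompt("Improve the tone", draft))

The person never sees the engineered prompt. They see only "improve the tone" and the result, which is exactly why the engineering itself stops being a skill most people need.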

There may be a split happening here. For most of us, prompting will become invisible, handled by the interface layer. But specialized roles might still require deep prompt engineering expertise for critical systems or highly creative work where nuance matters. It could mirror how we still have systems programmers even though most people never write a line of code.

Modern AI systems are also being trained on millions of examples of strong instructions and responses. They have learned the meta-skill of interpreting intent. Clear and simple language now produces excellent results.

So if the technical part of prompting is becoming less necessary, what remains essential? The human part. Knowing what to ask. Evaluating whether the answer is right. Recognizing when a response is insightful, biased, or incomplete. The real differentiator will be judgment, not phrasing. The skill will not be in writing prompts but in thinking critically about what those prompts produce.

There is something deeper here too. The enduring skill might be what we could call AI collaboration literacy—the ability to iterate with AI, to recognize when you are not getting what you need, and to adjust your approach, not just your words. It is less about engineering the perfect prompt and more about developing a productive working relationship with these tools.

It reminds me of the evolution from coding to clicking. Early computer users had to memorize complex commands. Now, we all navigate computers intuitively. Prompt engineering feels like today’s command line, a temporary bridge to a more natural future.

So yes, teaching students to think like prompt engineers has value. It helps them be clear, curious and reflective. But perhaps the goal is not to create great prompters. It is to create great thinkers who can:

  • Articulate clear goals and constraints

  • Recognize the difference between excellent and mediocre output

  • Maintain healthy skepticism and verification habits

  • Understand when AI is the right tool versus when another approach works better

  • Iterate and refine their collaboration with AI systems

These capabilities feel more durable than any particular prompting technique, regardless of how the interfaces evolve.

Maybe I am wrong. Maybe prompt engineering will become a lasting communication skill. But before we rush to build it into every curriculum, it is worth asking whether we are chasing a moving target, and whether we should focus instead on the deeper cognitive skills that will matter no matter how we end up talking to machines.

As always, I share these ideas not because I have the answers but because I am still thinking them through. I would love to hear how others are thinking about this from where they sit.

The image at the top of this post was generated through AI. Various AI tools were used as feedback helpers (for our students this post would be a Yellow assignment – see link to explanation chart) as I edited and refined my thinking.



Inspired by the recent Learning Forward BC conversation on human flourishing and AI.

Last week, I spent three hours tweaking a PowerPoint presentation I had already received help with. At the same time, I had to decline a visit to an elementary class exploring AI tools. The irony? While I was perfecting slides, they were shaping the very future I was supposed to be leading them toward.

If we are honest, most of us superintendents spend far too much of our energy doing work that does not require the full force of our humanity. We draft versions of the same report again and again for different audiences. We shuffle through data systems, chase signatures, and repackage findings. It is necessary work, but is it what we were called to?

At a recent Learning Forward BC event on The Intersection of Human Flourishing and AI, that question hit home. We were exploring how technology might liberate, not limit, our humanity in education. It made me wonder: What if AI could take over significant portions of our work as leaders? What would we hand over, and what would we fight to keep?

Why This Matters for Leaders

I have written a lot on this blog about how AI is reshaping the work of teachers and students. But we need to look just as critically at our own work as superintendents and senior leaders. If we expect educators to rethink assessment, planning and feedback in an AI-rich world, then we must also examine the way we lead, communicate and make decisions.

The truth is that the same technology that can help a teacher personalize learning or a student write an essay can also help a superintendent analyze data, summarize reports or draft correspondence. AI is not only changing classrooms. It is changing the nature of leadership itself.

And yes, I am sure some superintendents might already be wondering if a chatbot could replace them at board meetings. But since I know my trustees often read this blog, I will not take the chance of testing that particular joke here.

The Question That Changes Everything

The OECD (Organisation for Economic Co-operation and Development) Education for Human Flourishing framework reminds us that our purpose in education is to equip people to lead meaningful and worthwhile lives, oriented toward the future. If that applies to students, it applies to our leadership too.

So whether it is 30 percent, 50 percent, or even 70 percent of what we currently do, the question becomes: What would we hand over to AI, and which tasks would we hold on to because they matter most?

What We Could Let Go Of

AI is already remarkably good at tasks that drain our time but not our meaning:

  • Drafting first versions of reports, memos and letters
  • Crunching and summarizing enrolment or survey data
  • Managing meeting notes, calendars, reminders and task lists
  • Building templates, presentations and standard job postings
  • Drafting policy or procedural documents for refinement

These are automation, not animation. They do not require empathy, judgment, or nuance, only accuracy and speed. That is AI’s strength.

What We Must Protect

What we must protect, deliberately, are the moments of human connection, purpose and complexity:

  • Sitting with a parent whose trust in the system has eroded
  • Listening deeply to a principal wrestling with burnout or vision
  • Reading the room in a board meeting and knowing what not to say
  • Inspiring staff to believe in something greater than their daily tasks
  • Recognizing a student’s spark when they realize someone believes in them

These are leadership moments: irreducible, unautomatable and profoundly essential.

Leading for Human Flourishing

The OECD highlights three human competencies that AI cannot fully replicate: adaptive problem-solving, ethical decision-making and aesthetic perception.

Adaptive problem-solving: When a community crisis hits and there is no playbook, whether a sudden school closure, a traumatic event, or a divided community, we respond with creativity born from experience and intuition.

Ethical decision-making: When budget cuts force impossible choices between programs, when we must balance individual needs against the collective good, when integrity demands the harder path, these moments require moral courage that no algorithm can calculate.

Aesthetic perception: Recognizing when a school’s culture shifts from compliance to inspiration, sensing the exact moment a resistant team begins to trust, and seeing beauty in a struggling student’s small victory. This is what makes leadership an art, not just a science.

AI can mimic these competencies, but it does not feel them. It may calculate empathy, but it cannot experience it or show it. As more of our routine tasks shift to AI, the invitation is clear: we reclaim the human half.

Creating a Culture of Yes

This is where AI becomes an enabler of possibility rather than a threat to purpose. When AI handles the bureaucratic “no” work, the forms, compliance checks and procedural barriers, we create space for the human “yes.”

Yes, I have time to visit your classroom.
Yes, let’s explore that innovative idea.
Yes, I can truly listen.

In a Culture of Yes, AI does not replace us. It liberates us to be more fully present for what matters. Every report AI drafts is a conversation we can have. Every dataset it analyzes is a relationship we can build. Every schedule it optimizes is a moment we can use to connect.

Getting Started

This is not about wholesale transformation tomorrow. It is about small experiments.

What one repetitive task could you delegate to AI this week? What human conversation would that free you to have?

Start simple:

Use AI to draft that routine memo, then spend the saved time walking the halls.

Let AI summarize survey data, then use your energy to discuss what it means with your team.

Have AI create the meeting agenda, then focus fully on reading the human dynamics in the room.

The goal is not efficiency for its own sake, but reclaiming time for what only we can do.

The Real Promise

The promise of AI in leadership is not efficiency, but rediscovery.

It is the chance to release ourselves from the burden of mechanical work and return to the heart of leadership: human connection, meaning and moral purpose.

Imagine walking into your office tomorrow knowing that the reports are drafted, the data analyzed and the calendar managed, all before your first coffee. Now you can spend your morning where it matters most: in classrooms, with people, making meaning.

Because in the end, the future of education will not belong to the most efficient systems. It will belong to the most human leaders, those who use every tool available to protect and amplify what makes us irreplaceably human.

A Question to End With

I wonder if my list looks like yours. What would you hand over to AI, and what would you hold tightly because it feels essentially human? I would be interested to hear how others are thinking about their human half.

The image at the top of this post was generated through AI. Various AI tools were used as feedback helpers (for our students this post would be a Yellow assignment – see link to explanation chart) as I edited and refined my thinking.


Across Canada, and in many other parts of the world, literacy screening is having a moment.

There is broad agreement that we need to be better at identifying students who may be at risk, and that we need to do this earlier. The push toward more consistent and universal literacy screeners makes a lot of sense: earlier identification leads to earlier intervention, and ultimately, better outcomes for kids.

But here’s the question that’s been nagging me: are we simply going to recycle the same kinds of screeners we have used for the last generation? Or can this be the moment to think differently about what screening could look like in an AI world?

What Screeners Do Well

Traditional screeners help us establish a baseline. They can tell us if a student is meeting expected benchmarks in areas like phonemic awareness, decoding, fluency and comprehension. They provide the data teachers need to take action.

The challenge is that screeners often leave a gap between assessment and action. A teacher receives a score and then has to translate that number into the “what’s next” for the student and their family. It’s useful, but not always immediate, personalized or engaging.

What AI Could Add

This is where I wonder if we are missing an opportunity. AI could allow us to rethink the very design of literacy screeners. Imagine if…

  • Texts were customized for cultural relevance. Instead of one-size-fits-all passages, AI could generate short reading texts tailored to the learner’s context, interests or community. A child on the North Shore might read about the Capilano River, while another in Surrey reads about the Pattullo Bridge reconstruction. For Indigenous learners, this could mean texts that reflect Indigenous ways of knowing and storytelling traditions, developed in partnership with local Nations. The text would still be controlled for vocabulary and difficulty, but it would feel more real and more personal.

  • Feedback was immediate and audience-specific. A student could receive a friendly message highlighting a win (“You read 80 words per minute—your smoothest word was ship”) and a tip for next time. Families could receive a plain-language summary with simple routines for home (“Read together for 10 minutes tonight; circle the words that start with sh”). Teachers could receive a strand-level profile with small-group suggestions, not just a number on a page (see the sketch after this list).

  • Practice was built-in. Instead of waiting for the next lesson, a screener could instantly generate a few targeted practice items based on the patterns the student struggled with, turning assessment into a learning moment.
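
To show what audience-specific feedback could mean in practice, here is a minimal Python sketch of how a single screening result might be rephrased three ways, once each for the student, the family and the teacher. The names, numbers and messages are entirely hypothetical; a real screener would rest on validated benchmarks and teacher oversight.

    # A minimal sketch of the audience-specific feedback idea above.
    # Every name, threshold and message is hypothetical, for illustration only.

    from dataclasses import dataclass

    @dataclass
    class ScreeningResult:
        student_name: str
        words_per_minute: int
        smoothest_word: str
        focus_pattern: str  # e.g. a letter pattern the student found hard

    def student_message(r: ScreeningResult) -> str:
        # A friendly win plus encouragement, written for the child.
        return (f"Great reading, {r.student_name}! You read "
                f"{r.words_per_minute} words per minute, and your smoothest "
                f"word was '{r.smoothest_word}'.")

    def family_message(r: ScreeningResult) -> str:
        # A plain-language routine for home.
        return (f"Read together for 10 minutes tonight, and circle the words "
                f"that start with '{r.focus_pattern}'.")

    def teacher_message(r: ScreeningResult) -> str:
        # A strand-level note with a small-group suggestion.
        return (f"{r.student_name}: {r.words_per_minute} wpm. Consider a "
                f"small group focused on the '{r.focus_pattern}' pattern.")

    if __name__ == "__main__":
        result = ScreeningResult("Ava", 80, "ship", "sh")
        for message in (student_message(result), family_message(result),
                        teacher_message(result)):
            print(message)

The point is the shape of the idea, not the code: one assessment event, three different translations, each written for its audience.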

What This Isn’t

To be clear, this isn’t about replacing teacher expertise or professional judgment. Teachers would still interpret results, make instructional decisions, and build the relationships that matter most.

And this isn’t about creating more data for data’s sake. It’s about making the data we already collect more immediately useful—for students, for families and for teachers.

Safeguards Matter

Of course, any AI use comes with important guardrails. Automated scores would need validation against human judgment, with teachers maintaining override authority. Generated texts would require review for accuracy, bias and cultural safety. Indigenous content, in particular, would need to be co-designed with local Nations and aligned with principles of data sovereignty, ensuring that AI tools serve rather than appropriate Indigenous knowledge.

Quality oversight would need to be built in from day one, with regular audits and continuous monitoring to prevent the kind of drift that could undermine both accuracy and equity.

A Narrow Window

Here’s what makes this moment unique: jurisdictions are investing in new screening initiatives right now. We have a narrow window to influence how these tools are designed. If we don’t explore these possibilities now, we risk locking in approaches that simply digitize yesterday’s thinking.

I am not a literacy expert. But as someone who has watched technology reshape almost every other part of our schools over the last two decades, I see a pattern. The organizations that thrive are the ones that ask not just “how can we do what we’ve always done, but faster?” but “what becomes possible now that wasn’t possible before?”

The Question We Should Be Asking

The push for literacy screening is the right one. The evidence on early identification and intervention is clear. But we also have a unique opportunity to do more than just import the same tools from the past.

What if, instead of only identifying students who need help, our screeners could also immediately provide that help?

What if they could engage families in ways that feel supportive rather than clinical?

What if they could give teachers not just data, but insight?

AI won’t replace the expertise of our teachers or the relationships that matter most. But it might make our tools more immediate, more relevant and more effective for every child.

The question isn’t whether we should innovate. The question is whether we will seize this moment to innovate thoughtfully—or let it pass by.

What new possibilities are you seeing in your corner of education? And how do we make sure we are not just replicating the past with shinier tools?

Thanks to West Vancouver District Vice-Principal Mary Parackal, who really pushed my thinking about what might be possible with AI as I created this post.

The image at the top of this post was generated through AI. Various AI tools were used as feedback helpers (for our students this post would be a Yellow assignment – see link to explanation chart) as I edited and refined my thinking.
