Posts Tagged ‘AI’

In my year-end reflection last December, I found myself dwelling on something that might seem unremarkable: there wasn’t a lot of drama in BC education this past year. No major controversies. No political firestorms. No headlines.

And I wrote: that’s a good thing.

In a year when AI, politics, and social media all seemed determined to manufacture urgency, the absence of drama stood out. It felt almost countercultural to say it out loud, but it was true.

I’ve spent more than 500 posts championing innovation, asking “what if,” and pushing against “we’ve always done it this way.” I’ve written about AI, about rethinking assessment, about challenging assumptions. This blog is called Culture of Yes for a reason. I believe in trying things.

So let me be clear: this isn’t a retreat from any of that.

But here’s what I’ve come to believe in this work: steadiness is a strategy. And it might be the most underrated one we have.

This tension between innovation and improvement isn’t new. It’s been a sustaining conversation in education for much of this century, and I’ve returned to it in different ways on this blog. In 2011, I wrote about Valerie Hannon’s “split screen approach,” the idea that we need to improve the system of today while simultaneously designing the system of tomorrow. In 2013, I used the movie Groundhog Day to warn against simply repeating each year a little better, noting that we want to teach for 25 years, not for one year repeated 25 times. And in 2017, I explored the tension between getting better and getting different, and found that when we embrace doing things differently, traditional results often improve too.

So what’s changed in my thinking?

Maybe this: I’ve come to see steadiness not as the opposite of innovation, but as its prerequisite.

We live in a world that celebrates disruption. We reward the bold move, the big announcement, the pivot. In education, we talk constantly about reimagining and transformation. The language of change is everywhere.

And some of that is good. Schools should absolutely be places of wonder and joy and amazement. We should try new things. We should ask hard questions about whether what we’re doing is actually working.

But there’s a difference between innovation and improvement. Innovation asks, “What’s new?” Improvement asks, “What’s better?” Both matter. The problem is that improvement is quieter. It doesn’t photograph well. It rarely makes the newsletter.

Innovation introduces variance. Improvement reduces variance. Healthy systems need both.

I often come back to a phrase I’ve borrowed from others over the years: you don’t have to be sick to get better. That reframes the whole enterprise. We’re not in crisis mode. We’re not fixing something broken. We’re refining something that’s working, and that kind of work requires patience, repetition, and a willingness to resist the shiny thing.

What makes that kind of slow, steady improvement possible? Trust.

And trust, in a school system, is built through consistency. When the Board is consistent with its expectations, the executive team can plan. When the executive team is consistent, principals can lead. When principals are consistent, teachers can teach. When teachers are consistent, students can learn. That chain isn’t bureaucracy. It’s infrastructure. It is the solid ground that lets people take risks, because they know the foundation won’t shift beneath them.

Sometimes progress looks like not having to explain the same thing again.

I’ll admit something. Earlier in my leadership journey, I felt pressure to prove myself through visible wins. The flashy initiative. The big rollout. The thing you could point to and say, “I did that.” It’s natural. When you’re newer to a role, you want to show you belong there.

Somewhere along the way, that shifted. Maybe it’s experience. Maybe it’s just getting older. I have become more comfortable letting the work speak quietly. The best days in our schools aren’t the ones that make headlines. They are the ones where a student finally understands something that has been just out of reach. Where a teacher tries something new and it lands. Where a conversation in a hallway changes a kid’s trajectory.

None of that trends. All of it compounds.

So yes, I’ll keep advocating for wonder and joy and amazement in our schools. I’ll keep pushing us to ask whether we’re doing right by every student. But I have also made peace with something: the most important work often looks, from the outside, like nothing is happening at all.

Steadiness doesn’t make headlines. But it makes a difference.

The image at the top of this post was generated through AI. Various AI tools were used as feedback helpers (for our students this post would be a Yellow assignment – see link to explanation chart) as I edited and refined my thinking.

I have been thinking a lot about assistance lately. Who gets it, who does not, and why we suddenly get moralistic about it the moment the assistance comes from AI.

The spark for this post is Nick Potkalitsky’s Substack essay, “In Praise of Assistance.” It is one of those pieces that does not just add to the AI and writing conversation. It reframes it (Thanks to Adam Garry for pointing me towards it).

Nick starts from the now familiar worry about “cognitive offloading,” students delegating the thinking to a tool, and he agrees the concern is real. But then he names what often sits underneath the concern: not just pedagogy, but ideology.

He argues that the cognitive offloading critique rests on “a historical fiction: the autonomous learner.” Because if we are honest, most of us did not learn to write (or think, or revise) alone.

My own invisible advantage

In high school, I had a huge advantage: my dad was an English teacher, and he read every essay before I submitted it. Not just English essays. All of them, across every subject.

He did not write my essays. But he did what good teachers do. He asked questions I had not thought to ask. He pointed out where my logic sagged. He helped me tighten sentences. He coached me toward clarity.

That continued through university. And years later, when I became a newspaper columnist, he was still my first reader. Every column went to him before it went to my editor. He would call with suggestions, and I would decide what to keep and what to let go.

At the time, nobody called this cheating. We called it support. Nick puts it simply: “Students have always learned through assistance. From peers, from teachers, from resources…” 

We rarely worry students are “offloading” onto classmates in a discussion. We celebrate it. But when AI enters the picture, suddenly assistance becomes suspect.

That is the tension.

The question is not “help or no help”

When we talk about AI and writing, the debate often collapses into a binary: real writing (alone, unaided) versus fake writing (assisted, scaffolded).

But that binary does not match how writing actually works. It does not match how learning has ever actually worked.

The better question is the one Nick keeps pointing us toward: what kind of assistance builds thinking, rather than replacing it?

That is where his essay becomes more than a defense of AI. It is a critique of an unspoken standard that has been unevenly distributed for a long time. The idea that “authentic struggle” is the price of admission to learning.

Nick names the class-based reality bluntly: affluent students often have “small seminars, writing conferences, office hours, peer review sessions” while others are in systems where meaningful feedback barely exists. And then comes the sentence: “The outcome depends on whether we recognize assistance for what it is: not a threat to learning, but its precondition.”

What I have been writing toward

In October, I wrote “Modeling AI for Authentic Writing.” If AI is here (it is), then our job is to model the kind of use that keeps the writer in control. In that post, I tried to move the conversation from “Don’t use AI” to “Show your decisions.”

Because the heart of authentic writing is not whether you had help. It is whether your thinking is present. What did you accept? What did you reject? Why? What did you learn in the revision?

I wrote then: “None of this replaces judgment. I accept or reject every change.”

For years, Tricia Buckley, and before her Sharon Pierce and Deb Podurgiel, have played a similar role here on this blog, reading every post before publication and offering feedback. The byline is still mine because the ideas, voice, and final choices are mine.

That is the point.

Assistance is not the enemy of learning. Abdication is.

What I want to add

There is a system design question underneath that I keep circling back to.

If we accept that all learning has always been assisted, what changes about how we run schools?

A few weeks ago I wrote about the tutoring revolution and found myself wrestling with a similar tension. For years, success in certain courses quietly required something extra: a tutor. Parents traded recommendations, students admitted they needed help, and the whole system ran on an unspoken understanding that school alone was not enough. At least not for everyone.

AI is changing that. But here is the part that worries me: the digital divide is no longer just about device access. It is about knowing how to use the tool well. A student with strong digital literacy might turn ChatGPT into a Socratic tutor. Another might never get past using it as a homework completion machine.

Nick writes about elite students who have always had access to “assistance made flesh.” The risk now is that we create a new version of the same divide. Some students learn to collaborate with AI in ways that deepen their thinking. Others use it to bypass thinking altogether. And if we are not intentional, digital confidence becomes the new proxy for privilege.

The question is not whether students will have AI assistance. They already do. The question is whether we will teach them to use it in ways that build capacity or let the gap widen on its own.

A Culture of Yes stance

A Culture of Yes does not mean saying yes to every tool or every shortcut.

It means saying yes to the conditions that help more people learn well.

So here is where I am landing, at least today.

Writing has always been assisted. The myth of the autonomous writer has always favoured students with the most support. AI can absolutely be used to bypass thinking. But it can also be used to invite thinking, especially where feedback is scarce.

Our job is to design and model practices where assistance makes thinking visible and growth possible.

Nick’s essay refuses the easy frame. It asks us to stop policing help and start building learning communities where help is normal, explicit, teachable, and more equitably available.

That feels like the kind of “yes” worth defending.

The image at the top of this post was generated through AI. Various AI tools were used as feedback helpers (for our students this post would be a Yellow assignment – see link to explanation chart) as I edited and refined my thinking.

For years, there has been a quiet understanding in many high schools that success in certain courses, especially senior math and sciences, required something extra. Not more effort or better attendance, but a tutor. Parents would trade recommendations, students would quietly admit they needed one, and tutoring centres would advertise that “everyone needs help.” In some communities, especially affluent ones, paid tutors became part of the culture, almost an unspoken prerequisite to keeping up.

That world may be coming to an end.

AI has entered the tutoring business, and it does not take nights or weekends off. For the first time, students have access to personalized, immediate feedback and explanations any time they need it. They can ask follow-up questions without embarrassment, get alternative explanations, and have complex problems broken into smaller steps. All of this is available for free, or for the price of a phone app. The model that tutoring companies built around scarcity and exclusivity is being replaced by abundance and accessibility.

It is not only about convenience. Tools like ChatGPT, Claude, and Magic School AI can act as math coaches, writing mentors and language partners. They remember the work, adapt to a student’s level, and adjust explanations when the learner gets stuck. The value proposition that human tutors once held, personalization, is becoming a default feature of modern AI systems.

Just last week, one of our Grade 12 students shared how she had been struggling with integration by parts in calculus. Instead of waiting for a weekly tutoring session, she worked through problems with an AI tutor at 11 p.m., asking it to explain the same concept three different ways until it clicked. “It never got frustrated when I asked the same question again,” she said. “And I could be honest about what I did not understand.”

When I first started drafting this piece, I was ready to declare the end of the tutoring era. The evidence seemed clear. The assumption that you need a tutor to survive Pre-Calculus is being upended. For many students, the AI sitting quietly on their laptop or phone now fills that role, often better and more patiently than the Saturday morning sessions they once dreaded.

Then I started reading the research. And my thinking got more complicated.

What the Research Actually Shows

The October 2025 edition of AASA’s School Administrator magazine dedicates significant space to the state of tutoring in American schools. AASA is an American based organization, but the questions it raises cross borders easily. The tension between equitable access and quality instruction, the challenge of sustaining initiatives beyond initial funding, the promise and limits of technology in supporting learners: these are Canadian conversations too. The research may come from Texas and Massachusetts, but it speaks directly to what we are wrestling with in British Columbia and across the country.

Liz Cohen, in her article drawing from her book The Future of Tutoring: Lessons from 10,000 School District Tutoring Initiatives, documents an unprecedented expansion. Within a year of the pandemic’s onset, 10,000 U.S. school districts were offering some form of tutoring after years of almost none. By May 2024, 46 percent of public schools reported providing high dosage tutoring, and just 13 percent said they offered no tutoring at all.

Research from the Johns Hopkins Center for Research and Reform in Education, featured in this issue, offers evidence that virtual tutoring with human tutors can produce meaningful results. Grade 1 students who attended Air Reading, a structured virtual tutoring program, four times a week for a semester gained nearly 1.6 additional months of learning. Those who attended at least 40 sessions saw even greater progress.

But here is the tension that caught my attention: the research consistently shows that the most effective tutoring models still rely on human tutors. Studies on AI tutoring directly with students remain in early stages, and even the most promising work positions AI as supporting human tutors rather than replacing them.

I had to sit with that for a while.

The Hybrid That Works

One case study that helped my framing was the work happening in Ector County ISD in Texas. In partnership with Stanford University, they developed something called Tutor CoPilot. It uses AI not to tutor students directly, but to coach human tutors in real time, suggesting questions to ask, concepts to revisit, hints to offer.

The results are striking: students whose tutors used the AI prompts scored 14 percentage points higher than those whose tutors did not. The AI shifted tutors toward stronger pedagogy, guiding student thinking rather than simply giving away answers. And here is the part that matters most for equity: the greatest benefits went to less experienced tutors. The tool essentially democratized tutoring quality, helping novice tutors perform nearly as well as veterans.

This is not AI replacing humans. This is AI and humans amplifying each other.

What AI Cannot Yet Do

Cohen’s research surfaces something that pure AI cannot yet replicate. The success of tutoring, she argues, is deeply rooted in human relationships. It helps young people feel they matter. It builds motivation through productive struggle in a high-support, high-standards environment. (This podcast is also good background on Cohen’s work.)

There will still be families who seek human tutors, especially for accountability or emotional connection. Some students need the structure of showing up, the social pressure of not wanting to disappoint someone, or simply the reassurance of a person saying “you’ve got this.” AI has not yet mastered the art of knowing when a student needs a break, a pep talk, or someone to believe in them.

The question is whether it will, and how soon.

The New Digital Divide

For schools, this raises urgent questions. Do we teach students how to use AI tutors effectively? How do we ensure that all students, not only the digitally confident, benefit from these new tools?

The digital divide is no longer just about device access. It is also about knowing how to prompt effectively, when to question an AI response, and how to use these tools for learning rather than answer getting. A student with strong digital literacy might turn ChatGPT into a Socratic tutor. Another might never get past using it as a homework completion machine. If we are not careful, digital confidence becomes the new proxy for privilege, only with different packaging.

There is another issue to face. If every student has a tutor at all hours, what does authentic assessment look like? How do we measure understanding when the line between getting help and getting answers is blurred? This is not a reason to resist change. It is a reason to rethink what we are measuring and why.

What I Got Wrong, and What I Got Right

The shift is cultural as much as it is technological. For years, tutoring companies helped reinforce the idea that school alone was not enough. Now, AI is challenging that notion and putting powerful learning tools directly in the hands of students. I was right about that.

But the real revolution may not be the end of tutoring. It may be its transformation.

This changes the teacher’s role as well. When information delivery and step-by-step support are available on demand, teachers become something more valuable. They become learning architects who design rich tasks. They become coaches who know when to push and when to support. They become mentors who help students navigate not only content, but the process of learning itself. The human element does not disappear. It becomes more essential, only with a different focus.

We may soon look back on the tutoring era the way we look at encyclopedias and phone books. Useful for their time, but unnecessary once the world changed. Or we may find that the future looks more like Ector County: AI and humans working together, each amplifying what the other does best.

Maybe what we should have wanted all along was not a system where extra help was a luxury, but one where every student has access to the support they need, when they need it, in the form that works best for them. Whether that form is human, AI, or some combination we have not yet imagined.

The question is not whether this change is coming. The question is whether we will shape it with intention, or let it happen to us.

Thanks to Liz Hill and Andrew Holland, with whom I had recent conversations that helped inspire this post.

The image at the top of this post was generated through AI. Various AI tools were used as feedback helpers (for our students this post would be a Yellow assignment – see link to explanation chart) as I edited and refined my thinking.

Last week, I sat in an education conference listening to a keynote speaker who was absolutely unequivocal about it: students must learn prompt engineering or they will be left behind. The speaker was passionate, convincing even, about how this was the essential skill for the next generation. And as I sat there, I found myself thinking: really? Is this truly the skill we should be racing to embed in every curriculum?

Lately, I keep hearing that prompt engineering, the ability to write clever and precise instructions for AI, is the new super skill every young person needs to master. The idea is that those who can “talk to the machine” will be the ones who thrive in the age of generative AI.

And I get it. For now, it is true. Anyone who spends time with AI knows that the way you ask matters. A well-structured prompt can turn an average response into something remarkable. I have seen entire professional development sessions focused on how to write the perfect prompt.

But I keep wondering if this is really a future skill or simply a transitional one.

We have been here before. About a decade ago, coding was the next great literacy. We were told that all students needed to learn to code or they would be left behind. And while understanding logic, pattern recognition and computational thinking remains valuable, few would now argue that every student must become a programmer. The tools evolved. The interfaces changed. Knowing how to code shifted from a universal requirement to an optional asset.

I suspect the same will happen with prompting. The models are already becoming much more forgiving. Early versions of AI required carefully worded instructions and detailed context. But each new generation of large language models has become better at interpreting vague or natural language. They are now more context aware, more visual and better aligned with human intent. The need for carefully engineered prompts is already beginning to fade.

Even the interfaces are changing. Most people will not type directly into chatbots in the future. They will use AI features inside tools such as Google Docs, Canva or Notion that quietly handle the prompting behind the scenes. The software will translate our natural requests such as “summarize this,” “improve the tone,” or “make it more visual” into optimized prompts automatically. Just as we no longer type code to open a file, we will not need to craft perfect prompts to get great AI output.

There may be a split happening here. For most of us, prompting will become invisible, handled by the interface layer. But specialized roles might still require deep prompt engineering expertise for critical systems or highly creative work where nuance matters. It could mirror how we still have systems programmers even though most people never write a line of code.

Modern AI systems are also being trained on millions of examples of strong instructions and responses. They have learned the meta-skill of interpreting intent. Clear and simple language now produces excellent results.

So if the technical part of prompting is becoming less necessary, what remains essential? The human part. Knowing what to ask. Evaluating whether the answer is right. Recognizing when a response is insightful, biased, or incomplete. The real differentiator will be judgment, not phrasing. The skill will not be in writing prompts but in thinking critically about what those prompts produce.

There is something deeper here too. The enduring skill might be what we could call AI collaboration literacy: the ability to iterate with AI, to recognize when you are not getting what you need, and to adjust your approach, not just your words. It is less about engineering the perfect prompt and more about developing a productive working relationship with these tools.

It reminds me of the evolution from coding to clicking. Early computer users had to memorize complex commands. Now, we all navigate computers intuitively. Prompt engineering feels like today’s command line, a temporary bridge to a more natural future.

So yes, teaching students to think like prompt engineers has value. It helps them be clear, curious and reflective. But perhaps the goal is not to create great prompters. It is to create great thinkers who can:

  • Articulate clear goals and constraints

  • Recognize the difference between excellent and mediocre output

  • Maintain healthy skepticism and verification habits

  • Understand when AI is the right tool versus when another approach works better

  • Iterate and refine their collaboration with AI systems

These capabilities feel more durable regardless of how the interfaces evolve.

Maybe I am wrong. Maybe prompt engineering will become a lasting communication skill. But before we rush to build it into every curriculum, it is worth asking whether we are chasing a moving target, and whether we should focus instead on the deeper cognitive skills that will matter no matter how we end up talking to machines.

As always, I share these ideas not because I have the answers but because I am still thinking them through. I would love to hear how others are thinking about this from where they sit.

The image at the top of this post was generated through AI. Various AI tools were used as feedback helpers (for our students this post would be a Yellow assignment – see link to explanation chart) as I edited and refined my thinking.

Inspired by the recent Learning Forward BC conversation on human flourishing and AI.

Last week, I spent three hours tweaking a PowerPoint presentation I had already received help with. At the same time, I had to decline a visit to an elementary class exploring AI tools. The irony? While I was perfecting slides, they were shaping the very future I was supposed to be leading them toward.

If we are honest, most of us superintendents spend far too much of our energy doing work that does not require the full force of our humanity. We draft versions of the same report again and again for different audiences. We shuffle through data systems, chase signatures, and repackage findings. It is necessary work, but is it what we were called to do?

At a recent Learning Forward BC event on The Intersection of Human Flourishing and AI, that question hit home. We were exploring how technology might liberate, not limit, our humanity in education. It made me wonder: What if AI could take over significant portions of our work as leaders? What would we hand over, and what would we fight to keep?

Why This Matters for Leaders

I have written a lot on this blog about how AI is reshaping the work of teachers and students. But we need to look just as critically at our own work as superintendents and senior leaders. If we expect educators to rethink assessment, planning and feedback in an AI-rich world, then we must also examine the way we lead, communicate and make decisions.

The truth is that the same technology that can help a teacher personalize learning or a student write an essay can also help a superintendent analyze data, summarize reports or draft correspondence. AI is not only changing classrooms. It is changing the nature of leadership itself.

And yes, I am sure some superintendents might already be wondering if a chatbot could replace them at board meetings. But since I know my trustees often read this blog, I will not take the chance of testing that particular joke here.

The Question That Changes Everything

The OECD (Organisation for Economic Co-operation and Development) Education for Human Flourishing framework reminds us that our purpose in education is to equip people to lead meaningful and worthwhile lives, oriented toward the future. If that applies to students, it applies to our leadership too.

So whether it is 30 percent, 50 percent, or even 70 percent of what we currently do, the question becomes: What would we hand over to AI, and which tasks would we hold on to because they matter most?

What We Could Let Go Of

AI is already remarkably good at tasks that drain our time but not our meaning:

  • Drafting first versions of reports, memos and letters
  • Crunching and summarizing enrolment or survey data
  • Managing meeting notes, calendars, reminders and task lists
  • Building templates, presentations and standard job postings
  • Drafting policy or procedural documents for refinement

These are automation, not animation. They do not require empathy, judgment, or nuance, only accuracy and speed. That is AI’s strength.

What We Must Protect

What we must protect, deliberately, are the moments of human connection, purpose and complexity:

  • Sitting with a parent whose trust in the system has eroded
  • Listening deeply to a principal wrestling with burnout or vision
  • Reading the room in a board meeting and knowing what not to say
  • Inspiring staff to believe in something greater than their daily tasks
  • Recognizing a student’s spark when they realize someone believes in them

These are leadership moments: irreducible, unautomatable and profoundly essential.

Leading for Human Flourishing

The OECD highlights three human competencies that AI cannot fully replicate: adaptive problem-solving, ethical decision-making and aesthetic perception.

Adaptive problem-solving: When a community crisis hits and there is no playbook, whether a sudden school closure, a traumatic event, or a divided community, we respond with creativity born from experience and intuition.

Ethical decision-making: When budget cuts force impossible choices between programs, when we must balance individual needs against the collective good, when integrity demands the harder path, these moments require moral courage that no algorithm can calculate.

Aesthetic perception: Recognizing when a school’s culture shifts from compliance to inspiration, sensing the exact moment a resistant team begins to trust, and seeing beauty in a struggling student’s small victory. This is what makes leadership an art, not just a science.

AI can mimic these competencies, but it does not feel them. It may calculate empathy, but it cannot experience it or show it. As more of our routine tasks shift to AI, the invitation is clear: we reclaim the human half.

Creating a Culture of Yes

This is where AI becomes an enabler of possibility rather than a threat to purpose. When AI handles the bureaucratic “no” work, the forms, compliance checks and procedural barriers, we create space for the human “yes.”

Yes, I have time to visit your classroom.
Yes, let’s explore that innovative idea.
Yes, I can truly listen.

In a Culture of Yes, AI does not replace us. It liberates us to be more fully present for what matters. Every report AI drafts is a conversation we can have. Every dataset it analyzes is a relationship we can build. Every schedule it optimizes is a moment we can use to connect.

Getting Started

This is not about wholesale transformation tomorrow. It is about small experiments.

What one repetitive task could you delegate to AI this week? What human conversation would that free you to have?

Start simple:

Use AI to draft that routine memo, then spend the saved time walking the halls.

Let AI summarize survey data, then use your energy to discuss what it means with your team.

Have AI create the meeting agenda, then focus fully on reading the human dynamics in the room.

The goal is not efficiency for its own sake, but reclaiming time for what only we can do.

The Real Promise

The promise of AI in leadership is not efficiency, but rediscovery.

It is the chance to release ourselves from the burden of mechanical work and return to the heart of leadership: human connection, meaning and moral purpose.

Imagine walking into your office tomorrow knowing that the reports are drafted, the data analyzed and the calendar managed, all before your first coffee. Now you can spend your morning where it matters most: in classrooms, with people, making meaning.

Because in the end, the future of education will not belong to the most efficient systems. It will belong to the most human leaders, those who use every tool available to protect and amplify what makes us irreplaceably human.

A Question to End With

I wonder if my list looks like yours. What would you hand over to AI, and what would you hold tightly because it feels essentially human? I would be interested to hear how others are thinking about their human half.

The image at the top of this post was generated through AI. Various AI tools were used as feedback helpers (for our students this post would be a Yellow assignment – see link to explanation chart) as I edited and refined my thinking.

Read Full Post »

Across Canada, and in many other parts of the world, literacy screening is having a moment.

There is broad agreement that we need to be better at identifying students who may be at risk, and that we need to do this earlier. The push toward more consistent and universal literacy screeners makes a lot of sense: earlier identification leads to earlier intervention, and ultimately, better outcomes for kids.

But here’s the question that’s been nagging me: are we simply going to recycle the same kinds of screeners we have used for the last generation? Or can this be the moment to think differently about what screening could look like in an AI world?

What Screeners Do Well

Traditional screeners help us establish a baseline. They can tell us if a student is meeting expected benchmarks in areas like phonemic awareness, decoding, fluency and comprehension. They provide the data teachers need to take action.

The challenge is that screeners often leave a gap between assessment and action. A teacher receives a score and then has to translate that number into the “what’s next” for the student and their family. It’s useful, but not always immediate, personalized or engaging.

What AI Could Add

This is where I wonder if we are missing an opportunity. AI could allow us to rethink the very design of literacy screeners. Imagine if…

  • Texts were customized for cultural relevance. Instead of one-size-fits-all passages, AI could generate short reading texts tailored to the learner’s context, interests or community. A child on the North Shore might read about the Capilano River, while another in Surrey reads about the Pattullo Bridge reconstruction. For Indigenous learners, this could mean texts that reflect Indigenous ways of knowing and storytelling traditions, developed in partnership with local Nations. The text would still be controlled for vocabulary and difficulty, but it would feel more real and more personal.

  • Feedback was immediate and audience-specific. A student could receive a friendly message highlighting a win (“You read 80 words per minute—your smoothest word was ship”) and a tip for next time. Families could receive a plain-language summary with simple routines for home (“Read together for 10 minutes tonight; circle the words that start with sh”). Teachers could receive a strand-level profile with small-group suggestions, not just a number on a page.

  • Practice was built-in. Instead of waiting for the next lesson, a screener could instantly generate a few targeted practice items based on the patterns the student struggled with, turning assessment into a learning moment.

What This Isn’t

To be clear, this isn’t about replacing teacher expertise or professional judgment. Teachers would still interpret results, make instructional decisions, and build the relationships that matter most.

And this isn’t about creating more data for data’s sake. It’s about making the data we already collect more immediately useful—for students, for families and for teachers.

Safeguards Matter

Of course, any AI use comes with important guardrails. Automated scores would need validation against human judgment, with teachers maintaining override authority. Generated texts would require review for accuracy, bias and cultural safety. Indigenous content, in particular, would need to be co-designed with local Nations and aligned with principles of data sovereignty, ensuring that AI tools serve rather than appropriate Indigenous knowledge.

Quality oversight would need to be built in from day one, with regular audits and continuous monitoring to prevent the kind of drift that could undermine both accuracy and equity.

A Narrow Window

Here’s what makes this moment unique: jurisdictions are investing in new screening initiatives right now. We have a narrow window to influence how these tools are designed. If we don’t explore these possibilities now, we risk locking in approaches that simply digitize yesterday’s thinking.

I am not a literacy expert. But as someone who has watched technology reshape almost every other part of our schools over the last two decades, I see a pattern. The organizations that thrive are the ones that ask not just “how can we do what we’ve always done, but faster?” but “what becomes possible now that wasn’t possible before?”

The Question We Should Be Asking

The push for literacy screening is the right one. The evidence on early identification and intervention is clear. But we also have a unique opportunity to do more than just import the same tools from the past.

What if, instead of only identifying students who need help, our screeners could also immediately provide that help?

What if they could engage families in ways that feel supportive rather than clinical?

What if they could give teachers not just data, but insight?

AI won’t replace the expertise of our teachers or the relationships that matter most. But it might make our tools more immediate, more relevant and more effective for every child.

The question isn’t whether we should innovate. The question is whether we will seize this moment to innovate thoughtfully—or let it pass by.

What new possibilities are you seeing in your corner of education? And how do we make sure we are not just replicating the past with shinier tools?

Thanks to West Vancouver Schools District Vice-Principal Mary Parackal, who really pushed my thinking in creating this post around what might be possible with AI.

The image at the top of this post was generated through AI. Various AI tools were used as feedback helpers (for our students this post would be a Yellow assignment – see link to explanation chart) as I edited and refined my thinking.

Read Full Post »

How I draft, edit, and stay human in the loop

For years I believed my advantage was “writing.” Lately I’ve realized the real edge was not keystrokes, it was ideas, structure, and voice. AI has not erased those. If anything, it has made them more important. Rather than pretend we are still in a pen and paper world, I have been trying to model what authentic writing looks like now.

We do not protect writing by banning the tools everyone already has. We protect writing by showing what thoughtful use looks like, and by being transparent about our process.

What I am hearing, especially in humanities

Last week, a high school English teacher stopped me. “I can tell when something has been AI generated,” he said, “but I cannot tell when they have collaborated with it thoughtfully. And I do not know what to do with that.”

He is not alone. Across our humanities departments, teachers are working on the fly, trying to maintain academic integrity while recognizing that the old gatekeeping moves, ban the tool and police the draft, do not hold when every student has ChatGPT in their pocket. The fear is real. Are we farming out the exact skills we are supposed to be teaching?

I do not think the answer is choosing between integrity and innovation. It is redefining what integrity looks like when the tools have changed.

How I actually write

I still start the old fashioned way, an outline, a thesis, a few proof points, and usually one sentence I think could be the closer. From there, I treat AI like a colleague, not a ghostwriter.

  • Editing help. I ask for a clarity pass, tighten verbs, fix hedging, and check whether my headings are parallel. Here is what I actually typed for this piece: “Revise for clarity and concision. Keep a conversational, hopeful tone similar to my other blog posts. Offer two options for the opening sentence.” I kept one, rejected the other, and moved on.

  • Skeptic check. “What would a fair skeptic say after reading this?” It surfaces blind spots before I hit publish.

  • Reports and formatting. For formal documents, I use AI to turn tables into charts, crunch numbers, and reshape dense text into something readable.

  • Speeches. I keep a base grad speech and add school-specific stories and names. AI helps blend those elements while keeping the message consistent.

None of this replaces judgment. I accept or reject every change. If a suggestion dulls my voice, it is out. That is the standard. My judgment stays in control. I also disclose what I did, every time. A short note at the end of a post goes a long way with our community and models the behavior we ask of students.

What I encourage for classrooms and staff rooms

The most helpful shift has been moving from “Do not use AI” to “Show your decisions.”

  • Model, then mirror. I demo my messy paragraph, ask AI for a clarity edit, then accept or reject in real time while explaining why. Students should bring their draft, try the same process, and compare choices.

  • Assess the thinking. Rubrics weight claims, evidence, organization, and audience impact, not who placed the comma.

  • Make the process visible. Version histories in Docs or Word, plus brief process notes that list tools used, prompts asked, and choices made, make learning visible and deter abdication of thinking.

  • Cite the workflow. Not to catch people out, but to name steps we can teach.

Guardrails that keep the work honest

  • No blank page outsourcing. Start with your outline, thesis, or key points.

  • Ask precise questions. “Cut 10 percent without losing meaning. Keep my conversational tone.”

  • Verify facts. If AI offers a claim, check it before it lands in public.

  • Always disclose. If a tool shaped meaning or form, say how.

Is this just cheating with better branding?

I have never believed collaboration was cheating. When I wrote a newspaper column, my dad, a retired English teacher, was my unofficial copy desk. He proofread, edited, and offered suggestions on every draft. The byline was still mine because the ideas, voice, and final choices were mine.

Tricia Buckley, and before her Sharon Pierce and Deb Podurgiel, all staff in West Vancouver Schools, have read every blog post here before it was published and provided feedback.

AI sits in that same category for me, a helper, not a ghostwriter, and always subject to human judgment. What changed with AI is speed, scale, and availability. I can get feedback at 11 p.m., run ten drafts in twenty minutes, and the tool is always on. What did not change is my judgment, my responsibility for choices and my name on the work.

If the goal is proving you can type unaided, then yes, tools muddy the waters. Our goal in schools is thinking for real audiences. We have always used supports: outlines, spellcheckers, style guides, writing partners, rubrics and colleagues. The standard should be integrity and evidence of learning, not tool abstinence.

Equity

AI is a ramp, not a shortcut.

It helps stuck writers get moving: the student staring at a blank page who needs a sentence to react to, or the English language learner who can articulate ideas verbally but struggles with syntax. AI can generate that first sentence, and suddenly the student has something to revise, reject, or build on. For strong writers, it is a way to go deeper, test alternate structures, get a skeptic to read, or polish a conclusion without losing momentum.

The equity move is not banning tools for everyone. It is teaching how to use them responsibly, and ensuring access to good instruction is not the new dividing line. When we teach tool literacy, we level up. When we ban tools students already have, we make the learning invisible.

Prompts that actually help

  • Clarity pass: “Revise for clarity and concision. Keep a conversational, hopeful tone. Offer two options for the opening sentence.”

  • Skeptic lens: “List the strongest fair minded critiques of this piece and one concrete improvement for each.”

  • Structure check: “Are these headings parallel? Tell me how to fix them without changing the ideas.”

  • Audience flip: “Rewrite the conclusion as guidance to parents in about 120 words.”

  • Report polish: “Turn this table into three plain language insights and a simple chart title. Flag any numbers that look inconsistent.”

What I tell our community

  • We are pro-writing and pro-truth. We will use modern tools and we will say when we did.

  • We value voice. Your voice should be recognizable across drafts and tools.

  • We lead with learning. If a tool helps learning, we will teach it. If it replaces thinking, we will not.

If you want more

Last week I facilitated a Hot Topic discussion, “The Future of Writing in an AI World,” at the Canadian K12 School Leadership Summit on Generative AI.

North Star

I can spend my time lamenting that writing once felt like my competitive edge, or I can double down on the edge that still matters, clear thinking, vivid stories and the courage to be transparent about how we work. That is the blended human and AI writing world I want to model for students and staff.

The teacher who stopped me in the hallway was right to be uncertain. We are all figuring this out in real time. I would rather figure it out in the open, and model a messy and honest process, than pretend the tools do not exist.

AI transparency note: I drafted this post myself, then used ChatGPT and Claude for a clarity edit and a skeptic read. I accepted some wording suggestions and rejected others to preserve voice. The image at the top of the post was created through a series of prompts using Claude.

Read Full Post »

I recently gave a virtual talk on AI in schools, which forced me to solidify my current thinking and to make some direct linkages to the Culture of Yes belief. I have included the video at the bottom, and this post is an adaptation of the talk:

This summer, AI in education has gone from a quiet undercurrent to a headline wave. Major corporations have announced new AI-powered tools for classrooms. Governments, particularly in the United States, have released statements, strategies, and funding commitments to “prepare schools for the AI era.” There is a growing sense of both excitement and urgency that this technology will profoundly reshape learning.

As we head into the fall, the question for me is not whether AI will change education. It already has. The real question is: Will we guide this change with wisdom, or will it guide us?

Where We Are:

We are in a moment of intense attention and investment. For the first time in history, students have instant access to a form of intelligence that can write, create, and problem-solve alongside them. The conversation has shifted from “Should AI be in schools?” to “How do we use it well?”

The opportunities are extraordinary, and so are the risks. In our rush to adopt tools, we can easily mistake activity for progress. AI is not a magic box. It reflects the data and the biases we feed it. Without careful integration, we risk amplifying inequities instead of closing them.

At the same time, teachers are navigating new pressures: learning unfamiliar tools while managing existing workloads, and working with students who arrive with vastly different levels of AI experience and access.

What I Hope:

In West Vancouver, our innovation priorities are as bold as they are deliberate: AI and physical literacy. Together, they reflect our belief that the future belongs to students who are digitally fluent, physically confident and deeply human.

My hope for AI is that it:

Amplifies human wisdom rather than replacing human intelligence.

Delivers personalized learning that has long been promised but rarely achieved.

Serves as a force for equity, not by assuming all students need the same thing, but by providing each student with the individualized support they need, regardless of their school’s resources or their family’s circumstances.

Frees up teachers’ time for what matters most: relationships, mentorship and inspiration.

In a Culture of Yes, we approach these possibilities with openness while remaining thoughtful about implementation.

What We Need to Do:

Focus on the Shift: From Memory to Meaning

For over a century, schools rewarded students who could store and retrieve information. AI upends that rote-memorization game. We must now prioritize what students do with the knowledge — how they apply it, question it, and create from it.

Equip Students as Creators, Not Just Consumers

In a Culture of Yes, we say yes to new possibilities while maintaining academic integrity. AI becomes a collaborator for composing music, designing solutions to local challenges and exploring ethical dilemmas we have never faced before, not a replacement for student thinking.
Imagine a Grade 9 student co-writing a play with AI, then performing it with peers, learning as much about collaboration and creativity as they do about technology.

Develop New Literacies

AI literacy is more than knowing how to use a tool. It is the ability to:

Prompt effectively and creatively.

Evaluate outputs for accuracy and bias.

Reflect on whether AI use aligns with human goals and values, and recognize when not to use it.

Understand the difference between AI assistance and AI dependence.

Lead Through Diffusion, Not Mandate

A Culture of Yes means saying yes to teacher curiosity and experimentation. The best AI integration spreads from teacher to teacher, classroom to classroom, through shared practice and professional learning, not top-down directives that ignore classroom realities. When your colleague in the classroom next door has something exciting to share, you are keen to listen.

Keep Humanity at the Core

AI can provide information, but only people provide inspiration. AI can offer feedback, but only people offer hope. We must ensure that every learning experience remains fundamentally about human connection and growth.

Looking Ahead

The age of AI is not coming, it is here. As educators, leaders, and communities, we face a choice that will shape the next generation’s relationship with both technology and learning itself.

A Culture of Yes means we choose:

Curiosity over fear

Collaboration over competition

Wisdom over efficiency

Human potential over technological convenience

If we embrace this approach, saying yes to AI’s possibilities while saying yes to our students’ humanity, we will not just reimagine learning. We will create classrooms where technology serves human flourishing, where every student can thrive, and where the future we are building together reflects our highest aspirations for education.

The conversation about AI in education is just beginning. As we step into this new school year, I invite you to share your hopes, your experiments, and your questions. We learn best when we learn together.

Various AI tools were used as feedback helpers (for our students this post would be a Yellow assignment – see link to explanation chart) as I edited and refined my thinking.

Read Full Post »

This summer, I read a book.

Not a blog post. Not a podcast transcript. Not a long-form article on Substack. A real book. A paper one. With pages. About a hundred of them.

It’s the first book I’ve finished in longer than I care to admit. Somewhere along the way, my attention span got swept up in an endless stream of digital content—quick hits, hot takes, clever clips, and smart commentary that rarely lasts more than a few minutes. And I’ve told myself it’s the same. That reading a dozen thought-provoking pieces online is just as good as reading one book cover to cover.

But this was different. And, honestly, it was hard.

I had to put my phone in another room. I had to sit in silence. I had to fight the urge to check notifications or ask ChatGPT about a passing reference. It felt almost foreign. But also… refreshing. Grounding. Satisfying.

The book was about education and artificial intelligence, written by a colleague whose thinking I admire. I have been meaning to read it for a while. I picked it up partly out of curiosity, partly out of professional obligation. But as I worked to stay focused on pages discussing how AI is reshaping schools, student agency and even our attention spans, the irony wasn’t lost on me. Here I was, struggling to sustain focus, something the book suggests may be eroding in the digital age.

Is that a bad thing? I’m still not sure. Maybe our brains are adapting to new ways of thinking. But there was something undeniably satisfying about the deep, slow engagement that a book demands. A different kind of thinking. A different kind of learning.

And, perhaps most importantly, a different kind of accomplishment.

I know some may say I’m showing my age by still believing books matter. But if that’s the case, I’ll own it. Because this experience reminded me there’s still something powerful in sitting still, slowing down and immersing yourself in one sustained idea.

And here’s the twist: I think I might read another.

The joys of summer.

Read Full Post »


How do we equip young people for a world that might advance beyond our expectations?

Mat Balez is a local West Vancouver parent. We have had several great conversations in recent years — and just last week, we sat down to talk about Replit, the future of coding, and how fast the world is changing for our kids. I find myself wondering more and more: How do we prepare children for a world that is changing faster than our educational systems can adapt?

Mat is one of those people I really value in my professional orbit — someone outside the day-to-day education system who from time to time sends me articles and ideas that push my thinking.

Recently, he posted a tweet thread that’s been sitting with me. It starts with a bold premise: Let’s assume superintelligence is going to happen within the next decade.

Then comes the question that matters most to people like us: What does that mean for how we raise and educate our kids?

Of course, there are valid debates about the timeline for superintelligence. Some experts suggest it could be several decades away, while others point to the exponential progress we are seeing as evidence of a shorter horizon. Regardless of whether it arrives in ten years or thirty, the direction is clear — the implications for education are worth considering now.

Mat outlines five big ideas:

Teach AI “super literacy”

Make independent thinkers

Invest in scarcity

Preserve human connection

Double down on the basics

It’s a strong list — one worth amplifying and building on. And as someone who thinks a lot about learning, change, and leadership, I see it as both a roadmap and an invitation.

1. Teach AI Super Literacy
Mat’s right: AI is fast becoming a foundational skill. Not just for those working in tech, but for all of us navigating modern life.

But AI literacy needs to go beyond technical fluency. It’s not enough to know how to use the tools — we also need to understand their implications. What’s trustworthy? What’s ethical? What’s human?

We are raising kids who won’t just use AI — they’ll live in it. And the goal isn’t to be better than other humans at AI. The goal is to be more human in an AI-saturated world.

In the classroom: In some schools, students are beginning to analyze AI-generated essays — for example, essays on climate change — using critical literacy frameworks. In small groups, they identify factual inaccuracies, spot potential biases, and discuss what the AI missed in terms of local context and human impact. These kinds of activities mark a shift: we are not just teaching kids to write, but to think critically about how ideas are generated — and by whom.

2. Make Independent Thinkers
This one hit especially hard. As AI gets better at producing answers, our job becomes helping students ask better questions.

Let’s teach them to think deeply, hold multiple ideas in tension, and resist the temptation to outsource all their thinking to machines. Let’s create learning environments where students develop the confidence — and the discipline — to work through ambiguity and challenge their own assumptions.

If the car can drive itself, we still need to remember (and learn!) how to steer.

In the classroom: Some teachers are experimenting with “first principles challenges” — problems students must tackle without digital tools. The goal isn’t to romanticize pre-digital learning, but to strengthen foundational reasoning and decision-making skills. These exercises help students better understand when to rely on AI — and when to trust themselves.

3. Invest in Scarcity
Mat uses this phrase to point us toward the qualities that remain uniquely human: creativity, emotional intelligence, trust and leadership.

It’s a powerful reminder that as automation rises, it’s not just what we do that will matter — it’s how we relate, how we empathize, how we build community.

We often talk about preparing students for the jobs of the future. What if we also prepared them for the relationships of the future?

That said, a small caution: I don’t think we should frame these traits as competitive advantages. Scarcity doesn’t need to become the next educational buzzword. These qualities matter not because they are rare, but because they make us whole.

4. Preserve Human Connection
There’s a quiet crisis unfolding in this generation — one of disconnection and loneliness. It is something I have written about before when discussing Jonathan Haidt’s latest book, The Anxious Generation.

As educators, we are in a position to protect what’s most essential: belonging, relationship, connection. Whether through daily check-ins, deep collaboration, or simply being fully present, we can model and foster real human interaction.

Technology is accelerating, but connection still happens at a human pace.

5. Double Down on the Basics
This is a beautiful reminder not to lose the thread. Despite all the disruption, there’s a lot that still works — and still matters.

Reading, writing, listening, speaking, thinking, moving. Respect, responsibility, kindness. These aren’t nostalgic ideas. They’re timeless ones.

So yes, let’s bring in the new. But let’s not forget what got us here.

Aligning With Our Commitments
Looking at Mat’s framework through the lens of our West Vancouver Schools commitments, I see powerful alignment. His emphasis on AI literacy and independent thinking directly supports our commitment to fostering innovation. The focus on doubling down on the basics reinforces our pledge to ensure strong foundations in essential skills. And perhaps most importantly, his call to preserve human connection reminds us that “all means all” — in a technological world, we must ensure no student loses access to the human relationships that make learning and life meaningful.

What would happen if we approached AI not as a replacement for human teaching, but as a catalyst for reimagining what human teachers can focus on? And how might we create spaces where students learn to view technology not as an inevitable force to surrender to, but as a set of tools they have agency to shape?

Getting Started: First Steps for Schools and Districts
For school leaders wondering where to begin, I’d suggest starting with a community conversation. Bring together educators, parents and students, and draw on local tech professionals as resources, to explore these ideas together. What does AI literacy mean in your context? What human capacities do you most want to nurture?

From there, consider forming a small innovation team — not just tech enthusiasts, but a diverse group across roles, with different comfort levels with these changes. Their job isn’t to overhaul everything at once, but to identify meaningful, strategic entry points for these ideas.

Most importantly, create space for teacher learning. In my visits to schools, I find teachers and other staff eager to engage with these shifts, but they need time, support and permission to experiment.

So What Else?
Mat ends his thread with a call to continue the conversation — and I think that’s where the real opportunity lies.

The future will be shaped by those who are curious, grounded and willing to learn. But those voices won’t always come from inside our institutions. Sometimes the most important thinking is already happening — at the dinner table, in community conversations or in the inbox from a thoughtful parent like Mat.

We just have to keep listening. And keep showing up — ready to rethink, ready to collaborate and ready to lead with both head and heart.

I’m reminded that in education, we need to keep moving. To stay relevant, we must remain curious about the world changing so quickly around us. Whether we embrace all of these changes is open for discussion, but we should certainly be talking about them. One great piece of leadership advice I received long ago was that leaders in education need to see around corners so they can be the first to know what is coming next — conversations with people like Mat help me do exactly that.

Various AI tools were used as feedback helpers (for our students this post would be a Yellow assignment – see link to explanation chart) as I edited and refined my thinking.

Read Full Post »

Older Posts »