
Archive for the ‘Technology’ Category

I have been thinking a lot about assistance lately. Who gets it, who does not, and why we suddenly get moralistic about it the moment the assistance comes from AI.

The spark for this post is Nick Potkalitsky’s Substack essay, “In Praise of Assistance.” It is one of those pieces that does not just add to the AI and writing conversation; it reframes it. (Thanks to Adam Garry for pointing me towards it.)

Nick starts from the now familiar worry about “cognitive offloading,” students delegating the thinking to a tool, and he agrees the concern is real. But then he names what often sits underneath the concern: not just pedagogy, but ideology.

He argues that the cognitive offloading critique rests on “a historical fiction: the autonomous learner.” Because if we are honest, most of us did not learn to write (or think, or revise) alone.

My own invisible advantage

In high school, I had a huge advantage: my dad was an English teacher, and he read every essay before I submitted it. Not just English essays. All of them, across every subject.

He did not write my essays. But he did what good teachers do. He asked questions I had not thought to ask. He pointed out where my logic sagged. He helped me tighten sentences. He coached me toward clarity.

That continued through university. And years later, when I became a newspaper columnist, he was still my first reader. Every column went to him before it went to my editor. He would call with suggestions, and I would decide what to keep and what to let go.

At the time, nobody called this cheating. We called it support. Nick puts it simply: “Students have always learned through assistance. From peers, from teachers, from resources…” 

We rarely worry students are “offloading” onto classmates in a discussion. We celebrate it. But when AI enters the picture, suddenly assistance becomes suspect.

That is the tension.

The question is not “help or no help”

When we talk about AI and writing, the debate often collapses into a binary: real writing (alone, unaided) versus fake writing (assisted, scaffolded).

But that binary does not match how writing actually works. It does not match how learning has ever actually worked.

The better question is the one Nick keeps pointing us toward: what kind of assistance builds thinking, rather than replacing it?

That is where his essay becomes more than a defense of AI. It is a critique of an unspoken standard that has been unevenly distributed for a long time: the idea that “authentic struggle” is the price of admission to learning.

Nick names the class-based reality bluntly: affluent students often have “small seminars, writing conferences, office hours, peer review sessions” while others are in systems where meaningful feedback barely exists. And then comes the sentence: “The outcome depends on whether we recognize assistance for what it is: not a threat to learning, but its precondition.”

What I have been writing toward

In October, I wrote “Modeling AI for Authentic Writing.” If AI is here (it is), then our job is to model the kind of use that keeps the writer in control. In that post, I tried to move the conversation from “Don’t use AI” to “Show your decisions.”

Because the heart of authentic writing is not whether you had help. It is whether your thinking is present. What did you accept? What did you reject? Why? What did you learn in the revision?

I wrote then: “None of this replaces judgment. I accept or reject every change.”

For years, Tricia Buckley, and before her Sharon Pierce and Deb Podurgiel, have played a similar role here on this blog, reading every post before publication and offering feedback. The byline is still mine because the ideas, voice, and final choices are mine.

That is the point.

Assistance is not the enemy of learning. Abdication is.

What I want to add

There is a system design question underneath that I keep circling back to.

If we accept that all learning has always been assisted, what changes about how we run schools?

A few weeks ago I wrote about the tutoring revolution and found myself wrestling with a similar tension. For years, success in certain courses quietly required something extra: a tutor. Parents traded recommendations, students admitted they needed help, and the whole system ran on an unspoken understanding that school alone was not enough. At least not for everyone.

AI is changing that. But here is the part that worries me: the digital divide is no longer just about device access. It is about knowing how to use the tool well. A student with strong digital literacy might turn ChatGPT into a Socratic tutor. Another might never get past using it as a homework completion machine.

Nick writes about elite students who have always had access to “assistance made flesh.” The risk now is that we create a new version of the same divide. Some students learn to collaborate with AI in ways that deepen their thinking. Others use it to bypass thinking altogether. And if we are not intentional, digital confidence becomes the new proxy for privilege.

The question is not whether students will have AI assistance. They already do. The question is whether we will teach them to use it in ways that build capacity or let the gap widen on its own.

A Culture of Yes stance

A Culture of Yes does not mean saying yes to every tool or every shortcut.

It means saying yes to the conditions that help more people learn well.

So here is where I am landing, at least today.

Writing has always been assisted. The myth of the autonomous writer has always favoured students with the most support. AI can absolutely be used to bypass thinking. But it can also be used to invite thinking, especially where feedback is scarce.

Our job is to design and model practices where assistance makes thinking visible and growth possible.

Nick’s essay refuses the easy frame. It asks us to stop policing help and start building learning communities where help is normal, explicit, teachable, and more equitably available.

That feels like the kind of “yes” worth defending.

The image at the top of this post was generated through AI. Various AI tools were used as feedback helpers (for our students this post would be a Yellow assignment – see link to explanation chart) as I edited and refined my thinking.


This marks the 11th year of my One Word tradition. Eleven years. When I started this practice back in 2016, I was 42 years old and hungry. Literally, that was my word. Hungry. I wanted to compete, to stay curious, to keep pushing. And here I am, a decade later, still hungry but now asking different questions about what that means.

Before I get to 2026, let me say this about 2025 and “Thrive.” It delivered. In a year where it would have been easy to retreat into cynicism or exhaustion, I chose to flourish instead. I wrote more than I have in years, and it never felt like a chore. I ran every single day. I spent my summer coaching basketball with young athletes who remind me why I do this work. I leaned into AI not as a threat but as an invitation to rethink learning. I found great satisfaction in my work and in the people I work with. Thrive was about sustaining momentum and finding joy in that momentum. It worked.

So what comes next?

This word is not about doing more. It is about feeling more, without losing momentum.

My word for 2026 is Alive.

Why Alive?

I turn 53 this year. Regular readers know I feel my age more than ever (I keep bringing it up), and I mean that in both the best and most humbling ways. There are strands of grey in my hair that were not there five years ago. My recovery from long runs takes longer than it used to. I notice things now that I never noticed before: the way my knees feel on cold mornings, the reading glasses I now keep in three different places, the names that take an extra second (or sometimes minute) to retrieve.

And yet.

I am not checking out. My body may be changing, but my commitment to showing up has not. My run streak will cross 2,000 days in 2026. I will keep coaching. I will keep writing. I will keep appearing in classrooms and conference rooms with intention and energy, even when generating that energy requires more deliberate effort than it used to.

A friend of mine, Anthony, texted me recently. He is not in education; he is a successful entrepreneur. His message was simple: “Call me.” He does that sometimes. When I did, he started right in. “You know what makes us different? No matter what happens today, we show up tomorrow and attack the day. We don’t get stuck in what happened. We just keep moving forward.”

That is what Alive means to me. Not ignoring the hard stuff. Not pretending the grey hair and the sore knees do not exist. But choosing, every single day, to show up and engage anyway.

Alive is my answer to a world that feels increasingly numb. In a time filled with cynics and critics, with doom-scrolling and disengagement, I am choosing to stay fully present. To feel things. To remain curious when it would be easier to become jaded. To stay optimistic when pessimism seems more sophisticated.

Being alive means more than existing. It means showing up with your whole self, not some protected, half version. It means being willing to be changed by what you encounter.

Building on a Decade of Words

When I look back at my words over the past decade, I see a story. Each word was right for its moment, and together they form something larger than any single year.

The early years were about drive: Hungry (2016), Hope (2017), Relevance (2018), Delight (2019).

The middle years were about resilience: Hustle (2020), Optimism (2021), Focus (2022), Coached (2023).

The recent years have been about integration: Accelerate (2024), Thrive (2025).

And now, 2026: Alive.

Alive feels like a synthesis of all of it. You cannot be truly alive without hunger and hope. You cannot be alive without relevance and delight. You cannot be alive without focus and the willingness to be coached. Being alive requires both acceleration and the wisdom to know what thriving actually looks like.

Alive in a Changing World

We are living through one of the most significant shifts in how humans learn and work. AI is not coming; it is here. And I want to be fully alive to what that means, not as a passive observer but as an active participant shaping how we integrate these tools in our schools.

But here is what I keep coming back to:

The more powerful the technology becomes, the more important the human elements are.

Connection. Curiosity. Creativity. Compassion. These are not things AI can replicate. They are the things that make us alive.

In 2026, I want to be alive to both realities. I want to keep exploring what AI can do for learning while never losing sight of what only humans can do for each other. I want to be in classrooms watching teachers and students navigate this new landscape together. I want to ask good questions and resist easy answers. I want to model what it looks like to embrace change without abandoning what matters most.

Alive in Body and Relationship

For me, being alive has always been connected to physical movement. My run streak is not about athletic achievement. It is about presence. Every morning when I lace up my shoes and step outside, I am choosing to be alive to that day. Rain or shine, tired or energized, home or traveling. The streak is a daily declaration: I am here. I am engaged.

In 2026, I will keep running. I will keep coaching basketball. I will keep prioritizing the habits that have carried me this far: 10,000 steps, daily movement, attention to what I put in my body (OK – this last one needs to be better).

But being alive is also about the people around me. My family. My colleagues. The educators I work alongside. Relationships require the same consistency as run streaks. You show up. You do the work. You stay curious about the people next to you, even when you think you know them completely.

Alive and Hopeful

I know the world can feel heavy right now. There is no shortage of reasons to disengage, to protect yourself, to lower your expectations. Cynicism is easy. Hope is harder.

But I keep choosing hope. Not naive hope that ignores reality, but stubborn hope that insists on possibility anyway. Hope that believes education can be better. Hope that trusts young people to rise to challenges we cannot yet imagine. Hope that sees AI as a tool for human flourishing rather than replacement.

Being alive means staying open to wonder. It means maintaining the curiosity that has driven my career and my writing. It means refusing to let age or experience calcify into certainty. The older I get, the more I realize how much I do not know. And that feels like a gift, not a limitation.

All In

So yes, I may be greyer. I may be slower in some ways. But I am all in on 2026.

All in on learning.
All in on family.
All in on health.
All in on this beautiful, complicated, rapidly changing world.

Alive is not a passive state. It is a choice, made daily, sometimes hourly. It is the choice to engage rather than withdraw. To feel rather than numb. To hope rather than despair. To keep saying yes.

That is the Culture of Yes I have been writing about for 16 years now. And it turns out, it has always been about being fully, stubbornly, joyfully alive.

What word will guide your 2026?

And if you want a second opinion on picking a word, here is what Daniel Pink said this week about the power of the one word process.


Previous One Word Posts:

2016: Hungry

2017: Hope

2018: Relevance

2019: Delight

2020: Hustle

2021: Optimism

2022: Focus

2023: Coached

2024: Accelerate

2025: Thrive

The image at the top of this post was generated through AI. Various AI tools were used as feedback helpers (for our students this post would be a Yellow assignment – see link to explanation chart) as I edited and refined my thinking.


For years, there has been a quiet understanding in many high schools that success in certain courses, especially senior math and sciences, required something extra. Not more effort or better attendance, but a tutor. Parents would trade recommendations, students would quietly admit they needed one, and tutoring centres would advertise that “everyone needs help.” In some communities, especially affluent ones, paid tutors became part of the culture, almost an unspoken prerequisite to keeping up.

That world may be coming to an end.

AI has entered the tutoring business, and it does not take nights or weekends off. For the first time, students have access to personalized, immediate feedback and explanations any time they need it. They can ask follow-up questions without embarrassment, get alternative explanations and have complex problems broken into smaller steps. All of this is available for free, or for the price of a phone app. The model that tutoring companies built around scarcity and exclusivity is being replaced by abundance and accessibility.

It is not only about convenience. Tools like ChatGPT, Claude, and Magic School AI can act as math coaches, writing mentors and language partners. They remember the work, adapt to a student’s level, and adjust explanations when the learner gets stuck. The value proposition that human tutors once held, personalization, is becoming a default feature of modern AI systems.

Just last week, one of our Grade 12 students shared how she had been struggling with integration by parts in calculus. Instead of waiting for a weekly tutoring session, she worked through problems with an AI tutor at 11 p.m., asking it to explain the same concept three different ways until it clicked. “It never got frustrated when I asked the same question again,” she said. “And I could be honest about what I did not understand.”

When I first started drafting this piece, I was ready to declare the end of the tutoring era. The evidence seemed clear. The assumption that you need a tutor to survive Pre-Calculus is being upended. For many students, the AI sitting quietly on their laptop or phone now fills that role, often better and more patiently than the Saturday morning sessions they once dreaded.

Then I started reading the research. And my thinking got more complicated.

What the Research Actually Shows

The October 2025 edition of AASA’s School Administrator magazine dedicates significant space to the state of tutoring in American schools. AASA is an American-based organization, but the questions it raises cross borders easily. The tension between equitable access and quality instruction, the challenge of sustaining initiatives beyond initial funding, the promise and limits of technology in supporting learners: these are Canadian conversations too. The research may come from Texas and Massachusetts, but it speaks directly to what we are wrestling with in British Columbia and across the country.

Liz Cohen, in her article drawing from her book The Future of Tutoring: Lessons from 10,000 School District Tutoring Initiatives, documents an unprecedented expansion. Within a year of the pandemic’s onset, 10,000 U.S. school districts were offering some form of tutoring after years of almost none. By May 2024, 46 percent of public schools reported providing high dosage tutoring, and just 13 percent said they offered no tutoring at all.

Research from the Johns Hopkins Center for Research and Reform in Education, featured in this issue, offers evidence that virtual tutoring with human tutors can produce meaningful results. Grade one students assigned to Air Reading, a structured virtual tutoring program, for four sessions a week over a semester gained nearly 1.6 additional months of learning. Those who attended at least 40 sessions saw even greater progress.

But here is the tension that caught my attention: the research consistently shows that the most effective tutoring models still rely on human tutors. Studies on AI tutoring directly with students remain in early stages, and even the most promising work positions AI as supporting human tutors rather than replacing them.

I had to sit with that for a while.

The Hybrid That Works

One case study that helped my framing was the work happening in Ector County ISD in Texas. In partnership with Stanford University, they developed something called Tutor CoPilot. It uses AI not to tutor students directly, but to coach human tutors in real time, suggesting questions to ask, concepts to revisit, hints to offer.

The results are striking: students whose tutors used the AI prompts scored 14 percentage points higher than those whose tutors did not. The AI shifted tutors toward stronger pedagogy, guiding student thinking rather than simply giving away answers. And here is the part that matters most for equity: the greatest benefits went to less experienced tutors. The tool essentially democratized tutoring quality, helping novice tutors perform nearly as well as veterans.

This is not AI replacing humans. This is AI and humans amplifying each other.

What AI Cannot Yet Do

Cohen’s research surfaces something that pure AI cannot yet replicate. The success of tutoring, she argues, is deeply rooted in human relationships. It helps young people feel they matter. It builds motivation through productive struggle in a high-support, high-standards environment. (This podcast is also good background on Cohen’s work.)

There will still be families who seek human tutors, especially for accountability or emotional connection. Some students need the structure of showing up, the social pressure of not wanting to disappoint someone, or simply the reassurance of a person saying “you’ve got this.” AI has not yet mastered the art of knowing when a student needs a break, a pep talk, or someone to believe in them.

The question is whether it will, and how soon.

The New Digital Divide

For schools, this raises urgent questions. Do we teach students how to use AI tutors effectively? How do we ensure that all students, not only the digitally confident, benefit from these new tools?

The digital divide is no longer just about device access. It is also about knowing how to prompt effectively, when to question an AI response, and how to use these tools for learning rather than answer getting. A student with strong digital literacy might turn ChatGPT into a Socratic tutor. Another might never get past using it as a homework completion machine. If we are not careful, digital confidence becomes the new proxy for privilege, only with different packaging.

There is another issue to face. If every student has a tutor at all hours, what does authentic assessment look like? How do we measure understanding when the line between getting help and getting answers is blurred? This is not a reason to resist change. It is a reason to rethink what we are measuring and why.

What I Got Wrong, and What I Got Right

The shift is cultural as much as it is technological. For years, tutoring companies helped reinforce the idea that school alone was not enough. Now, AI is challenging that notion and putting powerful learning tools directly in the hands of students. I was right about that.

But the real revolution may not be the end of tutoring. It may be its transformation.

This changes the teacher’s role as well. When information delivery and step-by-step support are available on demand, teachers become something more valuable. They become learning architects who design rich tasks. They become coaches who know when to push and when to support. They become mentors who help students navigate not only content, but the process of learning itself. The human element does not disappear. It becomes more essential, only with a different focus.

We may soon look back on the tutoring era the way we look at encyclopedias and phone books. Useful for their time, but unnecessary once the world changed. Or we may find that the future looks more like Ector County: AI and humans working together, each amplifying what the other does best.

Maybe what we should have wanted all along was not a system where extra help was a luxury, but one where every student has access to the support they need, when they need it, in the form that works best for them. Whether that form is human, AI, or some combination we have not yet imagined.

The question is not whether this change is coming. The question is whether we will shape it with intention, or let it happen to us.

Thanks to Liz Hill and Andrew Holland with whom I had recent conversations that helped inspire this post.

 

The image at the top of this post was generated through AI. Various AI tools were used as feedback helpers (for our students this post would be a Yellow assignment – see link to explanation chart) as I edited and refined my thinking.


Last week, I sat at an education conference listening to a keynote speaker who was absolutely unequivocal about it: students must learn prompt engineering or they will be left behind. The speaker was passionate, convincing even, about how this was the essential skill for the next generation. And as I sat there, I found myself thinking: really? Is this truly the skill we should be racing to embed in every curriculum?

Lately, I keep hearing that prompt engineering, the ability to write clever and precise instructions for AI, is the new super skill every young person needs to master. The idea is that those who can “talk to the machine” will be the ones who thrive in the age of generative AI.

And I get it. For now, it is true. Anyone who spends time with AI knows that the way you ask matters. A well-structured prompt can turn an average response into something remarkable. I have seen entire professional development sessions focused on how to write the perfect prompt.

But I keep wondering if this is really a future skill or simply a transitional one.

We have been here before. About a decade ago, coding was the next great literacy. We were told that all students needed to learn to code or they would be left behind. And while understanding logic, pattern recognition and computational thinking remains valuable, few would now argue that every student must become a programmer. The tools evolved. The interfaces changed. Knowing how to code shifted from a universal requirement to an optional asset.

I suspect the same will happen with prompting. The models are already becoming much more forgiving. Early versions of AI required carefully worded instructions and detailed context. But each new generation of large language models has become better at interpreting vague or natural language. They are now more context aware, more visual and better aligned with human intent. The need for carefully engineered prompts is already beginning to fade.

Even the interfaces are changing. Most people will not type directly into chatbots in the future. They will use AI features inside tools such as Google Docs, Canva or Notion that quietly handle the prompting behind the scenes. The software will translate our natural requests such as “summarize this,” “improve the tone,” or “make it more visual” into optimized prompts automatically. Just as we no longer type commands to open a file, we will not need to craft perfect prompts to get great AI output.
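To make that idea concrete, here is a minimal sketch of what such an interface layer might look like, assuming the OpenAI Python SDK. The preset wording, the assist function, and the model choice are my own illustrative assumptions, not how any particular product actually implements this.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

# The engineered prompts live in the software layer, invisible to the user.
PRESETS = {
    "summarize": "Summarize the following text in three sentences for a general reader.",
    "improve tone": "Rewrite the following text in a warmer, more conversational tone without changing its meaning.",
}

def assist(action: str, text: str) -> str:
    """Translate a plain menu choice like 'summarize this' into a fuller prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": PRESETS[action]},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# The user clicks "summarize"; the prompt engineering happens behind the scenes.
print(assist("summarize", "Generative AI is changing how students get feedback on their writing ..."))
```

The user asks for the outcome; the software supplies the craft. That is the direction the interfaces seem to be heading.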

There may be a split happening here. For most of us, prompting will become invisible, handled by the interface layer. But specialized roles might still require deep prompt engineering expertise for critical systems or highly creative work where nuance matters. It could mirror how we still have systems programmers even though most people never write a line of code.

Modern AI systems are also being trained on millions of examples of strong instructions and responses. They have learned the meta-skill of interpreting intent. Clear and simple language now produces excellent results.

So if the technical part of prompting is becoming less necessary, what remains essential? The human part. Knowing what to ask. Evaluating whether the answer is right. Recognizing when a response is insightful, biased, or incomplete. The real differentiator will be judgment, not phrasing. The skill will not be in writing prompts but in thinking critically about what those prompts produce.

There is something deeper here too. The enduring skill might be what we could call AI collaboration literacy—the ability to iterate with AI, to recognize when you are not getting what you need, and to adjust your approach, not just your words. It is less about engineering the perfect prompt and more about developing a productive working relationship with these tools.

It reminds me of the evolution from coding to clicking. Early computer users had to memorize complex commands. Now, we all navigate computers intuitively. Prompt engineering feels like today’s command line, a temporary bridge to a more natural future.

So yes, teaching students to think like prompt engineers has value. It helps them be clear, curious and reflective. But perhaps the goal is not to create great prompters. It is to create great thinkers who can:

  • Articulate clear goals and constraints

  • Recognize the difference between excellent and mediocre output

  • Maintain healthy skepticism and verification habits

  • Understand when AI is the right tool versus when another approach works better

  • Iterate and refine their collaboration with AI systems

These capabilities feel more durable regardless of how the interfaces evolve.

Maybe I am wrong. Maybe prompt engineering will become a lasting communication skill. But before we rush to build it into every curriculum, it is worth asking whether we are chasing a moving target, and whether we should focus instead on the deeper cognitive skills that will matter no matter how we end up talking to machines.

As always, I share these ideas not because I have the answers but because I am still thinking them through. I would love to hear how others are thinking about this from where they sit.

The image at the top of this post was generated through AI. Various AI tools were used as feedback helpers (for our students this post would be a Yellow assignment – see link to explanation chart) as I edited and refined my thinking.



Inspired by the recent Learning Forward BC conversation on human flourishing and AI.

Last week, I spent three hours tweaking a PowerPoint presentation I had already received help with. At the same time, I had to decline a visit to an elementary class exploring AI tools. The irony? While I was perfecting slides, they were shaping the very future I was supposed to be leading them toward.

If we are honest, most of us superintendents spend far too much of our energy doing work that does not require the full force of our humanity. We draft versions of the same report again and again for different audiences. We shuffle through data systems, chase signatures, and repackage findings. It is necessary work, but is it the work we were called to do?

At a recent Learning Forward BC event on The Intersection of Human Flourishing and AI, that question hit home. We were exploring how technology might liberate, not limit, our humanity in education. It made me wonder: What if AI could take over significant portions of our work as leaders? What would we hand over, and what would we fight to keep?

Why This Matters for Leaders

I have written a lot on this blog about how AI is reshaping the work of teachers and students. But we need to look just as critically at our own work as superintendents and senior leaders. If we expect educators to rethink assessment, planning and feedback in an AI-rich world, then we must also examine the way we lead, communicate and make decisions.

The truth is that the same technology that can help a teacher personalize learning or a student write an essay can also help a superintendent analyze data, summarize reports or draft correspondence. AI is not only changing classrooms. It is changing the nature of leadership itself.

And yes, I am sure some superintendents might already be wondering if a chatbot could replace them at board meetings. But since I know my trustees often read this blog, I will not take the chance of testing that particular joke here.

The Question That Changes Everything

The Education for Human Flourishing framework from the OECD (Organisation for Economic Co-operation and Development) reminds us that our purpose in education is to equip people to lead meaningful and worthwhile lives, oriented toward the future. If that applies to students, it applies to our leadership too.

So whether it is 30 percent, 50 percent, or even 70 percent of what we currently do, the question becomes: What would we hand over to AI, and which tasks would we hold on to because they matter most?

What We Could Let Go Of

AI is already remarkably good at tasks that drain our time but not our meaning:

  • Drafting first versions of reports, memos and letters
  • Crunching and summarizing enrolment or survey data
  • Managing meeting notes, calendars, reminders and task lists
  • Building templates, presentations and standard job postings
  • Drafting policy or procedural documents for refinement

These are automation, not animation. They do not require empathy, judgment, or nuance, only accuracy and speed. That is AI’s strength.

What We Must Protect

What we must protect, deliberately, are the moments of human connection, purpose and complexity:

  • Sitting with a parent whose trust in the system has eroded
  • Listening deeply to a principal wrestling with burnout or vision
  • Reading the room in a board meeting and knowing what not to say
  • Inspiring staff to believe in something greater than their daily tasks
  • Recognizing a student’s spark when they realize someone believes in them

These are leadership moments: irreducible, unautomatable and profoundly essential.

Leading for Human Flourishing

The OECD highlights three human competencies that AI cannot fully replicate: adaptive problem-solving, ethical decision-making and aesthetic perception.

Adaptive problem-solving: When a community crisis hits and there is no playbook, whether a sudden school closure, a traumatic event, or a divided community, we respond with creativity born from experience and intuition.

Ethical decision-making: When budget cuts force impossible choices between programs, when we must balance individual needs against the collective good, when integrity demands the harder path, these moments require moral courage that no algorithm can calculate.

Aesthetic perception: Recognizing when a school’s culture shifts from compliance to inspiration, sensing the exact moment a resistant team begins to trust, and seeing beauty in a struggling student’s small victory. This is what makes leadership an art, not just a science.

AI can mimic these competencies, but it does not feel them. It may calculate empathy, but it cannot experience it or show it. As more of our routine tasks shift to AI, the invitation is clear: we reclaim the human half.

Creating a Culture of Yes

This is where AI becomes an enabler of possibility rather than a threat to purpose. When AI handles the bureaucratic “no” work, the forms, compliance checks and procedural barriers, we create space for the human “yes.”

Yes, I have time to visit your classroom.
Yes, let’s explore that innovative idea.
Yes, I can truly listen.

In a Culture of Yes, AI does not replace us. It liberates us to be more fully present for what matters. Every report AI drafts is a conversation we can have. Every dataset it analyzes is a relationship we can build. Every schedule it optimizes is a moment we can use to connect.

Getting Started

This is not about wholesale transformation tomorrow. It is about small experiments.

What one repetitive task could you delegate to AI this week? What human conversation would that free you to have?

Start simple:

Use AI to draft that routine memo, then spend the saved time walking the halls.

Let AI summarize survey data, then use your energy to discuss what it means with your team.

Have AI create the meeting agenda, then focus fully on reading the human dynamics in the room.

The goal is not efficiency for its own sake, but reclaiming time for what only we can do.

The Real Promise

The promise of AI in leadership is not efficiency, but rediscovery.

It is the chance to release ourselves from the burden of mechanical work and return to the heart of leadership: human connection, meaning and moral purpose.

Imagine walking into your office tomorrow knowing that the reports are drafted, the data analyzed and the calendar managed, all before your first coffee. Now you can spend your morning where it matters most: in classrooms, with people, making meaning.

Because in the end, the future of education will not belong to the most efficient systems. It will belong to the most human leaders, those who use every tool available to protect and amplify what makes us irreplaceably human.

A Question to End With

I wonder if my list looks like yours. What would you hand over to AI, and what would you hold tightly because it feels essentially human? I would be interested to hear how others are thinking about their human half.

 

 

The image at the top of this post was generated through AI. Various AI tools were used as feedback helpers (for our students this post would be a Yellow assignment – see link to explanation chart) as I edited and refined my thinking.


How I draft, edit, and stay human in the loop

For years I believed my advantage was “writing.” Lately I’ve realized the real edge was not keystrokes; it was ideas, structure, and voice. AI has not erased those. If anything, it has made them more important. Rather than pretend we are still in a pen-and-paper world, I have been trying to model what authentic writing looks like now.

We do not protect writing by banning the tools everyone already has. We protect writing by showing what thoughtful use looks like, and by being transparent about our process.

What I am hearing, especially in humanities

Last week, a high school English teacher stopped me. “I can tell when something has been AI generated,” he said, “but I cannot tell when they have collaborated with it thoughtfully. And I do not know what to do with that.”

He is not alone. Across our humanities departments, teachers are working on the fly, trying to maintain academic integrity while recognizing that the old gatekeeping moves, ban the tool and police the draft, do not hold when every student has ChatGPT in their pocket. The fear is real. Are we farming out the exact skills we are supposed to be teaching?

I do not think the answer is choosing between integrity and innovation. It is redefining what integrity looks like when the tools have changed.

How I actually write

I still start the old fashioned way, an outline, a thesis, a few proof points, and usually one sentence I think could be the closer. From there, I treat AI like a colleague, not a ghostwriter.

  • Editing help. I ask for a clarity pass, tighten verbs, fix hedging, and check whether my headings are parallel. Here is what I actually typed for this piece: “Revise for clarity and concision. Keep a conversational, hopeful tone similar to my other blog posts. Offer two options for the opening sentence.” I kept one, rejected the other, and moved on.

  • Skeptic check. “What would a fair skeptic say after reading this?” It surfaces blind spots before I hit publish.

  • Reports and formatting. For formal documents, I use AI to turn tables into charts, crunch numbers, and reshape dense text into something readable.

  • Speeches. I keep a base grad speech and add school specific stories and names. AI helps blend those elements while keeping the message consistent.

None of this replaces judgment. I accept or reject every change. If a suggestion dulls my voice, it is out. That is the standard. My judgment stays in control. I also disclose what I did, every time. A short note at the end of a post goes a long way with our community and models the behavior we ask of students.

What I encourage for classrooms and staff rooms

The most helpful shift has been moving from “Do not use AI” to “Show your decisions.”

  • Model, then mirror. I demo my messy paragraph, ask AI for a clarity edit, then accept or reject in real time while explaining why. Students should bring their draft, try the same process, and compare choices.

  • Assess the thinking. Rubrics weight claims, evidence, organization, and audience impact, not who placed the comma.

  • Make the process visible. Version histories in Docs or Word, plus brief process notes that list tools used, prompts asked, and choices made, make learning visible and deter abdication of thinking.

  • Cite the workflow. Not to catch people out, but to name steps we can teach.

Guardrails that keep the work honest

  • No blank page outsourcing. Start with your outline, thesis, or key points.

  • Ask precise questions. “Cut 10 percent without losing meaning. Keep my conversational tone.”

  • Verify facts. If AI offers a claim, check it before it lands in public.

  • Always disclose. If a tool shaped meaning or form, say how.

Is this just cheating with better branding?

I have never believed collaboration was cheating. When I wrote a newspaper column, my dad, a retired English teacher, was my unofficial copy desk. He proofread, edited, and offered suggestions on every draft. The byline was still mine because the ideas, voice, and final choices were mine.

Tricia Buckley, and before her Sharon Pierce and Deb Podurgiel, all staff in West Vancouver Schools, have read every blog post here before they were published and provided feedback.

AI sits in that same category for me, a helper, not a ghostwriter, and always subject to human judgment. What changed with AI is speed, scale, and availability. I can get feedback at 11 p.m., run ten drafts in twenty minutes, and the tool is always on. What did not change is my judgment, my responsibility for choices and my name on the work.

If the goal is proving you can type unaided, then yes, tools muddy the waters. Our goal in schools is thinking for real audiences. We have always used supports: outlines, spellcheckers, style guides, writing partners, rubrics and colleagues. The standard should be integrity and evidence of learning, not tool abstinence.

Equity

AI is a ramp, not a shortcut.

It helps stuck writers get moving: the student staring at a blank page who needs a sentence to react to, or the English language learner who can articulate ideas verbally but struggles with syntax. AI can generate that first sentence, and suddenly the student has something to revise, reject, or build on. For strong writers, it is a way to go deeper, test alternate structures, get a skeptic to read, or polish a conclusion without losing momentum.

The equity move is not banning tools for everyone. It is teaching how to use them responsibly, and ensuring access to good instruction is not the new dividing line. When we teach tool literacy, we level up. When we ban tools students already have, we make the learning invisible.

Prompts that actually help

  • Clarity pass: “Revise for clarity and concision. Keep a conversational, hopeful tone. Offer two options for the opening sentence.”

  • Skeptic lens: “List the strongest fair minded critiques of this piece and one concrete improvement for each.”

  • Structure check: “Are these headings parallel? Tell me how to fix them without changing the ideas.”

  • Audience flip: “Rewrite the conclusion as guidance to parents in about 120 words.”

  • Report polish: “Turn this table into three plain language insights and a simple chart title. Flag any numbers that look inconsistent.”

What I tell our community

  • We are pro-writing and pro-truth. We will use modern tools and we will say when we did.

  • We value voice. Your voice should be recognizable across drafts and tools.

  • We lead with learning. If a tool helps learning, we will teach it. If it replaces thinking, we will not.

If you want more

Last week I facilitated a Hot Topic discussion, “The Future of Writing in an AI World,” at the Canadian K12 School Leadership Summit on Generative AI.

North Star

I can spend my time lamenting that writing once felt like my competitive edge, or I can double down on the edge that still matters: clear thinking, vivid stories and the courage to be transparent about how we work. That is the blended human and AI writing world I want to model for students and staff.

The teacher who stopped me in the hallway was right to be uncertain. We are all figuring this out in real time. I would rather figure it out in the open, and model a messy and honest process, than pretend the tools do not exist.

AI transparency note: I drafted this post myself, then used ChatGPT and Claude for a clarity edit and a skeptic read. I accepted some wording suggestions and rejected others to preserve voice. The image at the top of the post was created through a series of prompts using Claude.

 
 



Like many of you, I’ve been saying it for years:

We are more distracted than ever.

And most days, I still believe it.

I’ve felt it myself, scrolling instead of reading, checking my phone when I meant to be present, struggling to sustain focus on the kind of deep work that once came easily. I even poked fun at myself recently in a post about the rare event of finishing a book from start to finish (read that one here). And last year, I wrote about The Anxious Generation (read here), where I shared my own growing unease around technology and attention. That unease felt and still feels very real.

But I recently listened to historian Daniel Immerwahr on ReThinking with Adam Grant (podcast transcript here), and it nudged my thinking in a new direction.

Immerwahr’s voice on the podcast was measured, as he challenged what feels like common sense. In his article, What If the Attention Crisis Is a Distraction?, he doesn’t deny that something has changed. But he questions whether our capacity to pay attention is actually shrinking.

His thesis? What we’re experiencing isn’t so much an attention crisis as an attention transition, a shift in what we pay attention to, not a collapse in our ability to focus.

As I listened, I thought about every school district meeting where we have discussed “student attention spans,” every workshop on “managing digital distractions.” Immerwahr’s historical perspective was both humbling and illuminating. Each generation, he explained, has had its own attention-based moral crisis. Long novels like War and Peace, now seen as the gold standard of deep focus, were once criticized for pulling readers away from “serious” pursuits. Even the in-home piano was considered a threat to literacy. The through-line wasn’t the technology itself, but our recurring anxiety about it.

“The age of distraction,” Immerwahr reminded listeners, “is also the age of obsession.”

That phrase challenges my beliefs. Because if our students are still capable of obsession—if they’re investing hours into Minecraft builds, anime story arcs, K-pop lore, or long-form YouTube video essays, then maybe our job as educators isn’t to fix their attention spans, but to better understand their motivations.

Maybe we need to stop asking, “Why can’t they focus?”
And start asking, “What are they choosing to focus on and why?”

Immerwahr’s framework challenged how I think about what I see in classrooms. When he talked about each era inventing its own “attention villains,” from novels to comic books to television to smartphones, I couldn’t help but reflect on how we have positioned technology in schools. We often treat student distraction as a deficit, something to be minimized or managed. We build rules around device use, worry about TikTok trends and lament that students won’t engage in “deep work.” But what if we are repeating the same historical pattern, mistaking change for decline?

This reframe aligns with what many of us observe daily:

A student who zones out during a worksheet lights up during a design challenge.

A teen who “won’t read” a novel devours fan fiction late into the night.

A class that seems scattered in one setting becomes intensely focused in another.

Are these attention issues or attention mismatches?

Immerwahr’s perspective pushes us to think historically and humanely. He urges us to be cautious before declaring crises, reminding us that many past panics now look, in hindsight, a little overblown. That doesn’t mean our concerns aren’t valid. But it does mean we might benefit from approaching them with more perspective—and less panic.

This historical lens matters deeply in K–12 education. Because when we believe attention is disappearing, we tend to narrow learning: shorter tasks, simpler texts, more control. But if we believe attention is evolving, we can instead broaden learning: tap into student interests, create room for choice and voice, and build bridges between traditional and digital literacies.

I’m not suggesting we stop teaching focus. The ability to sustain attention, to read deeply, think critically, and sit with a problem, remains essential. But perhaps our traditional signals of engagement (a quiet room, a student holding a novel) no longer tell the full story. And if we cling too tightly to old definitions, we risk misreading what’s actually happening in classrooms.

So yes, I still worry about distraction.

Yes, I still believe in the power of silence, of getting lost in a book, of unplugged time to think.

But no, I am no longer quick to agree that we are in free fall.

We are not attention-starved. We’re attention-splintered.
And that’s not a crisis, it’s a challenge.

It invites us as educators, leaders and learners to design learning that earns attention, not demands it. To meet students where they are, and guide them toward where they can go. To remember that our job isn’t just to manage attention but to inspire it.


The image at the top of this post was generated through AI. Various AI tools were used as feedback helpers (for our students this post would be a Yellow assignment – see link to explanation chart) as I edited and refined my thinking.



More than 20 years ago, I was principal at Riverside Secondary in Port Coquitlam. One of the rhythms of that time was our Wednesday morning study group. It was a structure I brought with me from my mentor, Gail Sumanik, and it quickly became part of the culture. Each week, before the day got underway, an informal group of us would gather for coffee, donuts and conversation. We read books. Not always books about education, but always ones that got us thinking. They gave us a reason to slow down and talk about the work, the world and where things might be heading.

It was a simple ritual that helped us build connection, both professional and personal. It was a small community of curious people making space for big ideas.

One of the books we read was The World Is Flat by Thomas Friedman, published in 2005. At the time, it felt like a wake-up call. The central idea, that globalization and technology were flattening the world, was provocative and timely. We talked about what it might mean for education if, for example, the person taking your drive-thru order at McDonald’s was actually sitting in Bangalore, India. If work could be done from anywhere, shouldn’t learning evolve in the same way?

For us, The World Is Flat became a kind of roadmap for a hyper-connected, technology-driven future. We imagined students collaborating across continents, learning being personalized through intelligent systems, and schools adapting to a rapidly changing, decentralized world.

Now, two decades later, I find myself thinking back to those Wednesday mornings. While Friedman imagined a world where geography would no longer matter, K–12 education has remained largely rooted in place. Our systems still rely on physical buildings, in-person relationships, and a pace of change far slower than the forces transforming business and industry.

Yes, there have been shifts, especially during the pandemic and since. Tools like video conferencing, AI tutors, and global collaboration projects have found a place in our schools. But the core structures of schooling still feel more analog than digital, more local than global. And for good reason. Schools are not just knowledge delivery systems. They are social, emotional, and cultural ecosystems where human development happens in all its messy complexity.

There is also another force at play that Friedman didn’t fully anticipate. Over the past decade, education in both K-12 and universities has become a focal point in the culture wars that have swept across North America. From debates over curriculum content to battles over which voices and perspectives belong in classrooms, schools have become highly contested spaces. In the United States, these issues have dominated headlines. In Canada, we have experienced similar, though less intense, tensions.

These conflicts highlight a deeper truth. The reason education hasn’t “flattened” in the way other sectors have isn’t just about logistics or technology. It’s about values. It’s about identity. When communities are deeply divided over what children should learn and how they should learn it, the idea of borderless, globally standardized education doesn’t feel innovative. It feels threatening.

Friedman wasn’t wrong. Many of his predictions were accurate. But the application of those ideas to public education has been far more complicated than any of us imagined. Technology has made global connection possible. But local politics and cultural identity continue to shape what happens in classrooms.

This raises important questions. How global is our curriculum when communities are fighting to keep certain perspectives out? Are we preparing students to thrive in a borderless economy when education itself has become a site of border-drawing? Can we teach students to collaborate with peers halfway around the world when we can’t agree on what they should be learning across the hallway?

Maybe The World Is Flat wasn’t meant to be a blueprint. Maybe it was a provocation. A starting point. A challenge to think differently about the role of schools in a connected world—a world that would turn out to be far more complex, and far more contested, than we imagined.

Two decades later, we are still answering that challenge. The world may have flattened in many ways. But education remains deeply local, deeply human and unavoidably political.

Those Wednesday morning conversations feel more relevant than ever. Not because we found the answers, but because we learned to ask better questions.


I recently gave a virtual talk on AI in schools, which forced me to solidify my current thinking, and I tried to make some direct linkages to the Culture of Yes philosophy. I have included the video at the bottom, and this post is an adaptation of the talk:

This summer, AI in education has gone from a quiet undercurrent to a headline wave. Major corporations have announced new AI-powered tools for classrooms. Governments, particularly in the United States, have released statements, strategies, and funding commitments to “prepare schools for the AI era.” There is a growing sense of both excitement and urgency that this technology will profoundly reshape learning.

As we head into the fall, the question for me is not whether AI will change education. It already has. The real question is: Will we guide this change with wisdom, or will it guide us?

Where We Are:

We are in a moment of intense attention and investment. For the first time in history, students have instant access to a form of intelligence that can write, create, and problem-solve alongside them. The conversation has shifted from “Should AI be in schools?” to “How do we use it well?”

The opportunities are extraordinary, and so are the risks. In our rush to adopt tools, we can easily mistake activity for progress. AI is not a magic box. It reflects the data and the biases we feed it. Without careful integration, we risk amplifying inequities instead of closing them.

At the same time, teachers are navigating new pressures: learning unfamiliar tools while managing existing workloads, and working with students who arrive with vastly different levels of AI experience and access.

What I Hope:

In West Vancouver, our innovation priorities are as bold as they are deliberate: AI and physical literacy. Together, they reflect our belief that the future belongs to students who are digitally fluent, physically confident and deeply human.

My hope for AI is that it:

Amplifies human wisdom rather than replacing human intelligence.

Delivers personalized learning that has long been promised but rarely achieved.

Serves as a force for equity, not by assuming all students need the same thing, but by providing each student with the individualized support they need, regardless of their school’s resources or their family’s circumstances.

Frees up teachers’ time for what matters most: relationships, mentorship and inspiration.

In a Culture of Yes, we approach these possibilities with openness while remaining thoughtful about implementation.

What We Need to Do:

Focus on the Shift: From Memory to Meaning

For over a century, schools rewarded students who could store and retrieve information. AI changes that rote memorization game. We must now prioritize what students do with the knowledge — how they apply it, question it, and create from it.

Equip Students as Creators, Not Just Consumers

In a Culture of Yes, we say yes to new possibilities while maintaining academic integrity. AI becomes a collaborator for composing music, designing solutions to local challenges and exploring ethical dilemmas we have never faced before, not a replacement for student thinking.

Imagine a Grade 9 student co-writing a play with AI, then performing it with peers, learning as much about collaboration and creativity as they do about technology.

Develop New Literacies

AI literacy is more than knowing how to use a tool. It is the ability to:

Prompt effectively and creatively.

Evaluate outputs for accuracy and bias.

Reflect on whether AI use aligns with human goals and values, and recognize when not to use it.

Understand the difference between AI assistance and AI dependence.

Lead Through Diffusion, Not Mandate

A Culture of Yes means saying yes to teacher curiosity and experimentation. The best AI integration spreads from teacher to teacher, classroom to classroom, through shared practice and professional learning, not top-down directives that ignore classroom realities. When your colleague in the classroom next to you has something exciting to share, you are keen to listen.

Keep Humanity at the Core

AI can provide information, but only people provide inspiration. AI can offer feedback, but only people offer hope. We must ensure that every learning experience remains fundamentally about human connection and growth.

Looking Ahead

The age of AI is not coming, it is here. As educators, leaders, and communities, we face a choice that will shape the next generation’s relationship with both technology and learning itself.

A Culture of Yes means we choose:

Curiosity over fear

Collaboration over competition

Wisdom over efficiency

Human potential over technological convenience

If we embrace this approach, saying yes to AI’s possibilities while saying yes to our students’ humanity, we will not just reimagine learning. We will create classrooms where technology serves human flourishing, where every student can thrive, and where the future we are building together reflects our highest aspirations for education.

The conversation about AI in education is just beginning. As we step into this new school year, I invite you to share your hopes, your experiments, and your questions. We learn best when we learn together.

 

Various AI tools were used as feedback helpers (for our students this post would be a Yellow assignment – see link to explanation chart) as I edited and refined my thinking.


I shared my “Word for 2025” last week, but I am still thinking about the year ahead. I know in school life the real new year starts in September, but January is a good time to reset and reassess. And just as we do that in school, I know it is happening in homes as well.

As we step into a new calendar year, it’s the perfect time for a refresh—a moment to reflect, reset, and renew our family routines. For parents navigating the complexities of raising children in today’s digital age, this moment feels particularly significant. And with the added layer of AI, this navigation is only getting more complex.

We live in a world where technology is seamlessly woven into the fabric of daily life. Devices offer connection, knowledge, and opportunities that previous generations could only dream of. Yet, they also pose challenges—especially for families trying to strike a balance between purposeful technology use and the very human need for physical activity, meaningful connections, and mindful living.

At school, we have been working hard to foster that balance. We’ve set limits on cell phone usage in schools across BC, not because we are anti-technology, but because we believe in purposeful use (here is an infographic we have shared out this week in schools). What does this look like in practice? Students using computers to create digital portfolios of their work, collaborating on shared documents for group projects through Google Classroom, or using educational apps to practice math skills—all while maintaining dedicated time for physical activity, face-to-face discussions, and hands-on learning. These boundaries ensure our students’ well-being and physical literacy remain priorities. But this is not a task schools can do alone. As I have written before, physical literacy and AI are side by side as key areas for innovation in West Vancouver.

Parents play a crucial role in shaping how their children navigate technology. As we rethink routines this January, let’s remember that our children are always watching. They notice when we set aside our phones during dinner, when we prioritize outdoor family activities, and when we engage in face-to-face conversations. Modeling thoughtful technology habits isn’t just important—it’s transformational. At school events, it is often adults who demonstrate the poorest cell phone etiquette.

Over the break, a colleague of mine showed me an interesting iPhone feature. Go to Settings, tap Screen Time > See All Activity, and scroll down to the area titled “Pickups.” This number is how many times you have picked up your phone that day. It’s not just the kids who might be a little too attached to their screens. Give it a try and be ready for a reality check!

This January reset calls for thoughtful conversations. Rather than banning or blindly embracing tech, engage your children with questions that promote intentional use: How does this app support your learning goals? What boundaries would help you balance screen time with other activities you enjoy? When do you feel most creative and focused while using technology? These conversations can help children develop critical thinking about their digital habits.

As a parent, this isn’t about perfection. It’s about progress—a commitment to staying engaged and aware as technology evolves. It’s about setting expectations that align with your family values, being open to learning from your kids, and creating a culture where tech is a tool, not a master. I know I used to HATE my kids playing video games, but now I realize they can often be a point of connection with friends out of school time.

In schools, we’ll continue to champion purposeful technology use while ensuring students’ physical and emotional development is front and centre. But as we know, what happens at home matters just as much. Together, we can guide our kids to be confident, capable, and thoughtful digital citizens.

Around our office, we have a walking club once a week at lunch, and staff have started a run club to train for a race in March. We are keenly aware that we need to model getting outside and caring for our own health.

Here’s your January challenge: Choose one area of family technology use to reset. Maybe it’s establishing device-free dinner times, setting up a family charging station outside bedrooms, or planning weekly outdoor activities. Start small, stay consistent, and celebrate progress. This is the year to refresh, reset, and reimagine what it means to parent in the digital age.

I used both ChatGPT and Claude in the editing process, and the image at the top of the post is also AI generated.

