I have been thinking a lot about assistance lately. Who gets it, who does not, and why we suddenly get moralistic about it the moment the assistance comes from AI.
The spark for this post is Nick Potkalitsky’s Substack essay, “In Praise of Assistance.” It is one of those pieces that does not just add to the AI and writing conversation. It reframes it. (Thanks to Adam Garry for pointing me towards it.)
Nick starts from the now-familiar worry about “cognitive offloading,” the fear that students will delegate their thinking to a tool, and he agrees the concern is real. But then he names what often sits underneath the concern: not just pedagogy, but ideology.
He argues that the cognitive offloading critique rests on “a historical fiction: the autonomous learner.” Because if we are honest, most of us did not learn to write (or think, or revise) alone.
My own invisible advantage
In high school, I had a huge advantage: my dad was an English teacher, and he read every essay before I submitted it. Not just English essays. All of them, across every subject.
He did not write my essays. But he did what good teachers do. He asked questions I had not thought to ask. He pointed out where my logic sagged. He helped me tighten sentences. He coached me toward clarity.
That continued through university. And years later, when I became a newspaper columnist, he was still my first reader. Every column went to him before it went to my editor. He would call with suggestions, and I would decide what to keep and what to let go.
At the time, nobody called this cheating. We called it support. Nick puts it simply: “Students have always learned through assistance. From peers, from teachers, from resources…”
We rarely worry that students are “offloading” onto classmates in a discussion. We celebrate it. But when AI enters the picture, suddenly assistance becomes suspect.
That is the tension.
The question is not “help or no help”
When we talk about AI and writing, the debate often collapses into a binary: real writing (alone, unaided) versus fake writing (assisted, scaffolded).
But that binary does not match how writing actually works. It does not match how learning has ever actually worked.
The better question is the one Nick keeps pointing us toward: what kind of assistance builds thinking, rather than replacing it?
That is where his essay becomes more than a defense of AI. It is a critique of an unspoken standard that has been unevenly applied for a long time: the idea that “authentic struggle” is the price of admission to learning.
Nick names the class-based reality bluntly: affluent students often have “small seminars, writing conferences, office hours, peer review sessions” while others are in systems where meaningful feedback barely exists. And then comes the sentence: “The outcome depends on whether we recognize assistance for what it is: not a threat to learning, but its precondition.”
What I have been writing toward
In October, I wrote “Modeling AI for Authentic Writing.” If AI is here (it is), then our job is to model the kind of use that keeps the writer in control. In that post, I tried to move the conversation from “Don’t use AI” to “Show your decisions.”
Because the heart of authentic writing is not whether you had help. It is whether your thinking is present. What did you accept? What did you reject? Why? What did you learn in the revision?
I wrote then: “None of this replaces judgment. I accept or reject every change.”
For years, Tricia Buckley (and before her, Sharon Pierce and Deb Podurgiel) has played a similar role here on this blog, reading every post before publication and offering feedback. The byline is still mine because the ideas, voice, and final choices are mine.
That is the point.
Assistance is not the enemy of learning. Abdication is.
What I want to add
There is a system design question underneath all of this that I keep circling back to.
If we accept that all learning has always been assisted, what changes about how we run schools?
A few weeks ago I wrote about the tutoring revolution and found myself wrestling with a similar tension. For years, success in certain courses quietly required something extra: a tutor. Parents traded recommendations, students admitted they needed help, and the whole system ran on an unspoken understanding that school alone was not enough. At least not for everyone.
AI is changing that. But here is the part that worries me: the digital divide is no longer just about device access. It is about knowing how to use the tool well. A student with strong digital literacy might turn ChatGPT into a Socratic tutor. Another might never get past using it as a homework completion machine.
Nick writes about elite students who have always had access to “assistance made flesh.” The risk now is that we create a new version of the same divide. Some students learn to collaborate with AI in ways that deepen their thinking. Others use it to bypass thinking altogether. And if we are not intentional, digital confidence becomes the new proxy for privilege.
The question is not whether students will have AI assistance. They already do. The question is whether we will teach them to use it in ways that build capacity or let the gap widen on its own.
A Culture of Yes stance
A Culture of Yes does not mean saying yes to every tool or every shortcut.
It means saying yes to the conditions that help more people learn well.
So here is where I am landing, at least today.
Writing has always been assisted. The myth of the autonomous writer has always favoured students with the most support. AI can absolutely be used to bypass thinking. But it can also be used to invite thinking, especially where feedback is scarce.
Our job is to design and model practices where assistance makes thinking visible and growth possible.
Nick’s essay refuses the easy frame. It asks us to stop policing help and start building learning communities where help is normal, explicit, teachable, and more equitably available.
That feels like the kind of “yes” worth defending.
The image at the top of this post was generated through AI. Various AI tools were used as feedback helpers as I edited and refined my thinking (for our students, this post would be a Yellow assignment – see the linked explanation chart).

As AI infiltrates online writing, so does the acronym TLDR. If that’s too long for you, use DR. Offload half-baked ideas to AI and you get nice-sounding gibberish, not unlike the jargon found in a lot of education journals or scientific raspberry awards. If we school our children on such AI slop – or worse yet – on how to work with AI to produce it – then God help us all. Sorry for the cynicism, but that’s what I see at the moment.
Theo, thank you for engaging. I don’t mind the cynicism at all. AI is full of both risk and possibility, and I think you’re naming a real risk. The “nice-sounding gibberish” you describe is exactly what happens when the tool does the thinking instead of the writer.
Where I’ve found value is using AI as a feedback helper, not a drafting machine. It asks questions, flags gaps, pushes back on fuzzy logic. The thinking still has to be mine. But I know we’re in a moment with a wide range of views on this, and surfacing those views through discussions like this one is how we figure out what actually works.
Thanks for the comment. Chris
A teacher and I just met with a student regarding their use of AI – and had a good discussion about working with AI as a collaborator, not as a task completer. I loved hearing how they describe their process: they draft, prompt, edit, re-create… then get teacher feedback and have their AI collaborator comment on the teacher feedback… I really enjoy how AI is pushing our thinking around the role of school (now that there are tools for people to learn whatever they want when they want – not when the curriculum says they ought to… or not…) and love your reminder that there have always been inequities in which students get particular supports… and I like that Language Models are making us realize that perhaps the 17th-century essay is NOT the best way to measure everyone’s learning and knowledge (now that everyone can get their ideas written at a readable level). Fun time to be in education!
Thanks, Ian. That student’s process is exactly what we should be aiming for. Draft, prompt, edit, re-create, get feedback, iterate. That’s not outsourcing thinking. That’s a sophisticated workflow that most adults would benefit from adopting. Fun times indeed!
Chris, I appreciate your contributions as we all figure it out. Roy Norris, Winnipeg, Manitoba. Shareski knows me, btw. If I’m ever back in North Van it’d be great to sit for a bit and marvel at it all. I appreciate your posts: cogent & clear.
Shared with my staff, and has provoked some great conversations… thank you.
This resonates strongly. The problem was never assistance, it was the pretense that learning ever happened in isolation. What matters is not whether help exists, but whether judgment, authorship, and decision-making stay with the human. AI makes the invisible scaffolding visible, which is uncomfortable only if we cling to the myth of the autonomous writer.
I really like “invisible scaffolding visible.” Writing and learning were never solitary acts; they’ve always been shaped by the people, ideas, and feedback around us.
Exactly. Learning has always been relational. AI just makes that web of influence easier to see and talk about openly.