I have been thinking a lot about assistance lately. Who gets it, who does not, and why we suddenly get moralistic about it the moment the assistance comes from AI.
The spark for this post is Nick Potkalitsky’s Substack essay, “In Praise of Assistance.” It is one of those pieces that does not just add to the AI and writing conversation. It reframes it. (Thanks to Adam Garry for pointing me toward it.)
Nick starts from the now familiar worry about “cognitive offloading,” students delegating the thinking to a tool, and he agrees the concern is real. But then he names what often sits underneath the concern: not just pedagogy, but ideology.
He argues that the cognitive offloading critique rests on “a historical fiction: the autonomous learner.” Because if we are honest, most of us did not learn to write (or think, or revise) alone.
My own invisible advantage
In high school, I had a huge advantage: my dad was an English teacher, and he read every essay before I submitted it. Not just English essays. All of them, across every subject.
He did not write my essays. But he did what good teachers do. He asked questions I had not thought to ask. He pointed out where my logic sagged. He helped me tighten sentences. He coached me toward clarity.
That continued through university. And years later, when I became a newspaper columnist, he was still my first reader. Every column went to him before it went to my editor. He would call with suggestions, and I would decide what to keep and what to let go.
At the time, nobody called this cheating. We called it support. Nick puts it simply: “Students have always learned through assistance. From peers, from teachers, from resources…”
We rarely worry students are “offloading” onto classmates in a discussion. We celebrate it. But when AI enters the picture, suddenly assistance becomes suspect.
That is the tension.
The question is not “help or no help”
When we talk about AI and writing, the debate often collapses into a binary: real writing (alone, unaided) versus fake writing (assisted, scaffolded).
But that binary does not match how writing actually works. It does not match how learning has actually ever worked.
The better question is the one Nick keeps pointing us toward: what kind of assistance builds thinking, rather than replacing it?
That is where his essay becomes more than a defense of AI. It is a critique of an unspoken standard that has been unevenly applied for a long time: the idea that “authentic struggle” is the price of admission to learning.
Nick names the class-based reality bluntly: affluent students often have “small seminars, writing conferences, office hours, peer review sessions” while others are in systems where meaningful feedback barely exists. And then comes the sentence: “The outcome depends on whether we recognize assistance for what it is: not a threat to learning, but its precondition.”
What I have been writing toward
In October, I wrote “Modeling AI for Authentic Writing.” If AI is here (it is), then our job is to model the kind of use that keeps the writer in control. In that post, I tried to move the conversation from “Don’t use AI” to “Show your decisions.”
Because the heart of authentic writing is not whether you had help. It is whether your thinking is present. What did you accept? What did you reject? Why? What did you learn in the revision?
I wrote then: “None of this replaces judgment. I accept or reject every change.”
For years, Tricia Buckley, and before her Sharon Pierce and Deb Podurgiel, have played a similar role here on this blog, reading every post before publication and offering feedback. The byline is still mine because the ideas, voice, and final choices are mine.
That is the point.
Assistance is not the enemy of learning. Abdication is.
What I want to add
There is a system-design question underneath all of this that I keep circling back to.
If we accept that all learning has always been assisted, what changes about how we run schools?
A few weeks ago I wrote about the tutoring revolution and found myself wrestling with a similar tension. For years, success in certain courses quietly required something extra: a tutor. Parents traded recommendations, students admitted they needed help, and the whole system ran on an unspoken understanding that school alone was not enough. At least not for everyone.
AI is changing that. But here is the part that worries me: the digital divide is no longer just about device access. It is about knowing how to use the tool well. A student with strong digital literacy might turn ChatGPT into a Socratic tutor. Another might never get past using it as a homework completion machine.
Nick writes about elite students who have always had access to “assistance made flesh.” The risk now is that we create a new version of the same divide. Some students learn to collaborate with AI in ways that deepen their thinking. Others use it to bypass thinking altogether. And if we are not intentional, digital confidence becomes the new proxy for privilege.
The question is not whether students will have AI assistance. They already do. The question is whether we will teach them to use it in ways that build capacity or let the gap widen on its own.
A Culture of Yes stance
A Culture of Yes does not mean saying yes to every tool or every shortcut.
It means saying yes to the conditions that help more people learn well.
So here is where I am landing, at least today.
Writing has always been assisted. The myth of the autonomous writer has always favoured students with the most support. AI can absolutely be used to bypass thinking. But it can also be used to invite thinking, especially where feedback is scarce.
Our job is to design and model practices where assistance makes thinking visible and growth possible.
Nick’s essay refuses the easy frame. It asks us to stop policing help and start building learning communities where help is normal, explicit, teachable, and more equitably available.
That feels like the kind of “yes” worth defending.
The image at the top of this post was generated through AI. Various AI tools were used as feedback helpers (for our students this post would be a Yellow assignment – see link to explanation chart) as I edited and refined my thinking.