Across Canada, and in many other parts of the world, literacy screening is having a moment.
There is broad agreement that we need to be better at identifying students who may be at risk, and that we need to do this earlier. The push toward more consistent and universal literacy screeners makes a lot of sense: earlier identification leads to earlier intervention, and ultimately, better outcomes for kids.
But here’s the question that’s been nagging me: are we simply going to recycle the same kinds of screeners we have used for the last generation? Or can this be the moment to think differently about what screening could look like in an AI world?
What Screeners Do Well
Traditional screeners help us establish a baseline. They can tell us if a student is meeting expected benchmarks in areas like phonemic awareness, decoding, fluency and comprehension. They provide the data teachers need to take action.
The challenge is that screeners often leave a gap between assessment and action. A teacher receives a score and then has to translate that number into the “what’s next” for the student and their family. It’s useful, but not always immediate, personalized or engaging.
What AI Could Add
This is where I wonder if we are missing an opportunity. AI could allow us to rethink the very design of literacy screeners. Imagine if…
- Texts were customized for cultural relevance. Instead of one-size-fits-all passages, AI could generate short reading texts tailored to the learner’s context, interests or community. A child on the North Shore might read about the Capilano River, while another in Surrey reads about the Pattullo Bridge reconstruction. For Indigenous learners, this could mean texts that reflect Indigenous ways of knowing and storytelling traditions, developed in partnership with local Nations. The text would still be controlled for vocabulary and difficulty, but it would feel more real and more personal.
- Feedback was immediate and audience-specific. A student could receive a friendly message highlighting a win (“You read 80 words per minute—your smoothest word was ship”) and a tip for next time. Families could receive a plain-language summary with simple routines for home (“Read together for 10 minutes tonight; circle the words that start with sh”). Teachers could receive a strand-level profile with small-group suggestions, not just a number on a page.
- Practice was built-in. Instead of waiting for the next lesson, a screener could instantly generate a few targeted practice items based on the patterns the student struggled with, turning assessment itself into a learning moment.
What This Isn’t
To be clear, this isn’t about replacing teacher expertise or professional judgment. Teachers would still interpret results, make instructional decisions, and build the relationships that matter most.
And this isn’t about creating more data for data’s sake. It’s about making the data we already collect more immediately useful—for students, for families and for teachers.
Safeguards Matter
Of course, any AI use comes with important guardrails. Automated scores would need validation against human judgment, with teachers maintaining override authority. Generated texts would require review for accuracy, bias and cultural safety. Indigenous content, in particular, would need to be co-designed with local Nations and aligned with principles of data sovereignty, ensuring that AI tools serve rather than appropriate Indigenous knowledge.
Quality oversight would need to be built in from day one, with regular audits and continuous monitoring to prevent the kind of drift that could undermine both accuracy and equity.
A Narrow Window
Here’s what makes this moment unique: jurisdictions are investing in new screening initiatives right now. We have a narrow window to influence how these tools are designed. If we don’t explore these possibilities now, we risk locking in approaches that simply digitize yesterday’s thinking.
I am not a literacy expert. But as someone who has watched technology reshape almost every other part of our schools over the last two decades, I see a pattern. The organizations that thrive are the ones that ask not just “how can we do what we’ve always done, but faster?” but “what becomes possible now that wasn’t possible before?”
The Question We Should Be Asking
The push for literacy screening is the right one. The evidence on early identification and intervention is clear. But we also have a unique opportunity to do more than just import the same tools from the past.
What if, instead of only identifying students who need help, our screeners could also immediately provide that help?
What if they could engage families in ways that feel supportive rather than clinical?
What if they could give teachers not just data, but insight?
AI won’t replace the expertise of our teachers or the relationships that matter most. But it might make our tools more immediate, more relevant and more effective for every child.
The question isn’t whether we should innovate. The question is whether we will seize this moment to innovate thoughtfully—or let it pass by.
What new possibilities are you seeing in your corner of education? And how do we make sure we are not just replicating the past with shinier tools?
Thanks to West Vancouver District Vice-Principal Mary Parackal, who really pushed my thinking in creating this post around what might be possible with AI.
The image at the top of this post was generated through AI. Various AI tools were used as feedback helpers (for our students this post would be a Yellow assignment – see link to explanation chart) as I edited and refined my thinking.