The Invisible Learner Problem
What AI Actually Needs to Solve in EdTech
There’s a student no one talks about enough in product roadmap meetings.
She’s 17, working part-time, and raising a younger sibling while her mom recovers from surgery. She enrolled in an online high school completion program six weeks ago. She logged in twice last week: once after work at 11pm, and once at 6am. She hasn’t touched her math module in eight days.
She is engaged, but she feels like she’s drowning.
For years, EdTech’s answer to students like this has been "self-paced flexibility" and "personalized learning," a phrase so overused that somewhere between a Series B pitch deck and a product launch it lost all value in describing a meaningful solution. We gave learners the ability to move at their own pace. But we never solved how to reach them when their pace stops entirely.
AI is changing that calculus. But only if product managers are asking the right questions.
The Real Opportunity Isn’t Tutoring. It’s Presence.
The conversation around AI in education has fixated on tutoring. Conversational practice, instant feedback loops, and AI-powered writing coaches are all genuinely useful. But for the non-traditional learner population (online high schoolers, adult learners, workforce-development students), the more pressing crisis is persistence, not comprehension.
Non-traditional learners face what researchers call invisible barriers. These are things like unpredictable work schedules, housing instability, caregiving responsibilities, and the accumulated weight of prior educational failure. Traditional LMS platforms were not built for their lives because they inherently assume a stable, dedicated learner with a laptop, reliable internet, and no competing obligations.
AI’s most underutilized capability in this space isn’t the generation of content. Instead, what these learners need is pattern recognition applied to intervention timing. When does a learner need a nudge versus space? When is an eight-day absence a crisis versus a planned pause? Product managers building for this population should shift from treating engagement metrics as leading indicators of success and instead treat re-engagement after absence as the core product problem to solve.
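To make the timing question concrete, here is a minimal sketch of what "nudge versus space" pattern recognition could look like: a hypothetical signal that judges a gap in activity against the learner's own login cadence rather than a fixed platform-wide threshold. The function name, the multiplier, and the labels are all illustrative, not any platform's actual model.

```python
from datetime import date
from statistics import median

def absence_signal(login_dates, today, threshold=3.0):
    """Classify a gap in activity relative to this learner's own cadence.

    A gap is flagged only when it is much longer than the learner's typical
    spacing between logins, so an every-other-day learner and a once-a-week
    learner are judged against different baselines.
    """
    dates = sorted(login_dates)
    if len(dates) < 2:
        return "insufficient-history"
    gaps = [(b - a).days for a, b in zip(dates, dates[1:])]
    typical_gap = median(gaps) or 1          # guard against same-day logins
    current_gap = (today - dates[-1]).days
    if current_gap <= typical_gap:
        return "normal"
    if current_gap <= threshold * typical_gap:
        return "watch"                       # space, not a nudge, may be right
    return "reach-out"                       # surface to a human advisor
```

Under this sketch, an eight-day silence from a learner who usually logs in every other day lands in "reach-out," while the same eight days from a weekly learner would only be a "watch": the same absence means different things for different lives.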
What Good Looks Like
When an edtech organization builds from the non-traditional learner’s perspective rather than an idealized version of one, the outcomes tell the story. Learner-centric design and product decisions matter.
The challenge edtech institutions face in composition courses is instructive. At large-scale institutions running a self-paced, asynchronous model, essay volume alone is a significant product problem. Students submit work and wait. Feedback arrives late, is inconsistent across graders, and often misses the learner entirely when submissions are incomplete or off-topic. Frustration compounds. Resubmission cycles lengthen. Students who are already one obstacle away from dropping out get handed another one.
Penn Foster Group, serving over 90,000 students through its online high school alone, implemented a solution that did not automate the human out of this process. They partnered with Learnosity to deploy an AI-powered writing tutor that delivers real-time, in-the-moment feedback, while keeping human instructors in the loop for final review. The result was faster response cycles, more consistent feedback, and a reduced friction point at exactly the moment students are most vulnerable to disengagement.
But arguably more significant is what they’re doing with predictive analytics. Penn Foster is using behavioral data to identify struggling learners and reach out at the precise moment they’re most likely to respond, before the eight-day absence becomes a withdrawal and the student becomes too overwhelmed to persist. This goes beyond a product feature; it is a product philosophy. An intervention that arrives while someone is still reachable is worth any number of interventions that arrive after they’ve already left.
They’ve also been piloting LAADS, a new learning model built on learner-centered, skills-based, differentiated design, which addresses the structural question underneath all of this. If the content itself doesn’t fit the learner’s actual context and goals, no amount of AI intervention will compensate.
Three Principles Worth Building Around
For PMs working in edtech, such predictive models uphold critical principles that hold regardless of platform or population:
Meet learners in their actual context, not your ideal one. Non-traditional students interact via mobile, during non-business hours, in short bursts. Features that assume a 45-minute focused session are features that will go unused. Micro-learning, asynchronous AI feedback, and voice interfaces aren’t nice-to-have features. These are access requirements for non-traditional learners.
Design for trust before capability. A 16-year-old who has failed out of traditional school doesn’t arrive at your platform with goodwill to burn. AI interactions that feel surveillant or automated erode trust fast. The product question shouldn’t be “how do we monitor disengagement?” Instead, ask “how do we make the learner feel seen without feeling watched?”
Treat the human advisor relationship as a product surface. AI should be augmenting the coach or advisor. The platform should aim to identify and surface the right signal at the right moment so a human can make a more informed decision and take specific action. PMs who default to full automation in this context are optimizing for cost savings at the expense of the population they claim to serve.
The Metrics Trap
Here’s an uncomfortable truth that doesn’t make it into conference keynotes: most edtech products are still measured on metrics that non-traditional learners were never designed to succeed on.
Completion rates, Daily Active Users, session length, Time To Complete: these are proxies borrowed from consumer apps and applied to education without serious interrogation. A student who logs in for 12 minutes every other day and earns her credential in eight months is a success story that most dashboards would flag as at-risk.
AI gives product teams the computational power to build far more nuanced success models that account for life interruptions, non-linear progress, and the compounding effect of small wins. The question is whether leadership is willing to redefine what the product is supposed to accomplish.
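As a sketch of what "redefining success" might mean in practice, compare a flag borrowed from consumer apps with one that judges pace against the learner's own credential timeline. The thresholds, field names, and pacing rule here are all hypothetical, not any real dashboard's logic.

```python
def consumer_dashboard_flag(avg_session_minutes, weekly_logins):
    """Naive churn-style rule borrowed from consumer apps: short,
    infrequent sessions look like risk regardless of progress."""
    return avg_session_minutes < 20 or weekly_logins < 5

def progress_aware_flag(modules_done, weeks_enrolled,
                        modules_total, target_weeks):
    """Judge pace against the learner's own credential timeline.

    Flag only when the learner is far behind the pace needed to finish
    by their target, not when their sessions are merely short.
    """
    required_pace = modules_total / target_weeks
    actual_pace = modules_done / max(weeks_enrolled, 1)
    return actual_pace < 0.5 * required_pace   # truly stalled, not just brief
```

The student above (12-minute sessions every other day, on pace for an eight-month credential) trips the consumer flag yet passes the progress-aware one: same data, opposite verdicts, depending on what the product is defined to accomplish.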
For the non-traditional learner, the product isn’t a platform, it’s an opportunity.
Build accordingly.
What’s the biggest gap you see between how edtech products measure success and how non-traditional learners actually experience it? I’d love to hear from others working in this space.
Jessica Haley | Product & Program Management Leader | EdTech | Competency-Based & Online Learning

Presence, and meeting the individual student where they are. On a large scale, that certainly calls for more tools. 🧰 AI might be the only answer.
I think you’re uniquely qualified.
A lot to digest here, Jessica, but a few initial thoughts.
You wrote:
"Non-traditional students interact via mobile, during non-business hours, in short bursts. Features that assume a 45-minute focused session are features that will go unused. Micro-learning, asynchronous AI feedback, and voice interfaces aren’t nice-to-have features. These are access requirements for non-traditional learners."
This seems to be one of your biggest concerns. I have to admit I don't have much edtech experience. But even within a 45-minute focused experience, don't most edtech systems allow multiple savepoints? I strive to understand your concern, but I'm also skeptical everything can be made bite-sized. Perhaps I'm not empathetic enough, but the non-traditional learner may just need to backtrack sometimes within a 45-minute session on a critical path and gut it out before full mastery of that module can be awarded. And yes, maximize the number of five- or ten-minute modules you can.
In what will be sixty years of living this year, I've seen tremendous strides in education toward a mastery mindset, from what was a teach-and-test-the-material mindset with an expectation that some will excel and some will fail. Little did we poor, literally old-school students know we were aiding this grade-based assessment culture when we begged to be graded on a curve.
AI sounds like it will play an essential role in patiently offering feedback and additional personalized, contextualized teaching to get students over cognitive obstacles to "getting" the material. A human teacher can be frustrated trying approach after approach and still being met with a blank stare and an "I just don't get it." AI knows nothing of the feeling of seemingly teaching to a stone wall.
I hope AI can get non-traditional students re-engaged more often when they risk giving up altogether. And I hope that module mastery can be stretched to the limits of self-paced learning even if it means it may take months (or years) to earn a milestone. And I hope pattern-matching works to better convey and contextualize material for edtech users. These all will benefit greater society as a whole and not just students.
If this is ignorant mush because I don't grasp your arguments in the least, please just kindly ignore it. Thanks!