David Deming, the dean of Harvard College, recently marshaled an array of evidence about the benefits of personalization and technology in a fantastic piece on the potential of AI to boost learning.
After reading the section of his piece titled “Personalization is incredibly effective for learning,” you’d wonder why we haven’t already achieved nirvana in schooling. The learning outcomes Deming cites are incredible.
You also might wonder why there's so much pushback and vitriol against the use of technology in education in certain quarters right now. As Whiteboard Advisors' Ben Wallerstein recently noted, a "Senate Commerce Committee hearing on the 'impact of technology on America's youth' reflected a shift in the national discourse about technology in schools that's been brewing for some time." Indeed, a whole book by a neuroscientist now pins the decline in IQ scores since 2000 on technology.
Deming then dives into the challenges that counteract the academic studies he cites on personalization and technology, acknowledging that distraction and cognitive offloading have conspired to prevent these gains in the real world.
Yet there's an even bigger structural challenge that Deming's piece only touches on, one that I think speaks to some of the skepticism around the use of technology in education at the moment.
Imagine giving someone a computer armed with the most amazing tools for learning ever invented. But keep in mind that, as cognitive scientist Daniel Willingham often reminds us, learning is effortful and hard. The temptation to do something easier is constant.
Then recall that, on this very same device with incredible learning tools, you also have literally billions of potential distractions for every possible motivation and proclivity that exists in the world. In every respect, it's the opposite of the single-purpose Speak & Spell, invented in 1978.
What do you expect most kids would do?
We see this to some extent in the 5% problem that Laurence Holt has written about—the observation that while there are plenty of online-learning math programs with studies showing robust student outcomes, most of those studies show that only roughly 5% of students use the programs at their minimum recommended dose.
It’s yet another reason why educators need to take more seriously how students are engaging with technology. It’s why simply layering a technology solution—that also comes with access to lots of other non-educational options—over the existing schooling model will never produce the outsized gains some studies suggest are possible.
That’s been one of my longtime arguments, of course: the learning model matters more than the technology.
But it's clearer to me that one of the things I've been missing, in the books Disrupting Class and Blended, for example, is that it's also important that the learning model actively prevent, or at least account for, all the potential distractions that now come with the technology we're putting in front of students. That's because these general-use technologies loaded with educational applications have serious downsides.
Although there are technology solutions to this problem, they are likely not enough given the ingenuity and hacking abilities of students looking to escape effortful work.
This speaks to one of the more intriguing things that Alpha Schools does when it gives students access to technology.
Importantly, Alpha doesn’t allow students to use their own smartphones during the “2-hour learning” block. Nor is the school simply issuing Chromebooks to 2nd graders and in essence saying, “Take these home with you. Here’s our technology-use policy. Please stay in Google Classroom—and enjoy.”
Instead, students using Alpha's AI-powered Timeback platform have serious guardrails placed around what they are doing on their devices. As MacKenzie Price told Diane Tavenner and me on our Class Disrupted podcast, students don't get access to chatbots, which she terms "cheatbots," or to myriad other distractions. Rather, Timeback uses a vision model, "an AI tool… that tracks the screen and is actually watching to understand how a student is moving through this material." That means it can also see when students deviate from what they should be doing.
So, technology is part of the solution. But it's likely only effective because the technology is integrated into a model in which Alpha's guides (they don't call them teachers) actually have the time to look at the vision model's outputs and see where students could improve their performance and waste less time. It's also because this technology is aligned with Alpha's complex motivational model, which revolves in large part around giving students back their most precious commodity, their time (hence the name Timeback), among other rewards.
In other words, the technology works because it’s harmonious with the learning model itself. Plugged into a traditional system, it’s not clear that students would care that Timeback knew they were exploring distractions outside of their learning applications. There would be little benefit to them for staying on task.
And if students hacked the tech, it’s not clear their teachers would know whether they were making the learning progress they should, or were instead part of the 95% skipping out on yet another learning application’s recommended dosage.
This is an important point. Some of today's leading edtech may be theoretically "good." But these tools aren't given in isolation. They are typically layered on top of a traditional schooling model that wasn't built to optimize learning in the first place.
No wonder students often default away from the hard work of actively learning to passively scrolling the billions of distractions in plain sight.
