As I noted a few weeks ago, education technology seems to be lagging behind our vision of building systems in which students advance upon mastery. In competency-based systems, students move through material at different paces; they may take longer to master some competencies than others. In some competency-based schools, students may fail to master a given performance standard and choose to come back to that standard later on, or students may be allowed to exert some choice over which standards they tackle when. Moreover, competency-based systems may offer multiple pathways to mastery, depending on students' interests. The more individualized these systems become, the more varied the demands they stand to place on existing edtech platforms.
This seems to surface a bigger issue of how we categorize products. When we think about the tools in the edtech market, they typically fall into big categories such as student information systems (SIS), learning management systems (LMS), document storage or digital portfolio systems, and academic software. Other solutions include behavior management systems for students and talent management systems for teachers. The list goes on. These categories, particularly the first three, are not so much intuitive from an educational perspective as they are a vestige of the chronology of our system's priorities and technical capabilities. They are fixed against a timeline in which compliance with state reporting and basic student tracking were the original jobs we wanted technology platforms to perform. LMSs and longitudinal tracking systems came much later, layered over these more basic functions; both emerged out of the technology available to track students and the philosophical and pedagogical belief that this data would prove useful, only to be cobbled together on top of legacy SIS platforms. And as online learning has improved over time, academic software has added yet another layer. As our concepts of both accountability and data-driven instruction have shifted, we've placed new demands on old systems: tracking teacher effectiveness through longitudinal data systems, for example.
Now that competency-based education is trying to pull the rug of time-based progression out from under the traditional education system, competency-based tech demands are likewise tugging at the old categories in the edtech market. If "courses" and "schedules" and "grade levels" are called into question, suddenly what we track, and how we track it, shifts. And with this shift, we may have an opportunity to redefine, or at least nudge, the edtech categories like SIS and LMS that we currently take as a given.
What does it mean to have the wrong categories? One way to understand this concretely is to look at how technology platforms are already doing multiple jobs for educators and students alike. For example, bulb is a digital portfolio start-up where students can store projects. Programs like this are increasingly popular in schools that use project-based learning. But teachers uploading videos to "flip" their classrooms are likewise using bulb's platform. This suggests that a category like "digital portfolio," as it's thought of in many project-based schools, is far too narrow. In practice, the technology is a platform for sharing content, both in the traditional sense of "turning in" an assignment and in the sense of "assigning" materials. The same can be seen where schools use their SIS as a de facto data warehouse. These platforms are increasingly "multi-use."
Another way this manifests is in the world of imperfect "workarounds" that we currently observe in the edtech market, often born of the integration woes of getting discrete products to talk to each other. Many schools and districts are locked into a single SIS because, over time, they have built integrations on top of it to track student progress or measure teacher effectiveness. This suggests that our current categories of products that integrate on top of an SIS are somewhat arbitrary: they are contingent on the legacy systems with which we are stuck, rather than on the actual jobs that we want to get done.
Lastly, our categories may impute jobs to educators that they don't actually want technology to do for them. Academic content delivery is a good example of a category that has become too broad to meaningfully shape supply. In Station Rotation models, for instance, academic content can serve different purposes in different classrooms. In some classrooms, Station Rotation is geared toward freeing up teacher time for small-group instruction. In that case, the content those teachers want delivered online primarily needs to keep kids engaged and provide robust practice exercises while more meaningful instruction happens face-to-face in the next station over. In other classrooms, online exercises may be aimed at giving the teacher better data about where students should be placed and at identifying gaps in instruction. As such, software designed for Station Rotation models may need to do a variety of jobs that are not well defined in the current market.
Where does this leave us? On the one hand, researchers and edtech engineers alike need to spend more time observing use cases in classrooms, schools, districts, and states to really understand the "job-to-be-done." Without concrete input from our end users, we risk reifying categories that are mismatched with actual demand. On the other hand, this may mean that the market is at a point where we need more integrated solutions that cross, and in turn do away with, these existing categories so we can start fresh and nail those jobs. An integrated, flexible platform for competency-based schools might be one such solution.