In August, John Oliver dedicated a segment of Last Week Tonight to investigating bias in medicine, uncovering this gem of an interview from CBS in 2014, featuring correspondent Lesley Stahl interviewing neuroscientist Larry Cahill. Cahill explained that for decades, medical research has excluded women under the assumption that findings in males can be applied to both sexes. Scientists are just now discovering that they often can’t. Case in point: Ambien was the first drug to have a different recommended dose for men and women, after scientists showed that women’s bodies metabolize the drug differently, leaving more of it in their systems the next morning and risking longer-lasting drowsiness. But until recently, the fact that female bodies responded differently in drug trials was thought to be noise in the data, not a cue for further research.

Cahill reflected that drawing conclusions based on study effects in men has long been the norm in medicine, leading to this exchange:

Lesley Stahl: If [they] want to understand me, they study you?

Larry Cahill: And here’s why they do that. Because there’s this assumption that you are me with pesky hormones.

The perspective that Cahill succinctly captures frames differences in female bodies as complications or anomalies, not notable circumstances worthy of study in their own right. This brings to mind the question: if “pesky hormones” was the mindset that scientists had about women in medical research, where do we struggle with an equivalent mindset in education? Do we imagine that some factors complicate broad-reaching findings on what works, when, in fact, they are critical to understanding what works in which circumstances? By focusing on what works for students on average, the education field often mistakes important anomalies for noise in the data—but more circumstance-based research and diverse, comprehensive data could help researchers move in a new, more informed direction.

From what works, to what works for whom

For good reason, the education sector is constantly oriented towards what’s shown to work (as opposed to what we imagine, hope, or pray might work). Randomized controlled trials (RCTs) have emerged as the gold standard for research in education, but they typically only shed light on what is most likely to work on average, whereas educators need a deeper level of information to chart predictably effective paths for each student. As researcher Todd Rose argued in his book The End of Average, it’s a myth that the concept of “average” offers meaningful information, since people vary almost endlessly when it comes to how they learn—they’re full of “pesky hormones” (or, more to the point in this case, pesky neurological, cognitive, and social differences).

For example, consider this brief from the What Works Clearinghouse describing an RCT that analyzed the efficacy of adaptive math software, finding that the software had “potentially positive effects on mathematics achievement for elementary school students.” Although a thorough, well-funded study like this signals promise, findings like these cannot reliably tell us why some students or classes likely fared worse while others fared far better.

An effective research agenda for student-centered learning should move beyond merely identifying correlations about what works on average and instead articulate and test theories about how and why certain educational interventions work in different circumstances for different students.

Reimagining anomalies as opportunities

Moving beyond the average will require, in part, a new focus on developing theories that explain why certain approaches do or do not work in certain circumstances. For example, do particular interventions work best for particular student populations at specific points in their learning trajectories? Do some software tools work better for practicing hard skills, whereas others excel at developing students’ abilities to persist through challenges?

Answering these questions means encouraging research that digs into anomalies—instances where the prevailing research cannot explain a certain result—to surface new explanations and refine our understanding of what drives individual learning. For decades, medical researchers have treated variations in women’s drug trial results as anomalies to be explained away. Instead, anomalies should be seen as opportunities to move beyond correlation and understand which circumstances predict the success or failure of a given approach.

Achieving this vision for education research—one that moves beyond a “pesky hormones” mindset—requires a commitment to new data systems that can fuel more circumstance-based research. To get to a place where research can more reliably tell us what works for whom, we need better, more collective knowledge about what’s happening in schools across many dimensions: context, school design, and outcomes. Our recently launched Canopy project takes an initial step in this direction. By casting a wide net through a crowdsourcing process to surface a more diverse set of innovative schools, and by cataloging each school’s model using a consistent data structure, Canopy data begins to build collective knowledge that can support a better understanding of what works in what circumstances.

Researchers, funders, and policymakers should continue to develop better data systems that can capture circumstantial differences among schools working to reimagine the learning experience. That way, we won’t wake up in a few decades to an aging John Oliver heckling the education community about why, for too long, we averaged out some of the most important factors in what makes or breaks a given innovation.

Author

  • Chelsea Waite