In recent years, many have beaten the drum about findings that a rigorous curriculum can improve student outcomes, with gains that sometimes outpace those of other popular education reforms.
But much to the frustration of education researchers, philanthropists, and others on the rigorous curriculum train, school districts sometimes don't purchase, and teachers sometimes don't use, curriculum that lines up with what those advocates believe the evidence suggests.
Most recently, that frustration has manifested itself in people like me lamenting how many teachers had to create materials themselves on the fly to serve their students in the wake of COVID, and how comparatively few were able to access the quality online resources that have been built over the years, according to a national survey by the Clayton Christensen Institute.
Frustration has mounted over what it will take for the majority of districts and teachers to adopt and use “evidence-based” materials.
What all of this griping misses, however, is that it's not that districts ignore evidence or quality. It's that their definition of quality, and therefore of what evidence appropriately shows whether something meets that standard, is sometimes different from that of those on the "rigorous curriculum bandwagon" (and yes, I confess: I'm a card-carrying member).
Quality, in other words, is not absolute. A report by Thomas Arnett and Bob Moesta of the Christensen Institute titled Solving the Curriculum Conundrum (and an accompanying one applying the findings to the situation during COVID) lays bare just how different the definition of quality can be based on the circumstance a district is in and the progress it is trying to make.
Before diving into the report’s findings, it’s worth thinking about the statement that quality isn’t absolute. All too often in education we act as though that’s not the case. We ask whether a program is high quality—without inquiring for whom, in what circumstance, and according to what measures.
To illustrate how ludicrous that is, think about the following question:
Which is higher quality: a cup made from a mix of new and recycled paper and designed for hot liquids, or a sturdy drinking glass?
It's an absurd question. The paper cup is perfect for toting hot liquids on the go and for discarding after use. The glass is meant as a staple in your kitchen for cold drinks. Your context as an individual determines which is the better fit.
So too for districts.
According to Arnett and Moesta, there are at least four different "Jobs to Be Done" (the progress someone seeks in a struggling circumstance) that lead districts to adopt curriculum:
1) Overhaul: Help us transform instruction to tackle low achievement;
2) Build Consensus: Help us manage a selection and get to consensus;
3) Update: Help us refresh our materials to better support teachers;
4) Influence: Help us shape the field.
Districts looking to transform instruction to tackle low achievement, often amid deep discontent from stakeholders like school board members, define quality for curriculum in ways similar to those of researchers and others on the rigorous curriculum bandwagon, because they are looking for evidence that any curriculum they adopt will move the achievement needle.
Research trials and evidence of alignment to standards are both helpful in selecting a new curriculum, a selection that doesn't necessarily happen "on cycle" with traditional curriculum adoption. But districts in this circumstance may also prioritize other investments they believe will pay a bigger dividend on achievement, such as coaching, overhauling learning models, or rethinking how they use time, and leave their current curriculum in place, even if it's subpar from the rigorous-curriculum perspective.
In the Build Consensus Job, districts are selecting new curriculum on cycle, and their focus is on gaining buy-in from key stakeholders—namely the teachers who take part in the curriculum selection committee. The curriculum director wants to survive the process unscathed.
As a result, quality means materials that engage students and are user-friendly and straightforward for teachers. The evidence teachers will look at isn't randomized controlled trials or ratings from EdReports, but signals from teachers in other districts who have used the curriculum, signs that what they are considering isn't too different from what they currently use, glimpses of flash and sizzle that might attract student interest, and the like. It's not that boosting achievement isn't important; it's just that given everything going on, it's not the most pressing priority.
For districts looking to update their materials to better support teachers, the problem is more acute than for those looking to build consensus: teachers are actively unhappy with their current materials and want something different.
In many cases, this means their definition of quality is similar to that of districts in the Build Consensus Job, but as Arnett and Moesta observe, there are nuances here. If teacher dissatisfaction stems from current materials not being aligned to standards, for example, then EdReports can be a valuable tool for marshaling evidence in selecting what's next. If, however, teachers are hungry for more project-based approaches to learning, then PBLWorks will be a better resource with better evidence, given what they are trying to achieve. The segments within this Job matter, in other words.
Finally, for those districts seeking to shape the field, they have a strong reputation on which to build. Like the Build Consensus districts, they are adopting on cycle, but because they are doing relatively well, they are now looking to adopt curriculum that will influence the field more broadly—by building up a new publisher, for example, or adopting something that will win them acclaim for its level of “innovation.”
So what is quality? Anything that will win these districts plaudits and generate positive publicity. The evidence lies in how external stakeholders seem to be reacting to their potential choices, not the raw research on the curriculum itself—an important distinction given the faddish nature of education.
There are many conclusions to draw from this research—and a lot of work to be done for those on the rigorous curriculum train to design curriculum in ways that match the progress districts are seeking to make.
But one conclusion I'll draw at minimum is that we should be wary of asking why districts don't use evidence-based practices or care about quality. They do. It's just that their circumstance and their definition of progress, and hence of quality, are different from those of people looking at curriculum in an "ideal" vacuum and against a blank slate.