
Controversy over whether growth mindset works will strengthen the theory


Dec 6, 2023

All too often, education research falls short of giving educators actionable, on-the-ground advice. However, a recent controversy around Carol Dweck’s well-known growth mindset gives me hope that we can move forward with research that can better inform and support practitioners and students.

Over the last couple of decades, education research has, thankfully, moved to embracing randomized controlled trials (RCTs) when possible. Yet even when it gets to an RCT, education research still tends to stop there—at a stage where all a researcher can declare is that some intervention correlates with a desired outcome.

Research stuck in this stage can only tell us what works on average—what people call “best practices.” Yet what works on average often doesn’t work for a specific individual in a specific circumstance. It’s only by moving to more nuanced statements of what works for whom under what circumstances that researchers can offer actionable insights that educators can reliably and predictably use.

So how do we do that? The key is to move beyond inductive research that looks for on-average correlations across large samples to deductive research in which we hunt for anomalies—specific circumstances where the outcome we see isn’t what the RCT or the large dataset of correlations and studies would have predicted.
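To make that contrast concrete, here is a minimal, hypothetical sketch—my own illustration, not data from any of the studies discussed in this piece. It simulates an intervention that only helps struggling students and then shows how a pooled, on-average estimate dilutes an effect that a simple subgroup breakdown surfaces clearly. The 30% struggling share and the 0.3-standard-deviation effect are invented numbers chosen purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical simulation: an intervention that helps struggling students by
    # ~0.3 standard deviations and does nothing for students already doing well.
    n = 10_000
    struggling = rng.random(n) < 0.3      # assumption: 30% of students struggle
    treated = rng.random(n) < 0.5         # random assignment, as in an RCT
    true_effect = np.where(struggling, 0.3, 0.0)
    outcome = rng.normal(0.0, 1.0, n) + treated * true_effect

    def effect(mask):
        # Difference in mean outcomes, treated minus control, within a subgroup.
        return outcome[mask & treated].mean() - outcome[mask & ~treated].mean()

    everyone = np.ones(n, dtype=bool)
    print(f"Pooled, on-average effect:         {effect(everyone):+.2f}")
    print(f"Effect for struggling students:    {effect(struggling):+.2f}")
    print(f"Effect for already-high achievers: {effect(~struggling):+.2f}")

In this toy setup, the pooled estimate comes out near 0.1 standard deviations—small enough to dismiss—while the struggling-student subgroup shows the full 0.3 effect. That gap between the average and the subgroups is exactly where the anomalies live.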

Researchers often bemoan finding a failure in their theory. But anomalies are actually good news because they allow researchers to say, “There’s something else going on here.” And that is what leads to better understanding.

What often happens, instead, in education research is that one set of scholars does a study that shows a positive correlation between one set of recommended actions and a desired outcome, and another set of scholars does another study showing something different. Yet lurking in almost all of these large datasets or RCTs are anomalies—a particular student or class or school for which a given intervention didn’t produce the desired outcome.

When researchers avoid acknowledging the anomalies and instead simply attack each other’s opposing theories, all we get is a giant game of “my correlations are better than yours”—but nothing that helps people on the ground.

A recent controversy over Dweck’s famous growth mindset findings that Melinda Moyer covered in “Is Growth Mindset a Sham?” captures the point.

Growth mindset is the belief that one can improve one’s abilities through effort, learning, and persistence. The on-average claim has historically been that those individuals who have a growth mindset tend to achieve better than they otherwise would, and are able to work through challenges.

But as Moyer wrote, one recent meta-analysis (a review of several independent studies on the same phenomenon) by Case Western Reserve University psychologist Brooke Macnamara and Georgia Tech psychologist Alexander Burgoyne in Psychological Bulletin “concluded that ‘the apparent effects of growth mindset interventions on academic achievement are likely attributable to inadequate study design, reporting flaws, and bias’—in other words, the science on growth mindset is flawed, and the approach doesn’t actually boost kids’ grades.”

This feels a lot like the classic case of pitting one set of correlations against another: an “on average” food fight that doesn’t help people on the ground. As Moyer wrote, “Their goal was to figure out if, on average, growth mindset interventions improved academic achievement.” To do this, they lumped students together regardless of circumstance.

Moyer then profiles another meta-analysis, published in the same journal issue by several researchers, which came to a more nuanced conclusion, as it “found positive effects on academic outcomes, mental health, and social functioning, especially when interventions are delivered to people expected to benefit the most.”

According to Moyer: “The other meta-analysis, on the other hand, tried to figure out when and where growth mindset interventions worked, and when and where they did not, using a slightly different data set. In essence, they did the (sic) opposite of lumping all the students together. These researchers found that growth mindset interventions worked in some groups and not in others and that it helped struggling students the most — which, if you think about it, makes a lot of sense. When kids are already getting straight A’s, growth mindset interventions aren’t as important or helpful, since students are already performing well. But when students struggle in school, the researchers found, growth mindset interventions may help.”

Interestingly enough, the meta-analysis criticizing growth mindset also found some evidence of the same varied effects, Moyer wrote. “When they broke down the various studies and looked specifically at how growth mindset affected students who got low grades, they found that the interventions did have some beneficial effects.”

And even more interesting: “After those two meta-analyses were conducted, Elizabeth Tipton, a statistician at Northwestern University, and her colleagues learned about them and decided to conduct yet another meta-analysis of the growth mindset data. They looked at the same studies included in the ‘growth mindsets don’t work’ analysis, but instead of lumping the data together, they teased the various effects apart more. They concluded that ‘there was a meaningful, significant effect of growth mindset in focal (at-risk) groups.’ In other words, again, growth mindset did seem to help kids who weren’t doing well in school.”

Another way to state all this is that there’s an anomaly. Growth mindset doesn’t seem to work as well for those who are already performing well. I suspect Dweck might push back and say something like, “That’s true, but when and if work gets hard down the road and they experience a struggle, having a growth mindset will serve them well.” That’s certainly the implication of a bunch of Dweck’s stories on stars like John McEnroe in her book “Mindset” (debatable as it might be to analyze a star that one doesn’t know).

But leaving that aside, Tipton then makes the case for improving research by hunting for anomalies and boundary circumstances. As Tipton told Moyer, “There’s often a real focus on the effect of an intervention, as if there’s only one effect for everyone.” She argued that it’s better to try to figure out “what works for whom under what conditions.” I agree with her. But not all researchers do, which strikes me as unfortunate for those on the ground trying to transcend supposed best practices to do what will work in their specific circumstances and with their specific students.

Even more to the point, I’ve long heard from researchers that there are other anomalies where growth mindset alone doesn’t make sense. Moyer writes about this as well: “Some researchers, including Luke Wood at San Diego State University, have argued that focusing solely on effort could be detrimental for children of color, who may benefit from being praised both for ability and intelligence. (Here’s a great article by journalist Gail Cornwall that delves into Wood’s concerns and recommendations in more detail.)”

We ultimately need more anomaly-seeking to continue to strengthen the theory of growth mindset. And it would be amazing if Dweck would lead this movement. That might give the theory’s limitations more airtime—but it would also help educators on the ground know how, where, and when to put growth mindset into action.

Because ultimately, whenever growth mindset fails to produce the outcomes it promises, we aren’t undermining the overall theory. We instead have an opportunity to grow it.

Michael is a co-founder and distinguished fellow at the Clayton Christensen Institute. He currently serves as Chairman of the Clayton Christensen Institute and works as a senior strategist at Guild Education.