Discussion of effects of growth mindset: Let’s not demand unrealistic effect sizes.

Shreeharsh Kelkar writes:

As a regular reader of your blog, I wanted to ask you if you had taken a look at the recent debate about growth mindset [see earlier discussions here and here] that happened on theconversation.com. Here’s the first salvo by Brooke Macnamara, and then the response by Carol Dweck herself. The debate seems to come down to how to interpret “effect sizes.” It’s a little bit out of my zone of expertise (though, as a historian of science, I find the growth of growth mindset quite interesting) but I was curious what you thought.

I took a look, and I found both sides of the exchange to be interesting. It’s so refreshing to see a public discussion with robust disagreement and no defensiveness.

Here’s the key bit, from Dweck:

The effect size that Macnamara reports for growth mindset interventions is .19 for students at risk for low achievement – that is, for the students most in need of an academic boost. When you include students who are not at risk or are already high achievers, the effect size is .08 overall. [approximately 0.1 on a scale of grade-point averages which have a maximum of 4.0]

An effect of one-tenth of a grade point: large or small? For any given student, it’s small. Or maybe it’s an effect of 1 grade point for 10% of the students and no effect for the other 90%. We can’t tell these scenarios apart from this sort of study, because they produce the same average (see the sketch below). Either way, the average effect is small, and of course it’s small: it’s hard to get good grades, and there’s no magic way to get there!
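To make that concrete, here’s a minimal simulation of the two scenarios. The population size, random seed, and the 10%/90% split are illustrative assumptions, not numbers from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000  # hypothetical student population

# Scenario A: every student gains 0.1 grade points.
effect_a = np.full(n, 0.1)

# Scenario B: 10% of students gain a full grade point, 90% gain nothing.
effect_b = np.where(rng.random(n) < 0.10, 1.0, 0.0)

# Both scenarios yield the same average effect, which is all that a
# study of average treatment effects can estimate.
print(effect_a.mean(), effect_b.mean())  # 0.1 and roughly 0.1
```

An average of 0.1 is consistent with both stories, which is why “small on average” doesn’t settle whether the intervention matters a lot for some students.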

This is all a big step forward from the hype we’d previously been seeing, such as this claim of a 31 percentage point improvement from a small intervention. It seems that we can all agree that any average effects will be small. And that’s fine. Small effects can still be useful, and we shouldn’t put the burden on developers of new methods to demonstrate unrealistically huge effects. Demanding huge effects feeds the hype cycle; it’s the Armstrong principle, which can push honest researchers toward exaggeration just to compete in the world of tabloid journals and media.

Here’s an example of how we need to be careful. In the above-linked comment, Macnamara writes:

Our findings suggest that at least some of the claims about growth mindsets . . . are not warranted. In fact, in more than two-thirds of the studies the effects were not statistically significantly different from zero, meaning most of the time, the interventions were ineffective.

I respect the general point that effects are not as large or as consistent as has been claimed, but it’s incorrect to say that, just because an estimate was not statistically significantly different from zero, the intervention was ineffective.
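As a quick check on that logic, here’s a simulation of how often a real but small effect comes out statistically significant. The true effect of 0.08 standard deviations is taken from the meta-analysis; the per-study sample size is a hypothetical round number:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.08   # overall standardized mean difference from the meta-analysis
n_per_group = 200    # hypothetical per-study sample size
n_sims = 10_000

hits = 0
for _ in range(n_sims):
    treated = rng.normal(true_effect, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    _, p_value = stats.ttest_ind(treated, control)
    hits += p_value < 0.05

print(f"Share of studies reaching p < 0.05: {hits / n_sims:.2f}")  # about 0.12
```

Under these assumptions, close to 90% of studies would come out “not statistically significant” even though the intervention has exactly the effect the meta-analysis estimates. A run of non-significant results is what low power looks like, not evidence of zero effect.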

Similarly, we have to watch out for statements like this, from Macnamara:

In our meta-analysis, we found a very small effect of growth mindset interventions on academic achievement – specifically, a standardized mean difference of 0.08. This is roughly equivalent to less than one-tenth of one grade point. To put the size of this effect in perspective, consider that the average effect size for a psychological educational intervention (such as teaching struggling readers how to identify main ideas and to create graphic organizers that reflect relations within a passage) is 0.57.

Do we really believe that “0.57”? Maybe not. Indeed, Dweck gives some reasons to suspect this number is inflated. From talking with people at Educational Testing Service years ago, I gathered the general impression that, to first approximation, the effect of an educational intervention is proportional to the amount of time the students spend on it. Given that, according to Dweck, growth mindset interventions last only an hour, I’m actually skeptical even of the 0.08. It’s a good thing to tamp down the extreme claims made for growth mindset; it’s maybe not such a good thing to compare them to possible overestimates of the effects of other interventions.
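To put a rough number on that last point, here’s the back-of-the-envelope version of the proportional-to-time heuristic. The 20-hour duration for the comparison interventions is a made-up illustrative figure, not something from either paper:

```python
# "Effect proportional to time spent" heuristic, purely illustrative.
comparison_effect = 0.57  # Macnamara's average for psychological interventions
comparison_hours = 20     # hypothetical duration of such an intervention
mindset_hours = 1         # per Dweck, a growth mindset intervention lasts about an hour

implied = comparison_effect * mindset_hours / comparison_hours
print(f"Implied one-hour effect: {implied:.3f}")  # 0.029, well under the reported 0.08
```

On this crude scaling, even 0.08 from a one-hour intervention would be a surprisingly large return per hour, which is the sense in which that number, too, invites some skepticism.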