There often seems to be an attitude among scientists and journal editors that, if a research team has gone to the trouble of ensuring rigor in some part of their study (whether in the design, the data collection, or the analysis, but typically rigor is associated with “p less than .05” and some random assignment or regression analysis, somewhere in the paper), then they are allowed to speculate for free. Story time can take over.
It’s a bit like that word game, Botticelli, where once you stump your opponent, you get a free question.
Indeed, in the science game, it often seems that story time is the goal, and the point of all the experimentation is just to amass a chit that will allow you to publish a story that you can then disseminate to the world under the imprimatur of the scientific establishment.
Background
I got to thinking about this after Javier Benitez pointed me to this article by Nate Kornell (no relation to the famous researchers on ESP and eating behavior) subtitled, “Education studies always seem to have a happy ending. Why doesn’t education?”
Kornell writes:
Data don’t speak for themselves. I’ve been to a few painful lab meetings where a new grad student tried letting data speak for themselves. It is ugly. Even interesting data, without interpretation, are boring to the point of being pointless. So scientists tell stories. Ask any scientist. If you don’t tell a story you don’t publish. There is nothing wrong with stories, of course, as long as they’re non-fiction. Sometimes, though, the storytelling gets too creative. . . . Recently it hit me: These stories always seem to have happy endings, at least in education. You know those Hollywood movies about the white teacher who comes into a poor school, and at first she clashes with her minority students, but in the end, they lift each other up educationally and morally? Correlational research in education is the same way: The news is good, the problem can be solved, the moral is uplifting, and optimism flows like water. It’s always sunny in correlationville.
What’s wrong with a little bit of optimism?
Kornell explains:
Optimism is great. But science is a search for truth. The truth is not always pretty. When it isn’t—when the pessimists are right—too much optimism can be harmful. I [Kornell] got to thinking of this because of a new study in the journal Psychological Science. Chen, L., Bae, S. R., Battista, C., Qin, S., Chen, T., Evans, T. M., & Menon, V. (2018). Positive Attitude Toward Math Supports Early Academic Success: Behavioral Evidence and Neurocognitive Mechanisms. Psychological Science. doi.org/10.1177/0956797617735528 Here’s the synopsis you’ll find in the press release: If a kid is not doing well in math, it might be because of his attitude. Make him feel more positive about math and he’ll start doing better. Let’s look closer. . . . This is a correlational study. As the authors say in their penultimate paragraph: “We could not determine the direction of causal influences between positive attitude and math achievement” . . . Yet they also say, in the very next paragraph: “In conclusion, our study demonstrates, for the first time, that PAM in children has a unique and significant effect on math achievement independent of general cognitive abilities and that this relation is mediated by the MTL memory system.” In fact, the title of the article is “Positive Attitude Toward Math Supports Early Academic Success: Behavioral Evidence and Neurocognitive Mechanisms.” [emphases added] The words “effect” and “supports” are causal language. . . .
Kornell continues:
These optimistic conclusions should not be taken at face value because there are other, equally valid, ways to look at the data. . . . First, it’s a truism that people often like things they’re good at. Therefore, we should expect being good at math to cause kids to like math. That alone is enough to explain the attitude-performance correlation. (If that wasn’t enough, did you notice that part of the PAM attitude measure asked kids whether they’re good at math? How could PAM scores not be correlated with math performance?) In other words, attitude might not actually have any effect on performance. If this is true, then changing a kid’s attitude toward math will not make them better at it. That’s more pessimistic, but it gets worse. . . . If a kid has below-average math aptitude, they will tend to struggle with math. This struggle will affect their attitude. That is, kids who are not good at math will grow to hate it. In fact, it is the very strength of the correlation in this study, between performance and attitude, that brings down the hammer on kids who don’t do well in math. It’s a strong correlation, which means few will buck the trend. In other words, this study shows that very few kids with low math aptitude will ever like math (which seems, anecdotally, true). They’re doomed to hate it. (Hopefully, this is going too far. Remember, this is storytelling. But it’s consistent with the data.)
In summary:
Different stories about the data produce different headlines: Optimist: “Math performance can be improved by changing kids’ attitudes toward math!” Pessimist: “Kids’ math performance determines their attitudes, and kids with low aptitude are doomed to hate math.” Here’s why it matters. The optimist is going to invest funds into improving attitudes to create a positive cycle. The pessimist is going to give extra math help to kids who are struggling at a young age to prevent a negative cycle. When we read science, we want to hear the truth. But we also like to see problems (like low math scores) get solved. It’s not that we want scientists to tell happy stories about a bad world. We want them to tell happy stories and we want the world to conform to the stories they tell. We want to live in correlationville. But we live on earth.
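Kornell’s point, that the same observed correlation is consistent with opposite causal stories, can be illustrated with a toy simulation. This is my own sketch with made-up numbers, not the study’s data: here attitude has no causal effect on performance at all, and skill drives both, yet the attitude-performance correlation comes out strong anyway.

```python
# Toy simulation (hypothetical numbers, not the study's data):
# suppose latent math skill drives both test performance and
# attitude, and attitude has ZERO causal effect on performance.
import random

random.seed(1)
n = 10_000
rows = []
for _ in range(n):
    skill = random.gauss(0, 1)                      # latent aptitude
    performance = skill + random.gauss(0, 0.5)      # score: skill + noise
    attitude = 0.8 * skill + random.gauss(0, 0.5)   # liking driven by skill
    rows.append((attitude, performance))

def corr(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    xs, ys = zip(*pairs)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Strong positive correlation despite no attitude -> performance effect.
print(round(corr(rows), 2))
```

The simulated correlation lands around 0.75, which is exactly the kind of number an optimistic write-up could spin as “attitude supports achievement,” even though in this data-generating process changing a kid’s attitude would do nothing.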
One thing that Kornell didn’t mention is that the article in question uses brain scans. On the plus side, this represents potentially valuable information on intermediate outcomes (the “neurocognitive mechanisms” in the paper’s title). On the minus side, this is the sort of high-tech flourish that can fool reviewers into thinking there’s more to the research than there really is.
I have not read the article in detail and so I can’t really comment on Kornell’s specific points, but I agree with his general message, which is that we have to be careful about unidirectional spins on research.
An example I recall from a few years ago was the finding that Nobel prize winners live two years longer than comparable non-winners. Setting aside any qualms about the study itself, there’s a big problem with the positive-spin interpretation. As I wrote, one could just as well summarize the study as, “Not getting the Nobel Prize reduces your expected lifespan by two years.” A lot more people don’t get the prize than do, so the negative spin could be warranted.
It’s related to the fallacy of the one-sided bet.
The only place I’d alter Kornell’s article is to reduce his emphasis on correlation vs. causation, as all these problems of interpretation arise in experimental studies as well.