Noah Haber informs us of a research article, “Causal language and strength of inference in academic and media articles shared in social media (CLAIMS): A systematic review,” that he wrote with Emily Smith, Ellen Moscoe, Kathryn Andrews, Robin Audy, Winnie Bell, Alana Brennan, Alexander Breskin, Jeremy Kane, Mahesh Karra, Elizabeth McClure, and Elizabeth Suarez, and writes:
The study picked up the 50 most shared academic articles and their associated media articles about any exposure vs. some health outcome (i.e. “Chocolate is linked to Alzheimer’s,” “Going to the ER on a weekend is associated with higher mortality,” etc.). We recruited a panel of 21 voluntary scientific reviewers from 6 institutions and multiple fields of study to review these articles, using novel systematic review methods developed for this study. We found that only 6% of studies exhibited strong causal inference, but that 20% of academic authors in this sample used language strongly implying causality. The most shared media articles about these studies overstated this evidence even further, and were likely to inaccurately describe the study and its implications. This study picks up on a huge number of issues salient in science today, from publication-related biases, to issues in scientific reporting, all the way down to social media. While this study can’t identify the degree to which any specific factor is responsible, we can identify that by the time we are likely to see health science, it is extremely misleading. A public-language summary of the study is available here.
I’ve not read the article in detail, but I thought it might interest some of you, so I’m sharing it here. Their conclusion is in accord with my own subjective experience: exaggerated claims slip in at every stage of the reporting process. Also, I don’t think we should blame only journalists for exaggerated claims in news articles and social media. Researchers often seem all too willing to spread the hype themselves.