A study fails to replicate, but it continues to get referenced as if it had no problems. Communication channels are blocked.

In 2005, Michael Kosfeld, Markus Heinrichs, Paul Zak, Urs Fischbacher, and Ernst Fehr published a paper, “Oxytocin increases trust in humans.” According to Google Scholar, that paper has been cited 3389 times.

In 2015, Gideon Nave, Colin Camerer, and Michael McCullough published a paper, “Does Oxytocin Increase Trust in Humans? A Critical Review of Research,” where they reported:

Behavioral neuroscientists have shown that the neuropeptide oxytocin (OT) plays a key role in social attachment and affiliation in nonhuman mammals. Inspired by this initial research, many social scientists proceeded to examine the associations of OT with trust in humans over the past decade. . . . Unfortunately, the simplest promising finding associating intranasal OT with higher trust [that 2005 paper] has not replicated well. Moreover, the plasma OT evidence is flawed by how OT is measured in peripheral bodily fluids. Finally, in recent large-sample studies, researchers failed to find consistent associations of specific OT-related genetic polymorphisms and trust. We conclude that the cumulative evidence does not provide robust convergent evidence that human trust is reliably associated with OT (or caused by it). . . .

The Nave et al. paper has been cited 101 times.

OK, fine. The paper’s only been out 3 years. Let’s look at recent citations, since 2017:

“Oxytocin increases trust in humans”: 377 citations
“Does Oxytocin Increase Trust in Humans? A Critical Review of Research”: 49 citations

OK, I’m not the world’s smoothest googler, so maybe I miscounted a bit. But the pattern is clear: New paper revises consensus, but, even now, old paper gets cited much more frequently.

Just to be clear, I’m not saying the old paper should be getting zero citations. It may well have made an important contribution for its time, and even if its results don’t stand up to replication, it could be worth citing for historical reasons. But, in that case, you’d want to also cite the 2015 article pointing out that the result did not replicate.

The pattern of citations suggests that, instead, the original finding is still hanging on, with lots of people not realizing the problem.

For example, in my Google Scholar search of references since 2017, the first published article that came up was this paper, “Survival of the Friendliest: Homo sapiens Evolved via Selection for Prosociality,” in the Annual Review of Psychology. I searched for the reference and found this sentence:

This may explain increases in trust during cooperative games in subjects that have been given intranasal oxytocin (Kosfeld et al. 2005).

Complete acceptance of the claim, no reference to problems with the study.

My point here is not to pick on the author of this Annual Review paper—even when writing a review article, it can be hard to track down all the literature on every point you’re making—nor is it to criticize Kosfeld et al., who did what they did back in 2005. Not every study replicates; that’s just how things go. If it were easy, it wouldn’t be research. No, I just think it’s sad. There’s so much publication going on that these dead research results fill up the literature and seem to lead to endless confusion. Like a harbor clotted with sunken vessels.

Things can get much uglier when researchers whose studies don’t replicate refuse to admit it. But even if everyone is playing it straight, it can be hard to untangle the research literature. Mistakes have a life of their own.