How to think about an accelerating string of research successes?

While reading this post by Seth Frey on famous scientists who couldn’t let go of bad ideas, I followed a link to this post by David Gorski from 2010 entitled, “Luc Montagnier: The Nobel disease strikes again.” The quick story is that Montagnier endorsed some dubious theories. Here’s Gorski:

He only won the Nobel Prize in 2008, and it only took him two years to endorse homeopathy-like concepts. He’s also made a name for himself, such as it is, by appearing in the HIV/AIDS denialist film House of Numbers stating that HIV can be cleared naturally through nutrition and supplements. This he did after publishing a paper in a journal for which he himself is the editor . . .

But that’s just the beginning:

From there it only took Montagnier a few months more to turn his eye to applying that “knowledge” to autism . . . Unfortunately, the pseudoscience that Montagnier appears to have embraced with respect to autism is combined with a highly unethical study . . . The trial is sponsored by the Autism Treatment Trust (ATT) and the Autism Research Institute (ARI), both institutions that are–shall we say?–not exactly known for their scientific rigor. Apparently Montagnier has teamed up with a Dr. Corinne Skorupka, who is a DAN! practitioner from France . . . Whenever you see an “investigator” charge patients to undergo an experimental protocol, be very very wary. . . . here we have Montagnier and colleagues charging the parents of autistic children . . . Perhaps even worse than that, check out how badly designed this experimental protocol is . . . there are no convincing preclinical data . . . Based on an unsupported hypothesis that bacterial infections cause autism, Montagnier will be subjecting autistic children to blood draws and treatment with antibiotics. The former will cause unnecessary pain and suffering, and the latter has the potential to cause the complications that can occur due to long term antibiotic use over several months. . . . The study proposed is poorly designed even for a pilot study. There is no control group . . . Moreover, because the selection criteria for the study are not specified, there is no way of knowing how much selection bias might be operative there.

Gorski asks:

I’ve wondered how some Nobel Laureates, after having achieved so much in science, proving themselves at the highest levels by making fundamental contributions to our understanding of science that rate the highest honors, somehow end up embracing dubious science . . . or even outright pseudoscience . . . Does the fame go to their head? . . .

I’m guessing the story is a bit different, maybe not for the particular case of Montagnier but for this general “Nobel prize disease” thing—the pattern of celebrated scientists embracing wacky ideas. It’s not so much that these scientists get drunk on fame; rather, it’s that the prize attracted more attention to the wacky ideas they were susceptible to in the first place.

And then the feedback loop comes in. Scientist expresses wacky idea; then because of the Nobel prize, his wacky pronouncements get attention; scientist enjoys being in the limelight (maybe it’s been a bit disappointing after the Nobel publicity fades and his life is pretty much the same as always) so he makes more pronouncements; these pronouncements get more attention; scientist realizes that to continue to get in the news, he needs to make grander and grander claims; etc.

The apparent research progress comes in faster and faster, with stronger and stronger results

But I actually want to talk about something else, not the Nobel disease or anything like that, but the following pattern which I’ve seen from time to time.

The pattern goes like this: A researcher studies some topic, and after lots of effort and many false starts, he makes some progress. After that, progress comes faster and faster, and more and more research results come in.

This escalating pattern can arise legitimately: you develop a new tool and then find applications for it everywhere. For example, it took us a few years to write the Red State Blue State paper, but from there it only took a year to write the book, which had tons of empirical results.

Other times, though, it seems that what’s happening is escalating overconfidence, exacerbated by whatever echo chambers happen to be nearby. Luc Montagnier, for example, will have no problem finding yes-men, with that Nobel prize hanging in the corner. Another echo chamber is the science publication and grants system: if you have a track record of success, you’re likely to have figured out ways of presenting your results so they’re publishable and grant-worthy.

But the example I have in mind is my friend Seth Roberts, who spent about 10 years on his self-treatment for depression and then a few more years on his weight-loss method, writing it up and becoming a bit of a culture hero. And then he started to let his ambition get ahead of him, using self-experimentation to conclude that eating a stick of butter a day improved his brain functioning, among other things. I’m not saying that Seth was wrong—who knows? Maybe eating a stick of butter a day does improve brain functioning—but I’m skeptical of the idea that he came up with some trick for scientific discovery, so that what took him 5 or 10 years in the past could now be done, routinely, every couple of months.

Beware the escalating pattern of research results.

P.S. Gorski’s post also has a Herbalife connection.

P.P.S. Frey’s post is interesting too, but does he really think that all those people on his list are “way way smarter than everyone I know”? Does Frey really not know anyone as smart as Trofim Lysenko?

P.P.P.S. I came across this other post where Frey remarks that he used to work for Marc “Evilicious” Hauser!

Frey’s Hauser-related post is interesting but he makes one common mistake when he defines exploratory data analysis as “what you do when you suspect there is something interesting in there but you don’t have a good idea of what it might be, so you don’t use a hypothesis.” No! Exploratory data analysis is all about finding the unexpected, which is defined (explicitly or implicitly) relative to the expected, that is, a hypothesis or model. See this paper from 2004 for further discussion of this point.
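To see what it means for the “unexpected” to be defined relative to a model, here’s a minimal sketch (entirely made-up data; the Poisson setup and variable names are just assumptions for illustration). The idea is that a feature of the data only registers as a discovery by comparison with what a reference model would produce:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up observed data: counts, with a few planted surprises
y = rng.poisson(lam=5, size=100)
y[:5] += 20  # outliers that exploratory analysis should flag

# The implicit "expected": a Poisson model fit to the observed mean
lam_hat = y.mean()

# Simulate replicated datasets from the fitted model, and compare a
# test statistic (here, the maximum) with its observed value.
T_obs = y.max()
T_rep = [rng.poisson(lam=lam_hat, size=y.size).max() for _ in range(1000)]
p = np.mean([t >= T_obs for t in T_rep])

# A small p flags the maximum as "unexpected" -- but only relative to
# the Poisson reference model.  Change the model and the same data
# feature may stop being surprising: the hypothesis is doing work
# even in "exploratory" analysis.
```

The point of the sketch is just that the comparison distribution comes from a model, explicit or not; without some reference model there is nothing for the data to be surprising relative to.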