Following up on our discussion of hysteresis in the scientific community, Nick Brown points us to this article from 2014, “Excellence by Nonsense: The Competition for Publications in Modern Science,” by Mathias Binswanger, who writes:
To ensure the efficient use of scarce funds, the government forces universities and professors, together with their academic staff, to permanently take part in artificially staged competitions. . . . How did this development occur? Why did successful and independent universities forget about their noble purpose of increasing knowledge and instead degenerate into “publication factories” and “project mills” which are only interested in their rankings?
Here we should distinguish between natural and artificial competitions. For example, if students get to choose which universities to attend and staff get to choose where to work, then universities will need to compete for both students and staff. But competition for government research grants could be considered artificial, in that an alternative would be simply to distribute the same amount of public funds among universities according to some formula.
As Binswanger notes, what works for the top research universities might not make sense more generally:
How can you impress the research commissions responsible for the distribution of funds? This is mainly achieved by increasing measurable output such as publications, projects funded by third-party funds, and networks with other institutes and universities. In this way, “excellence” is demonstrated, in turn leading to easier access to further government research funds. Competitiveness has therefore become a priority for universities and their main goal is to perform as highly as possible in measurable indicators which play an important role in these artificially staged competitions.
One might say that there is no real alternative to this sort of competition—but fifty years ago, or maybe even thirty years ago, the above picture would not have reflected what was happening at many universities.
Binswanger continues:
Relevant publications are in professional journals, where submitted work is subjected to a “rigorous” and “objective” selection method: the so-called “peer-review process”. . . . However, among scientific journals strict hierarchies also exist which are supposed to represent the average “quality” of the accepted papers. In almost every scientific discipline there are a few awe-inspiring top journals (A-journals), and then there are various groups of less highly respected journals (B- and C-journals), where it is easier to place an article, but where the publication does not have the same significance as an A-journal article. Publishing one’s work in an A-journal is therefore the most important and often also the only aim of modern scientists, thus allowing them to ascend to the “Champions’ League” of their discipline. Belonging to this illustrious club makes it easier to publish further articles in A-journals, to secure more research funds, to conduct even more expensive experiments, and, therefore, to become even more excellent. The “Taste for Science”, described by Merton (1973), which is based on intrinsic motivation and supposed to guide scientists, was replaced by the extrinsically motivated “Taste for Publications.”
I’d like to stop here and issue a mild dissent. Yes, there is some extrinsic motivation to publish in top journals, a motivation which I don’t feel much right now but which was a big deal for my colleagues and me when we were younger. Even now, though, I’d like to publish in top journals, not so much for the league standings or even to help out my younger colleagues, but because I feel that papers in such journals are more likely to be read and to make a difference. But I don’t really know how true that is anymore; it may just be out of habit that I retain a weak preference for publishing in higher-ranked venues.
Binswanger continues:
At the end of the peer review process, the reviewers inform the editor in writing whether they plead for acceptance (very rare), revision, or rejection (most common) of the article submitted to the journal in question. Quite a few top journals pride themselves on high rejection rates, supposedly reflecting the high quality of these journals . . . For such journals the rejection rates amount to approximately 95%, which encourages the reviewers to reject manuscripts in almost all cases in order to defend this important “quality measure”. Solely manuscripts that find favor with their reviewers get published . . .
And thus:
The peer-review process is thus a kind of insider procedure . . . The already-established scientists of a discipline evaluate each other, especially newcomers, and decide what is worthy to be published. . . . Outside of the academic system, most people neither know what modern research is about, nor how to interpret the results and their potential importance to mankind. Although scientists often also do not know the latter, they are—in contrast to the layman—educated to conceal this lack of knowledge behind important-sounding scientific jargon and formal models. In this way, even banalities and absurdities can be represented as A-journal-worthy scientific excellence, a process laymen and politicians alike are not aware of. They are kept in the blissful belief that more competition in scientific publication leads to ever-increasing top performance and excellence.
Also this amusing bit:
Calculating published articles per capita, Switzerland becomes the world’s leading country . . . in no other country in the world are more research publications squeezed out of the average researcher than in Switzerland.
Are you listening, Bruno?
Binswanger lists a number of “modes of perverse behavior caused by the peer-review process,” most notably this one: “Form is more important than content.” I think about that all the time when I see papers backed up by “p less than 0.05.”
Nick Brown points us to this quote from Binswanger:
Cases of fraud such as the example of Jan Hendrik Schoen mainly affect the natural sciences, where the results of experiments are corrected or simply get invented. The social sciences have often already gone one step further. There, research is often of such a high degree of irrelevance that it does not matter anymore whether a result is faked or not. It does not matter one way or the other.
This reminds me of Clarke’s Law: Any sufficiently crappy research is indistinguishable from fraud. Just to clarify: I’m not saying that all, most, or even a large fraction of social science research is fraudulent, nor am I questioning the sincerity of most social science researchers. I’m just agreeing that in many cases the empirical evidence in published papers can be pretty much irrelevant, as we can see in the common retort of authors when problems are pointed out in their published work: “These mistakes and omissions do not change the general conclusion of the paper . . .”
That sort of attitude is consistent with the idea that publication, not research, has become the primary goal.
And this sounds familiar:
What scientists at universities and other research institutions are mostly doing are things such as writing applications for funding of research projects, looking for possible partners for a network and coordination of tasks, writing interim and final reports for existing projects, evaluating project proposals and articles written by other researchers, revising and resubmitting a rejected article, converting a previously published article into a research proposal so that it can be funded retrospectively, and so on.
I guess we could add “blogging” to that list of unproductive activities . . . Hey! I guess things could be worse. Imagine a world in which, in addition to everything else, productive researchers were expected to regularly blog their findings, participate in internet debates, answer questions posed by strangers in the comment sections of their blogs, etc.
In all seriousness, I’m glad that blogging is an option for researchers, but just an option, and not considered to be any sort of requirement. Fifteen years ago, when blogging was beginning to really catch on, one could’ve imagined an academic world in which blogging would’ve become expected behavior of junior and senior scholars alike, leading to a world of brown-nosing, back-stabbing, etc. I’m a little sad that blogging isn’t more popular—I hate twitter—but at least blogging hasn’t been sucked into the bureaucracy.
A few years ago, I wrote:
It’s easy to tell a story in which scientific journals are so civilized and online discussion is all about point scoring. But what I’ve seen here [in the case of a particular scientific dispute] is the opposite. The norms of peer-reviewed journals such as PNAS encourage presenting work with a facade of certainty. It is the online critics such as myself who have continued to display a spirit of openness.
At the time, I was thinking of these positive qualities as a product of the medium, with online expression allowing more direct and less mediated discussion, without the gatekeeping role that has created so many problems with PNAS, Lancet, and various other high-prestige journals. But maybe it’s just that blogs are a backwater, relatively well behaved because they haven’t been sucked into the incentive system.
Indeed, there are some blogs out there (none on our blogroll, though) that do seem to be “political,” not in the sense of being about politics but in being exercises in strategic communication, and I hate that sort of thing. In the alternative universe in which blogging had become an expected part of academic production, I guess we’d be seeing noxious politicking on blogs all the time.