One of my mathematical mentors, the eminent category theorist Robert Pare, expressed concern to me many years ago about the growing tendency in graduate schools to encourage PhD students to publish as many papers as possible before graduating or even completing their dissertation. His contention, as best I understood it, was that the PhD is a time to become intimately acquainted with one’s field and with the contributions in that field that have informed the student’s own research: develop that knowledge, add something original to the field, and complete the dissertation. The journey of publishing scholarly articles can then begin, but the time to focus on journal publication ought to be the postdoctoral fellowship and/or professorship.
I believe his intention was twofold: first, to protect graduate students from the political aspects of the peer-review system so that they can conduct their research in a pure environment; and second, he could see that the increasing pressure on academics, and even on students, to treat publication counts as their primary metric of success would inevitably lead to a decline in the quality of all research in the relevant field.
He wasn’t wrong. In mathematics particularly, one effect of this pressure to publish before the dissertation is that more and more young mathematicians are moving away from pure mathematics, where results are historically and necessarily scarce, owing in part to their intellectual density, and toward areas of research where publications are traditionally easier to obtain. As a result, those fields are being overrun by people eager to expand their publication lists, often at the expense of quality.
On a more meta level, however, there is a clear parallel between moving the metric of academic success away from the quality of research to its quantity and the discarding of clinical endpoints in drug trials in favor of “surrogate markers” (such as using viral load as an endpoint while ignoring morbidity and mortality). If we think of citation and publication counts as a kind of “surrogate marker” for research quality, the pitfalls of this approach become obvious.
I was recently directed to this excellent piece in The Atlantic, “Science Has a Crummy-Paper Problem.” I encourage everyone to check it out, but here is just an amuse-bouche:
According to the rules of modern academia, a young academic should build status by publishing as many papers in prestigious journals as she can, harvest the citations for clout, and solicit funding institutions for more money to keep it all going. These rules […] have created a market logic that has some concerning consequences.
First, these rules might discourage truly free exploration. […] A 2020 paper suggested that the modern emphasis on citations to measure scientific productivity has shifted behavior toward incremental science and “away from exploratory projects that are more likely to fail, but which are the fuel for future breakthroughs.” As attention given to new ideas has decreased, science has stagnated. (emphasis mine)
The situation is summed up nicely:
Second, at the far extreme, these incentives might create a surplus of papers that just aren’t any good—that is, they exist purely to advance careers, not science.
As someone who has worked in HIV research, I can attest that this focus on the numbers game absolutely infects the field. Publication and citation numbers are recited with pride; at times the actual results of the papers seem almost beside the point. Papers are claimed to be “out of date” yet remain in the literature in perpetuity. It becomes nearly impossible to separate malfeasance from incompetence, or either from naked ambition.
The solution involves, on a large scale, a stepping back and a measured approach to future research. (Look for suggestions on how to implement this approach in a future post.) Regarding the literature that is already available, refer to my most recent post. Concerns about seminal papers, especially those, such as the 1984 Gallo Science papers, that appear to have been fast-tracked, must be addressed. Whether the problems stem from incompetence, sloppy science, or even outright fraud, it is imperative to discern whether the results of these papers truly stand the test of time, and if they do not, we need to consider opening the retraction debate.
The Kindle edition of my upcoming book The Real AIDS Epidemic is available at a temporary presale price of $2.99 here.
My first PI used to talk about how, in an attempt to maximize published work, scientists would look to publish the smallest unit of data possible. He called it the “least publishable unit.”