“I thought it was fun. It allowed me to read the new papers. And I really felt like, hey, I’m making a contribution,” Kunst said. “And at that time, I was still able to do these things in my working hours.”
But for Kunst and many scholars, that didn’t last for long. Over the course of his career, Kunst started receiving more and more requests from academic-journal editors every week. The change was drastic, he said, and not simply a function of his growing stature as a scholar. Kunst began using his free time to keep on top of the ever-growing pile of requests. He noticed more of the requests fell outside of his area of expertise, and he had to reject them out of hand.
“I’m really asking myself, ‘Why do you keep inviting me?’ It makes no sense,” Kunst said. “It just shows the desperation on the side of the editors.”
Since the practice was standardized decades ago, peer review has been the backbone of the academic publishing industry. In an effort to find the best academic work, journal editors invite and consider critiques from several reviewers — typically a minimum of two or three — who are experts in the relevant field. Scholars are expected to review others’ work to help maintain the ecosystem that allows their own to be published. But it is generally unacknowledged labor. One journal editor compared it to housework — you only notice when someone isn’t doing their part.
As universities jockey for rankings and faculty members vie for increasingly competitive tenured posts, the pressure to “publish or perish” has only grown. The number of academic publications worldwide increased nearly fivefold from the 1980s to 2020, whereas the number of university researchers merely tripled.
The pandemic pushed the system to the breaking point. According to Nature, the number of submitted articles across several fields increased in the middle of lockdown, and reviewer requests rose along with them. Women and people of color were less likely to publish research during that period because of expanded caregiving responsibilities or effects on their physical health, but they still faced the expectation to review others’ work as overall totals grew.
Meanwhile, Kunst said, he and many of his peers became more disillusioned by the academic publishing system’s reliance on unpaid labor from peer reviewers. It “takes the work of scholars for granted,” he said.
Reliance on unpaid labor also means some scholars are inadvertently left out of the reviewer pool altogether. “I’m tenured, but many scholars hold an underpaid position,” Kunst said. “Effectively, these scholars are often not able to participate in the peer-review process because they can’t afford to review for free.”
Journal editors across disciplines and borders are asking themselves what they can do to encourage a practice that has gone fully uncompensated since its creation. Some say the answer is simply acknowledgment of review labor — from institutions, journals, and peers. Others say monetary incentives are the obvious choice. And other academics are questioning the entire for-profit publishing model.
Kunst decided to go big. In October, he founded a new academic publisher called Advances.in. The platform pays both editors and reviewers to follow an article through its review process. To start, the new publisher has a single psychology journal under its wing, although Kunst said the plan is to expand into more fields.
Kunst said he believes he’ll get better reviews by paying for them. Editors will have more leverage in rejecting or asking for revisions to reviews that don’t meet Advances.in’s list of requirements. Creating a market for only the best, most thorough reviews will result in better research and reviewers who feel more valued, Kunst said.
According to one historical review, the emergence of professional academics and the perception of peer review as a “voluntary duty” allowed commercial publishers to legitimize themselves and dominate the publishing market — all while turning a profit. One conservative estimate put the cost of time “donated” to peer review for U.S. reviewers at $1.5 billion in 2020.
Some of the academics who spoke with The Chronicle said it’s surprising that it took reviewers so long to rebel. But many also felt that being asked to review a peer’s work is a privilege.
“At the end of the day, it is an honor to review,” said Ariel M. Lyons-Warren, a board member of the journal Pediatric Neurology. “Someone thinks my opinion is valid and worthwhile in evaluating the quality of other people’s work.”
Lyons-Warren also trains clinicians to be better peer reviewers. She said that peer review is vital to providing the best patient care and is an essential duty for the academic and scientific community that relies upon it.
The conventional wisdom is that scholars should review more than they write: if every article needs two to three peer reviewers, an author should review two or three other submissions for every one of their own, perpetuating the cycle.
Nearly all scholars recognize the value of peer review — 98 percent of respondents to a 2018 global survey of the process said it was either important or extremely important. Yet that same year, most journal editors said finding reviewers was the hardest part of their job. A number of editors told The Chronicle that it’s getting even harder. While scholars can, and often do, put peer-review work on their curriculum vitae, it traditionally has little to no effect on faculty members’ tenure or promotion prospects, or on their standing in the academic community. And it comes with no additional pay or benefits.
David Moher, the director at the Centre for Journalology, in Ottawa, and a prolific researcher himself, said he thinks universities need to formally recognize peer review in the tenure and promotion process.
Make it part of the job description, Moher said, instead of an unspoken expectation, and give faculty members the time to undertake it. Colleges have a reputation for emphasizing the number of publications their faculty members produce (and the prestige of the journals publishing them) above all else. But researchers like Moher question the benefit of such an approach.
Protecting the quality of research through peer review, Moher contends, is nearly as important as conducting that research and should be treated as such.
“It’s not really clear how academic institutions’ advancement and promotions committees weigh those sort of things,” Gelfand said. “If we’re authors, we need to also be reviewers. That’s what the system relies on. … But how do we give our reviewers full credit for the essential service that they’re performing in an unpaid capacity?”
Some publishers provide peer reviewers with coupons to view the reviewed article if it’s published, or offer a discount on the reviewer’s next submission fee.
Mary K. Feeney, editor of the Journal of Public Administration Research & Theory and a professor at Arizona State University, said she believes the publishing model has been inching closer to death for years. According to Feeney, as publishers’ profit margins came into the public eye, so too did discontent.
“People are fed up with it,” she said. “People are saying no to reviews because they don’t want to participate in the marketplace of peer-reviewed journals that are on the backs of free labor.”
But, Feeney said, when it comes to submitting articles, most scholars don’t have a lot of choice. According to a 2020 study, half of the market is controlled by five for-profit publishers: SAGE, Elsevier, Springer Nature, Wiley-Blackwell, and Taylor & Francis.
These publishers’ journals — think Nature, The Lancet, or Cell — are also the most influential and prestigious, with high impact factors, a measure of how frequently the average article in a journal is cited per year. They are also incredibly profitable.
According to a 2017 article in The Guardian, the academic publishing business has global revenues of more than 19 billion pounds (or about $24.4 billion in 2017), putting its money-making power between the recording and the film industries. Even more startling is the profit margin. Elsevier’s was 37.1 percent in 2019, which was comparable to the profit margins for companies like Apple (37.8 percent) and Amazon (41 percent) that year.
“Most people, they’re not uncomfortable with free peer review. They’re uncomfortable with free peer review resulting in massive profits for the publishing company,” Feeney said. “If the profit went to the author or the institution, I think people would feel differently.”
Another line in your CV. A positive reputation among your peers. Good rapport with influential editors. A place in the broader academic community. Some editors say that these benefits of “traditional” peer review are irreplaceable. Perhaps if recognition for peer review were more formal and more searchable, participating in the gold standard of academic publishing could once again become a priority for scholars.
The Web of Science, whose parent company acquired Publons in 2017, is one major platform for reviewer recognition. Journals can sign up for the service, which connects to their databases, allowing anyone with a free account to search for a researcher’s name and find how many publications they have in the Web of Science, how many articles cite them, and how often they write peer reviews. (The system also hands out recognition like “top reviewer” or “highly cited researcher.”)
Many journals also publish lists of their reviewers, sometimes highlighting their best and most frequent reviewers. These awards can be listed on a CV as evidence of contributions to academe, although a few journal editors who spoke to The Chronicle were skeptical of how much sway such awards hold with university committees or colleagues.
The traditional assumption that peer review should be anonymous has also been under scrutiny for more than a decade. Gabriele Marinello believes “open reviewing” — where reviewers sign their name to critiques — is an incentive in and of itself. Marinello is the co-founder and chief executive of Qeios (pronounced “chaos”), an open-science, artificial-intelligence-powered platform started in 2019. Every single review on Qeios has a name attached.
“If you speak with some journal editors who use a blind review system, they will say, ‘The majority would prefer blind review,’” Marinello said. “And this is not true.”
Marinello eschews the old logic of using two or three peer reviewers. Qeios’s AI service scrapes the web, finding articles similar to the one submitted and automatically inviting the authors as peer reviewers. Essentially, it cuts out the middleman, the editor, and goes straight to review in the preprint stages. With the assistance of the program, Marinello said, authors can expect 20 to 25 qualified reviews by the end of the process. While posting and updating an article on Qeios’s server is free, the AI peer-reviewer recruitment service costs £24.99 (or about $30.61) a month.
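Qeios has not published the details of its matching algorithm. Purely as an illustration of the idea — ranking candidate reviewers by how closely their own publications resemble a submission — here is a minimal Python sketch using simple token-overlap (Jaccard) similarity. Every author name, abstract, and function in it is invented for this example.

```python
# Hypothetical sketch of similarity-based reviewer matching.
# All names and abstracts below are invented for illustration.

def tokens(text):
    """Lowercase word set, dropping very short (stopword-like) tokens."""
    return {w for w in text.lower().split() if len(w) > 3}

def jaccard(a, b):
    """Jaccard similarity between two token sets (0.0 if both empty)."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def rank_reviewers(submission, candidates, top_n=3):
    """Return candidate authors ordered by similarity to the submission."""
    sub = tokens(submission)
    scored = [(jaccard(sub, tokens(abstract)), author)
              for author, abstract in candidates.items()]
    scored.sort(reverse=True)
    return [author for score, author in scored[:top_n] if score > 0]

candidates = {
    "Author A": "Peer review incentives and reviewer fatigue in psychology journals",
    "Author B": "Deep learning architectures for protein folding prediction",
    "Author C": "Open peer review and reviewer recognition in academic publishing",
}
print(rank_reviewers("Reviewer recognition and incentives in open peer review",
                     candidates))
```

A production system would of course use far richer signals (citation graphs, semantic embeddings, conflict-of-interest checks) than raw word overlap; this sketch only shows the basic shape of automated matching.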
“I mean, it’s been 150 years, and we still rely on three peers,” Marinello said. “We can have six times the peer reviews. And part of the recipe is exactly the fact that it’s open peer review.”
The reviews live alongside the article and are given their own DOI, a digital object identifier, which is a unique string of letters and numbers that allows a document to live online at a stable link, often used in citations. According to Marinello, giving reviews DOIs makes them more citable, searchable, and able to be indexed on Google Scholar, offering more credit for reviewers. The system allows reviewers to create a good (or bad) reputation in the broader community beyond immediate editors, Marinello said. It also gives authors and reviewers the chance to correspond about edits.
In a survey conducted by The Winnower, an open, postpublication peer-review platform, on scholars’ perceptions of traditional versus open peer review, only 3 percent of respondents said that nothing would entice them to sign their names. The vast majority said publishing their reviews would be worth the effort if their peers did so or if open reviews were explicitly recognized by tenure and promotion committees. A number of respondents also said opening reviews would make critiques more constructive, comprehensive, and less hypercritical.
“I think some unreasonable requests for endless experiments would not have been made if reviews” were open, one respondent wrote.
Qeios is not the first platform to open its peer-review process. The British Medical Journal was one of the first, more than two decades ago, to abolish reviewer anonymity, albeit in stages. Today, anyone can go to The BMJ’s website, open any article, and see the full prepublication history — including original drafts, editor and reviewer feedback, and the author’s responses — for articles published after 2014.
Elizabeth W. Loder, an associate professor of neurology at Harvard Medical School and clinical epidemiology editor for The BMJ, said seeing previous reviews is how she identifies the best reviewers. Many internal editorial databases list past peer reviewers alongside rankings that may factor in timeliness, responsiveness, and expertise.
“But yeah, there’s nothing like being able to go to BMJ.com and pull up a peer review,” Loder said.
Kimberley R. Isett, associate dean of research at the University of Delaware and editor of Perspectives on Public Management and Governance, said an individual’s peer-review record (or lack thereof) can also be a tip-off.
Isett and her colleagues sometimes confer internally when a scholar who habitually declines review requests submits an article of their own. She wonders whether such scholars have the right to publish in a journal they aren’t willing to put energy back into.
“The system is predicated on the collegiality of reviewing each other’s work,” Isett said. “If they are only extracting resources, extracting reviews, and not contributing, then that’s concerning.”
Isett said she’s heard of incentives like direct payment or more consideration for peer review in tenure and promotion considerations, but she views the work as part of her university compensation.
She and her co-editors spend a great deal of time “making the jigsaw puzzle fit together,” she said — spacing out review requests to different scholars by at least three months and matching expertise as closely as possible. And when her colleagues don’t even take the time to press the “decline” button, and leave her requests unread, she said, it’s especially frustrating. Isett said she’s had instances where she had to invite 26 people to review one article before getting the requisite number of reviewers.
“This is a common pool resource,” Isett said. “We all have to contribute to it to make it work.”