The Scholarly Skill Almost No One Is Teaching
As director of the Center for Journalology in Ottawa, Canada, David Moher studies how academics conduct research and how those results are then verified and published. He’s also a journal editor grappling with an increasingly common challenge: It’s getting harder and harder to find peer reviewers — scholars who often anonymously evaluate the papers published in their field — and even more difficult to find ones who know what they’re doing.
Moher, an epidemiologist and associate professor at the University of Ottawa, is a subject editor for Facets, a Canadian open-access science journal. Last month, Moher said, he sent 11 requests asking scholars to peer review a manuscript. Just one accepted. He cast out six more lines, hoping for a bite that would get him to the two-review minimum. But sometimes, even that isn’t enough.
“After 40 attempts, you go back to the authors and say: ‘Look, I don’t think our journal is helping you. We simply can’t get peer reviewers,’” Moher said.
Peer review — the process through which editors like Moher ask outside experts to critique a paper’s methodology, reporting, conclusions, and more — is vital to academic publishing. The system is designed to ensure that only the best research makes it into the collective body of academic knowledge used to inform everything from college classrooms to clinical practice. Articles accepted as valid by peer scholars bring prestige and legitimacy to their authors and institutions, along with better job and tenure prospects for faculty members.
But peer review is often unpaid and unacknowledged work. Highly respected academics can receive several peer-review requests a day, and may not have time to perform more than a few a month.
The pandemic was also a shock to the system, causing more researchers to push the task aside as other responsibilities piled up. Those who accept requests to peer review for a journal are becoming increasingly unlikely to complete their critiques. That means researchers wait for months while their papers languish on an editor’s desk. When reviews do come in, some editors say they are seeing more short, unhelpful critiques, which lead to longer review periods and harder decisions. In dire cases, the research may become outdated before it’s published. One scholar told The Chronicle that it took four years and submissions to three different journals to get a manuscript published because of problems finding reviewers.
The struggle to find peer reviewers varies widely depending on the journal and field, but some scholars say the system’s flaws can be traced in part to an essential issue with peer review: No one’s sure who is supposed to teach it.
And often, no one does.
According to a report on the 2018 Global Reviewer Survey, 39.4 percent of respondents had never received any peer-review training. The survey did not account for the quality or comprehensiveness of the training the other respondents received.
In a recent systematic review of online training in manuscript peer review, the study’s authors — one of whom was Moher — found startlingly few online materials, and those they did find were lackluster. Academe relies on the ideal of peer-reviewed scholarship, but the system doesn’t seem to be designed to ensure its quality and longevity.
After presenting their findings, the authors ask, “Did we get ahead of ourselves?”
For all its centrality, the practice is not nearly as old as many might assume. The journal Nature didn’t introduce a peer-review process until the late 1960s, and The Lancet didn’t have one until 1976. Peer review, according to scholars of publication science, was initially a way for commercial publishers to legitimate themselves: they tapped into and formalized the unofficial “collegial discussion” that was the peer-review tradition of the 17th century. Training scholars to peer review has largely depended on the inclinations of individual journal editors and Ph.D. advisers.
As a result, the content and quality of reviews at some journals have grown increasingly varied, sometimes missing information necessary to help editors make decisions about manuscripts. And as reviewers become more crunched for time, some journal editors have found reviews becoming more blunt and — let’s just say it — downright mean.
Sometimes, reviewers are “just trying to put down other researchers,” said Maria Petrescu, an assistant professor of marketing at Embry-Riddle Aeronautical University and an editor of the Journal of Marketing Analytics.
Petrescu said everyone she’s spoken with, no matter how distinguished the scholar, remembers getting one of these antagonistic reviews, with no constructive feedback or even well-articulated criticisms. She still remembers her first such review. She was a doctoral student and had submitted one of her first papers to a journal. In response, one reviewer wrote, “If my students presented a paper like this, I’d fail them.”
Petrescu said she was demoralized and humiliated. When she was able to pick herself back up, she submitted the paper to another journal, and it was accepted after the first round of reviews. It is still one of her most cited papers.
“People get frustrated in their own lives, and then they let it out through this anonymous process,” Petrescu said.
A 2020 study in the Journal of Research Integrity and Peer Review looked at nearly 1,500 reviewer comment sets for articles in the fields of behavioral medicine and ecology and evolution. It found that 12 percent of the sets had at least one “unprofessional comment” and over 40 percent contained “incomplete, inaccurate, or unsubstantiated critiques.”
Petrescu said as an editor she tries to remove some of the more destructive comments, but she knows not all editors do the same. And even when reviews aren’t cruel, per se, they might not be helpful or constructive. Sometimes they can be as short as a few sentences, indicating that the reviewer did not address all elements of the paper, harped on a single point, or made overly broad statements about the manuscript.
Editors in search of better peer reviews often burden another handful of inboxes, lengthening the process. And Petrescu said that both she and her network are overwhelmed as it is. When she gets an unhelpful review, she makes a note not to ask that scholar again.
“Even though I am an editor, I still do reviews for many other journals. I get requests pretty much every day to review journal papers and conference papers,” Petrescu said. “It’s a snowballing thing.”
When she receives a subpar review, Mary K. Feeney, editor of the Journal of Public Administration Research & Theory, has taken it upon herself to send the reviewer a file of sample reviews. She does this especially for early-career scholars, and at Arizona State University, where she is a professor of ethics in public affairs, Feeney teaches peer review as part of a professional-development seminar. She gives participants, most of whom are doctoral students, an article to review. Once they’ve turned in their reviews, she brings in the article’s author.
“It’s a nice reminder. Like, you’re reading someone’s work. If you wouldn’t say that to their face, don’t put it in your review,” Feeney said. Then, as she does with inadequate reviewers for the journal, she gives them a file of model reviews.
But Feeney said it’s hard to teach peer review to students before they are actively publishing and thus being asked to review. It’s simply not relevant to their work yet and not a priority.
“Every professional-development thing you put into the curriculum is less time you’re spending on the subject and the science,” Feeney said. “They’re doing what they can with, I think, pretty limited hours.”
When Moher speaks on his work in publication science, he often asks how many audience members have received formal training in peer review. A couple of people typically raise their hands. He then asks how many engage in the process. Nearly every person raises a hand.
“At the moment, people get asked to peer review who have no training,” Moher said. “That’s equivalent to going out to the subway station and saying: ‘Oh, you look like a great person. Do you want to come and do a breast biopsy of someone?’ You’re not likely to allow that. But yet we’re allowing people who have no formal training to peer review.”
While the analogy is an exaggeration (peer reviewers are, at least theoretically, experts in their field), other researchers and journal editors echo the sentiment. This system, which is supposed to distill all available research into the best of the best, relies on mostly untrained labor.
In a comprehensive search for online peer-review training materials, Moher and other researchers turned up only 22 accessible items. Most were online modules, but there were also recorded webinars, a resource website, an asynchronous video, and even an online game. The majority could be completed in less than an hour.
“Now, what are you going to learn in an hour?” Moher asked. “There ought to be consistency around peer reviewing, and that does not exist. It’s making the situation worse.”
Moher gets anywhere from 20 to 50 peer-review requests every day. He said he doesn’t even have the time to respond to the vast majority and just deletes them almost automatically each morning. If anything, he said, the crisis in peer review has been understated.
The lack of training resources comes down to time and money, Moher said. He wants to create an evidence-based, comprehensive training program, but he has been unsuccessful thus far.
“I’ve gone to publishers and they’ve said: ‘Oh, what a great idea. We’ve no money, but do let us know when it’s finished. We’d love to use the tools,’” Moher said with a roll of his eyes.
And even though there is demand for a training program — 88 percent of respondents to the Global Reviewer Survey said that peer-review training is either important or extremely important — Moher said that scholars are clearly willing to review untrained. Consequently, many publishers are disinclined to invest in the more intensive peer-review training that could improve the process.
Moher also expressed frustration at some papers that cast doubt on the effectiveness of efforts to train peer reviewers, noting that most existing research focuses on training that lasts a day or less.
“How can you expect effects from one day of training?” he asked.
Making peer-review mentorship standard is one of Ariel M. Lyons-Warren’s goals. Lyons-Warren is an instructor in pediatric neurology at Texas Children’s Hospital, affiliated with the Baylor College of Medicine. She said she benefited greatly from the mentorship she received through Neurology Journals, a group of associated journals with a section for residents and fellows.
“I was paired with a mentor who was phenomenal. And when I had questions, I would just email him. It was very informal,” Lyons-Warren said. “And that’s how we learned to review.” Curious about whether this approach was effective and could be more widely applied, she designed a more structured version and ran a study of 18 mentor-mentee pairs.
Mentees first completed reviews unassisted. Then the program provided structured resources on proper methods to review a manuscript and write a review. Mentors met with their mentees at least twice to work on two reviews over the course of six months. Finally, the mentees completed a post-program review on their own. Independent evaluators compared the two unassisted reviews and found that reviewing skills did improve, on average. Lyons-Warren ran another round with 14 more pairs, and early data show similar gains.
“What we saw is that we could teach people the technical aspects of review: What components of the manuscript should you be thinking about or should you be commenting about in your review? How do you organize a review? What are the elements of a review?” Lyons-Warren said. “That stuff we saw a dramatic improvement in.”
She pointed out that her program is geared toward early-career clinicians, not Ph.D. students. And in a medical school, especially, learning peer review is almost always going to come second to learning to provide the best patient care.
But the formal approach to peer-review mentorship could be applicable to other fields.
“As an author, if my reviewers don’t have good training, that’s a lose-lose situation,” Lyons-Warren said. “The question we should be asking ourselves in this field is, how do we teach? How do we integrate teaching peer review into the way we teach all of these essential concepts?”
Much in the “pay it forward” spirit of peer review itself, Lyons-Warren wanted to make sure the program that benefited her could continue to train future clinicians and researchers. Now that she has evidence peer-review mentorship and training can work, she hopes to help produce a new generation of better reviewers.