Sizing up competing peer review models
Is peer review broken? No, it’s not. The claim that “stuff is broken” is so overused that it now just sounds like hyperbole.
Can we improve peer review? Yes. The review process takes longer than some people like. And yes, editors can have a hard time finding reviewers. And there are conflicts of interest and bias baked into the process. So, yes, we can make peer review better.
As a scientific community, we don’t even agree on a single model of peer review. Some journals are doing it differently than others. I’ll briefly describe some peer review models, and then I’ll give you my take.
Single-blind review: The authors are not informed of the identity of the reviewers, but the reviewers are provided the identity of the authors. This is the prevailing mode of review in most scientific fields, I think.
Double-blind review: The authors are not informed of the identity of the reviewers, and the reviewers are not informed of the identity of the authors. The authors submit a manuscript with their identities and acknowledgments redacted, and some additional identifying details may be redacted as well. In my corner of science, a number of major journals are double-blind, including American Naturalist, Animal Behaviour, Behavioral Ecology, Conservation Biology, and Evolution.
Triple-blind review: This is like double-blind, but in addition the handling editor doesn’t know the identity of the authors. I’m not aware of any such journals in the sciences.
Zero-blind review, or “open review”: This is where both authors and reviewers are informed of the identity of one another. This often happens in PeerJ, where reviewers are “encouraged” to share their identity. This also can happen with Royal Society Open Science, and Open Biology is designed for zero-blind review.
Post-publication peer review: This is a twist on zero-blind, where the unreviewed manuscript gets published, and then review is invited, and the article gets revised afterwards. This is apparently how it is done at F1000Research. I say “meh,” and am just not going to mention it again.
What’s the best way to go? I think double-blind is tops. And I’m not alone. Peer-reviewed research shows that most scientists think that double-blind is best.
Let’s go over the motivations for concealing identities.
In all but zero-blind, reviewers are allowed to do their job anonymously because an author can’t resent someone — or retaliate against them — if they don’t know who the reviewer is. It’s possible for a perfectly friendly, reasonable, and fair review to cause some people to be upset, and I’ve seen this happen on a number of occasions. In double-blind, reviewers are not informed of the identity of the authors to prevent the abuse of power by the reviewer, and to prevent the manifestation of intentional and implicit bias.
Both the double-blind model and the zero-blind model are presented as ways to improve on the flaws of single-blind review. I think the zero-blind approach just makes things worse, while the double-blind improves on single-blind.
The more ecstatic prophets of Open Science have argued that the only way to prevent reviewers from exploiting their position is to put their abuses in the open. If they sign their names to reviews (and have these reviews revealed to the public), then they ostensibly cannot commit nefarious deeds under the watchful eye of the author and perhaps the public. I find this line of reasoning to be astoundingly naive.
First, this doesn’t deal with any of the implicit bias. Second, just because people are watching doesn’t mean someone can’t intentionally help out their friends or harm their competitors; it just means they have to be subtle about it. Third, this entirely opens the door to retribution in the future, and to think otherwise is just ignorant of human nature. (For example, I’ve sat in on grant panels, where my identity has been shielded, and the panel was reviewing proposals by someone who had signed a review of a manuscript of mine. Regardless of whether it was a good or bad or positive or negative review, I would have much preferred not to know their identity. But let’s say I’m a jerk and it was a fair but negative review: then I totally could have screwed this person over and gotten away with it. Or I might just have had an implicit negative bias from the experience.) Also, consider the scenario where a junior scientist is signing a review for a bigwig in their field. That bigwig has so much power over the junior scientist’s professional fate, in terms of reviews, funding, employment, tenure letters, et cetera. Would you expect them to give the most robust review to a poor quality paper coming from this PI’s lab? Okay, maybe you as an individual would be willing to assume that risk. But we can’t expect this from everybody. Rather than get into the problems involving zero-blind review in more depth, I’ll refer you to this interview with Tim Vines in the Molecular Ecologist.
As far as I know, “open review” is the only one involving a Manifesto and an Oath. Which seems a little odd. And what happens if someone breaks the oath? Will the system fall apart? It doesn’t seem like an evolutionarily stable strategy. Anyway, the folks who love zero-blind review are a small minority among us. They’re just really vocal, like campus Republicans at UC Berkeley.
So how is double-blind better? Let’s think about the first criticism that arises: that reviewers can just guess who the authors are anyway. How good are reviewers at guessing the identity of the authors? Even when they voluntarily hazard a guess, they’re wrong as often as they are right. At least in some fields, for the vast majority of submissions, the reviewers have no idea. If the authors have a very well-established line of research with a particular combination of organism, location, and techniques, then other people in the field are likely to be able to guess who they are. But on the other hand, if someone is not very well established, then reviewers wouldn’t be able to guess, or they’d guess incorrectly. If double-blind is introduced to reduce bias against people who are marginalized, then it would still serve this function quite often, even if reviewers guessed correctly once in a while. Also, there is a not insignificant distinction between knowing the identity of the author and merely guessing it.
What is the effect of double-blind review on women authors? Well, there have been a number of studies (a lot of which have design issues), and the results aren’t a slam dunk either way. It doesn’t make things worse, and it might make things better. Based on talking to women authors, it’s clear to me that most prefer double-blind review over single-blind, so this might increase submission rates from women. And double-blind review removes the opportunity for implicit gender bias to act on a manuscript in the first place, which is pretty awesome, since we know that those biases are often there in the minds of the reviewers.
Another concern is that it’s a pain in the butt for authors to have to bother with the redaction of details that would identify them in review. If double-blind is there to protect junior and marginalized scientists who would be harmed by bias, then to other folks, I say: suck it up. It’s not that hard. Gosh forbid everybody go to a little more effort just to make the process more fair for everybody who is different from themselves.
Based on my experience editing, there are two other ways that double-blind review would be even more useful. First, even if a reviewer might guess the lab group, or set of collaborators, working on a project, guessing who the first author on the project is would be a lot more difficult. I’ve seen reviews with assumptions about the experience of the author that don’t belong in there, and double-blind would fix that problem. Second, I’ve seen no shortage of biased remarks from reviewers when the author of the manuscript comes from a country where English isn’t the primary language. A manuscript may be gorgeously written, but if the author is from (say) Paraguay or China, then so often there are dings against the writing that are unsubstantiated. I wonder how much of this spills over into concerns about the science itself. Double-blind review would remove that kind of bias.
It might be worth noting that all of the double-blind journals I mentioned at the top of the post are society journals, published respectively by ASN, ASAB/ABS, ISBE, SCB, and SSE, all societies that are responsive to their members. (If I end up running a society journal, I’d gladly lead a move to a double-blind review model after consultation with the society.) I’ve advocated before that society journals are the best option we have to represent the interests of our scientific community, and it appears that societies are taking the lead on double-blind review, and I find that encouraging.
A while ago, I was talking about double-blind review on twitter, and issued this poll:
A poll for just for women: do you think double-blind review is a good idea?
— Terry McGlynn (@hormiga) October 26, 2016
Then, the same poll but for men, 1.5 years later:
A poll for just for men: do you think double-blind review is a good idea?
— Terry McGlynn (@hormiga) March 11, 2018
Yes, this sampling design is weak in twenty different ways, I realize. My take-home message here just reinforces what’s already known from the literature. People think double-blind review is a good idea.
Single-blind review favors people who are well established, who have prestige, and who are not in precarious positions. So when someone is defending the ability to use their identity in the review process, keep in mind: what are their interests?
Whenever dealing with "open science" issues, whenever I point out that margainalized folks are put more at risk, the only reply I recall getting is "but more openness should help level the playing field" but there is no logical bridge between "should" and "will"
— Terry McGlynn (@hormiga) February 14, 2018