Why doesn’t NSF redact horrible GRFP reviews that demonstrate overt bias?
The ongoing conversation about inequitable outcomes in the National Science Foundation’s Graduate Research Fellowship Program (NSF GRFP) is scaffolded on a steady trickle of inappropriate remarks from reviewers that applicants share on social media.
Take, for example, the howler from this weekend, which said, “His Hispanic pride prevented him from seeking mentorship and advise [sic] from other [sic] that would have helped him avoid and lesson [sic] some of his struggles and progress further.”
I love the NSF GRFP!!! pic.twitter.com/2AjBpH3FId
— Ulises Perez (@UiisesPerez) April 17, 2023
Even taken out of context, and even if it had been written with functional grammar, the content of this review is highly problematic. You can’t invent a universe in which that sentence is okay.
This isn’t just a one-off situation. I think the vast majority of reviews are actually free of obviously racist bias, and most don’t even have classist bias. In my experience, which is not insubstantial, the reviews mostly come from people who are thoughtful and genuinely working with good intentions (if not constructive approaches) to broaden access and participation. But I have also seen plenty of people in the process who aggressively do not understand the forces at work that structurally exclude folks on the basis of their identity and personal backgrounds.
It’s a problem because the review process involves the human beings who comprise our academic community. These problems are a reflection of us, and the racism in that review is a mirror for us to see ourselves. One thing to keep in mind is that these reviews are coming from the panelists themselves. The call is coming from inside the house.
So how bad is the situation? There are enough overt problems that Science wrote a news article about it last year and called the process dysfunctional. I spent a few minutes searching twitter for screenshots of problematic reviews, and given how many turned up, you have to wonder about all the applicants who aren’t on twitter, and about how only the extremely rare applicant will choose to share that kind of thing publicly. And, well, yeah. It’s a problem.
While the absolute number of such damning screenshots is finite and perhaps small, the relative rarity of overt examples masks the depth of the implicit bias that happens. While I’ve seen some folks claim that the people at NSF don’t care about this bias, from my observations and interactions, I’m confident there is plenty of concern about bias. But I think there’s more concern about bias in the process than there is about outcomes that fail to broaden participation. I think there are varying levels of concern about a process that winds up principally elevating those who have already had access to prior advantages. If you’re going to measure “merit” and “potential,” which are the building blocks of the program, that’s hard to do when your applicant pool has experienced very different levels of opportunity. And measuring achievement relative to opportunity is a hard thing to do, especially when the reviewer pool is replete with people who have succeeded under the current system.
Anyhow, the specific question I came back to my slumbering blog to try to answer is: “Why does NSF share reviews even when they happen to be incredibly nasty and racist?” I obviously can’t provide any accurate answer to this because I don’t work for them and haven’t been involved in the stuff that program officers do. In the absence of an explanation from the GRFP program, I hope I might be able to shed at least a little light for those of you who know even less about how the place works than I do. Here are some unordered things I have to say about this.
As far as I am aware, NSF hasn’t made any kind of public statement about how it is that they end up keeping reviews that contain highly problematic statements, which means those reviews remain part of the process before winding up in the hands of the applicants.
Considering the funding rates and the number of awards, they get at least 10,000 applicants or so? This is a ginormous program. And each application gets a few reviews. I’m not sure it’s possible for every review to get the level of attention that would be needed.
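To put some very rough numbers on that (every figure below is my own assumption for the sake of illustration, not something NSF has published), a back-of-envelope sketch looks like this:

```python
# Back-of-envelope sense of the screening workload, using assumed numbers.
# None of these figures come from NSF; they are placeholders for illustration.

applicants = 12_000          # assumed, consistent with "at least 10,000 applicants or so"
reviews_per_application = 3  # assumed reading of "a few reviews" per application
minutes_to_rescreen_one = 2  # assumed time to re-read a single review for problematic language

total_reviews = applicants * reviews_per_application
person_hours = total_reviews * minutes_to_rescreen_one / 60

print(f"Reviews generated per cycle: {total_reviews:,}")         # 36,000
print(f"Hours just to re-screen them all: {person_hours:,.0f}")  # 1,200
```

Even with generous assumptions, that works out to something like 1,200 person-hours of re-reading, months of full-time work for a single person whose only job is hunting for bad sentences, on top of everything else that has to happen between the panels and the award announcements.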
There is a small number of program officers who operate the process year-round. When the review panels run around the winter break, it’s an all-hands-on-deck situation, and people who don’t normally work on the program are brought in to support it.
Who gets to decide where the line is on what constitutes overt bias? I think there is probably a philosophy at work that values transparency: the occasional overtly biased review slipping through might be preferable to someone behind the scenes tainting reviews by removing content the reviewer intended. In my opinion, in a perfect world, every review with overtly problematic bias would simply be deleted from the process before anybody on a panel was exposed to it. But a lot of this is a big grey zone. The howlers are very obvious, of course, but they’re just the long end of a tail, and who’s going to draw that line? Maybe it’s better that no line is drawn, so that we can see the reviews when they are a problem?
I think there is a prevailing philosophy (which I admittedly don’t radically disagree with, though I’m not sure I agree with it either) that overtly biased reviews are best addressed not by squashing them, but by giving them sunlight. I’ve been involved in enough of these situations to have seen problematic statements in reviews sometimes get entirely ignored, sometimes obviously devalue the weight of the review in general, and sometimes get brought up and discussed in the way you hope they would be. (Then again, sometimes it’s me bringing up the issue, and I don’t know what happens in rooms when I’m not in them.) I think it’s worth asking, “Is it better to just delete these reviews and move on, or is it better to share them so that everybody can see these horrible statements for what they are?” How you answer that question is probably connected to your level of trust in the idea that most panelists find these horrible comments to be truly horrible.
I sure do hope that they find out who these people are and make sure that they never participate in the review process again.
According to a current rotating program officer who took to twitter about the review process (though not explicitly about the GRFP), getting the content of a review redacted is quite a big deal.
One way to ameliorate this problem is to have a deeper reviewer pool. That’s a thing I’ve long been talking about. Of course, that alone doesn’t fix the problem.
(For what it’s worth, I’ve mentioned on occasion the idea of limiting the number of applications from a given institution based on enrollment. Of course you’ll have several dozen undergrads from MIT applying each year, and relatively few from, say, CSU Dominguez Hills. So if you think there’s a problem with representation in the pool, you can change the composition of the pool by capping the institutions that are currently overrepresented, which increases the relative representation of universities that currently send few applicants. A sketch of how such a cap might work is below.)
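To be concrete about what I mean, here is one hypothetical way such a cap could be computed. The per-capita rate, the floor, and the enrollment figures are all invented for illustration; nothing here reflects an actual or proposed NSF policy:

```python
# Hypothetical cap on GRFP applications per institution, scaled by enrollment.
# The rate, the floor, and the enrollment numbers are made up for illustration only.

def application_cap(undergrad_enrollment: int, per_thousand: float = 2.0, floor: int = 5) -> int:
    """Allow a fixed number of applications per 1,000 undergrads, with a minimum
    floor so small institutions aren't shut out entirely."""
    return max(floor, round(undergrad_enrollment / 1000 * per_thousand))

# Invented enrollment figures for three generic institution types:
for school, enrollment in [("large R1", 30_000),
                           ("regional comprehensive", 12_000),
                           ("small college", 2_000)]:
    print(f"{school}: up to {application_cap(enrollment)} applications per year")
```

The floor is there because any cap scheme has to be careful not to make things worse for the very institutions it is supposed to help.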
Anyhow, if you’re wondering why NSF lets these bad reviews be a part of the process, I’m wondering right along with you. But I also know enough about how bureaucracies work to know that simple fixes to big problems are often not that simple. I think part of the accountability here is making sure that when there are problems, they get exposed to daylight. For those of you who have had the courage to be vocal about the bias you’ve experienced, thank you for your service, and I’m sorry that you have gone through this. You, and our entire community, deserve better. (I suspect most everybody at NSF agrees with this too, but they aren’t well positioned to come out and say it for themselves.)