What are better models for evaluating scholarship?
We need to wrestle with the reality that impactful scholarship is more than pubs and citations.
We all say it’s bad to rely on h-scores, pub counts, and journal impact factor as measures of research success. But then so many of us turn around and do this.
Why is this bad? These numbers don’t say anything about research quality or genuine impact. Not to mention the many socioeconomic and ethnic biases baked into who gets opportunities to ramp up those numbers. The biggest problem with a quantitative approach is that it creates perverse incentives, leading talented researchers to prioritize things that simply don’t matter and forsake the things that do.
The reliance on quantitative metrics for research “excellence” might be as pathetic as it is absurd. But for what it’s worth, I’m not convinced this reliance is as unambiguously real as the conventional wisdom might suggest. I don’t think people rely on these numbers as much as the complaints imply.
I think our ongoing narrative of “We’re using the wrong kinds of measuring sticks for scientists” is off in two ways.
First: The problem isn’t just quantitative metrics; it’s also the emphasis on qualitative indicators that often require social capital and access to prestige.
I’ve been around long enough to have spent plenty of shifts working in our metaphorical academic sausage factory.1 From what I’ve seen, for all the hand-wringing about the absurdity of quantitative metrics, it’s not that simple. I think the general conversation on social media and in person might be overstating, or mislabeling, the problem. I don’t think we over-rely on these quantitative metrics. (We might not even be relying on these metrics enough, because the way these evaluations and conversations usually go doesn’t involve a lot of quantitative measures.)
When a person is wealthy, they don’t hand out business cards that advertise their net worth. But we can all tell from the qualitative indicators that they’re quantitatively rich. I think it’s the same in the world of science. A person with high metrics doesn’t advertise it, because that would be uncouth; it’s just qualitatively obvious by reputation. Junior scientists might not have stratospheric metrics, but they do have many of the same qualitative indicators, based on how they talk about their work, who they work with, and the places they know. Once in a while, when people grow too comfortable, I hear terminology from the world of heritability: pedigree. It’s no accident that the biggest way to launch your academic science career is to find your way into the lab of someone famous, because it gives you both the qualitative and the quantitative indicators that are valued by the broader community. Academics are smart people, and scientists are often good at incorporating many parameters into a decision-making process, but let’s face it: it involves a lot of homophily, and even smart people are prone to measure success using a very personal template based on their own experiences.
While I think the problem with quantitative metrics might be overstated, the real problem is bigger: it also involves the use of the wrong qualitative indicators. Not only are there perverse incentives for publication metrics, but our research community also creates perverse incentives for scholars to develop qualitative reputations that may have nothing to do with doing good science or being a good human being.
Second: There is an existing model for evaluating scientific scholarship that serves our community well, and the places that are already doing this are overlooked.
If you’re a regular reader, you probably know what I’m going to say: that regional public universities (RPUs) and some other primarily undergraduate institutions (PUIs) are already leaders in this area, and that major research institutions should be looking to us for leadership. You’re right.
Let’s hop into a time machine.
It’s the mid-1990s. A couple of big ideas are abuzz in higher education. The long-promised boom in tenure-track positions is just about to land (because of a wave of retirements and an increase in undergraduate enrollment). Ha, ha, ha. Also, university administrations are aflutter with the latest approach to the professional development and evaluation of scholars.
By the time you got to the late 1990s, I doubt there was any PUI that wasn’t immersed in the “Teacher-Scholar model.” When I was interviewing for faculty positions at SLACs and other PUIs in 1999 and 2000, I feel like that’s all anybody could talk about. Teacher-scholar model this, teacher-scholar model that. If you search up the term, you’ll see that there’s plenty of it still in use today (and somehow one particular campus in my university system winds up at the top of the search results). While some folks might think of the Teacher-Scholar Model as an antiquated fad in higher education, it’s apparently a fad that’s still operational in a lot of places.
What is the Teacher-Scholar Model? That’s a really good question, because not many people know the actual answer. The way everybody seemed to be talking about it, you could come away with this general impression: that there’s a continuum between teaching and discovery research, in which the two support one another. That’s not wrong, but it’s also a mere shadow of what the Teacher-Scholar Model was about, and the nuance didn’t really sink in for most of us. Clearly not for me in my pre-tenure days.
I think if we spin the time machine back to 1990, to the origin of the Teacher-Scholar Model, we might find a template that can help us move forward.
In 1990, Ernest Boyer published Scholarship Reconsidered: Priorities of the Professoriate. Wow. Folks were really talking about this book! What could Ernest Boyer possibly have to say to get folks in higher ed so excited about priorities for faculty scholarship?
This isn’t about how to measure faculty scholarship. It’s about what should matter.
The problem with evaluating faculty scholarship is that our measures are rooted in our own value system about what kind of work is valuable. That’s a hard conversation that doesn’t result in much resolution, so we’ve managed to bypass it, in favor of looking at the quantity of scholarly product and how it gets disseminated. That superficial approach we generally go by today runs counter to the Teacher-Scholar Model, which was very explicit about what is valuable about scholarship.
In this model, there are four kinds of scholarship, which all should be valued.
Those are:
Discovery research (basic research, the stuff that is traditionally valued among us).
“Integration” scholarship that goes across disciplines, including things like what we consider to be interdisciplinary research and science communication.
Scholarship of “application” - things like what we now call community-engaged scholarship, extension, and transdisciplinary scholarship.
Scholarship of teaching and learning, including discipline-based educational research. Depending on where you are, this may or may not be valued as scholarship by your department. It’s only within the past decade, for example, that my university passed a policy stating that, for promotion and tenure decisions, publications on SoTL are just as valid as basic disciplinary research.
I’m not sure how many people got the memo that the teacher-scholar model was about uplifting community-engaged work, communication, scholarship of teaching, and all that other stuff that our disciplinary colleagues might vaguely appreciate but not count as scholarship. Even though some of these avenues might result in fewer publications, lower h-scores, and so on, they have scholarly impacts that are just as important, are wholly legitimate, and are valued. (Which is why, I suppose, I’ve been at this science community writing for 10 years and am still at it, even though it never had a place in my file when I came up for Full.)
Now we are 30 years past the wild success of the idea of the teacher-scholar model, but our broader academic community has still done a poor job of adopting it. We have many bright spots where faculty are doing work under the teacher-scholar model and are being valued for it. We need to see more of this across our broader research community, and the institutions that are doing this are well positioned to be models for other universities. It’s possible that you might need to look into a different “tier” to find those campuses. They might be just down the road, and you’ve just never visited.
A lot of us are working hard to make sure that our institutions value the work that is hard to put a number on. Building relationships with community partners, doing work that influences policy, building projects with folks outside the sciences, enhancing the work and lives of practitioners: these things are really important, and they are all scholarship. It’s also what government funding agencies and private foundations want to see from us. But on our CVs, that’s a hard story to tell. Some universities are making progress on this, and it’s a matter of both culture change and being able to tell the narrative.
Understanding the history of the teacher-scholar model is valuable for moving forward. We don’t need to invent new measuring sticks. Once we get past the idea of the measuring stick, we can sensitize ourselves to what impactful scholarship truly looks like. And then we can have a meaningful conversation about what good science looks like.
1 I will concede that there are some positions of power that I simply will never occupy as a Professor at CSU Dominguez Hills, but I’ve gotten around. For example, I’ve chaired tenure committees at the college and university level, done external tenure evaluations, served on a variety of grant panels and on committees for professional societies to select awardees, worked with journal EICs, and so on. And I suppose “sausage factory” could be interpreted as a gendered remark about who gets to experience career advancement, and yeah, that would not be incorrect.