I was delighted to come across Joshua Sperber’s new research project about Rate My Professors. In Making the Grade – Rating Professors, published in CUNY’s New Labor Forum, Sperber studies what happens when students can “rate their professors” on the web. The project was based on an online survey of 41 students and 47 adjunct professors, which seems to have elicited a wealth of rich qualitative data.
Like most U.S.-trained academics, I’ve had Rate My Professors (dot com) on my radar for a long time, but I never knew much about it, except that it’s completely public and seems to include most of my teachers in U.S. higher education. Sperber explains that, predictably, it was founded by a Silicon Valley type, John Swapceinski, who later founded a slightly more subversive-sounding project, Rate My Boss.
I’ll skip a full summary, since Sperber’s report is already succinct and openly accessible. Let me just pick out a few key points:
- Rate My Professors (henceforth RMP) is structurally sexist, since all the implicit sexism in students’ perceptions comes out in the evaluations. No one is calling their male professors “shrill.”
- The students who write the reviews are themselves in a contradictory role. On one hand, their identification as “consumers” of higher ed is reinforced by treating their courses as products that deserve product reviews. On the other hand, they are also unpaid laborers for RMP itself, since they provide the content for free while RMP keeps the advertising revenue.
- Sperber argues that when RMP systematically encourages students to prefer “easy” classes and “easy” graders, this remains “a self-defeating effort insofar as it accelerates grade inflation, thereby diminishing the value and utility of high grades.” This point, I thought, deserves further discussion. To take an analogy with currency: even in the face of an inflating currency, consumers are still incentivized to seek out the best bargains, are they not? Similarly, even if grades get more inflated, isn’t it always going to remain “rational” in our current system for students to optimize the ratio of effort to reward? (I sketch this more formally just after this list.)
- Many students said that they write reviews not because they love or hate their teachers, but out of a “sense of duty to fellow students… coupled with a commitment to fairness.” A curious form of consumer altruism.
- Some adjunct teachers worry about the professional impact of their reviews, but others quip resignedly that “as an adjunct I have no job prospects” anyway.
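To return to the grade-inflation point above: here is a minimal sketch of my intuition. This is my own toy model, not anything in Sperber’s article; the notation (effort e_i, expected grade payoff g_i) is purely illustrative.

```latex
% Toy model (my notation, not Sperber's). Suppose course i demands effort
% e_i > 0 and pays an expected grade g_i. A grade-optimizing student chooses
\[
  i^{*} = \arg\max_{i} \; \frac{g_i}{e_i} .
\]
% Uniform grade inflation multiplies every payoff by a common factor k > 1,
% and scaling by a positive constant never changes an argmax:
\[
  \arg\max_{i} \; \frac{k \, g_i}{e_i} \;=\; \arg\max_{i} \; \frac{g_i}{e_i} ,
\]
% so each student's individually rational choice is unchanged, even though
% high grades lose signaling value for everyone. The dynamic is collective,
% not individual, much like bargain-hunting under currency inflation.
```

If that is right, the “self-defeating” effect operates at the level of the student body as a whole, not the individual chooser, which is precisely why it fails to deter anyone.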
I suppose I have two general questions about this study.
- In my experience in U.S. higher education, the evaluations that “count” institutionally are the internal course evaluations, not these public online comments. So what’s the relationship between RMP evaluations and internal course evaluations?
- It would be excellent to read further historical and comparative analysis. Sperber mentions in passing that student evaluations in the U.S. “developed as a tactic for advancing popular political demands for student empowerment during the 1960s and 1970s radical student movements.” Is there a history of this? Or any current comparative (international) research on it?
I’ve always read that early medieval universities, particularly in Bologna, were highly “market-driven”: students paid their instructors directly and “voted with their feet” about which classes to take. The early University of Paris, by contrast, is always put forward as a more faculty-run model, one that eventually won out in much of the world. But a comparative, international history of “evaluation” (including the period before there were written, formalized evaluations) would seem to be necessary if we are to grasp the longer-term struggles between faculty and student power.
One can only concur, however, with Sperber’s conclusion that the “power” students exercise in consumerist course evaluations is extremely circumscribed, politically speaking, and in no way challenges the broader economy of higher education.
A question for Sperber, by way of closing — how did you come to do this research project, and do you plan to expand on it in the future?
We should note that this work also has some definite resonances with Davydd Greenwood’s recent comments on student course evaluations.