When Student Evaluations Become Academic Yelp
How a system built to improve teaching starts actively degrading it
I just received a new wave of professor evaluations from the schools I worked with in S1 (business-school shorthand for last semester). The first reaction is always the same. Not surprise. Fatigue.
But what struck me this time wasn’t any single remark. It was the growing sense that a system originally designed to improve teaching has quietly turned into something else entirely. Not feedback, not quality control, but a low-signal, high-noise mechanism that distorts behavior while pretending to measure excellence.
Most business schools still rely on anonymous student surveys as the backbone of professor evaluations and present them as feedback. In reality, they function more like an unmoderated comment section on a contentious website. Loud. Emotional. Mostly useless.
What makes this worse is that most students do not leave comments at all. Not because they have nothing to say, but because they are neither incentivized nor guided, and rarely see any consequence or follow-up. The minority who do write are often reacting to frustration: a disappointing grade, a demanding course, or a bruised ego. What follows is rarely feedback in any meaningful sense. It’s venting. And once written, it is treated as data.
That’s where the real damage happens. These assessments are non-operational. There’s no substance, no context, and no reliable way for a school or a professor to extract signals that lead to improvement. Just numbers and anonymous sentences floating without accountability, yet taken seriously enough to influence careers.
For adjunct faculty, the incentives are particularly perverse as hiring and renewal decisions quietly orbit around these scores. The rational response becomes obvious: be nice, inflate grades, avoid friction, and chase consensus. Inevitably, education slowly turns into a popularity contest rather than a learning experience.
This is the Uberization of teaching. Ratings without dialogue, labor without protection, and incentives that reward short term comfort over long term growth.
If schools were serious about quality, they would redesign the system entirely. No anonymity by default. Coached feedback prompts. Minimum substance requirements. AI could easily block empty or defamatory comments and push students to explain, clarify, and justify. Add a mandatory feedback loop where professors or programs can respond.
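A minimum-substance gate of the kind described above is genuinely simple to build. As a sketch only: the function name, word threshold, and keyword lists below are illustrative assumptions, not any school's actual system, and a production version would use a language model rather than keyword matching.

```python
# Hypothetical minimum-substance gate for evaluation comments.
# Thresholds and keyword lists are illustrative assumptions.

def needs_revision(comment: str, min_words: int = 15) -> list[str]:
    """Return prompts pushing the student to explain, clarify, and justify.

    An empty list means the comment clears the minimum-substance bar.
    """
    prompts = []
    words = comment.split()

    # Block near-empty comments outright.
    if len(words) < min_words:
        prompts.append(f"Please explain in more detail (at least {min_words} words).")

    # A substantive comment should point at something concrete in the course.
    specifics = {"lecture", "slides", "case", "exam", "assignment",
                 "grading", "readings", "discussion", "feedback"}
    cleaned = {w.lower().strip(".,;!?") for w in words}
    if not specifics & cleaned:
        prompts.append("Which part of the course does this refer to? "
                       "Name a lecture, assignment, or activity.")

    # Ask for an actionable suggestion, not just a verdict.
    if not {"should", "could", "suggest", "instead"} & cleaned:
        prompts.append("What concrete change would you suggest?")

    return prompts
```

A vague complaint like "Bad course." would trigger all three prompts, while a comment that names a course component and proposes a change passes through untouched. The point is not the specific rules but that the gate forces a revision loop before venting becomes "data."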
This is not rocket science.
In my own courses, I already do this outside the official machinery. Structured mid- and end-of-course feedback, named, discussed, and acted upon. Students learn how to give feedback, not just how to complain. I learn what to improve. Everyone grows. That’s what real progress looks like.
Anonymous noise helps no one. Accountable feedback builds better classrooms and better people.
It’s time to choose.


