Professor Evaluations Are a Joke. Everyone Knows It.
A system designed to collect feedback, not to use it.
Professor assessments in business schools are one of those strange little rituals that everyone pretends to respect and almost nobody actually believes in. They’re presented as a tool for quality, improvement, reflection, and all the other clean managerial words schools love to teach, but in practice, they are mostly a smokescreen. A bureaucratic prop. Something that makes the institution look like it is listening while avoiding the much harder task of actually changing anything.
And the game is not even the same for everyone. For permanent professors, these assessments often land with a soft thud. Teaching is only one slice of their role, and usually not the slice that drives status, promotion, or security. Research does that. Publishing does that. So the evaluations exist, yes, but often in the polite and largely ceremonial sense. For adjuncts and other gig professors, though, the exact same survey suddenly becomes something else entirely. Not a development tool, but a rejection sheet. A vague pressure mechanism hanging over people who are already doing frontline teaching without the comfort of institutional protection.
Then there is the method itself, which is almost comically outdated. End-of-course surveys, sent late, usually with all the charm and strategic finesse of a bad mailing-list campaign from fifteen years ago. The irony is hard to miss. These are institutions that teach marketing, CRM, funnel optimization, and customer engagement, yet they cannot get their own students to meaningfully complete a simple feedback survey. Response rates are low, timing is poor, and the experience itself is forgettable at best and irritating at worst.
Students, for their part, are thrown into this with almost no guidance. No framing of what “good feedback” looks like. No examples. No sense of responsibility. So you get everything from thoughtful input to rushed checkbox clicking to anonymous comments that drift into the kind of territory you would never tolerate in a professional setting. Some of it is useful. Some of it is noise. Some of it is, frankly, unacceptable. And yet it all gets treated the same.
Then the results disappear.
Students never hear what their feedback changed. Not at the class level, not at the program level, not even at the most basic “here is what we heard and what we did about it.” Which is remarkable, because the entire value of feedback in any modern system is the loop. Without the loop, it’s just extraction.
Professors do not fare much better. In many cases, adjuncts have to actively request their own evaluations, which is already telling. And when they do receive them, it is often a flat document. Scores. A few comments. No synthesis, no guidance, no interpretation, no direction. You’re left with a vague sense of whether you were “liked” or not, but almost nothing you can actually use to improve your teaching in a concrete way.
And when a system gives you shallow signals, people start optimizing for the wrong things. I know professors who soften their standards, over-index on being liked, or turn the last session into a charm offensive just to protect their scores. When your future teaching opportunities depend on a blunt, poorly designed tool, you adapt to the tool. Not to better teaching, but to better survival.
Meanwhile, schools continue to teach NPS, feedback systems, continuous improvement, and customer centricity as if these were non-negotiable fundamentals. Then they ignore every single one of those principles when it comes to their own product, which is the classroom experience.
The most frustrating part is that this is not a hard problem. Any decent modern SaaS tool, combined with a bit of thoughtful design and, frankly, a very light layer of AI, could transform this overnight. Better timing. Better question design. Real-time pulse checks. Structured qualitative input. Automatic synthesis. Closed-loop communication. This isn’t innovation. This is basic competence in 2026.
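To be concrete about the "automatic synthesis" part: here is a minimal sketch in Python, assuming the OpenAI SDK and an API key in the environment. The model name and the prompt wording are illustrative placeholders, not a recommendation.

```python
# Minimal sketch: condensing free-text course feedback into themes.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY
# set in the environment. Model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

def synthesize_feedback(comments: list[str]) -> str:
    """Turn raw student comments into a few themes with suggested actions."""
    joined = "\n".join(f"- {c}" for c in comments)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model works here
        messages=[{
            "role": "user",
            "content": (
                "Summarize these course feedback comments into 3-5 themes, "
                "each with one concrete suggested change:\n" + joined
            ),
        }],
    )
    return response.choices[0].message.content

print(synthesize_feedback([
    "The case studies were great but the last two weeks felt rushed.",
    "More time on the pricing module, less on the history lecture.",
]))
```

That is the whole "light layer of AI." A weekend project, not a transformation program.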
What am I doing instead? I run short pulse surveys during the course, not just at the end when attention is gone and opinions are already fixed. I use Slack to keep that feedback informal, quick, and continuous. Then I run a much stronger final survey through Typeform that I keep redesigning and improving every year. There’s only one question about me. The rest is about what should change, sharpen, or be dropped for the next cohort.
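The Slack part is equally unglamorous. A minimal sketch, assuming a standard Slack incoming webhook; the URL and the question wording are placeholders:

```python
# Minimal sketch: posting a mid-course pulse check to a class Slack
# channel via an incoming webhook. The URL below is a placeholder.
import json
import urllib.request

WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def post_pulse_check(question: str) -> None:
    payload = json.dumps({"text": question}).encode("utf-8")
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status)  # Slack answers 200 on success

post_pulse_check(
    "Week 3 pulse: one thing to keep, one thing to change?"
)
```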
It is not anonymous by default. Students can choose, but I ask them to put their name and email so I can respond. And I do respond. That alone changes the tone. People think, write, and engage differently when they know there’s a human on the other side of the loop.
And the Typeform itself is also shared live with the very people taking it. They can see what is being said as it comes in, unfiltered. The only thing I remove is their names, because I want honesty without social backlash. But the feedback itself stays visible, which means the class can see, in real time, what others are noticing, what patterns are emerging, and where the friction actually is. And that creates something most business schools seem oddly terrified of: a feedback process that is transparent while it is happening, instead of being buried six weeks later in a PDF nobody learns from.
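Removing the names is not magic either. A minimal sketch against the Typeform Responses API, assuming the `requests` library and a personal access token; the form ID and the "name"/"email" field refs are placeholders for whatever the real form uses:

```python
# Minimal sketch: fetching Typeform responses and dropping identity
# fields before sharing them live. Assumes pip install requests and a
# TYPEFORM_TOKEN in the environment. Form ID and field refs are
# placeholders.
import os
import requests

FORM_ID = "abc123"  # placeholder
PRIVATE_REFS = {"name", "email"}  # assumption: refs set in the form builder

def fetch_shareable_answers() -> list[list[dict]]:
    resp = requests.get(
        f"https://api.typeform.com/forms/{FORM_ID}/responses",
        headers={"Authorization": f"Bearer {os.environ['TYPEFORM_TOKEN']}"},
    )
    resp.raise_for_status()
    # Keep every answer except the ones tied to identity fields.
    return [
        [a for a in item.get("answers", [])
         if a["field"].get("ref") not in PRIVATE_REFS]
        for item in resp.json()["items"]
    ]
```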
Most importantly, I show the next cohort exactly what changed because of the previous one. What was added. What was removed. What was adjusted. Feedback becomes visible. Tangible. Alive.
Not perfect. But useful.
Business schools do not have a feedback problem. They have a willingness-to-change problem.


