Rethinking Course Evaluations to Improve Student Learning

Everyone hates student course evaluations. They don’t measure what students are learning or even how much good teaching is occurring (although persistently high GPAs paired with low student evaluations are probably not a good sign). At best, they tell us what students perceive.

It also turns out that most student evaluation forms have one predictive question, usually some version of “Would you recommend this course to your friends?” or “Your general assessment of this teacher?” All the rest is commentary.

I find it can be useful to see if a professor gets consistently low marks for “supportive of diverse ideas in the classroom,” but “How prepared was your professor for class?” and “Was the professor knowledgeable?” are completely useless.

There are great new survey tools with tested questions, and these should be required reading for anyone thinking of redoing their course evaluations. But student course evaluations should mostly be about improving teaching and retention.

1. Your course evaluations should reflect your institution’s student learning goals. Reminding both faculty and students of those goals in every course will help bring focus and integration to key campus learning outcomes.

If you want to be the creativity place, then every course should ask “How much did this course improve your ability to think and work creatively?” If you want to improve the critical thinking of your graduates, the easiest and cheapest way is simply to remind students (and faculty!) that they do this in every course. Naming what we do is itself a pedagogy that works.

Every degree and department should have learning outcomes, and these too should be reflected in every course evaluation: “How much did this course improve your ability to manage a business?” “How much did this course contribute to your ability to solve complex problems?”

Not every course is aligned with a single student learning outcome, and administrators will need to be sensitive to low scores in courses designed with a different purpose. Even so, having a small and focused list of outcomes and evaluating them in every course is an extremely low-cost way of improving focus.

2. Your course evaluations should reflect high-impact practices and research-based pedagogies.
“How much did this course expand your appreciation for diverse ideas?” “How much did this course expand your ability to collaborate and work with others?” While these again are perceptions, they are perceptions that matter. I should also be measuring students’ actual improvement in self-regulation or motivation, but I want to know what they think as well: “How much did this course expand your ability to think about your own thinking?” “How much did this course bolster your motivation to succeed in college?”

If we want faculty to make more use of active learning, then we should ask “What percentage of class time did you spend in active learning?” (or, negatively, “in passive listening?”). This will vary across different types of courses, and again, administrators will need to be sensitive to different goals. Most studio, lab and art courses will score high here, but that is not a measure of their relative quality. Still, if faculty think we lecture only 40% of the time and students say it is 80%, that gap is important to know about.

(I will note that in more than a decade of reading tenure files and sitting through P&T meetings, I find that most faculty and administrators are highly sensitive to the problematic nature of the numeric part of course evaluations. Institutions should avoid reducing evaluations to simple averages, but I have never seen a tenure case fail only because of low numbers. I also think we should use course evaluations primarily for development, in the same way we should use assignments and assessments primarily for how they can help students learn. I know we credential eventually, but if I wanted to spend my life sorting, I would have joined the postal service.)

If we are trying to improve teaching at our institutions, then maybe we should ask “How useful was the feedback for improving your work?”

None of this is a substitute for real assessment of learning outcomes. We need better tools to measure the actual critical thinking, creativity and cultural sensitivity of our students. At the moment, we often settle for poor surrogates such as grades or distribution requirements (you took an art class or studied abroad). Requiring something only tells us about quantity or exposure (100% of students were exposed to science or passed a test in a foreign language; yay, us!), but we need to understand the quality of student learning. In the meantime, we should keep doing everything we can to improve learning and provide incentives for faculty to try proven pedagogies: course evaluations are low-hanging fruit.

In the end, you are what you measure. When we measure and assess, we set values and priorities. All colleges measure, but what we measure is often irrelevant. Measuring for accreditation will do little to improve our graduation rates. While it is true that much of what we do is hard to measure, or at least hard to quantify, that should not deter us. I’d rather have high standards and improving assessments. We are, after all, people of judgment: we evaluate all day long. We just need to turn more of our attention to better evaluation of ourselves.
