Spring semester teaching evaluations were released last week, and because of my position as writing program director and incoming department chair, I get to see all of them. It’s an eye-opening experience, but perhaps not for reasons you’d first think. It’s true that the evaluations give you a glimpse (and, really, just a blurry glimpse) of your colleagues’ classrooms and teaching styles. But it’s not the individual ratings I’m particularly interested in — at least not now, some seven months before we do personnel evaluations. Instead, I’m interested in what the evaluations as a group tell us about the writing program, the gen-ed literature courses, and the department as a whole.
I’ve done some reading in the scholarship of Student Evaluations of Teaching (SET) recently, and it has led me to two useful and related findings, as well as a wealth of advice about how to respond to evals as an administrator and mentor. First, the findings. I’ve been mulling the conclusions from economists Paul Isley and Harinder Singh of Grand Valley State University about the relationship between grades and quantitative evaluations. In their article (JSTOR access needed), they confirm previous findings that higher teaching evaluation scores are related to students’ having higher expectations for their final course grades. They go on, however, to argue that the differences between incoming students’ GPAs and their expected course grades have a greater effect on evaluation scores. In other words, if students with high GPAs think they are going to get low grades in a course, they are more likely to rate the instructor low on the evals, and if students with low GPAs think they are going to get high grades, they rate higher. Such findings are important for instructors of required general-education courses (like composition and intro to literature), which often enroll students with little interest or ability in the specific subjects.
Isley and Singh’s findings jibe interestingly with a conclusion drawn by John Centra in his analysis (JSTOR) of higher grades and evaluation scores. According to Centra, evaluations are highest when students perceive “just-right” levels of difficulty, rigor, and learning expectations (I’m simplifying his analysis quite a bit, so please do look at the article). Imagine honors students, for example, who feel a gen-ed humanities course is being pitched too low for them. They’re likely happy with their A’s but are also likely to give lower evaluation scores. Centra’s and Isley and Singh’s analyses expose a central psychological truth about teaching evaluations: that they are deeply rooted in each individual student’s own educational context and history. Evals tell us as much about the students’ identities as learners as they do about teacher effectiveness. Maybe more so.
What’s heartening about this work — and very useful in terms of faculty development — is that the numbers from the evals might suggest the extent to which students understand and appreciate what is being asked of them in a course. I’ve long contended that a great many student complaints are rooted in some kind of communication breakdown between students and teachers. It’s possible to read low evaluation scores as representative of that breakdown. Now, I’m not suggesting that one or two low evaluations mean a teacher can’t communicate effectively with students. We all know that lots of stuff happens during a semester. Instead, I’m suggesting that low evals mean the instructor and students weren’t really on the same page for that particular course. The next step, then, is for the instructor to reflect on what might have led to that.
And here is where the written comments on evals matter. Those of us who have taught know the thrill and agony of the written comments. We also know how the one negative comment is usually the one we fixate on most. Comments can cut to the bone, and they often seem like the least fair aspect of the entire evaluation process. After all, the student evals are anonymous, and students don’t have to take responsibility for what they write. Nevertheless, the comments can help instructors understand how their courses were interpreted by the students. And if instructors are unhappy with those interpretations, then they can use the comments to help revise their presentations and deliveries. (Check out Dean Dad’s take on this over at Inside Higher Education.)
It’s important, though, to recognize that not all comments are created equal. Comments like, “She’s hot,” “This class sucked,” and “We shouldn’t have to take this class,” are, to my mind, more connected to students’ immaturity and frustrations than to their perceptions of what they were asked to do. When I read comments, I look for patterns and trends. Do multiple students mention the instructor’s classroom demeanor? Do they remark on how frequently the instructor was late or how slow he was to return papers? One or two “She’s boring” comments don’t get my attention — but seven or eight do. That many comments might mean the instructor isn’t pitching the class at the right level. (I know many people will argue that the “She’s boring” comment speaks mostly to students’ short attention spans, but I’m not so sure. I’ll try to write more about that later.) In general, I look for patterns that speak to how the students interpreted the difficulty, rigor, and expectations for the course. I see a lot of comments on low-scoring evals that suggest the instructors simply didn’t spend much time articulating what students were to do and, more importantly, why.
The why part of teaching is perhaps the most important. Students want to know that what they’re being asked to do means something. When they can’t relate their required activities to their assessments and to their own learning, they get frustrated. In my meetings with instructors about evaluations, I try to tell them that they need to be consistently transparent in their teaching, even if that means stopping class activities for a moment to explain why something is happening. I think the more meta we can be about these things, the better.