Teaching Evaluations and Faculty Development

Spring semester teaching evaluations were released last week, and because of my position as writing program director and incoming department chair, I get to see all of them. It's an eye-opening experience, but perhaps not for the reasons you'd first think. It's true that the evaluations give you a glimpse (and, really, just a blurry glimpse) of your colleagues' classrooms and teaching styles. But it's not the individual ratings I'm particularly interested in — at least not now, some seven months before we do personnel evaluations. Instead, I'm interested in what the evaluations as a group tell us about the writing program, the gen-ed literature courses, and the department as a whole.

I've done some reading in the scholarship on Student Evaluations of Teaching (SET) recently, and it has led me to two useful and related findings, as well as a wealth of advice about how to respond to evals as an administrator and mentor. First, the findings. I've been mulling the conclusions from economists Paul Isley and Harinder Singh of Grand Valley State University about the relationship between grades and quantitative evaluations. In their article (JSTOR access needed), they confirm previous findings that higher teaching evaluation scores are related to students' having higher expectations for their final course grades. They go on, however, to argue that the differences between incoming students' GPAs and their expected course grades have a greater effect on evaluation scores. In other words, if students with high GPAs think they are going to get low grades in a course, they are more likely to rate the instructor low on the evals, and if students with low GPAs think they are going to get high grades, they rate higher. Such findings are important for instructors of required general-education courses (like composition and intro to literature), which often enroll students with little interest or ability in the specific subjects.
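To make that "expectation gap" idea concrete, here's a minimal sketch of the arithmetic. The function name and the numbers are my own invention for illustration, not Isley and Singh's actual model or data:

```python
# Toy illustration of the expectation-gap idea from Isley and Singh's
# finding. The names and numbers here are invented for this sketch,
# not taken from their model or data.

def expectation_gap(expected_grade: float, incoming_gpa: float) -> float:
    """Gap between the grade a student expects in the course and his or
    her incoming GPA, both on a 4.0 scale. Positive means the student
    expects to do better than usual; negative means worse than usual."""
    return expected_grade - incoming_gpa

# A high-GPA student expecting a C in a required comp course:
print(expectation_gap(expected_grade=2.0, incoming_gpa=3.8))  # -1.8 -> likelier to rate the instructor low

# A low-GPA student expecting an A:
print(expectation_gap(expected_grade=4.0, incoming_gpa=2.2))  # +1.8 -> likelier to rate the instructor high
```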

Isley and Singh's findings jibe interestingly with a conclusion drawn by John Centra in his analysis (JSTOR) of higher grades and evaluation scores. According to Centra, evaluations are highest when students perceive "just-right" levels of difficulty, rigor, and learning expectations (I'm simplifying his analysis quite a bit, so please do look at the article). Imagine honors students, for example, who feel a gen-ed humanities course is being pitched too low for them. They're likely happy with their A's but are also likely to give lower evaluation scores. Centra's and Isley and Singh's analyses expose a central psychological truth about teaching evaluations: that they are deeply rooted in each individual student's own educational context and history. Evals tell us as much about the students' identities as learners as they do about teacher effectiveness. Maybe more so.
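One way to picture the "just-right" pattern is as an inverted U: ratings peak at moderate perceived difficulty and fall off when a course feels either trivial or overwhelming. The curve below is a toy of my own making, not anything fit to Centra's data:

```python
# Toy inverted-U for the "just-right" idea: hypothetical ratings peak
# at a moderate perceived difficulty and fall off on either side.
# This curve is an invented illustration, not fit to Centra's data.

def toy_rating(perceived_difficulty: float, sweet_spot: float = 0.5) -> float:
    """Hypothetical 1-5 rating as a function of perceived difficulty
    (0 = trivial, 1 = overwhelming), peaking at the sweet spot."""
    return 5.0 - 8.0 * (perceived_difficulty - sweet_spot) ** 2

for d in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"difficulty {d:.1f} -> rating {toy_rating(d):.2f}")
```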

What's heartening about this work — and very useful in terms of faculty development — is that the numbers from the evals might suggest the extent to which students understand and appreciate what is being asked of them in a course. I've long contended that a great many student complaints are rooted in some kind of communication breakdown between students and teachers. It's possible to read low evaluation scores as representative of that breakdown. Now, I'm not suggesting that one or two low evaluations mean a teacher can't communicate effectively with students. We all know that lots of stuff happens during a semester. Instead, I'm suggesting that low evals mean the instructor and students weren't really on the same page for that particular course. The next step, then, is for the instructor to reflect on what might have led to that.

And here is where the written comments on evals matter. Those of us who have taught know the thrill and agony of the written comments. We also know that the one negative comment is usually the one we fixate on the most. Comments can cut to the bone, and they often seem like the least fair aspect of the entire evaluation process. After all, the student evals are anonymous, and students don't have to take responsibility for what they write. Nevertheless, the comments can help instructors understand how their courses were interpreted by the students. And if instructors are unhappy with those interpretations, then they can use the comments to help revise their presentations and deliveries. (Check out Dean Dad's take on this over at Inside Higher Ed.)

It's important, though, to recognize that not all comments are created equal. Comments like "She's hot," "This class sucked," and "We shouldn't have to take this class" are, to my mind, more connected to students' immaturity and frustrations than to their perceptions of what they were asked to do. When I read comments, I look for patterns and trends. Do multiple students mention the instructor's classroom demeanor? Do they remark on how frequently the instructor was late or how slow he was to return papers? One or two "She's boring" comments don't get my attention — but seven or eight do. That many comments might mean the instructor isn't pitching the class at the right level. (I know many people will argue that the "She's boring" comment speaks mostly to students' short attention spans, but I'm not so sure. I'll try to write more about that later.) In general, I look for patterns and trends that speak to how the students interpreted the difficulty, rigor, and expectations for the course. I see a lot of comments on low-scoring evals suggesting the instructors simply didn't spend much time articulating what students were to do and, more importantly, why.
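A rough sketch of that threshold logic, with hypothetical theme labels, counts, and cutoff (there's no actual rubric behind these numbers; it's just the "patterns, not one-offs" habit made explicit):

```python
# Hypothetical sketch of reading comments for patterns rather than
# one-offs. Theme labels, counts, and the threshold are all invented.
from collections import Counter

comments_by_theme = Counter({
    "boring": 8,               # seven or eight mentions -> worth attention
    "slow paper returns": 2,   # a couple of mentions -> keep an eye on it
    "class sucked": 1,         # venting, not a pattern
})

ATTENTION_THRESHOLD = 5  # flag themes several students raise independently
for theme, count in comments_by_theme.items():
    if count >= ATTENTION_THRESHOLD:
        print(f"Follow up: '{theme}' mentioned {count} times")
```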

The why part of teaching is perhaps the most important. Students want to know that what they’re being asked to do means something. When they can’t relate their required activities to their assessments and to their own learning, they get frustrated. In my meetings with instructors about evaluations, I try to tell them that they need to be consistently transparent in their teaching, even if that means stopping class activities for a moment to explain why something is happening. I think the more meta we can be about these things, the better.

One thought on “Teaching Evaluations and Faculty Development”

  1. “They’re likely happy with their A’s but are also likely to give lower evaluation scores. Centra’s and Isley and Singh’s analyses expose a central psychological truth about teaching evaluations: that they are deeply rooted in each individual student’s own educational context and history. Evals tell us as much about the students’ identities as learners as they do about teacher effectiveness. Maybe more so.”

    While this is not news to me, it’s great to see it laid out so effectively, especially in Centra’s analysis. The psychological dimensions of SETs are extremely telling in regard to student learning and can be useful in gauging what students understand about their own learning in the first place.

    I was thinking about this during the semester as I was teaching at least five students who had taken comp the previous semester and failed, the largest number I’ve had in recent years. Each articulated to me in private conferences that my course made better sense than those they’d taken the previous semester, which I thought had more to do with instructor delivery than anything else. Somehow, what I was doing translated better for them and, for at least a couple of these five, motivated them to do better work in the course. I asked students what was so different (and, to be fair, without naming or identifying the previous instructor), but they either couldn’t or wouldn’t articulate it to me. I keep a teaching journal, and I wrote pretty extensively on this subject this semester in trying to figure out why that was: Am I just a better teacher? Do they just like my personality better? [After ten years in this business, I’ve come to the conclusion that this matters, whether we like it or not.] What is so much more effective about what I did in this course, versus what may have been done in the previous course with another instructor? What about these students makes them responsive to what we’re doing in my course, and what made them unresponsive in the other course?

    A lot of this reflection is beginning to gel for me, especially since seeing a comment regarding my instruction vs. someone else’s on my evals, and then seeing your post here: the idea that evals can tell us about students’ identities as learners and not only about teacher effectiveness. Reading evals more productively and reflexively may be the key to understanding what happened for these students. What is reflected in their comments and in my evals tells me more about how they learn and understand the value of what they learn, and about what tools I use in class that enable them to recognize this. For me, it’s helping me understand how to take the strengths of the course and capitalize on them in order to reach more of my students in the same way. [Which ultimately helps me to see evals as a useful tool, rather than a pain in the neck and a source of anxiety.]

    Melissa
