Course Evaluations: Words and Numbers

I just came across a very good review of the literature (pdf) on online course evaluations by Jessica Wode and Jonathan Keiser at Columbia College Chicago. Here is one pertinent set of conclusions:

Online vs. paper course evaluations

• The one consistent disadvantage to online course evaluations is their low response rate; reminder e-mails from instructors and messages posted to online class discussion boards can significantly increase response rates.

• Evaluation scores do not change when evaluations are completed online rather than on paper.

• Students leave more (and often more useful) comments on online evaluations compared to paper evaluations.

• Students, faculty, and staff generally view online evaluations more positively than paper evaluations.

The first and third conclusions really interest me for what they mean for faculty development. If response rates prove too low, then the exercise is meaningless, and the data are ripe for misuse (e.g., discounting an instructor’s classroom abilities because the 2 students out of 20 who completed the survey didn’t like the course). The finding about comments is particularly surprising but also very much welcome. Comments from students provide context for the quantitative data and often reveal far more about how students perceived the course than the numbers do.

I also came across this piece (pdf) from Stanford University’s Center for Teaching and Learning (1997). It offers some very good advice about how to interpret and ultimately use teaching evaluations to improve one’s courses and student learning. I was particularly interested in the section on interpreting students’ comments. Learning to interpret the comments productively is an important skill to master, but it’s certainly not easy, particularly because the comments, more so than the numbers, have the power to elate or deflate us. The comments just seem so personal, and increasingly they can read like the worst of some message board flame war between Batman fans and Avengers fanatics.

At my institution, instructors do not receive the “raw data” from their evaluations. Instead, they receive a document with all the numeric responses tallied and averaged and all the comments listed on a separate page. The list of comments may sit next to the numbers, but that proximity actually confuses the matter. In raw form, instructors would see each individual student evaluation: that student’s numbers and that student’s comments. In that form, the instructor can read the comments in relation to specific numbers, using the numerical data to get a sense of that student’s experience and then interpreting the comments in light of it. For example, a student may have rated the course materials low but given the instructor herself high marks for enthusiasm, willingness to help, preparedness, and so on. The comments on that particular evaluation might make this separation between content and delivery clear, but in the aggregated form the instructor won’t be able to see it.
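To make the distinction concrete, here’s a minimal sketch in Python of what the raw, per-student form preserves and the aggregated report discards. The field names, ratings, and comments are hypothetical, not how any institution actually structures its data:

```python
# Hypothetical per-student evaluation records: each record keeps a
# student's numeric ratings together with that same student's comment.
raw_evaluations = [
    {"materials": 2, "enthusiasm": 5, "helpfulness": 5, "preparedness": 5,
     "comment": "Great instructor, but the textbook was useless."},
    {"materials": 4, "enthusiasm": 4, "helpfulness": 3, "preparedness": 4,
     "comment": "Solid course overall."},
]

# The aggregated report: averages on one page, comments on another.
# The link between a comment and the ratings behind it is lost here.
numeric_items = ["materials", "enthusiasm", "helpfulness", "preparedness"]
averages = {item: sum(e[item] for e in raw_evaluations) / len(raw_evaluations)
            for item in numeric_items}
comments = [e["comment"] for e in raw_evaluations]

# With the raw records, a low "materials" score can be read alongside
# high marks for the instructor, making the content/delivery split visible.
for e in raw_evaluations:
    if e["materials"] <= 2 and e["enthusiasm"] >= 4:
        print("Content complaint, not delivery:", e["comment"])
```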

I write all this because I’m taken by the recommendation in the CTL piece to create an interpretive framework for the student comments. Without a framework, comments can look random or scattered. (Afshan Jafar over at IHE has a good post on this phenomenon.) The CTL recommends either categorizing the comments under general headings (e.g., positive or negative) or, better yet, creating a graph that plots the comments according to characteristics of effective teaching (organization, pacing, explanation, rigor, etc.). Categorizing, characterizing, and generally organizing the comments seems like a good way to help instructors gain some authority over the text. It should also help minimize the effect that a few very bad or wildly positive comments have on our perceptions of how a course went.
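Just to sketch what that kind of framework might look like mechanically, here’s a small Python example. The categories come from the CTL’s list of teaching characteristics; the keyword lists and sample comments are hypothetical, and the keyword matching is a crude stand-in for the careful human reading the CTL actually recommends:

```python
from collections import Counter

# Characteristics of effective teaching, per the CTL's suggestion.
# The keyword lists are invented; a real pass would be done by hand.
framework = {
    "organization": ["organized", "structure", "syllabus"],
    "pacing": ["fast", "slow", "rushed", "pace"],
    "explanation": ["clear", "confusing", "explained"],
    "rigor": ["hard", "easy", "challenging", "workload"],
}

def categorize(comment):
    """Return the framework categories a comment seems to touch on."""
    text = comment.lower()
    return [cat for cat, keywords in framework.items()
            if any(kw in text for kw in keywords)] or ["uncategorized"]

comments = [
    "Lectures were clear but the pace felt rushed.",
    "Most organized syllabus I've seen.",
]

# Tallying by category gives a rough, plot-ready distribution.
tally = Counter(cat for c in comments for cat in categorize(c))
print(tally)
```

Even a rough tally like this turns a pile of comments into a distribution, which is exactly what makes one scathing or glowing outlier easier to keep in perspective.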

So I’m lobbying my institutional data colleagues for access to the raw data, and I plan to put together some resources on working with evals for my department.

In my next post, I want to think about the relationship between course evaluations and something I’ve come to call “classroom dynamism,” which sounds purposefully close to “strategic dynamism.” More soon.
