Teaching Evaluations and Program Development

In my last post I wrote about teaching evaluations and students’ identities as learners. My goal was to provide instructors with an alternative way of reading evals: not as judgments on themselves as teachers, but as indicators of how students interpreted the course delivery. I don’t think that’s as fine a line as it might appear. Who I am as a teacher is composed of a great many things, not just my ratings on course evals. Those ratings reflect the tail end of a semester-long dialogue with a specific group of students — an end that has been shaped by the 15-week session and colored by the rather large amount of stress everyone is feeling as things wrap up. The ratings speak to the students’ relationships with the material, the instructor, and — to some extent — each other. Such relationships, like all relationships, are rooted in communication. And successful communication, if we follow Grice’s maxims, requires that people be truthful, relevant, and concise. Course ratings are, perhaps, evidence of the extent to which students perceived these maxims at work in a class.

(Certainly, there is such a thing as willful misunderstanding. Truth and relevance are themselves relative. And concision is often in the ear of the listener.)

I want to consider another useful way of reading course evaluations: as data for program development. Most instructors are rightly concerned about how evals relate directly to them as individuals. But read only in that way, evals simply reinforce what I think is a dangerous notion of teaching as a private affair. I’m not advocating the publication of evaluation data. Instead, I’m trying to think about how program administrators can use eval data to promote effective change in curricula. If individual course ratings are interpretations of a specific course’s delivery, then all the course ratings from a particular program represent interpretations of the curriculum. (Provided a program uses a common curriculum, which my department does.)

I’ll have to think more about how the quantitative ratings of sections can be used in program development. For this post, I’m interested in the qualitative analysis of written comments on evaluations. As an outside reader of evals, it’s easy for me to give little weight to the types of comments that rattle me when I see them on my own evals: “he’s boring,” “arrogant,” “way too strict,” etc. As I wrote earlier, I read for trends, and when I’m reading an entire program’s evals, I read for trends across sections. These cross-section trends speak to how students perceived and experienced the writing curriculum as it was filtered through their instructors’ documents, interactions, assessments, etc. As I read for these trends, I’m looking for two specific things (a rough sketch of how the tallying might work follows the list):

  • What do students seem most frustrated, confused, or mistaken about, in terms of the curriculum or its delivery?
  • What do students say they have learned (or not learned) from the course?
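To make the trend-reading concrete, here is a minimal sketch, in Python, of how comments might be tagged and tallied by section. Everything in it is hypothetical: the theme keywords, the (section_id, comment_text) data layout, and the function names are illustrative stand-ins for whatever a program actually collects, and keyword matching is only a crude proxy for real qualitative coding.

    from collections import Counter, defaultdict

    # Hypothetical theme keywords. A real coding scheme would emerge from
    # reading the comments themselves, not be fixed in advance like this.
    THEMES = {
        "portfolio_weight": ["portfolio", "45%", "weighted"],
        "confusing_assignments": ["confusing", "unclear", "didn't understand"],
    }

    def tag_comment(comment):
        """Return the set of themes whose keywords appear in a comment."""
        text = comment.lower()
        return {theme for theme, words in THEMES.items()
                if any(word in text for word in words)}

    def themes_by_section(comments):
        """comments: iterable of (section_id, comment_text) pairs.
        Returns a Counter of theme mentions for each section."""
        counts = defaultdict(Counter)
        for section_id, text in comments:
            counts[section_id].update(tag_comment(text))
        return counts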

The data from these questions helps me understand not only what curricular components are vexing students, but also which might be vexing instructors. Again, if evals reveal interpretations, then it’s possible to use them to reveal what concepts or processes instructors are having difficulty explaining or implementing. For example, if many students across a wide range of sections (and instructors) were to complain that the heavy weight put on the final portfolio grade (45%) is unfair, I’d think we’d need to review our grading percentages as a program. If, however, I were to see that type of comment confined to a narrow range of sections and instructors, I’d think those sections had some kind of communication breakdown. From there, I’d begin to plan faculty development activities that could help all instructors understand and explain (and even work to revise) this particular programmatic feature.
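The wide-versus-narrow distinction above could be checked against those tallies with something like the following. The 50% threshold is an arbitrary placeholder, and the sketch assumes the themes_by_section output from the earlier snippet; the actual judgment call would involve rereading the comments, not just counting them.

    def classify_theme(theme, counts, spread_threshold=0.5):
        """counts: the per-section tallies from themes_by_section above.
        A theme voiced in more than spread_threshold of all sections
        suggests a programmatic issue; one confined to a few sections
        suggests a local communication breakdown."""
        sections_with_theme = sum(1 for c in counts.values() if c[theme] > 0)
        spread = sections_with_theme / max(len(counts), 1)
        if spread > spread_threshold:
            return "review the programmatic feature"
        return "look for a section-level communication breakdown"

Run on the portfolio-weight example, a complaint surfacing in most sections would come back as a programmatic review, while the same complaint clustered in two or three sections would point toward faculty development for those instructors.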

I’ll write more later about what I’m learning specifically from the current batch of evals. I’ll end on this note: Evals are just one kind of information, and information carries no value until it is put to use. Program administrators are in strong positions to use eval data in productive, community-building ways. I want to keep working out how to do just that.
