On-Line Education and Residential Colleges

I am certainly not the only person writing about Mark Edmundson’s op-ed in the New York Times: “The Trouble with Online Education.” (Indeed, here’s Cathy Davidson’s response.) Edmundson’s piece will rightly get a lot of attention for what it says about the shortcomings of online college courses, particularly MOOCs, but I’m much more interested in what he writes about successfully engaging students in our regular classrooms. I read Edmundson’s piece in the context of this analysis of on-line education by Richard Perez-Pena in the New York Times, published just a day earlier. Perez-Pena offers this point about the types of colleges that are near and dear to me:

Residential colleges already attract far less than half of the higher education market. Most enrollment and nearly all growth in higher education is in less costly options that let students balance classes with work and family: commuter colleges, night schools, online universities.

Most experts say there will always be students who want to live on campus, interacting with professors and fellow students, particularly at prestigious universities. But as a share of the college market, that is likely to be a shrinking niche.

One of his sources suggests that of this particular niche, only the elite schools with large endowments will survive. Other residential colleges — those with small endowments that rely almost solely on tuition dollars for operating costs — are, according to the author, likely to go away.

Two things make me question Perez-Pena’s analysis. The second is Edmundson’s argument about the value of face-to-face teaching, and I’ll get to that in a moment. The first is my different understanding of who goes to college in the first place. In terms of overall market niche, it’s probably true that the non-elite residential college niche is shrinking — but it may not be true in terms of real numbers. As more people go to college generally, enrollments are likely to grow at non-traditional institutions. A larger college-going population naturally includes a higher number of non-traditional students — parents, vets, working-class folks, etc. These people will flood — indeed, are already flooding — the college market, but they were never looking at residential colleges in the first place.

Residential colleges are the engines for credentialing and networking the middle and upper classes. I suppose they educate those classes, too, but right now their biggest value lies in how well they get students into post-degree positions. 30,000-student MOOCs aren’t going to appeal to kids (and parents) looking for schools that will give them access to the cultural cachet and connections that residential colleges offer. We can talk all we want about folks demanding more economical education alternatives, but when faced with the option of giving a child an education that happens in the isolation of on-line courses or one that provides the extras of social networks, internships, and general people skills, most middle- and upper-class folks will choose the latter. And if the kids aren’t too bright or ambitious, those lower-tier residential schools that provide good amenities and decent contacts are going to look quite good, regardless of the bill.

My point here is that Perez-Pena is assuming way too rational and equal a market for higher education. I suggest we’re a long way off from middle- and upper-class parents being comfortable planting their kids in front of the home computer instead of dropping them off at a nicely furnished residence hall. So to the extent that the on-line market is booming, I think it’s more of a threat to mid-level public universities, which already find their funding lacking. Folks who are planning to attend North-by-Northwest State University at Middle City might be more enticed to take on-line courses than those heading to Old Dead Rich Guy College. I’m not taking a swipe at the quality of public universities. I’m just making a point about the allure of cultural capital to the moneyed and even not-quite-so-moneyed classes.

My second point is far less cynical and draws directly from Edmundson’s argument about the value of classroom interaction. Fairly or not, Edmundson characterizes on-line education as sterile and lonely, even in small-ish courses where the instructor and students interact regularly through e-mail, messaging, Skype, etc. Time in class is a happening, an event; it occurs only once and has a vitality to it that on-line technologies simply can’t recreate. Here’s Edmundson’s metaphor and explanation:

Every memorable class is a bit like a jazz composition. There is the basic melody that you work with. It is defined by the syllabus. But there is also a considerable measure of improvisation against that disciplining background.

… I think that the best … lecturers are highly adept at reading their audiences. They use practical means to do this — tests and quizzes, papers and evaluations. But they also deploy something tantamount to artistry. They are superb at sensing the mood of a room. They have a sort of pedagogical sixth sense. They feel it when the class is engaged and when it slips off. And they do something about it.

Some might criticize Edmundson for fetishizing the classroom and romanticizing the instructor. I’d suggest instead that he’s prioritizing the social aspect of education and crediting the instructor for skills that go beyond simple content knowledge.

It’s these skills — the abilities to sense the mood of the room, to alter one’s plan in midstream, to respond to spoken and physical feedback — that I’m becoming more and more interested in, just as so much of my field seems to be moving further away from any concern about them at all. Good teaching isn’t just about covering the material or completing the lesson. It’s about creating moments that stick in everyone’s mind — instructor and students.

Like Edmundson, I don’t think on-line education can deliver these moments. Even in real-time settings, the screens and interfaces simply dehumanize the effort. Students for whom education is more than just content mastery — i.e., those who appreciate the process of education — are going to continue to be drawn to colleges that offer the best opportunities for memorable moments.

Writing this post reminds me that I have to return to the concept of classroom dynamism . . .

 


When the Students Say I’m Boring

I’m back from vacation and starting to focus my attention on my department’s August orientation and retreat. Doing so brings me again to the subject of course evaluations. I’m fixated on evals because they provide the only regular feedback instructors receive about their teaching, which is ironic given that we’re primarily a teaching institution. (My experiences at several liberal arts colleges have all been the same, frankly: good teaching is just assumed to be happening, and so very little support is given to faculty development and mentoring.) Because evals are often the only form of feedback, they take on rather mythic proportions in our professional lives. I want to knock those proportions down a bit by adding other forms of feedback (like peer observations and teaching demos), but I also want to help instructors make the best use of the eval data.

Today I’m thinking about what the written comments can tell an instructor about how students are experiencing his or her courses. In my previous post, I wrote about the benefit of organizing the written comments to look for trends. Today, I want to think about what to do in light of one very common trend across the comments, regardless of course: students calling the course or instructor boring. Now, really, these are two different things, and noting that difference is very, very important. Students can be more discerning than we sometimes give them credit for. They can separate the purpose and content of a course from its delivery. Of course, they can sometimes do that too well, creating too great a separation between material and form. Many times, however, they make the distinction as a way of either buoying or sinking an instructor.

For example, I see a lot of comments from our first-year writing courses that say, in effect, “Mr. So-and-So tried to engage us in class, but it’s hard to make English writing interesting.” Likewise, I see many comments that say, “The papers were interesting but Ms. Whatsit mostly wasted class time.” The first comment takes up Mr. So-and-So’s case, arguing that, try as he might, he just couldn’t escape the terrible, no-good purpose and content of the course. The second comment takes a stab at Ms. Whatsit, contending that, if not for her, the course material would have shone brighter.

It’s certainly easy — and sometimes correct — to dismiss students’ complaints about boredom as symptomatic of their short attention spans, general disinterest in learning, Facebook addictions, etc. And it’s equally easy — and equally correct on occasion — to think that our colleagues who are never called boring simply spend their class time entertaining their students, playing to the lowest common interest. But let’s give the students and our exciting colleagues the benefit of the doubt for a moment. Imagine that what the students are saying when they offer the “boring” critique is that their experiences in the course didn’t jibe with their expectations. And imagine now that such expectations are indeed the responsibility of both parties — students and instructors. What I’m getting at here is that boredom is itself an expectation and a habit of mind. We expect the waiting room at a doctor’s office to be boring, so we plan against it. Likewise, when I’m confronted by work that is either well below or well above my ability level, I’m bored in a different way. I have a responsibility to work against my boredom in ways that speak to my interests. People I’m trying to work with have the responsibility to present my tasks and my roles in ways that make sense to me and that take advantage of my talents and interests.

I think the “boring” critique is a sign of a breakdown in communication between the instructor and the students. It’s true that not every course will be of major interest to every student, but perhaps student boredom can be reduced — or repurposed — through meta-conversations about the purpose of the course, its assignments, and its activities. Students disconnect from a class when they feel no ownership of it, when they feel like their investment in the course will provide little return, and when they can’t see the connection among course discussions, in-class activities, and major assignments. I’ve found that stopping a class session and engaging students in a critique of the discussion/activity is a productive way to get their attention and energy refocused on the task at hand. If conversation drops off, for example, I’ll stop the discussion and ask students to write for a few moments about why the discussion seems to be going so slowly. Often I get what I expect: students haven’t done the reading or they didn’t understand something, etc. But sometimes I get interesting notes: students have lost track of why we’re discussing something in the first place, or they explain that they think they have a good understanding of the topic and want to move on (at which point I can give them a quick activity to demonstrate that understanding and then, indeed, move on).

I’d say that the great majority of my teaching is rooted in meta-cognitive awareness. I try to explain to my students frequently what’s going on and — most importantly — why it’s going on in the form that it is. If it seems like an unproductive form, I make changes on the fly. The advice I’d give faculty who see the boring critique trend across their evaluation comments is to engage in meta-discussion more with their students. It gives instructors a really good sense of how students are experiencing the course on a day-to-day basis, and it provides opportunities to debunk wrong ideas about the course or to explain a key facet in more detail.

How we teach is as much what our courses are about as what we’re covering. No instructor or course is inherently boring, as “boring” is simply a perception rooted in a social context. Alter that context and we can alter the perception.

I’ll try to work on some more specific teaching strategies in another post.

Course Evaluations: Words and Numbers

I just came across a very good review of the literature (pdf) on on-line course evaluations by Jessica Wode and Jonathan Keiser at Columbia College Chicago. Here is one pertinent set of conclusions:

Online vs. paper course evaluations

• The one consistent disadvantage to online course evaluations is their low response rate; using reminder e-mails from instructors and messages posted on online class discussions can significantly increase response rates.

• Evaluation scores do not change when evaluations are completed online rather than on paper.

• Students leave more (and often more useful) comments on online evaluations compared to paper evaluations.

• Students, faculty, and staff generally view online evaluations more positively than paper evaluations.

The first and third conclusions really interest me for what they mean for faculty development. If response rates prove to be too low, then the exercise is meaningless — and the data ripe for being misused (e.g., discounting an instructor’s abilities in the classroom because the 2 students out of 20 who completed the survey didn’t like the course). The finding about comments is particularly surprising but also very much welcomed. Comments from students provide context for the quantitative data and often provide far more information about how students perceived the course than the numbers do.

I also came across this piece (pdf) from Stanford University’s Center for Teaching and Learning (1997). It offers some very good advice about how to interpret and ultimately use teaching evaluations to improve one’s courses and student learning. I was particularly interested in the section on interpreting students’ comments. Learning to interpret the comments productively is an important skill to master, but it’s certainly not easy, particularly because the comments — more so than the numbers — have the power to elate or deflate us. The comments just seem so personal, and more and more they can read like the worst of some message board flame war between Batman fans and Avengers fanatics.

At my institution, instructors do not receive the “raw data” from their evaluations. Instead, they receive a document that has all the numeric responses tallied and averaged and all the comments listed on a separate page. The list of comments may be juxtaposed to the numbers, but that proximity actually confuses the matter. In the raw form, instructors would see each individual student evaluation — that student’s numbers and comments — and could place each comment in relation to specific numbers. An instructor can use the numerical data to get a sense of that student’s experience and then interpret the comments in light of that sense. For example, a student may have rated the course materials low but given the instructor herself high marks for enthusiasm, willingness to help, preparedness, etc. The comments on that particular evaluation might make this separation between content and delivery clear. But in the aggregated form, the instructor won’t be able to see it.
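To make that difference concrete, here’s a minimal sketch in Python, with invented ratings and field names rather than our actual evaluation form, of what aggregation throws away: in the raw form each comment stays attached to one student’s numbers, while the aggregated report reduces the numbers to averages and strips the comments of that context.

```python
# Hypothetical field names and ratings, for illustration only.
raw_evals = [
    {"materials": 2, "enthusiasm": 5, "helpfulness": 5,
     "comment": "Readings felt dated, but she was always willing to help."},
    {"materials": 4, "enthusiasm": 4, "helpfulness": 4,
     "comment": "Good mix of papers and discussion."},
    {"materials": 3, "enthusiasm": 2, "helpfulness": 3,
     "comment": "Class time often felt wasted."},
]

# Raw form: each comment sits next to that student's numbers.
for ev in raw_evals:
    print(ev["materials"], ev["enthusiasm"], ev["helpfulness"], "--", ev["comment"])

# Aggregated form: averages on one page, comments on another,
# so the pairing between a student's numbers and words is gone.
averages = {
    key: sum(ev[key] for ev in raw_evals) / len(raw_evals)
    for key in ("materials", "enthusiasm", "helpfulness")
}
comments = [ev["comment"] for ev in raw_evals]
print(averages)
print(comments)
```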

I write all this because I’m taken by the recommendation in the CTL piece to create an interpretive framework for the student comments. Without a framework, comments can look random or scattered. (Afshan Jafar over at IHE has a good post on this phenomenon.) The CTL recommends either categorizing the comments under general headings (e.g., positive or negative) or, better yet, creating a graph that plots the comments according to characteristics of effective teaching (organization, pacing, explanation, rigor, etc.). Categorizing, characterizing, and generally organizing the comments seems like a good way to help instructors gain some authority over the text. It should also help minimize the effect that a few very negative or wildly positive comments have on our perceptions of how a course went.
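Here’s a rough sketch, again in Python and with hypothetical categories and keywords rather than anything the CTL prescribes, of what the simplest version of that interpretive framework might look like: sort comments into buckets named for characteristics of effective teaching and count what lands where.

```python
from collections import defaultdict

# Hypothetical categories and keywords -- a starting point, not the CTL's list.
CATEGORIES = {
    "organization": ["organized", "scattered", "syllabus"],
    "pacing": ["slow", "rushed", "dragged", "boring"],
    "explanation": ["explained", "clear", "confusing"],
    "rigor": ["challenging", "too easy", "too much work"],
}

def categorize(comments):
    """Sort each comment into every category whose keywords it mentions."""
    buckets = defaultdict(list)
    for comment in comments:
        lowered = comment.lower()
        hits = [cat for cat, words in CATEGORIES.items()
                if any(word in lowered for word in words)]
        for cat in hits or ["uncategorized"]:
            buckets[cat].append(comment)
    return buckets

sample = [
    "Lectures dragged and the pace was slow.",
    "He explained the assignments clearly.",
    "Way too much work for a gen-ed course.",
    "She's boring.",
]
for cat, items in categorize(sample).items():
    print(f"{cat}: {len(items)} comment(s)")
```

Even a crude keyword pass like this gives an instructor something to argue with: the counts per category, not the single stinging comment, become the starting point for interpretation.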

So I’m lobbying my institutional data colleagues for access to the raw data and I plan to put together some resources on working with evals for my department.

In my next post, I want to think about the relationship between course evaluations and something I’ve come to call “classroom dynamism,” which sounds purposefully close to “strategic dynamism.” More soon.

Teaching Evaluations and Program Development

In my last post I wrote about teaching evaluations and students’ identities as learners. My goal was to provide instructors with an alternative way of reading evals: not as judgments on themselves as teachers, but as indicators of how students interpreted the course delivery. I don’t think that’s as fine a line as it might appear. Who I am as a teacher is composed of a great many things, not just my ratings on course evals. Those ratings reflect the tail end of a semester-long dialogue with a specific group of students — an end that has been shaped by the 15-week session and colored by the rather large amount of stress everyone is feeling as things wrap up. The ratings speak to the students’ relationships with the material, the instructor, and — to some extent — each other. Such relationships, like all relationships, are rooted in communication. And to be successful, communication, to follow Grice’s maxims, requires that people be truthful, relevant, and concise. Course ratings are, perhaps, evidence of the extent to which students perceived these maxims at work in a class.

(Certainly, there is such a thing as willful misunderstanding. Truth and relevance are themselves relative. And concision is often in the ear of the listener.)

I want to consider another useful way of reading course evaluations: as data for program development. Most instructors are rightly concerned about how evals relate directly to them as individuals. But taken only that way, evals simply reinforce what I think is a dangerous notion of teaching as a private affair. I’m not advocating the publication of evaluation data. Instead, I’m trying to think about how program administrators can use eval data to promote effective change in curricula. If individual course ratings are interpretations of a specific course’s delivery, then all the course ratings from a particular program represent interpretations of the curriculum. (Provided a program uses a common curriculum, which my department does.)

I’ll have to think more about how the quantitative ratings of sections can be used in program development. For this post, I’m interested in the qualitative analysis of written comments on evaluations. As an outside reader of evals, it’s easy for me to give little weight to the types of comments that rattle me when I see them on my own evals: “he’s boring,” “arrogant,” “way too strict,” etc. As I wrote earlier, I read for trends, and when I’m reading an entire program’s evals, I read for trends across sections. These cross-section trends speak to how students perceived and experienced the writing curriculum as it was filtered through their instructors’ documents, interactions, assessments, etc. As I read for these trends, I’m looking for two specific things:

  • What do students seem most frustrated, confused, or mistaken about, in terms of the curriculum or its delivery?
  • What do students say they have learned (or not learned) from the course?

The data from these questions helps me understand not only what curricular components are vexing students, but also which might be vexing instructors. Again, if evals reveal interpretations, then it’s possible to use them to reveal what concepts or processes instructors are having difficulty explaining or implementing. For example, if many students across a wide range of sections (and instructors) were to complain that the heavy weight put on the final portfolio grade (45%) is unfair, I’d think we’d need to review our grading percentages as a program. If, however, I were to see that type of comment confined to a narrow range of sections and instructors, I’d think those sections had some kind of communication breakdown. From there, I’d begin to plan faculty development activities that could help all instructors understand and explain (and even work to revise) this particular programmatic feature.
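A minimal sketch of that cross-section reading, using invented section labels and comments, might look something like this: tag each comment with its section, then check whether a given complaint is spread across the program or clustered in a handful of sections.

```python
from collections import Counter

# Invented (section, comment) pairs for illustration only.
tagged_comments = [
    ("ENG101-01", "45% on the portfolio is way too much"),
    ("ENG101-01", "the portfolio grade felt unfair"),
    ("ENG101-04", "never understood why the portfolio counted so heavily"),
    ("ENG101-07", "liked the readings; the workshops were useful"),
]

portfolio_complaints = Counter(
    section for section, comment in tagged_comments
    if "portfolio" in comment.lower()
)

total_sections = 12  # hypothetical number of sections in the program
print(portfolio_complaints)
print(f"{len(portfolio_complaints)} of {total_sections} sections raised it")
# Spread across most sections: review the program's grading percentages.
# Confined to a few: look for a communication breakdown in those sections.
```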

I’ll write more later about what I’m learning specifically from the current batch of evals. I’ll end on this note: Evals are just one kind of information, and information carries no value until it is put to use. Program administrators are in strong positions to use eval data in productive, community-building ways. I want to keep working out how to do just that.

Teaching Evaluations and Faculty Development

Spring semester teaching evaluations were released last week, and because of my position as writing program director and incoming department chair, I get to see all of them. It’s an eye-opening experience, but perhaps not for reasons you’d first think. It’s true that the evaluations give you a glimpse (and, really, just a blurry glimpse) of your colleagues’ classrooms and teaching styles. But it’s not the individual ratings I’m particularly interested in — at least not now, some seven months before we do personnel evaluations. Instead, I’m interested in what the evaluations as a group tell us about the writing program, the gen-ed literature courses, and the department as a whole.

I’ve done some reading in the scholarship on Student Evaluations of Teaching (SET) recently, and it has led me to two useful and related findings, as well as a wealth of advice about how to respond to evals as an administrator and mentor. First, the findings. I’ve been mulling the conclusions from economists Paul Isley and Harinder Singh of Grand Valley State University about the relationship between grades and quantitative evaluations. In their article (JSTOR access needed), they confirm previous findings that higher teaching evaluation scores are related to students’ having higher expectations for their final course grades. They go on, however, to argue that the differences between incoming students’ GPAs and their expected course grades have a greater effect on evaluation scores. In other words, if students with high GPAs think they are going to get low grades in a course, they are more likely to rate the instructor low on the evals, and if students with low GPAs think they are going to get high grades, they rate the instructor higher. Such findings are important for instructors of required general-education courses (like composition and intro to literature), which often enroll students with little interest or ability in the specific subjects.
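To illustrate the expectation gap they describe (a toy example with invented numbers, not Isley and Singh’s actual model), the arithmetic looks something like this:

```python
# Toy numbers, not Isley and Singh's data or model: the "gap" is simply
# expected course grade minus incoming GPA, on a 4.0 scale.
students = [
    {"gpa": 3.8, "expected": 3.0},  # gap -0.8: more likely to rate the course low
    {"gpa": 2.5, "expected": 4.0},  # gap +1.5: more likely to rate the course high
    {"gpa": 3.2, "expected": 3.3},  # gap +0.1: roughly neutral
]

for s in students:
    gap = s["expected"] - s["gpa"]
    print(f"GPA {s['gpa']:.1f}, expects {s['expected']:.1f}, gap {gap:+.1f}")
```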

Isley and Singh’s findings jibe interestingly with a conclusion drawn by John Centra in his analysis (JSTOR) of higher grades and evaluation scores. According to Centra, evaluations are highest when students perceive “just-right” levels of difficulty, rigor, and learning expectations (I’m simplifying his analysis quite a bit, so please do look at the article). Imagine honors students, for example, who feel a gen-ed humanities course is being pitched too low for them. They’re likely happy with their A’s but are also likely to give lower evaluation scores. Centra’s and Isley and Singh’s analyses expose a central psychological truth about teaching evaluations: they are deeply rooted in each individual student’s own educational context and history. Evals tell us as much about students’ identities as learners as they do about teacher effectiveness. Maybe more so.

What’s heartening about this work — and very useful in terms of faculty development — is that the numbers from the evals might suggest the extent to which students understand and appreciate what is being asked of them in a course. I’ve long contended that a great many student complaints are rooted in some kind of communication breakdown between students and teachers. It’s possible to read low evaluation scores as representative of that breakdown. Now, I’m not suggesting that one or two low evaluations mean a teacher can’t communicate effectively with students. We all know that lots of stuff happens during a semester. Instead, I’m suggesting that low evals mean the instructor and students weren’t really on the same page for that particular course. The next step, then, is for the instructor to reflect on what might have led to that.

And here is where the written comments on evals matter. Those of us who have taught know the thrill and agony of the written comments. We also know how the one negative comment is usually the one we fixate on the most. Comments can cut to the bone, and they often seem like the least fair aspect of the entire evaluation process. After all, the student evals are anonymous, and students don’t have to take responsibility for what they write. Nevertheless, the comments can help instructors understand how their courses were interpreted by the students. And if instructors are unhappy with those interpretations, then they can use the comments to help revise their presentations and deliveries. (Check out Dean Dad’s take on this over at Inside Higher Education.)

It’s important, though, to recognize that not all comments are created equal. Comments like “She’s hot,” “This class sucked,” and “We shouldn’t have to take this class” are, to my mind, more connected to students’ immaturity and frustrations than to their perceptions of what they were asked to do. When I read comments, I look for patterns and trends. Do multiple students mention the instructor’s classroom demeanor? Do they remark on how frequently the instructor was late or how slow he was to return papers? One or two “She’s boring” comments don’t get my attention — but seven or eight do. That many comments might mean the instructor isn’t pitching the class at the right level. (I know many people will argue that the “He’s boring” comment speaks mostly to students’ short attention spans, but I’m not so sure. I’ll try to write more about that later.) In general, I look for patterns and trends that speak to how the students interpreted the difficulty, rigor, and expectations for the course. I see a lot of comments on low-scoring evals that suggest the instructors simply didn’t spend a lot of time articulating what students were to do and, more importantly, why.

The why part of teaching is perhaps the most important. Students want to know that what they’re being asked to do means something. When they can’t relate their required activities to their assessments and to their own learning, they get frustrated. In my meetings with instructors about evaluations, I try to tell them that they need to be consistently transparent in their teaching, even if that means stopping class activities for a moment to explain why something is happening. I think the more meta we can be about these things, the better.