The End(s) of Education

Higher Ed in a World Gone Mad

On-Line Education and Residential Colleges

 

I am certainly not the only person writing about Mark Edmundson's op-ed in the New York Times: "The Trouble with Online Education." (Indeed, here's Cathy Davidson's response.) Edmundson's piece will rightly get a lot of attention for what it says about the shortcomings of online college courses, particularly MOOCs, but I'm much more interested in what he writes about successfully engaging students in our regular classrooms. I read Edmundson's piece in the context of this analysis of on-line education by Richard Perez-Pena in the New York Times, published just a day earlier. Perez-Pena offers this point about the types of colleges that are near and dear to me:

Residential colleges already attract far less than half of the higher education market. Most enrollment and nearly all growth in higher education is in less costly options that let students balance classes with work and family: commuter colleges, night schools, online universities.

Most experts say there will always be students who want to live on campus, interacting with professors and fellow students, particularly at prestigious universities. But as a share of the college market, that is likely to be a shrinking niche.

One of his sources suggests that within this particular niche, only the elite schools with large endowments will survive. Other residential colleges — those with small endowments that rely almost solely on tuition dollars for operating costs — are, according to the author, likely to go away.

Two things make me question Perez-Pena's analysis. The second is Edmundson's argument about the value of face-to-face teaching, and I'll get to that in a moment. The first is my different understanding of who goes to college in the first place. In terms of overall market niche, it's probably true that the non-elite residential college niche is shrinking — but it may not be true in terms of real numbers. As more people go to college generally, enrollments are likely to grow at non-traditional institutions. A larger college-going population naturally includes a higher number of non-traditional students — parents, vets, working-class folks, etc. These people will flood — indeed, are flooding — the college market, but they were never looking at residential colleges in the first place.

Residential colleges are the engines for credentialing and networking the middle and upper classes. I suppose they educate those classes, too, but right now their biggest value lies in how well they get students into post-degree positions. 30,000-student MOOCs aren't going to appeal to kids (and parents) looking for schools that will give them access to the cultural cachet and connections that residential colleges offer. We can talk all we want about folks demanding more economical education alternatives, but when faced with the option of giving a child an education that happens in the isolation of on-line courses or one that provides the extras of social networks, internships, and general people skills, most middle- and upper-class folks will choose the latter. And if the kids aren't too bright or ambitious, those lower-tier residential schools that provide good amenities and decent contacts are going to look quite good, regardless of the bill.

My point here is that Perez-Pena is assuming way too rational and equal a market for higher education. I suggest we're a long way off from middle- and upper-class parents being comfortable planting their kids in front of the home computer instead of dropping them off at a nicely furnished residence hall. So to the extent that the on-line market is booming, I think it's more of a threat to mid-level public universities, which already find their funding lacking. Folks who are planning to attend North-by-Northwest State University at Middle City might be more enticed to take on-line courses than those heading to Old Dead Rich Guy College. I'm not taking a swipe at the quality of public universities. I'm just making a point about the allure of cultural capital to the moneyed and even not-quite-so-moneyed classes.

My second point is far less cynical and draws directly from Edmundson's argument about the value of classroom interaction. Fairly or not, Edmundson characterizes on-line education as sterile and lonely, even in small-ish courses where the instructor and students interact regularly through e-mail, messaging, Skype, etc. Time in class is a happening, an event; it occurs only once and has a vitality to it that on-line technologies simply can't recreate. Here's Edmundson's metaphor and explanation:

Every memorable class is a bit like a jazz composition. There is the basic melody that you work with. It is defined by the syllabus. But there is also a considerable measure of improvisation against that disciplining background.

… I think that the best … lecturers are highly adept at reading their audiences. They use practical means to do this — tests and quizzes, papers and evaluations. But they also deploy something tantamount to artistry. They are superb at sensing the mood of a room. They have a sort of pedagogical sixth sense. They feel it when the class is engaged and when it slips off. And they do something about it.

Some might criticize Edmundson for fetishizing the classroom and romanticizing the instructor. I’d suggest instead that he’s prioritizing the social aspect of education and crediting the instructor for skills that go beyond simple content knowledge.

It’s these skills — the abilities to sense the mood of the room, to alter one’s plan in midstream, to respond to spoken and physical feedback — that I’m becoming more and more interested in, just as so much of my field seems to be moving further away from any concern about them at all. Good teaching isn’t just about covering the material or completing the lesson. It’s about creating moments that stick in everyone’s mind — instructor and students.

Like Edmundson, I don’t think on-line education can deliver these moments. Even in real-time settings, the screens and interfaces simply dehumanize the effort. Students for whom education is more than just content mastery — i.e., those who appreciate the process of education — are going to continue to be drawn to colleges that offer the best opportunities for memorable moments.

Writing this post reminds me that I have to return to the concept of classroom dynamism . . .

 

 

When the Students Say I’m Boring

I'm back from vacation and starting to focus my attention on my department's August orientation and retreat. Doing so brings me again to the subject of course evaluations. I'm fixated on evals because they provide the only regular feedback instructors receive about their teaching, which is ironic given that we're primarily a teaching institution. (My experiences at several liberal arts colleges have all been the same, frankly: good teaching is just assumed to be happening, and so very little support is given to faculty development and mentoring.) Because evals are often the only form of feedback, they take on rather mythic proportions in our professional lives. I want to knock that proportion down a bit by adding other forms of feedback (like peer observations and teaching demos), but I also want to help instructors make the best use of the eval data.

Today I'm thinking about what the written comments can tell an instructor about how students are experiencing his or her courses. In my previous post, I wrote about the benefit of organizing the written comments to look for trends. Today, I want to think about what to do in light of one very common trend across the comments, regardless of course: students calling the course or instructor boring. Now really, these are two different things, and noting that difference is very important. Students can be more discerning than we sometimes give them credit for. They can separate the purpose and content of a course from its delivery. Of course, they can sometimes do that too well, creating too great a separation between material and form. Many times, however, they make the distinction as a way of either buoying or sinking an instructor.

For example, I see a lot of comments from our first-year writing courses that say, in effect, “Mr. So-and-So tried to engage us in class, but it’s hard to make English writing interesting.” Likewise, I see many comments that say, “The papers were interesting but Ms. Whatsit mostly wasted class time.” The first comment takes up Mr. So-and-So’s case, arguing that, try as he might, he just couldn’t escape the terrible, no-good purpose and content of the course. The second comment takes a stab at Ms. Whatsit, contending that, if not for her, the course material would have shined brighter.

It's certainly easy — and sometimes correct — to dismiss students' complaints about boredom as symptomatic of their short attention spans, general disinterest in learning, Facebook addictions, etc. And it's equally easy — and equally correct on occasion — to think that our colleagues who are never called boring simply spend their class time entertaining their students, playing to the lowest common interest. But let's give the students and our exciting colleagues the benefit of the doubt for a moment. Imagine that what the students are saying when they offer the "boring" critique is that their experiences in the course didn't jibe with their expectations. And imagine now that such expectations are indeed the responsibility of both parties — students and instructors. What I'm getting at here is that boredom is itself an expectation and a habit of mind. We expect the waiting room at a doctor's office to be boring, so we plan against it. Likewise, when I'm confronted by work that is either well below or well above my ability level, I'm bored in a different way. I have a responsibility to work against my boredom in ways that speak to my interests. People I'm trying to work with have the responsibility to present my tasks and my roles in ways that make sense to me and that take advantage of my talents and interests.

I think the "boring" critique is a sign of a breakdown in communication between the instructor and the students. It's true that not every course will be of major interest to every student, but perhaps student boredom can be reduced — or repurposed — through meta-conversations about the purpose of the course, its assignments, and its activities. Students disconnect from a class when they feel no ownership of it, when they feel like their investment in the course will provide little return, and when they can't see the connection among course discussions, in-class activities, and major assignments. I've found that stopping a class session and engaging students in a critique of the discussion or activity is a productive way to get their attention and energy refocused on the task at hand. If conversation drops off, for example, I'll stop the discussion and ask students to write for a few moments about why the discussion seems to be going so slowly. Often I get what I expect: students haven't done the reading or they didn't understand something, etc. But sometimes I get interesting notes, like when students have lost track of why we're discussing something in the first place, or when they explain that they think they have a good understanding of the topic and want to move on (at which point I can give them a quick activity to demonstrate that understanding and then, indeed, move on).

I'd say that the great majority of my teaching is rooted in meta-cognitive awareness. I try to explain to my students frequently what's going on and — most importantly — why it's going on in the form that it is. If it seems like an unproductive form, I make changes on the fly. The advice I'd give faculty who see the boring critique trend across their evaluation comments is to engage in meta-discussion more with their students. It gives instructors a really good sense of how students are experiencing the course on a day-to-day basis, and it provides opportunities to debunk wrong ideas about the course or to explain a key facet in more detail.

How we teach is as much what our courses are about as what we’re covering. No instructor or course is inherently boring, as “boring” is simply a perception rooted in a social context. Alter that context and we can alter the perception.

I’ll try to work on some more specific teaching strategies in another post.

Course Evaluations: Words and Numbers

I just came across a very good review of the literature (pdf) on on-line course evaluations by Jessica Wode and Jonathan Keiser at Columbia College Chicago. Here is one pertinent set of conclusions:

Online vs. paper course evaluations

• The one consistent disadvantage to online course evaluations is their low response rate; using reminder e-mails from instructors and messages posted on online class discussions can significantly increase response rates.

• Evaluation scores do not change when evaluations are completed online rather than on paper.

• Students leave more (and often more useful) comments on online evaluations compared to paper evaluations.

• Students, faculty, and staff generally view online evaluations more positively than paper evaluations.

The first and third conclusions really interest me for what they mean for faculty development. If response rates prove to be too low, then the exercise is meaningless — and the data ripe for being misused (e.g., discounting an instructor’s abilities in the classroom because the 2 students out of 20 who completed the survey didn’t like the course). The finding about comments is particularly surprising but also very much welcomed. Comments from students provide context for the quantitative data and often provide far more information about how students perceived the course than the numbers do.

I also came across this piece (pdf) from Stanford University’s Center for Teaching and Learning (1997). It offers some very good advice about how to interpret and ultimately use teaching evaluations to improve one’s courses and student learning. I was particularly interested in the section on interpreting students’ comments. Learning to interpret the comments productively is an important skill to master, but it’s certainly not easy, particularly because the comments — more so than the numbers — have the power to elate or deflate us. The comments just seem so personal, and more and more they can read like the worst of some message board flame war between Batman fans and Avengers fanatics.

At my institution, instructors do not receive the "raw data" from their evaluations. Instead, they receive a document that has all the numeric responses tallied and averaged and all the comments listed on a separate page. The list of comments may be juxtaposed to the numbers, but that proximity actually confuses the matter. In the raw form, instructors would see each individual student evaluation — that student's numbers and comments. In that form, the instructor can place the comments in relation to specific numbers: an instructor can use the numerical data to get a sense of that student's experience and then interpret the comments in light of that sense. For example, a student may have rated the course materials low but given the instructor herself high marks for enthusiasm, willingness to help, preparedness, etc. The comments on that particular evaluation might make this separation between content and delivery clear. But in the aggregated form, the instructor won't be able to see it.
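The raw-versus-aggregated distinction can be illustrated with a toy sketch (the field names, ratings, and comments here are invented for illustration, not drawn from any real eval form):

```python
from statistics import mean

# Raw form: each record pairs one student's numbers with that same
# student's written comment.
raw_evals = [
    {"materials": 2, "enthusiasm": 5,
     "comment": "Loved the professor; the textbook was useless."},
    {"materials": 4, "enthusiasm": 4,
     "comment": "Solid course overall."},
]

# Aggregated form: numbers averaged, comments pooled into a flat list.
aggregated = {
    "materials_avg": mean(r["materials"] for r in raw_evals),
    "enthusiasm_avg": mean(r["enthusiasm"] for r in raw_evals),
    "comments": [r["comment"] for r in raw_evals],
}

# In the raw form, the first comment can be read against its own low
# materials score and high enthusiasm score; in the aggregated form
# that pairing is gone -- each comment floats free of its numbers.
print(aggregated["materials_avg"])  # 3.0
print(raw_evals[0]["materials"], raw_evals[0]["comment"])
```

The point is structural, not computational: aggregation is a lossy transformation, and what it loses is exactly the per-student pairing that makes the content/delivery separation visible.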

I write all this because I’m taken by the recommendation in the CTL piece to create an interpretive framework for the student comments. Without a framework, comments can look random or scattered. (Afshan Jafar over at IHE has a good post on this phenomenon.) The CTL recommends either categorizing the comments under general headings (positive or negative, e.g.), or better yet, creating a graph that plots the comments according to characteristics of effective teaching (organization, pacing, explanation, rigor, etc.). Categorizing, characterizing, and generally organizing the comments seems like a good way to help instructors gain some authority over the text. It should also help minimize the effects that the few very bad or wildly positive comments have on our perceptions of how a course went.
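A minimal sketch of what such a framework could look like in practice, assuming a simple keyword-matching approach (the category keywords and sample comments below are invented; a real framework would be built from the teaching characteristics the CTL piece names):

```python
from collections import Counter

# Hypothetical keyword lists, one per teaching characteristic.
CATEGORIES = {
    "organization": ("organized", "structure", "scattered"),
    "pacing": ("pace", "rushed", "slow"),
    "explanation": ("explain", "clear", "confusing"),
    "rigor": ("hard", "easy", "workload"),
}

def categorize(comment):
    """Return every category whose keywords appear in the comment."""
    text = comment.lower()
    hits = [cat for cat, kws in CATEGORIES.items()
            if any(k in text for k in kws)]
    return hits or ["other"]

comments = [
    "Lectures were confusing and the pace was rushed.",
    "Very organized instructor.",
    "She's hot.",
]
tally = Counter(cat for c in comments for cat in categorize(c))
```

Even a crude tally like this turns a pile of comments into something an instructor can scan for trends, and the "other" bucket conveniently quarantines the purely personal remarks.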

So I’m lobbying my institutional data colleagues for access to the raw data and I plan to put together some resources on working with evals for my department.

In my next post, I want to think about the relationship between course evaluations and something I’ve come to call “classroom dynamism,” which sounds purposefully close to “strategic dynamism.” More soon.

UVa, Students, and Zombie Consumers

According to this piece from Inside Higher Ed, the UVa board passed around the earlier IHE article I blogged about a few weeks back: the one on Wesleyan University’s move away from need-blind financial aid. By itself, that article doesn’t say much about the current state of things at UVa, but in the context of the other articles shared by the board, it helps bring into focus just what the BoV seems to be aiming for. In short, these things include:

  • A move into large-scale on-line education
  • An increase in the number of students able to pay the entire tuition without loans or grants (we can also call this a decrease in the discount rate)
  • Greater emphasis on majors related to business and health care
  • Decreased support for the humanities

I’m no fan of any of these ideas, but I’ll let other, more articulate folks explain the dangers inherent in each. I will say that I’ve seen at least one of these ideas at work in every school I’ve taught at, and in each case the blowback and complications were more difficult than anyone in charge seemed to have predicted.

What concerns me most about the events at UVa is what they say about the evolving relationship between students and institutions of higher education. I’m no romantic about higher ed. For example, I’m not all that bothered by the student-as-customer metaphor. They are, to a certain extent, customers, as in: they are paying for something (an education) that exists in a marketplace defined by competition. The problem with this metaphor is in the customer part. Our economy and culture have made “customer” synonymous with “consumer.” A customer is an actual human being who makes (somewhat) rational choices about what to purchase and who establishes some kind of mutually fulfilling relationship with the retailer. A consumer is a statistic, a data point for economists. When we conflate the terms, we zombie-fy customers, imagining them as mindless eaters looking only for the nearest food source.

Less negatively: customers fuel local economies; consumers present national trends.

The actions — or at least the reading list — of the UVa BoV seem to suggest that they see students as a mass of consumers rather than a community of customers. They see the economy of higher education from a neo-liberal, globalized perspective rather than from a local one. They see the university as a feeding stop for the horde rather than a sanctuary for reflection.

UVa is not alone. What does the rise of for-profit, on-line universities tell us if not that catering to the roaming horde is good for bottom lines? What does the increasing demand on public institutions to respond to the “needs of businesses” tell us if not that beneficiaries of horde-like consuming have gained a lot of power in our society?

What's evolving in the student-institution relationship isn't the identification of students as customers. It's the dehumanizing of students into consumers. Large-scale on-line initiatives and job-training curricula look on the surface to respond to the needs and desires of students. But just a bit deeper we see how such efforts are also attempts to streamline the process by which hordes of people can be sorted, evaluated, and placed — all while paying high fees for the privilege.

UVA and Strategic Dynamism

Just posted the following to the Writing Program Administrators listserv (archives) about the situation at UVA. I want to think more about it later.

——-

… The folks who instigated the firing have been using the term “strategic dynamism” to describe their desire for — as you might guess — a more responsive, more efficient administration model. I think the lead instigator, Peter Kiernan, sent an accidental e-mail that included the argument that UVA needs more strategic dynamism and less strategic planning. In other words, he wants UVA to be able to respond immediately to market trends and investment opportunities rather than waste its time figuring out how best to educate its students, improve its faculty, etc.

We're seeing the (re?)convergence of several decades-old trends. One, the decline of state funding for public higher education. Take a look at the charts in this piece from Inside Higher Ed (IHE). UVA appears to be getting 45% of its budget from patient revenues from its med school. Only 5.8% comes from the state. Who should the BoV serve when the funding stream looks like this?

Two, another push by fiscal conservatives to cut funding for programs that appear to have low immediate returns on investment: humanities, arts, social sciences. See this op-ed from Forbes. These (stale) arguments are couched in financial terms so that they appeal to broader audiences, but another purpose for them is to align university outputs with the interests of certain segments of the corporate economy. We're seeing even bolder (but certainly not new) attempts to genetically re-engineer higher education so that it produces a stable flow of labor for these industries. Darker still, I think, this re-engineering is also an attempt to undo most of what the open-admissions, college-for-all movements accomplished. Those movements muddied the gene line, so to speak, and put pressure on the good-ol'-boy, connections-based systems that kept certain groups in power. Now, those groups can influence universities to keep bulking up particular programs. Those students who make connections get jobs. The others are labeled losers who either couldn't cut it or who made the mistake of following their interests and majoring in the humanities. It used to be that powerful groups controlled who got into college. Now they control who gets out.

Three, a deeply mistaken notion that our quick-paced lives require quick actions, not slow thinking. The BoV’s move seems steeped in crisis rhetoric. The world is changing fast. We’re not changing as fast. Let’s change faster and think about it later. None of this is news to those of us on this list.

When I was just starting out in this profession, I used to blame the humanities for some of this: for the way we squandered our cultural capital in the 1980s, for how poorly we articulated our value and values, for how unconcerned we seemed about the pragmatic lives of our students. But I was so much older then …

Teaching Evaluations and Program Development

In my last post I wrote about teaching evaluations and students' identities as learners. My goal was to provide instructors with an alternative way of reading evals: not as judgments on themselves as teachers, but as indicators of how students interpreted the course delivery. I don't think that's as fine a line as it might appear. Who I am as a teacher is composed of a great many things, not just my ratings on course evals. Those ratings reflect the tail end of a semester-long dialogue with a specific group of students — an end that has been shaped by the 15-week session and colored by the rather large amount of stress everyone is feeling as things wrap up. The ratings speak to the students' relationships with the material, the instructor, and — to some extent — each other. Such relationships, like all relationships, are rooted in communication. And to be successful, communication, to follow Grice's maxims, requires that people be truthful, relevant, and concise. Course ratings are, perhaps, evidence of the extent to which students perceived these maxims at work in a class.

(Certainly, there is such a thing as willful misunderstanding. Truth and relevance are themselves relative. And concision is often in the ear of the listener.)

I want to consider another useful way of reading course evaluations: as data for program development. Most instructors are rightly concerned about how evals relate directly to them as individuals. But in this specific use, evals simply reinforce what I think is a dangerous notion of teaching as a private affair. I’m not advocating the publication of evaluation data. Instead, I’m trying to think about how program administrators can use eval data to promote effective change in curricula. If individual course ratings are interpretations of a specific course’s delivery, then all the course ratings from a particular program represent interpretations of the curriculum. (Provided a program uses a common curriculum, which my department does.)

I'll have to think more about how the quantitative ratings of sections can be used in program development. For this post, I'm interested in the qualitative analysis of written comments on evaluations. As an outside reader of evals, it's easy for me to give little weight to the types of comments that rattle me when I see them on my own evals: "he's boring," "arrogant," "way too strict," etc. As I wrote earlier, I read for trends, and when I'm reading an entire program's evals, I read for trends across sections. These cross-section trends speak to how students perceived and experienced the writing curriculum as it was filtered through their instructors' documents, interactions, assessments, etc. As I read for these trends, I'm looking for two specific things:

  • What do students seem most frustrated, confused, or mistaken about, in terms of the curriculum or its delivery?
  • What do students say they have learned (or not learned) from the course?

The data from these questions helps me understand not only what curricular components are vexing students, but also which might be vexing instructors. Again, if evals reveal interpretations, then it’s possible to use them to reveal what concepts or processes instructors are having difficulty explaining or implementing. For example, if many students across a wide range of sections (and instructors) were to complain that the heavy weight put on the final portfolio grade (45%) is unfair, I’d think we’d need to review our grading percentages as a program. If, however, I were to see that type of comment confined to a narrow range of sections and instructors, I’d think those sections had some kind of communication breakdown. From there, I’d begin to plan faculty development activities that could help all instructors understand and explain (and even work to revise) this particular programmatic feature.
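That portfolio-weight scenario amounts to a simple scope test: does a complaint recur across many sections or cluster in a few? A toy sketch of that distinction (the section labels and the 50% cutoff are invented for illustration; a real threshold would be a judgment call):

```python
def complaint_scope(sections_with_complaint, all_sections, threshold=0.5):
    """Classify a recurring complaint by how widely it appears.

    A complaint found in at least `threshold` of all sections is
    treated as programmatic; otherwise it is section-specific.
    """
    share = len(set(sections_with_complaint)) / len(set(all_sections))
    return "program-wide" if share >= threshold else "section-specific"

all_sections = ["101-A", "101-B", "101-C", "101-D"]

# "Portfolio weight is unfair" shows up in three of four sections:
print(complaint_scope(["101-A", "101-B", "101-D"], all_sections))

# "Slow to return papers" shows up in only one:
print(complaint_scope(["101-C"], all_sections))
```

The first case points toward reviewing the curriculum as a program; the second points toward faculty development with particular instructors.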

I’ll write more later about what I’m learning specifically from the current batch of evals. I’ll end on this note: Evals are just one kind of information, and information carries no value until it is put to use. Program administrators are in strong positions to use eval data in productive, community-building ways. I want to keep working out how to do just that.

Teaching Evaluations and Faculty Development

Spring semester teaching evaluations were released last week, and because of my position as writing program director and incoming department chair, I get to see all of them. It's an eye-opening experience, but perhaps not for reasons you'd first think. It's true that the evaluations give you a glimpse (and, really, just a blurry glimpse) of your colleagues' classrooms and teaching styles. But it's not the individual ratings I'm particularly interested in — at least not now, some seven months before we do personnel evaluations. Instead, I'm interested in what the evaluations as a group tell us about the writing program, the gen-ed literature courses, and the department as a whole.

I've done some reading in the scholarship of Student Evaluations of Teaching (SET) recently, and it has led me to two useful and related findings, as well as a wealth of advice about how to respond to evals as an administrator and mentor. First, the findings. I've been mulling the conclusions from economists Paul Isley and Harinder Singh of Grand Valley State University about the relationship between grades and quantitative evaluations. In their article (JSTOR access needed), they confirm previous findings that higher teaching evaluation scores are related to students' having higher expectations for their final course grades. They go on, however, to argue that the differences between incoming students' GPAs and their expected course grades have a greater effect on evaluation scores. In other words, if students with high GPAs think they are going to get low grades in a course, they are more likely to rate the instructor low on the evals, and if students with low GPAs think they are going to get high grades, they rate higher. Such findings are important for instructors of required general-education courses (like composition and intro to literature), which often enroll students with little interest or ability in the specific subjects.

Isley and Singh's findings jibe interestingly with a conclusion drawn by John Centra in his analysis (JSTOR) of higher grades and evaluation scores. According to Centra, evaluations are highest when students perceive "just-right" levels of difficulty, rigor, and learning expectations (I'm simplifying his analysis quite a bit, so please do look at the article). Imagine honors students, for example, who feel a gen-ed humanities course is being pitched too low for them. They're likely happy with their A's but are also likely to give lower evaluation scores. Centra's and Isley and Singh's analyses expose a central psychological truth about teaching evaluations: that they are deeply rooted in each individual student's own educational context and history. Evals tell us as much about the students' identities as learners as they do about teacher effectiveness. Maybe more so.

What’s heartening about this work — and very useful in terms of faculty development — is that the numbers from the evals might suggest the extent to which students understand and appreciate what is being asked of them in a course. I’ve long contended that a great deal of student complaints are rooted in some kind of communication breakdown between students and teachers. It’s possible to read low evaluation scores as representative of that breakdown. Now, I’m not suggesting that one or two low evaluations means a teacher can’t communicate effectively with students. We all know that lots of stuff happens during a semester. Instead, I’m suggesting that low evals mean the instructor and students weren’t really on the same page for that particular course. The next step, then, is for the instructor to reflect on what might have led to that.

And here is where the written comments on evals matter. Those of us who have taught know the thrill and agony of the written comments. We also know how the one negative comment is usually the one we fixate on the most. Comments can cut to the bone, and they often seem like the least fair aspect of the entire evaluation process. After all, the student evals are anonymous, and students don't have to take responsibility for what they write. Nevertheless, the comments can help instructors understand how their courses were interpreted by the students. And if instructors are unhappy with those interpretations, then they can use the comments to help revise their presentations and deliveries. (Check out Dean Dad's take on this over at Inside Higher Ed.)

It's important, though, to recognize that not all comments are created equal. Comments like "She's hot," "This class sucked," and "We shouldn't have to take this class" are, to my mind, more connected to students' immaturity and frustrations than to their perceptions of what they were asked to do. When I read comments, I look for patterns and trends. Do multiple students mention the instructor's classroom demeanor? Do they remark on how frequently the instructor was late or how slow he was to return papers? One or two "She's boring" comments don't get my attention — but seven or eight do. That many comments might mean the instructor isn't pitching the class at the right level. (I know many people will argue that the "He's boring" comment speaks mostly to students' short attention spans, but I'm not so sure. I'll try to write more about that later.) In general, I look for patterns and trends that speak to how the students interpreted the difficulty, rigor, and expectations for the course. I see a lot of comments on low-scoring evals that suggest the instructors simply didn't spend a lot of time articulating what students were to do and, more importantly, why.

The why part of teaching is perhaps the most important. Students want to know that what they’re being asked to do means something. When they can’t relate their required activities to their assessments and to their own learning, they get frustrated. In my meetings with instructors about evaluations, I try to tell them that they need to be consistently transparent in their teaching, even if that means stopping class activities for a moment to explain why something is happening. I think the more meta we can be about these things, the better.

More on Need-Blind Admissions

Wesleyan shifts away from need-blind policy, citing financial and ethical concerns | Inside Higher Ed.

From the Inside Higher Ed site: Wesleyan University has announced that it is moving away from need-blind admissions, meaning it will now take students’ ability to pay into consideration when making admission decisions.

I think we’re going to see an even greater divergence between public and private institutions. Private universities seeking a kind of upper-tier-but-not-quite-Ivy status will work to attract the best students who can pay full tuition without loans. (Remember, alumni who are not in debt tend to donate more.) Lesser private universities will be consistently strapped for cash. Public schools will pick up the enrollment slack, but will burden students with tens of thousands of dollars in debt.


Financial Aid and Private Schools

Last week in The Washington Post, the President of Wesleyan University (CT), Michael S. Roth, announced his school’s new initiatives regarding tuition increases, financial aid packages, and a three-year degree. Of the first two items, he writes:

In a new model we are developing we will be committed to spending almost a third of our revenue on scholarships while meeting the financial need of our students without requiring excessive loans. We will also commit to linking tuition increases with inflation, rather than depending on the much higher rates of increase to which Wesleyan (like most colleges and universities) has been accustomed for decades.

On the surface, Wesleyan’s plan seems friendly to future students. One-third of revenue going to scholarships seems pretty generous (I have no idea what WU’s operating and financial aid budgets are), and capping increases at the rate of inflation — roughly 3 to 4 percent — makes the school look like a good financial steward. If I were the parent of a student interested in WU, I’d be optimistic that my child would be eligible for some aid from the school and relieved not to have to plan for large jumps in tuition. But there’s a darker side to this, I think, and the school’s new push for three-year degrees is perhaps the giveaway.

If WU is capping its financial aid at one-third of its revenues (i.e., mostly tuition and fees), it’s possibly doing so at the expense of its “need-blind” policy — the practice of admitting students without regard to their ability to pay and then meeting their demonstrated need. I would bet that Wesleyan is shifting money away from merit-based scholarships — the awards, based on test scores, grades, etc., that universities use to attract some percentage of excellent students to their campuses — and toward students who come from less affluent homes. Again, on the surface this sounds great. But the move effectively caps the number of in-need students the school can enroll. Rich families won’t be affected, as they can afford the tuition if they really want to send their children to Wesleyan. Poorer families will be vying for the few spots the aid packages make available. One possible result is a less economically diverse student population.

The tuition/inflation link is just a marketing ploy, I think, and potential students might be wary of the effect such a link could have on things like maintenance, faculty salaries (which are related to faculty happiness and productivity), campus services, etc. It’s nice to think that campus expenses rise only with inflation, but that’s rarely the case. If WU is happy with all it currently has — and has no plans for growth — then the link makes sense. Otherwise, it will force administrators to cut back across the board.

So what of the three-year degree, then? Let me start by saying that I have no problem with a three-year degree in and of itself. Four-year degrees seem arbitrary, and much of what students need from college can probably be had in three very focused years. But I think WU is counting on the three-year degree to keep its financial aid cap in place. Students accepted to the three-year program (effectively, students with enough AP or dual-enrollment credit) are likely less in need of aid than other students. This is simple demographics: students with lots of AP credits usually come from private schools or from affluent school districts, which are funded by the affluent families that live in them. WU can count on getting $150,000 (I’m guessing) in tuition from each three-year student, while a four-year in-need student might bring in that same amount but cost WU $50,000 in aid.

Obviously, I don’t know the details of WU’s plan, and perhaps I’m being cynical. But I think private schools without super endowments are feeling the economic pinch. Tuition of $50,000 a year seems like the magic number — the maximum even the richest folks are willing to pay. So if tuition dollars start to level out, some things must go. Private schools can’t cut back on amenities — gyms, nice dorms, etc. — because those things are what set them apart from much less expensive public schools. So they have to cut financial aid and slow salary growth. They also have to attract more students who can pay the full amount.

Here’s my worry, especially from my position as a private university employee: How many such students are left? Are the rich reproducing enough to keep a steady flow of offspring heading to college? And what will these families demand in return for their full tuition? I don’t think the traditional liberal arts education offered by Wesleyan is much valued anymore; rather, students (and their parents) are after the credentials and connections that a place like WU offers. The plan Roth lays out pays lip service to the University’s liberal education roots, but it masks an unsettling reality: private higher education is more for the rich than ever before. And that reality means the poor will have even less access to the connections and opportunities required to move into (or above) the middle class.
