Wednesday, April 18, 2018

Context, Redux


I don’t often do reruns, but for reasons I’d rather not elaborate on, this seems like an especially good time to revisit this piece I wrote and posted in September of 2017.  I stand by it.


My Recurring Nightmare

I’ll admit to some raised eyebrows reading about the lecturer at NJIT who was recorded apparently praising Hitler in class.  He claims he was taken out of context.

As someone who used to teach political philosophy, a scenario like this is my recurring nightmare.

Among other things, I taught the Greatest Hits of the Western canon of political thought, or, as we called it, “From Plato to NATO.”  I assigned actual texts -- in translation, when necessary, but still -- and spent class time helping students decipher them. Some of it involved reading comprehension, but much of it involved trying to get the overall perspective of each thinker.  A middle-class American 18-year-old may not find, say, Locke’s Second Treatise terribly relatable at first blush, so part of my task involved painting word-pictures and trying to provide context.

Sometimes that meant role-playing, or playing the devil’s advocate.  At various times over the years, in class, I would role-play a monarchist, an anarchist, a Marxist, a utilitarian, a libertarian, a Hobbesian, an Aristotelian, a Burkean, a feminist, or a Platonist.  Fascism was a tough one, but sometimes I’d try to ventriloquize Nietzsche, which can be great fun in very small, carefully selected doses.

This was before smartphones and YouTube.  Back then, a single student might misunderstand something I said, but the odds of that student recording it and distributing it instantly to the world were close to zero.  There were times when I would play a character for ten or twenty minutes at a pop, trying to help students understand how a given thinker or school of thought connected the dots.

If some student had recorded, say, five minutes of the anarchist role-play and posted it to YouTube, shorn of context, I would have been in a bad spot.  But I wouldn’t have been doing anything wrong.

This, to me, is why it matters to have presidents and vice presidents who have actually taught.  If some ideologically-driven student or organization starts pulling this kind of stuff and trying to shut down real inquiry, you want to have people high up who understand both what’s at stake and what was really going on.  If I couldn’t try to present each thinker’s most compelling claims in the most compelling way I could, I wouldn’t have been doing my job very well. Getting students to grapple with difficult questions can involve some uncomfortable moments.  

The threat that those uncomfortable moments could be taken entirely out of context and sent to the world as evidence of something sinister is deeply scary.  It cuts to the heart of the teaching role. The panopticon-from-below is such a severe threat because it’s so easy to pull off. The original panopticon took actual effort to build.  Now anyone with a midrange phone can do both surveillance and mass distribution. As Mark Crispin Miller put it, Big Brother is you, watching.

I don’t know whether the NJIT case involved thoughtful pedagogical role-playing, unhinged ranting, inappropriate recruiting, or what.  It could have been any of those, or some combination of them. But on general principle, I’d be deeply wary of drawing conclusions from a single recorded clip.  It’s just too easy to mislead.




Tuesday, April 17, 2018

Two Articles and a Postcard


Which is better: a low price for everyone, or a special sale just for you?  

I saw two articles and a postcard today that, taken together, gave me pause.

The first article, from The Journal of European Social Policy -- I am sooooo much fun at parties -- concerns “the paradox of redistribution.”  The paradox is that countries that engage in “from the rich to the poor” redistribution actually achieve less redistribution than countries that use a “from everyone, to everyone” model.  (The American version of this theory is “programs for the poor become poor programs.”) The article finds that the paradox remains largely true in 21st century welfare states. For advocates of consciously using the state to sand down the rougher edges of a Darwinian economy, the lesson is that targeted benefits are far less effective, over time, than universal benefits.  

The easy American case is the contrast between Social Security and “welfare.”  The latter took on a harsh stigma, based largely on racism, and became so loathed that a Democratic president ran on ending “welfare as we know it.”  Social Security, on the other hand, is politically sacred; even the current President ran on protecting it. Both are essentially cash transfers, but in the popular imagination, one is available to anyone who lives long enough and the other is an unearned benefit for “those people.”

The contrast between the two suggests that if we want to increase college attendance and attainment, we should go with lower sticker prices (“free community college!”) for everyone.

The second article, from the Wall Street Journal, is about colleges using “merit aid” as a recruitment tool.  Most colleges don’t meet students’ full financial need, and even among those that do, well, there’s need and there’s need.  So they’ve adopted a model of a high sticker price with individualized discounts. Entire companies exist to help colleges figure out the optimal discount for each student.  The article notes that what economists would recognize as “price discrimination” gets repackaged as a positive when it’s presented as “merit scholarships.” It quotes students saying that the scholarships they received made them feel wanted, or special.

The postcard that gave me pause arrived this week from Vanderbilt, addressed to The Boy.  It announces, in gold font, that “66% of Vanderbilt students receive some type of financial assistance.”  It adds that the 2017-18 “average financial aid package was $49,242,” and that their financial aid packages “DO NOT INCLUDE LOANS” (all caps in original).  In case that’s too subtle, the bottom line (literally) reads, in all caps and gold font, “IT’S FREE MONEY.”

The postcard strikes an awkward balance.  It reminds me of the old Publishers Clearing House sweepstakes that Ed McMahon used to hawk.  (For younger readers, Ed McMahon was the Andy Richter of his time.) We’re giving money away! You might win some!  Gotta be in it to win it!

Taken together, there’s a pretty clear distinction between what benefits the larger society and what benefits individual institutions.  Universal benefits -- free community college, say -- can be politically sustainable, and can affect large numbers of people. Tennessee’s experience with free community college is the paradigm case; enrollments there surged, and not only among low-income students.  Benefits that target the poor tend to get cut. Benefits that target individuals in the upper-middle-class can help individual institutions and individual recipients -- the college that lands TB will be lucky to have him -- but come closer to zero-sum socially, and can even be regressive.  

If that’s right, then there’s a fundamental tension between the broad social good of fostering a more and better educated population, and pitting colleges against each other.  The former would be best served through universal benefits; the latter, through fighting over a few stars with discounts that make them feel special, or lucky. In that context, “performance-based funding” for public colleges is likely to wind up somewhere between “more trouble than it’s worth” and “actively destructive.”  What we should do -- what we need to do -- is to strengthen public colleges and universities across the board, and make them worthy of everybody.

I don’t blame Vanderbilt for playing the game as it exists.  TB is a great kid, and they’re taking their best shot at landing him.  But if we want to change the results of the college arms race, we need to change the game.  If we want colleges to serve broad social goals, we can’t force them to fight each other. We need to make their funding as solid as Social Security.



Monday, April 16, 2018

Taking Diversity Seriously


Back in the 90’s, when I was in grad school, I had a feminist professor who used to denigrate one school of feminism by referring to it as the “add women and stir” school.  As she used the term, it referred to the idea that all you had to do to achieve gender equality in the workplace was hire more women. In truth, she argued, it would take far more than that; it would take rethinking the workplace itself.  The work-focused employee model only worked if he had a stay-at-home wife; without that, something had to give.

She was right, IMHO, but the “add women and stir” school won most of the practical victories.  That’s too bad, because we missed an opportunity to humanize the workplace. Instead, we’ve chosen to make family life progressively harder, with predictable effects on birthrates.  As Nathan Grawe’s recent book warns us, the worst of the birthrate crash should hit higher ed around 2026 (2008 plus 18).

I was reminded of that in reading Ashley Smith’s piece in IHE about diversity at community colleges.  The focus of the piece is on the hiring of chief diversity officers, with some fairly predictable carping about “administrative bloat” in the comments.  (Anyone who wants to trot out that canard in reference to community colleges is referred to the Delta Cost Project. Look at the cross-sector comparisons.)  Chief diversity officers are necessary and helpful, but reducing the topic to them reminds me of adding women and stirring.

My colleague Michelle Asha Cooper coined a wonderful line about the “college-ready” student.  She suggested that we should focus instead on becoming student-ready colleges. She’s right.

Taking student diversity seriously means, among other things, rethinking some of the taken-for-granted ways that colleges do what they do.  If we made the elimination of achievement gaps a top priority, what would that entail?

In that context, I’m heartened by the traction that Sara Goldrick-Rab’s work on student basic needs has had recently.  Community colleges -- which enroll a disproportionate share of low-income students -- are starting to realize that student housing, food, and transportation are fundamental to their academic success.  That sounds obvious, but I’ve seen that argument gain purchase only over the last couple of years. It has been true for a long time, but it has started to resonate at a new level only recently.

Open Educational Resources are properly about diversity, too.  They enable students of limited means to participate in class on a more level playing field.  Given the racial distribution of wealth in the US, that should be understood as a diversity initiative.

Odessa College, in Texas, made meaningful dents in achievement gaps by race and income in part by shortening its semesters.  Eight-week classes allow students to take fewer courses at a time and focus more intently on them. They also reduce the damage if life gets in the way ten weeks into the Fall.  The results have been impressive; Josh Wyner, from the Aspen Institute, noted that Odessa has improved its outcomes more quickly than any other college in the country. Although it isn’t usually presented as one, I see the calendar change as a diversity initiative.  It’s an adjustment of the way the college operates in order to fit more effectively the needs of the students it actually has.

I’ve heard good things about “Aid Like a Paycheck,” a program in which student financial aid refunds are distributed biweekly over the course of the semester.  As the name indicates, it treats financial aid like a paycheck, enabling students to budget with some predictability. When students work part-time with volatile hours, having that reliable, steady source of income as a base can make a difference.  As a side benefit, it also reduces the institutional cost of “Return to Title IV,” since the money isn’t all expended at once.

Of course, on the academic side, the move to streamline remediation can help reduce achievement gaps, too.  Co-requisite models, multi-factor placement, and “boot camps” in August or January can help students pick up momentum at a critical moment.  Someday, someone will make a similar breakthrough with ESL; I remain convinced that there’s room for real improvement there, but I don’t know quite what it will look like.

Online instruction is a blessing and a curse.  It tends to have lower success rates than onsite teaching, though some places have made headway in reducing those gaps.  (We’ve done that here, going from double digits to seven points in the last few years.) I’ve read that online instruction actually increases achievement gaps, since it tends to magnify the importance of a room of one’s own.  But tossing it aside on those grounds strikes me as misguided. Online instruction fills a real need, and has become popular for very good reasons; the real challenge, I think, involves institutions getting better at it.

The common denominators to these interventions -- and the list is far from exhaustive -- are that most of them aren’t usually thought of as diversity initiatives, and every single one of them generates some opposition on campus among people who would have to change some element of what they do all day.  It’s not enough to add diversity and stir. If we’re serious about improving outcomes, we have to be willing to challenge some of our most basic operating assumptions. Otherwise, we’re like the employer who expects both members of a working couple to have a wife.





Sunday, April 15, 2018

The Intro Course


A new study finds that community college students who take an introductory course with an adjunct professor are less likely to take subsequent courses in the same discipline than students who took the intro course with a full-time professor.  

It’s the sort of finding that raises as many questions as it answers.

At a basic level, it suggests that there’s an institutional payoff to using more full-time faculty, as opposed to more adjuncts.  Some of us (hi!) have been arguing that for years, to limited effect. The battle continues. If nothing else, the study offers some hope of quantifying the relationship, and thereby of pricing it.  It’s hard to win economic arguments on moral grounds, at least internally.

But it isn’t just about the employment status of the professor, as important as that is.  It’s about the importance of first impressions, including in the classroom.

Large universities have long treated introductory classes as cash cows.  The model of the 300 student lecture, with discussion sections led by graduate students, persists because it’s cheap.  But the message it sends to incoming students -- you’re on your own -- isn’t terribly welcoming.

It wasn’t meant to be.  It was built on the assumption that higher ed was a seller’s market, and that the burden was on the student to show that he -- historically, it was a he -- belonged.  We’ve all heard the apocryphal story of the freshman orientation that starts with “look to your left, look to your right, only one of you will make it.” The old “weed ‘em out” approach was built on the assumption that there’s a surplus of students, and the job of intro courses is to gatekeep.  They were admirably well-designed for that purpose.

That purpose doesn’t make sense anymore, to the extent that it ever did.

Community colleges and small liberal-arts colleges share the distinction of featuring small sections of introductory courses taught by actual faculty.  They haven’t always made as much of a fuss about that as they could. They should.

But the employment status of the professor, and the size of the class, aren’t the only relevant variables in introducing students to college.  

It’s pretty well-established in the student success literature, for instance, that banning late registration would help.  If it were up to me, and if the short-term economic loss weren’t prohibitive, I’d love to close registration (with plenty of notice, obviously) at least three weeks before the term started.  Then we could spend those three weeks making sure that every student has financial aid ironed out, textbooks acquired, work schedules adjusted, and the like, so professors could jump right in and start teaching substantively on day one.  The last-minute chaos that stems from last-minute registration impacts the classroom directly. I give faculty a lot of credit for managing it as well as they do, but they shouldn’t have to. In fact, one of the most compelling arguments for OER -- other than cost, of course -- is that they allow every student to have the materials from the first day of class.  By removing an excuse, OER allow the faculty to be stricter about insisting that students do the reading and the homework from the outset.

Still, I’m glad to see some community-college-based data on intro courses and staffing.  It’s a start.

Wise and worldly readers, if you could tweak something about intro courses, what would it be?

Wednesday, April 11, 2018

When There’s No Capstone


This one is a little “inside baseball,” but for those of us who wrestle with outcomes assessment, it’s a real issue.

How does your college assess gen ed outcomes when you don’t have capstone courses?

“Gen ed outcomes” are the skills that students are supposed to pick up from the required courses outside their majors, and that transcend individual fields: communication, quantitative reasoning, information literacy, and the like.  The idea is that every student who graduates from a degree program at a given college should have proven proficiency in each of the general education goals. They may well have different majors, but the gen ed outcomes should be common ground.

Traditionally, those have been assessed in two ways.

One involves mapping each outcome to a given gen ed course, and then looking at whether students in those courses actually achieve what they’re supposed to.  The flaw in that model is that the whole doesn’t always equal the sum of its parts. For example, when I taught poli sci, I used to assign papers. When I graded the papers, I included feedback on the quality of the writing.  Some students would complain “this isn’t an English class!,” as if that mattered. I’d explain that the point of a composition class isn’t to pass a composition class; it’s to pick up a skill you’ll use in other classes, and presumably, on the job.  If they did just enough to pass composition but then left it behind, they will have succeeded in the class but utterly missed the point.

The other involves having students do some sort of project in a capstone course they take in the last semester before graduation, and assessing the outcomes as demonstrated in that project.  Four-year schools often use this model. It gets beyond the issue of the whole and the sum of the parts, which is good, but it only works when you have clear capstone courses. At the two-year level, many programs don’t, and won’t.

Has your college found a way to see if students carried skills beyond the classes tasked with teaching them, in the absence of capstone courses?

At a previous college, we had faculty who would design projects in upper-level classes to lend themselves to assessment, and then submit a few samples from students who were in their last semester to the Gen Ed Assessment Committee, or GEAC (pronounced “geek”).  It worked tolerably well, but it relied a lot on faculty willingness to go the extra mile. Over time, going back to the same few good soldiers repeatedly could bring an unintended selection bias into the process.

I’m confident, though, that many colleges have faced this, and some must have found reasonably adept workarounds.  So, wise and worldly readers, I seek your wisdom. Have you found a good way to assess gen ed outcomes in the absence of capstones?


Tuesday, April 10, 2018

Salary Compression and Step-Grids


I was in my thirties the first time I heard the phrase “salary compression.”  At first I assumed it was an inelegant way of saying “low pay,” which is only half-right.  It’s more commonly used to refer to a salary gap between incumbent employees -- especially longer-term ones -- and new hires that the incumbents consider too small.  They see it as devaluing their experience.

I haven’t seen much of that in my career, whether as a function of location, timing, or both.  But I’m starting to understand how it can happen.

In settings with widespread collective bargaining, there’s usually a relatively strict method for determining salaries.  At Holyoke I sometimes had to quote candidates figures that ended with “...and twenty-five cents” so as not to violate the grid.  Here we use whole dollars, which is a welcome change, but the amount of whole dollars is pretty tightly prescribed. And raises are across-the-board, so everyone in a given role gets the same raise as everyone else in the same role.  (Here, that’s literally true; the current faculty contract gives a set dollar figure raise, rather than a percentage, on the theory that it helps the lowest-paid employees more.) That has its virtues and its downsides -- predictability is good, but high performance isn’t especially rewarded -- but we all know them, and they’re part of the system.  Internally, a grid-and-step system works reasonably well to prevent favoritism and keep the peace.

But the outside world cares not a whit about our step increases.  The market goes where it goes.

In some cases, that doesn’t matter much.  But in roles or fields in which we’re competing with large universities and/or private industry, it’s easy for the step-grid to fall behind the market.  And that brings a real dilemma.

Let’s say that the contract allows you to offer a candidate for a particular role $60,000, but the going market rate for someone with that skill set is $70,000.  What do you do?

  a. Offer what the contract allows, watch the candidate walk away, leave the job unfilled, and hope for the best.
  b. Offer what the market demands, prepare for the inevitable class-action grievance, and hope for the best.
  c. Offer what the market demands, give everyone else the same raise to bring them up, and lay off a couple dozen people to pay for it.
  d. Hire someone who isn’t really qualified, and hope for the best.
  e. Outsource the role, so the contract doesn’t apply.

The careful reader will notice that every option is bad.  That’s because every option is bad.

If we had enough money that we could do it and avoid layoffs, “c” would be the popular and easy choice.  But we don’t. And even if we did, there’s the issue of market volatility. Markets go both up and down. But in the language of economics, wages are “sticky.”  They don’t move down quickly. There are excellent human reasons for that, but it puts limits on how hard you want to chase peaks.

Many colleges, including my own, have chosen option “e” to deal with IT.  Salaries in that area simply outstrip any step-grid, and people with those skills have options.  If you need the services -- and these days, you do -- you may have to go outside the grid. That can create some resentment on campus, but so can a network that crashes regularly.  

The easiest solution politically is “a,” but sometimes that means leaving important work undone.  That can mean more work falling on the people who are already there, or it can mean improvements that don’t happen.  James Baldwin’s observation that poverty is expensive applies to institutions, as well as to people. It’s the human version of “deferred maintenance.”

Although salary compression is often presented as an issue of fairness or novelty, I think it’s more accurately a reflection of a disconnect between the logic of step-grids and the logic of markets.  In a hot market, it may be the least-bad option.







Monday, April 09, 2018

Monopsony and Higher Education


If there’s only one employer in town, the employee’s bargaining power is relatively low.

That’s the empirical claim behind the idea that monopsony -- a monopoly of demand for employees by a single employer in a given region or area -- has a dampening effect on wages.  Given the cost to family life of moving, employees will frequently settle for a suboptimal deal for the opportunity to stay put. In two-earner families, that cost isn’t just emotional.  A move would require finding two jobs in a new location, rather than one, increasing the difficulty exponentially. Over time, those individually rational decisions add up.

Local employer monopolies can occur organically, through sheer market power, or through non-compete agreements.  However they happen, the effect is the same: they shift the balance of bargaining power in favor of the employer.  

A new article suggests that monopsony is a larger factor in wage stagnation than most people have assumed.

As an academic, though, I find it old hat.  For once, we’re actually ahead of the curve. Monopsony in higher education has been the order of things in the US for most of our history, except for an aberrant period of rapid growth from the late 1950’s to the early 1970’s. During the 1960’s, the United States added an average of one community college per week.  That pace couldn’t be sustained, and wasn’t; I literally don’t remember the last time I heard of a new community college being established.

A paucity of local employers puts real limits on people’s employment options.  It becomes a real issue when, say, a campus closes, as at Eastern Kentucky U, or a college closes, like Mount Ida. There probably aren’t many local options for displaced employees, unless they change industries.  That’s particularly true for faculty in academic disciplines, in which the training is quite specific. And even if there are colleges nearby, if those colleges are affected by the same demographic trends, they probably aren’t hiring much.  

Although academics typically have graduate degrees, they often lack the option of setting up their own practice.  A lawyer can do that, and so can a medical doctor, but a history professor is likely to have a harder time of it.

Starting a medical practice may be difficult, but starting a college is much harder.

Within higher ed, we’ve responded to monopsony by creating different tiers of employees doing similar work.  Those who got in first, or caught some lucky breaks, are largely insulated from risk. Later arrivals, or those who didn’t catch a lucky break, bear the shifted risk to compensate. The arrangement buys local peace in the short-to-medium term, but it doesn’t address the underlying problem, and it’s pretty brutal to a lot of people.

I’d hate to see that arrangement become a broader new social norm.  

At an economy-wide level, identifying monopsony as an issue suggests that it may be time to revisit and strengthen anti-trust law.  Within higher education, though, the answer is less clear. Demographic trends wouldn’t justify another building spree, at least not at a 1960’s level.  Online instruction offers some limited liberation from geography, but full-time opportunities that are entirely online are still the exception.

If employers are going to be relatively few in any given area for a while, we should at least strive to make those employers as sustainable as possible.  Tenure is only as safe as the institution that awards it. Given the realities of monopsony, many former employees of defunct institutions may never find jobs as good as the ones they lost.  That’s a catastrophic waste of talent. Avoiding that disaster seems worth some funding.





Sunday, April 08, 2018

“Demonstrated Interest” is Really Time-Consuming


In the community college world, the admissions process tends to be relatively straightforward.  You show up before the deadline with evidence of a high school diploma or equivalent, fill out a few forms, apply for financial aid if applicable, make a payment, and schedule a placement test, an orientation session, and your classes.  SAT’s and ACT’s are optional, and used only for placement. (If you hit certain numbers, you’re exempt from the placement test.) The whole thing can be done in a day or two. And if you sign up prior to August for a September start, in most cases, you should be able to get the schedule you want.

The Boy’s process, focusing on selective places, introduces a host of new variables.  Some of them aren’t surprising. They look closely at transcripts, including scrutiny of both grades and course selection.  They require essays. They require letters of recommendation. Some offer interviews. All of those, I remembered and expected.

But since my search, they’ve also introduced a new variable: “demonstrated interest.”

You’d think that submitting an application and paying a fee would demonstrate interest.  (The typical application fee is roughly triple what Brookdale charges, which would seem to suggest interest.)  But apparently not. In this context, “demonstrated interest” means signing up for and attending an on-campus information session and tour.

These are not trivial expenses, either of time or of money.  And there’s no financial aid for them.

In the case of UVA, where we went last week, it involved driving about six hours each way.  (It should have been slightly less, but Route 95 is a fickle beast.) Obviously, that also involved hotel stays and meals on the road.  We decided to make a trip of it by adding Luray Caverns and the Shenandoah National Park to the itinerary, and I’m glad we did, but we probably wouldn’t have made the trip just for those.  Nobody at the Caverns caught my Fraggle Rock reference, which mostly made me feel old. The whole place looked like Fraggle Rock had come to life.

TB’s high school is sufficiently familiar with the process that it actually “excuses” a set number of absences for college visits, assuming you return with the relevant documentation.  And the colleges are familiar enough with the documentation that everybody there knows what you’re talking about when you ask for it. Cultural capital finds a way.

The campus was lovely, as I’d heard it would be.  TB liked it enough to put it in his top five. I didn’t know until the dean’s presentation that UVA actually doesn’t consider demonstrated interest as a variable.  But most of his other top choices do.

He’s becoming a savvy consumer of college information.  He noticed, for instance, that the tour didn’t include a dining hall, a classroom, or a dorm room; they usually do.  I noticed that the dean’s presentation didn’t include a student, as they usually do. He picked up the requisite pennant to add to his collection, and we explored some of the culinary offerings of Charlottesville.  I was impressed by “Insomnia Cookies,” which is open until 3:00 in the morning. That’s some good market research right there.

Still, as we recovered from the length and cost of the trip, I couldn’t help but think about the burden of “demonstrated interest” on students of modest means and/or limited cultural capital.  Does a student whose parents know about “demonstrated interest” actually have more interest than a student whose parents didn’t go to college? Or who live in a school district that isn’t quite so attuned as to give “excused” absences for college visits?  Or whose jobs just don’t allow that much travel? It seems unlikely.

The point, as near as I can tell, is that many selective colleges receive far more applications from capable and qualified students than they’re able to accept, so they have to winnow the field somehow.  Public ones usually have in-state numbers they have to hit, so they start with that. Then they try to hit desired gender ratios, racial distributions, distributions of athletes and musicians, and all of the other variables that make up a class.  (In some cases, they defer to “legacies” or “development admits,” both of which are entirely about cultural capital.) That probably still leaves more than they can accept. At that point, “demonstrated interest” serves as a filter that may help screen out students who are “only” using it as a safety school.  Or not; I’ve never seen any actual data on that.

But from a student and parent perspective, “demonstrated interest” is a steep burden.  Yes, it’s great to see schools if you have the chance; I’ve enjoyed taking TB to several, and we hope to see a few more before we’re done.  But the costs are real, and resources are finite. And not every student has the advantages TB does.

My modest proposal?  In the name of fairness, I’d love to see selective colleges drop legacy admissions, development admits, and demonstrated interest as categories.  Less time on Route 95 is a win for all concerned. And in the meantime, we’ll keep building up the Honors offerings to make the community college a more appealing option for the entire community.  Let’s infer interest from applications, and get around the applicant surplus by making more places appealing in the first place.


Monday, April 02, 2018

The Search Continues...


This week is Spring Break for The Boy and The Girl.  It doesn’t align with most colleges’ spring breaks, including my own, because that would take some of the sport out of it.  (I don’t get a spring break anyway, but the point still stands.) This year, though, the mismatch is actually an opportunity in an annoying disguise.  We’re going on a college visit.

TB’s girlfriend is a senior, so he has had a front-row seat watching her navigate the selective college search process.  That helped demystify things for him, and helped update my 1980s-era impressions. Last Fall we even went on a few visits, covering Boston (BU and Northeastern) and Pittsburgh (U of Pitt).  This week we’re heading to UVA.

This is how academics spend vacation days.  

I’ve heard that Charlottesville is a cute little town, and that the UVA campus is lovely.  I’m picturing Ann Arbor with a Southern accent. We’ll see.

But the real fun is watching TB react to what he sees.

I’ve had an odd tour of American higher education over the years: undergrad at a selective private SLAC in New England, grad at a flagship state university, first job at a for-profit, then jobs at three community colleges in two states.  And that’s not counting places I’ve visited over the years, whether as a prospective student, a friend, a family member, a job applicant, or a guest speaker.

TB hasn’t.  He’s in high school.  

The first few visits last Fall were eye-openers for both of us.  Having spent the last fifteen years at community colleges, I was struck by the unfathomable wealth of the places we visited, as well as the unfathomable tuition.  TB didn’t really see either of those. He was focused on the feel of each place, as well as the opportunities for pre-med students. I noticed the public transportation; he noticed the wifi.  I noticed the adjunct percentages; he noticed the food. Between the two of us, I like to think we each added something.

But high school years move quickly.  His frame of mind is evolving from “hey, wow, I’m on a college visit!” to “how does this one compare to that one?”  It’s starting to become real.

His girlfriend has received her acceptances, waitlist notifications, and rejections, as well as her various financial aid offers.  She’s narrowing the field to a single choice. He’s providing a supportive ear as she rides the emotional waves of a major life decision, which is as it should be.  He does that because he cares about her, but as a side benefit, he also gets a preview of the sorts of questions he’ll need to field next year. They’re less scary when you know they’re coming.

The major impression I hope he comes away with, as the search process unfolds, is that most of the places he looks at would be more than fine.  And most of what he’d really like to know -- the friends he’ll make, the life-changing experiences he’ll have -- are necessarily unknowable at this point.  They’re accidents of history. The best he can do is to make himself accident-prone. Some settings lend themselves more than others.

So, off to Virginia we go.  I’ll be taking a writing break for the rest of the week so I can be fully present during the trip.  The blog will be back on Monday. Yes, Virginia, there is a TB. He’s on his way, and you’d be lucky to have him.  




Sunday, April 01, 2018

Defending the Bad Against the Awful


Colleagues all over the internet have responded thoughtfully to the President’s persistent confusion about the role of community colleges.  The former pitchman for Trump University casts aspersions on community colleges, in favor of “trade schools,” in apparent ignorance of the broad and deep “workforce development” roles that community colleges have played for decades.  Whether that’s genuine ignorance, genuine confusion, a disingenuous attempt to allow for-profits the same protections as community colleges, a sign of having stopped paying attention around 1980, or a diversionary tactic, I’ll leave to others better situated to analyze him than I am.

Instead, I’ll draw on the useful work of Jared Cameron Bass, Amy Laitinen, and Clare McCann at New America to focus on a detail that tends to get lost in the series of “did you hear that?” gasps.  They’ve put together a helpful overview of the deregulatory goals that Secretary DeVos has either stated or implied. The one that jumped off the screen for me was the credit hour.

As longtime readers know, I’m no fan of the credit hour.  But there’s a meaningful difference between improving on it and simply tossing it out.  

The credit hour emerged as a way to track faculty work for the purpose of calculating pensions.  Over time, it became both a measure of student workload -- fair enough -- and a proxy for student learning.  From a learning standpoint, the flaw of the credit hour is that it measures the wrong thing. It measures “seat time,” and assumes an approximation of out-of-class work time.  It doesn’t measure learning. From an economic standpoint, the credit hour defeats any effort at meaningful productivity increases by definition, because it denotes learning in units of time.  If it takes 45 hours of class time to earn 3 credits, then the number of credits generated per hour of instruction can never increase. The only way to keep up with rising salaries (and costs of benefits) is to raise prices.  That’s called Baumol’s Cost Disease, and I’ve argued repeatedly that understanding that is key to understanding tuition increases.
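The fixed-ratio arithmetic behind that claim can be made concrete with a toy calculation.  All of the numbers below are hypothetical, chosen only to illustrate the mechanism:

```python
# Illustrative arithmetic only: the figures are hypothetical, not from the post.
# Under the credit-hour model, a 3-credit class always requires 45 contact
# hours, so "credits produced per hour of instruction" is a constant.

CONTACT_HOURS = 45      # seat-time hours required for a 3-credit class
CREDITS = 3

credits_per_hour = CREDITS / CONTACT_HOURS   # fixed at 1/15, by definition

def cost_per_credit(hourly_instructor_cost: float) -> float:
    """Instructional labor cost to produce one credit, given an hourly cost."""
    return hourly_instructor_cost * CONTACT_HOURS / CREDITS

# If labor costs rise 3% a year, the cost per credit rises 3% a year too:
# there is no productivity lever, so price has to follow (Baumol's point).
for year, hourly in enumerate([100.0, 103.0, 106.09]):
    print(year, round(cost_per_credit(hourly), 2))
```

Since the credits-per-hour ratio is pinned by definition, every increase in labor cost passes straight through to the cost per credit; that is the “cost spiral” the proxy locks in.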

I’m no fan of the credit hour because it’s a proxy measure that locks us into a cost spiral, and doesn’t work well with online or other forms of non-classroom-based instruction.  (In the context of an online class, what does “seat time” even mean?) It’s both indirect and inflexible.

In a more perfect world, we could either return the credit hour to its original, much more modest purpose, or junk it entirely.  Instead we would use intelligent measures of student learning. That would get us much closer to what we’re actually trying to do, and would allow for the possibility of improved productivity.  

But DeVos hasn’t shown any interest in replacing it.  Instead, she may be angling simply to eliminate it.

Which puts me in the unanticipated and uncomfortable position of defending the bad against the awful.

The credit hour is an inflexible proxy, but it’s _something_.  It forces a basic level of honesty on colleges that might be tempted to cook the books.  A college unbound by any external measure of student achievement could reduce its labor costs simply by giving classes more credits than they should have.  Paying faculty for three credits while charging students for six credits for the same class would do wonders for a college’s balance sheet; the real damage would show up only over time.  In the meantime, colleges that take undue liberties would be rewarded, and those that do what they should would be placed at a competitive disadvantage.

I can imagine an objection to the effect that what really matters is graduates’ salaries, rather than how they got there, so who cares if a college defines classes differently?  But diluting the quality of education isn’t likely to improve those salaries. The danger is a delayed feedback mechanism: great damage would be done before almost anybody noticed.  Employers wouldn’t catch on for a while that the signal had been corrupted, and in the meantime it would look like everybody won. By the time the damage was obvious, it would be too late; an entire generation of students would have been shortchanged.  

If that looks like the sort of move that a short-term investor might favor, well, it is.  For those of us who care about education, the key words there are “short term.”

Without an external referent, even an imperfect one, anything that calls itself a college could peddle whatever it wanted and call it a degree.  The market would flood with “degrees,” devaluing real ones, leaving the holders of diluted ones with a watered-down asset, and leaving employers even more frustrated and confused than they are now.  No, thanks.

So for all of my objections to the credit hour, I’ll defend it against oblivion.  Come up with a better alternative, as the competency-based education folks are trying to do, and I’ll happily embrace it.  But casting aside all standards is not an answer. I’ll take the bad over the awful, thanks.

Thursday, March 29, 2018

An Accidental Argument for Honors Programs


Going from an academically challenging high school to an academically unchallenging college correlates with increased rates of depression, according to a new study.

For the sake of argument, I’ll leave aside how they could designate the level of academic challenge at a given college.  I’ll also leave aside any methodological quibbles with the study, and take its headline finding at face value. I’ll assume it’s at least broadly correct.

What should those of us who work at community colleges take from the finding?

Based on observing my 16-year-old son, a high-achieving student in an IB program, I’d suggest that location may have something to do with it.  We live in New Jersey; he sees getting out of New Jersey as part of the point of going to college. Many of his friends are the same way. When I was his age, I ruled out Cornell on the grounds that it was too close to Rochester.  I wanted to get out of Western New York, Ivy or no Ivy. Thirty-plus years later, that’s not true anymore, but it was abundantly true at 17.

Colleges that attract a lot of high achievers tend to draw nationally or internationally.  Colleges that don’t attract many tend to draw locally. A high-achieving student whose life circumstances ruled out moving away may be responding as much to staying local as to any academic frustration.

There’s also an objectionable but persistent correlation between the resources available to a college and the economic and educational standing of the students it attracts.  Simply put, as a country, we send the most resources to those who already have the most, and the fewest to those who already have the fewest. Princeton’s tax exemption is worth orders of magnitude more per student than Mercer County College’s direct subsidy, but we force austerity on the latter while admiring the former.  

Still, it can be frustrating to be in classes pitched to a median that’s too low.  That’s why I’ve long been a strong supporter of Honors and similar programs at community colleges.  Community colleges are meant to serve the entire community. That includes high achievers.

You’d think that would be an obvious position to take, but it isn’t.  At a previous college at which I worked, I butted heads with the president over the Honors program.  When I suggested that a minor infusion of resources would take it to the next level, he responded, and I am not making this up, “who cares?”  He saw an Honors program as counter to the mission of the place, as a sort of creeping elitism. I argued that academically strong students are just as much a part of the community as everybody else, and their needs were just as valid.  

I was outranked, but I wasn’t convinced.  

Now there’s another argument for my long-held position.  Not only do community college Honors programs present opportunity, they may actually be good for some students’ mental health.  

“But wait!” I hear an imaginary reader object.  “An honors program isn’t an entire college! The study refers to entire colleges!”

Which is true, but probably irrelevant.  Michael Moffatt’s classic Coming of Age in New Jersey has a nearly-forgotten section in which he asks students to draw maps of “their Rutgers,” by which he meant the layout of the university as they actually lived it.  The maps were both small and sparse; most students’ experience of a college is markedly partial. A strong Honors program with a defined cohort can become the dominant fact of a student’s experience, even if it exists only on the periphery for other students.  (The same could be said of the basketball team or the nursing program, for that matter.) That would give the student of talent who couldn’t (or didn’t want to) move away a chance at an academic experience similar to that of her peers who left.

I won’t push The Boy to go to Brookdale, even though it would make my financial life easier.  Part of that is because I don’t believe in parents pushing life choices on their kids; TB is the one who will have the experience, so it should be more his choice than mine.  And part of it is that he wants some distance from Mom and Dad, which I consider healthy, if a little bittersweet. I recognize the impulse, and feel a moral obligation to pay forward the freedom I had.  He is his own person, who shouldn’t have to live his life around proving my points.

But if he did want to go to Brookdale, I’d absolutely steer him to the Honors program, where he would find the academic challenge he deserves.  And I want his counterparts who don’t have the option of moving, whether for financial or familial reasons, to have as good an academic experience as he will.  The study just adds one more argument in favor.