Sunday, June 07, 2015

Creating Emergencies


Jen Ebbeler has a thoughtful and thought-provoking post up about the virtue of skipping the “pilot” stage of a new enterprise and instead jumping in with both feet.

Admittedly, Ebbeler is writing in a different context.  She’s referring specifically to some online courses she taught at UT-Austin in which enrollments exploded unexpectedly.  When the numbers blasted far beyond what she had anticipated, as she put it, “every problem became evident very quickly.”  She couldn’t ignore the problems, finesse them, or develop labor-intensive small-scale workarounds; she had to fix the plane as it flew.  She notes that “[m]any of the challenges of large enrollment courses are logistical and are a direct result of scale.  In order to identify and remediate them, the ‘pilot’ of the course has to be run at scale.”

I enjoyed the post immensely, both despite and because of the difference of context.  

Most community colleges don’t have single sections with hundreds of students in them, so the direct example doesn’t transfer cleanly.  But in the context of various experiments around improving student success, the issues of pilots and scale keep coming back.

Small-scale pilots for testing new ideas have several advantages.  Most basically, as every doctor knows, “first, do no harm.”  If a new idea is a train wreck, confining it to model-train scale contains the damage.  Having seen a few ideas crash and burn over the years, I can attest that this is not to be dismissed lightly.

In a community college context, where resources are always an issue, pilots can sometimes be largely or entirely grant-funded.  Grant funding can help the institution develop the capacity to scale up.  At HCC, we’ve used Title III and TAACCCT funding in exactly those ways, and they’ve made real differences.  If we had to wait until we had the money to try things at scale, we might not be able to try some of them at all.  When the pilots pan out, there’s a stronger argument for internal reallocation of resources than “I want to” or “I have a hunch.”  

Pilots also help contain collateral damage.  Take self-paced math, for instance.  Yes, that required faculty development and support, and some close work with the registrar.  But it also required working with IT to ensure that the room was properly equipped and set up; working with financial aid to ensure that we were all conversant in drop dates, satisfactory progress, and the like; working with Admissions and Advising to ensure that folks on the “intake” side knew how to explain self-paced math, and to whom to explain it; and working with the bookstore to ensure that it was up to speed on “all you can eat” user codes, among other things.  It wasn’t the sort of thing that one department, no matter how determined and focused, could do alone.  Given that every department has its own projects, priorities, and constraints -- all valid -- there’s something to be said for keeping the “proof of concept” round relatively manageable.

Still, I have to acknowledge Ebbeler’s point about the issues unique to scale.  With a small pilot group, many of the back-office issues get resolved through customized workarounds.  That’s defensible when you have a cohort of, say, twenty.  But if you’re running thousands of students through at once, it’s impossible to get away with that.  You have to go to the trouble of actually developing system-level fixes.  Assuming sufficient resources, and significant tolerance for error on the first go-round, the occasional state of emergency can help distinguish real and significant issues from garden-variety foot-dragging.  It requires clear and strong direction from the top, and sustained attention over time, but used sparingly, it can bring clarity.

The “tolerance for error” piece is both cultural and regulatory.  If you have a culture of finger-pointing, some shock therapy may be in order; if haters are gonna hate anyway, there’s no point in trying to meet them halfway. Bold new facts on the ground can, sometimes, change a culture in necessary or helpful ways.  On the regulatory side, though, the cost of certain errors at scale may be simply prohibitive.  At that point, the “start small” approach wins by default.

Starting big can also get around the “too many pilots” problem.  (This was the point of the piece about “Initiative Triage” last week.)  Institutional bandwidth is limited, and every new project takes more than its share.  When you’re running a dozen pilots at the same time, just keeping track of them all becomes a problem.  Deciding as a matter of policy that it’s “go big or go home” necessarily means focusing on only one or two big changes at a time.  As large as those are, they’re actually easier to track than ten or twenty little ones.

In Ebbeler’s case, scale happened accidentally and abruptly.  She had no choice but to step up, and apparently, she did.  And her natural experiment didn’t require much in the way of coordination among departments or institutional silos beyond what already existed.  She had to work herself silly, but she was willing to do that, and now she’s enjoying the well-deserved fruits of her labor.  I tip my cap to her.

Wise and worldly readers, have you seen cases in which it was better to jump in with both feet than to start small?