Tuesday, August 13, 2013

Virtualization



Apparently, this is turning into “begging for ideas” week.  Please forgive the dreadful manners.

I know juuuuuust enough about IT to be dangerous.  I recently heard an IT idea that strikes me as obviously great, but experience has taught me that ideas that look obviously great at first blush can hide great sins among the details.  So I’m hoping that some folks who have been through this can shed some light.

The idea is “virtualization,” and my bowdlerized understanding of it is as follows.  In traditional on-campus computer labs, every computer has its own CPU and performs its own calculations and processes.  The computers are networked to each other and to the internet, for obvious reasons, but each is capable of doing some pretty serious internal processing.  If you want to run a program on all of the computers in a lab, you have to install it on each computer individually.  Although much of what computers do is online now, we still pay for and maintain all those separate computer brains within each station.

In virtualization, as I understand it, the brains are moved to a centralized location (or locations).  So instead of thirty different machines each running its own version of BasketWeaver, thirty terminals are connected to a single server running a sort of uber-version of BasketWeaver.  
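For the technically inclined, here’s a toy sketch of that division of labor. This is just my own illustration in Python -- the port number and the uppercase-everything “application” are made up, and no real virtualization product works this simply -- but it shows the shape of the idea: the terminal does nothing but ship keystrokes to the server and display what comes back, while all the actual computing happens in one central place.

    import socketserver

    class AppHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # Each connected "dumb terminal" is served by this loop.
            for line in self.rfile:
                # All the real computing happens here, on the server.
                # (A stand-in for whatever BasketWeaver actually does.)
                result = line.decode().strip().upper()
                self.wfile.write((result + "\n").encode())

    if __name__ == "__main__":
        # One server, many terminals: change the code here once, and
        # every terminal sees the new behavior on its next connection.
        with socketserver.ThreadingTCPServer(("", 9999), AppHandler) as server:
            server.serve_forever()

Point thirty terminals at that one process (even nc localhost 9999 works as a “terminal”) and there is exactly one installation to patch, upgrade, or fix.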

The appeal is twofold.  First, it’s easier to maintain one program on one server than thirty installations on thirty CPUs.  Second, you can get away with “dumb terminals” on the user end, allowing for lower upfront costs, less maintenance and repair, and more consistency.  You don’t have to worry that some machines are running BasketWeaver 2010 and others are running BasketWeaver 2013; you can ensure that whatever version is running is running everywhere.  From a teaching perspective, that’s a real gain.

Obviously, virtualization requires some big honkin’ servers -- I think BHS is the technical term -- and plenty of bandwidth.  But if you have those, you can save the cost and headaches of trying to maintain hundreds of CPUs across campus.

If that understanding is broadly correct -- I’m basically thinking of the distinction between a Chromebook and a laptop, writ large -- then it seems like it should be a no-brainer.

(I’ll grant upfront that there may be some very specialized use cases in which the old model still makes sense: dedicated Macs for Graphic Design, say.  But even if twenty percent of the student-use computers had to remain in the old model, the savings for the college in terms of both money and maintenance would be substantial.)

But true no-brainers are rare.  There has to be a catch.

So this is where I’m hoping that people with battle scars will shed some light.  What’s the catch?  What’s the “you wouldn’t have thought of it, but this detail will kill you” life lesson?