
Wednesday, February 1, 2012

VMs and O/Ses and Bears, Oh My!

IT Nirvana?

Computer systems offer more bang for the buck than ever before, and there are more and better options to choose from. Storage is cheap and plentiful, and processors are mighty and don't catch fire when you use them.

In theory, the size of data set or processing problem that now counts as "easy" should be so big that just about anything I do should be easy. In fact, since there is so much computing resource available to help me do it, just about anything I want to do should be simple as well. But I am finding that this implicit promise of simplicity is not being fulfilled, at least not at the system level.


I Am Not Immune

Consider my humble desktop at work. It is an Ubuntu box, backed up automatically and offsite by the mighty boxbackup utility. Thanks to VirtualBox, I use virtual machines of various types: a Windows 7 VM for development and to host my iPhones, and various Linux VMs for various special purposes.

Recently we lost power while I was out of the office. My trusty desktop shut down gracefully because I run apcupsd. Hurray! apcupsd even told me when the power went off, although in true open source geek fashion, it reported the time in GMT.
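For the curious, the graceful-shutdown behavior comes from a handful of directives in apcupsd's configuration file. A sketch along these lines (the values here are illustrative, not a recommendation, and assume a USB-attached UPS):

```shell
# /etc/apcupsd/apcupsd.conf (excerpt) -- illustrative values only
UPSCABLE usb         # how the UPS is attached to the host
UPSTYPE usb          # which apcupsd driver to use
DEVICE               # left blank, apcupsd autodetects a USB UPS
BATTERYLEVEL 10      # begin host shutdown when battery falls to 10%
MINUTES 5            # ...or when 5 minutes of estimated runtime remain
TIMEOUT 0            # 0 = rely on BATTERYLEVEL/MINUTES, not a fixed delay
```

Whichever of BATTERYLEVEL, MINUTES or TIMEOUT trips first wins.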

However, the VMs did not shut down gracefully when their host did. Apparently, this is something that I have to set up myself with an init script (launchd would be the analogous tool on a Mac, not on Ubuntu). One of the VMs was trashed when the host shut down; the other two were fine.
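For what it's worth, the fix boils down to telling each running VM to save its state (or, alternatively, to send the guest an ACPI power-button event) before the host goes down, hooked into the host's shutdown sequence. A minimal sketch, assuming the VMs run under the account executing the script:

```shell
#!/bin/sh
# Sketch of a host-shutdown hook: save the state of every running
# VirtualBox VM so a power loss doesn't trash them.

# VBoxManage prints running VMs as: "vm-name" {uuid}
# This strips everything but the quoted name.
vm_name() { sed 's/^"\(.*\)".*$/\1/'; }

save_all_vms() {
    VBoxManage list runningvms | vm_name |
    while read -r vm; do
        # savestate suspends the VM to disk; "acpipowerbutton" would
        # instead ask the guest OS to shut down cleanly (and needs the
        # guest to honor ACPI events)
        VBoxManage controlvm "$vm" savestate
    done
}

# Only act when VirtualBox is actually installed on this host
command -v VBoxManage >/dev/null 2>&1 && save_all_vms || true
```

Wired into an init script run at shutdown (before apcupsd powers the box off), this would have saved me the trashed VM.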

So on to restoring the trashed VM. The good news: my boxbackup repository was up to date. The bad news: it contained a copy of the crashed VM. I am a belt-and-suspenders kind of guy, so I have a manual local backup to check: yes, I have an image in the on-site backup which is a few months old, but that is OK, since these VMs do not change much over time. So, problem solved in the short term but not the long term.
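For the record, pulling a directory back out of boxbackup is done with its query tool; if memory serves, something along these lines (the store-side and local directory names here are illustrative, not my actual layout):

```shell
# Restore the backed-up VM directory to a scratch location, then inspect
# the image before copying anything back into place.
bbackupquery "restore VirtualBox-VMs /tmp/restore-vms" quit
```

Restoring to a scratch directory first matters here: since the repository also held the crashed copy, you want to look before you overwrite.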

In the long-term, I need to get some professional sys admin time or I need to change into my super tech costume and chase down these issues myself. I strongly favor the first option: the more I know, the more I value the knowledge of others. There is also the factor of money, though: the longer the Great Recession goes on, the less inclined I am to shell out real money unless I have to.


The 21st Century Data Center So Far: Boo

I bring up the plight of my desktop to emphasize that the rant that follows is not simply a screed against any particular sys admin, but an observation about the environment in which most sys admins have to do their jobs.

As we struggle to deliver software and services on our clients' hardware, we constantly run into misconfiguration of hardware, virtual hardware, operating system software, and services such as web servers and database servers.

We also run into poorly implemented and self-contradictory policy, which doesn't help and somehow offends me more: can't we at least agree on a usable definition of what we are trying to do?

As VMs become more and more common, and the ability to deploy them correctly more and more rare, we are being forced to return to the "my software, my hardware, my responsibility" model of the past, especially when we find that off-brand Unix variants such as AIX lag so woefully behind the current state of the art.

In the 1980s and 1990s, we used to drop "departmental servers" into our clients' work areas because the company mainframe was too expensive, too dedicated and too central to use. In what is now known as the Apple model, life was good: the client had one contact point and we had a known, stable environment.

In the 2000s, we tried to get with the program and use existing infrastructure such as database servers, DNS and DHCP. This was a nightmare: for one thing, every new IT administration seemed to want to do things differently: Microsoft! Open Source! Back to Microsoft, but maybe running some Open Source software on the Windows server! OK, how about thin clients which were sort of Windows? Oh, were you still using the old DNS? Sorry about that; wait, let's use Active Directory for authentication! Is it set up correctly? Who knows!


A Computer System of One's Own

Now we are worn out debugging other people's hardware configurations and system software deployments. We are looking to provide software-as-a-service on our hardware. We are currently mulling over the following options:

  1. We charge a monthly usage fee for access to a working system that is on our premises, under our control, and accessed over the wild and woolly Internet via a VPN.
  2. We charge a monthly usage fee for access to a working system on a host or hosts that we drop into your data center: we set it up, keep it up, and back it up; you provide power, A/C, and a lack of ambient water. Seriously: from experience, we can say that a flooded machine room is a no-go.

We are cautiously excited about The Cloud; we might very well end up using Amazon's offerings in this area, once we are sure that our privacy-conscious, mostly health-care clientele can dig it.

Internal IT has been very resistant so far, but we hope that accounting issues and procedural clarity will triumph. We tried to play nicely with the other children, but they kept peeing in the sandbox.
