Wednesday, September 12, 2012

Mission Creep in IT Consulting

I am an IT consultant specializing in the Health Care domain. When I say "IT consultant," many people think of this:


This amusing poster comes from http://www.despair.com, whose posters generally delight me, except when I am part of the group being mocked. Stupid asymmetrical human psyche.

The poster is not only amusing, it is also somewhat apropos: sometimes consultants seem to embed themselves into their clients like ticks on a dog, staying on long after the original engagement and doing no-one-seems-sure-what.

(I would argue that when this happens, the client is mostly to blame, but then I am hardly objective on this issue.)

"Mission creep" is military jargon for gradually and unintentionally taking on more and more responsibility until you are stuck in a much larger job than expected. This often results having too little time and too few resources, since the time line and budget were established for the original mission, not the expanded mission.

(I am not a huge fan of using military jargon unless one is on active duty, as I do not want to trivialize what the active military goes through.)

I agree that letting the scope of a project balloon is a common problem and I agree that IT projects, especially ones run by consultants, are prone to this problem. But I want to point out that not all project expansion is bloat and not all consultants are maximizing their billable hours without regard to value or need.

In fact, I find that many of our projects involve horse-trading and, in order to succeed, the scope needs to expand.

In part, this is because there is a Boolean aspect to success: either a software solution does the job (automates the process, etc.) or it doesn't. It is often not very helpful to partially automate a process. For example, if you can look up a code on the computer but then have to go to another computer to enter it, you are only a bit better off than if you had to look up the code in a book, and worse off than if you had memorized a few codes that work.

In part, this is because requirements gathering is often obstructed or impossible. Often we do not get complete answers to our requirements gathering questions because those questions are asked in a vacuum of sorts (the software does not exist yet) or because those questions expose the answerer to potential grief from their boss.

Consider a prime example of project scope expansion from our practice: some years ago, we created a medical instrument results review interface for a client. It seemed a glorious success. We had estimated the improvements in productivity, and after a few weeks of operation we examined the data to verify those gains. Our examination showed no real gains.

So we observed the users for a few days and found that they were still spending the bulk of their time fussing with the input to the machine. When we asked them why, they answered that the intake was problematic: tubes got stuck, or lost, or put in the wrong rack, etc. So instead of just reviewing the results in our software, they checked and rechecked the racks. To get them to stop, we added an "overdue" component which alerted them to late or missing tubes. Once they felt that the overdue module had proved itself, they trusted it enough to rely on it. We examined the logs to see the productivity gains and saw about half of what we expected.
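The idea behind the overdue module was nothing exotic. Here is a minimal sketch of the core check (Python, with hypothetical names and a made-up turnaround threshold, not our production code): flag any tube that was loaded but has produced no result within its expected turnaround time.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical record for a specimen tube loaded onto the instrument.
@dataclass
class Tube:
    tube_id: str
    loaded_at: datetime
    resulted_at: Optional[datetime] = None  # None until the instrument reports a result

# Illustrative turnaround time; in practice this would be configured per test type.
EXPECTED_TURNAROUND = timedelta(minutes=45)

def overdue_tubes(tubes, now=None):
    """Return the tubes that were loaded but have no result past the expected turnaround."""
    now = now or datetime.now()
    return [
        t for t in tubes
        if t.resulted_at is None and now - t.loaded_at > EXPECTED_TURNAROUND
    ]

if __name__ == "__main__":
    # Example: one tube resulted on time, one is stuck somewhere.
    now = datetime(2012, 9, 12, 10, 0)
    tubes = [
        Tube("A-001", loaded_at=now - timedelta(minutes=30),
             resulted_at=now - timedelta(minutes=5)),
        Tube("A-002", loaded_at=now - timedelta(hours=2)),  # never resulted
    ]
    for t in overdue_tubes(tubes, now):
        print(f"OVERDUE: {t.tube_id} loaded at {t.loaded_at:%H:%M}")
```

Trivial as the check is, it was the piece the users needed before they would trust the screen in front of them instead of the racks behind them.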

Back to the observation phase. This time, we found that slides were the issue. Problematic specimens are often put on slides for human review, and that review takes place somewhere else. Since it was impossible to know that a slide was waiting, the user was either interrupted or interrupted herself to go check for slides. To get them to stop interrupting themselves, we added notification of pending slide review requests, so they could stay at the machine with confidence. Now we saw the improvement we expected, and then some.

But when we asked for the glowing review we knew we had earned, there was some abashed resistance: now that the process was so streamlined, the regulatory requirement to audit the interface's operation seemed...onerous. We added an automatic audit feature which produced an audit report and logged the electronic signature of the reviewer. NOW the users were completely happy.

Was this a case of needing to do the entire job, or a case of poorly managed project scope? We would argue the former. Was this a case of "I don't know what I want until I see what I have," or a failure of the requirements gathering? Again, we would argue the former.

To quote Tolkien, "Not all those who wander are lost" (The Fellowship of the Ring)--and not all those who expand project scopes are unethical consultants.
