Wednesday, September 26, 2012

Horse Trading, Deadlines & Budgets

Here is another take on my philosophy that the classical software engineering model is a poor fit for business process automation. This week's topic: horse trading, i.e., "you give me something and I'll give you something else."

As I understand it, this is the classical software engineering model:
  1. Encounter a problem which can be solved or ameliorated with IT
  2. Do extensive requirements gathering
  3. Define a clear and fixed scope-of-work
  4. Write a detailed design of an implementation which meets the requirements
  5. Create an implementation plan with dates and milestones
  6. Develop, debug, deploy
This is fine for many situations, especially situations which require the creation of something new that is complex and a team effort. I have participated in projects using this model in nearly every role and at nearly every level.

But I have found that this model is poorly suited to business process automation, partly because it is good at building something from scratch, while business process automation is almost always a renovation, not new construction.

A key way in which this model does not suit business process automation is this model's lack of accounting for horse trading, for the give-and-take required to deploy a solution in a working business.

(Horse trading also makes the apparent scope merely the tip of the iceberg, but we will cover that indirectly, below.)

Rather than define horse trading in the abstract, I will take the easy way out and give an example.

My firm specializes in medical information systems, particularly interfaces between medical information systems. Our client had an unusual situation: in addition to their Main Lab, they have a collection of loosely affiliated labs. To the outside world, this distinction is confusing and uninteresting, so this situation created a PR problem: why were some orders slower to process and harder to get results from?

The answer was the invisible divisions between the Main Lab and the Other Labs, but no one really wanted to know why; they just wanted better service. Once customer grumpiness reached critical levels, the project became a crisis. Since we specialize in the difficult and the critical, we were called in after decades of previous attempts had failed.

We were able to connect the three most important Other Labs, one at a time, quickly and cheaply. The secret? Horse trading and the flexibility horse trading requires.

In theory, the job is easy:
  1. encode Other Lab results in an industry-standard format (HL7)
  2. create a simple TCP/IP connection between us and Main Lab
  3. send message in HL7 over TCP/IP connection
Our competitors bid about $50K per connection. We could have done this part for $17K for all connections and still made money, but we knew the job was not this simple, or it would already have been done. We bid a base of $17K and about the same for each connection. We made money and got the job done, but here is how the project unfolded:
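The "easy" three-step job can be sketched in a few lines. This is a minimal illustration only, assuming Python; the lab names, field values and message IDs are invented, but the shape is standard: an HL7 v2 result message is a set of pipe-delimited segments, and it travels over TCP/IP wrapped in an MLLP envelope (a vertical-tab byte before the message, a file-separator plus carriage-return after).

```python
# Minimal sketch of steps 1-3: encode a result in HL7 v2 and frame it
# for a TCP/IP connection using MLLP. All message content is invented.

VT, FS, CR = b"\x0b", b"\x1c", b"\x0d"  # MLLP framing bytes

def build_oru_message(patient_id: str, test_code: str, value: str) -> str:
    """Assemble a bare-bones ORU^R01 (observation result) message."""
    segments = [
        "MSH|^~\\&|OTHERLAB|LABB|MAINLAB|HOSP|20120926120000||ORU^R01|MSG0001|P|2.3",
        f"PID|1||{patient_id}",
        f"OBR|1|||{test_code}",
        f"OBX|1|NM|{test_code}||{value}|mg/dL|||||F",
    ]
    return "\r".join(segments)  # HL7 v2 segments are CR-separated

def mllp_frame(message: str) -> bytes:
    """Wrap an HL7 message in an MLLP envelope for the TCP/IP link."""
    return VT + message.encode("ascii") + FS + CR

def mllp_unframe(data: bytes) -> str:
    """Strip the MLLP envelope on the receiving side."""
    assert data.startswith(VT) and data.endswith(FS + CR)
    return data[1:-2].decode("ascii")
```

Framing and encoding really are this simple, which is why the naive bids cluster around the "straight interface" price; everything expensive lives outside this sketch.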

Lab A only processed orders from the Main Lab's parent organization and had no internal computer system, so they wanted a simple workflow-support app in exchange for going along with the computerization. So in addition to the straight interface, we created app A, which is driven entirely by Main Lab orders for this lab and which lets Lab A see upcoming orders, confirm receipt of specimens, enter results and create local reports in case their customers want paper.

Lab B processed mostly orders from the Main Lab's parent organization and also had no internal computer system, but they had automated analysers from which they wanted results automatically entered. So in addition to the straight interface, we created app B, which is driven by the analyser output, has different reports and pulls patient demographics from the Main Lab to flesh out the reports.

Lab C processed mostly orders from outside the Main Lab's parent organization. As a research-oriented organization, they had their own database of results kept in a homegrown app. They wanted to subcontract for a better internal app and database, which we did. We also created app C, a much simpler app focused on importing results from their new internal app and allowing verification and electronic signing of the results, which were then passed along to the straight interface.

The project ended up giving everyone what they needed and being cheaper to boot. But the way the project played out does not fit the classical model at all.

Would a classical requirements-gathering process have given us the true scope beforehand? I doubt it, since the outside players have no incentive to reveal their agendas until they are sure that they will get what they want. Often players in business processes do not know what they want until the new situation becomes clearer, nor do they know what is actually possible until they see some progress.

So remember: not all project scope changes are caused by unethical consultants or incompetent engineers. Bear in mind that business process automation is almost always renovation and that renovation is almost always a compromise. Renovation often means working around outdated decisions and respecting the need for ongoing operations.

Some jobs are bigger than you can see, and some situations need a little good-faith horse trading in order to arrive at a mutually beneficial solution.

Wednesday, September 19, 2012

Getting There...Eventually

Something happened recently in my consulting practice that used to drive me crazy. But thanks to experience, ennui, or both, I don't object any more.

The issue was an interaction with a client. The situation is all too familiar: I am creating a tool for the client and he wants it good, fast and cheap, which is what clients seem to think "agile" means.

As part of this speed and lower cost, he keeps choosing the minimal solution to every problem. He isn't really interested in domain expertise or 30 years' experience creating these kinds of apps: he knows what he needs and he knows what he wants.

Specifically, he knew that he didn't need help choosing catalogue items on which to report, even though I have had to create multiple look-up tools in the past because choosing the exact catalogue item you mean is complex and difficult. Part of the complication is that the data set contains orders from the previous ordering system, which means that one needs to know the appropriate ordering system, based on order date.
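The date-dependence alone is easy to state but easy to overlook. A minimal sketch, assuming Python and an invented cutover date:

```python
from datetime import date

# Hypothetical date on which the current ordering system replaced the legacy one
CUTOVER = date(2010, 1, 1)

def catalogue_for(order_date: date) -> str:
    """Choose which catalogue an order's codes must be looked up in,
    based on when the order was placed."""
    return "current" if order_date >= CUTOVER else "legacy"
```

A report whose date range spans the cutover therefore needs look-ups in both catalogues at once, which is part of why a "just link to the catalogue" solution falls short.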

However, he knew that he did not want any bells or whistles, so version 0 was a hypertext link to the legacy web version of their catalogue. After all, he and his colleagues are familiar with the current catalogue, thank you very much.

He was able to find at least some legacy catalogue numbers, but with difficulty. Turns out he didn't know the current catalogue as well as he thought: version 1 added a link to the current web version of their catalogue.

As beta-testing went on, he complained about how hard it was to find the right items, especially if one wanted to report on dates which span the two ordering systems. I showed him one of my interactive look-up tools and he found it usable, but wanting. So version 2 adds a revamped version of my tool to the prototype.

Will we end up where I started, with a full-featured expert interactive look-up tool? Yes, I think that we will. Is this frustrating? No, not this time around: the client is happy because he is driving the development and is confident that he is not getting more programming than he needs or wants. I am happy because this is relatively untaxing work in a down economy; in fact, I suspect that this indirect route will end up paying me more than letting me do my thing would have.

Younger me would have chafed at the lack of respect and the taking of credit for my ideas and my work. Current me says: relax, keep your eyes on the prize and get your affirmation somewhere else. IT consulting is unlikely to be a good source of affirmation, at least in my experience.

Wednesday, September 12, 2012

Mission Creep in IT Consulting

I am an IT consultant specializing in the Health Care domain. When I say "IT consultant," many people think of this:

This amusing poster comes from a company whose posters generally delight me, except when I am part of the group being mocked. Stupid asymmetrical human psyche.

This poster is not only amusing, it is also somewhat apropos: sometimes consultants seem to embed themselves in their clients like ticks on a dog, staying on long after the original engagement and doing no-one-seems-sure-what.

(I would argue that when this happens, the client is mostly to blame, but then I am hardly objective on this issue.)

"Mission creep" is military jargon for gradually and unintentionally taking on more and more responsibility until you are stuck in a much larger job than expected. This often results in having too little time and too few resources, since the timeline and budget were established for the original mission, not the expanded mission.

(I am not a huge fan of using military jargon unless one is on active duty, as I do not want to trivialize what the active military goes through.)

I agree that letting the scope of a project balloon is a common problem and I agree that IT projects, especially ones run by consultants, are prone to this problem. But I want to point out that not all project expansion is bloat and not all consultants are maximizing their billable hours without regard to value or need.

In fact, I find that many of our projects involve horse-trading and, in order to succeed, the scope needs to expand.

In part, this is because there is a boolean aspect to success: either a software solution does the job (automates the process, etc.) or it doesn't. It is often not very helpful to partially automate a process. For example, if you can look up a code on one computer but then have to go to another computer to enter it, you are only a bit better off than if you had to look up the code in a book, and worse off than if you had memorized a few codes that would work.

In part, this is because requirements gathering is often obstructed or impossible. Often we do not get complete answers to our requirements gathering questions because those questions are asked in a vacuum of sorts (the software does not exist yet) or because those questions expose the answerer to potential grief from their boss.

Consider a prime example of project scope expansion from our practice: some years ago, we created a medical instrument results review interface for a client. It was a glorious success. We had estimated improvements in productivity and after a few weeks of operation, we examined the data to verify those gains. Our examination showed us no real gains.

So we observed the users for a few days and found out that they were still spending the bulk of their time fussing with the input to the machine. When we asked them why, they answered that the intake was problematic: tubes got stuck, or lost, or put in the wrong rack, etc. So instead of just reviewing the results in our software, they checked and rechecked the racks. In order to get them to stop, we added an "overdue" component which alerted them to late or missing tubes. Once they felt that our overdue module had proved itself, they trusted it enough to rely on it. We examined the logs for productivity gains and saw about half of what we expected.
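The heart of that overdue component is a small check. Here is a minimal sketch, assuming Python; the function name, data shapes and turnaround threshold are all invented for illustration:

```python
from datetime import datetime, timedelta

# Assumed turnaround time after which a tube with no result counts as overdue
TURNAROUND = timedelta(minutes=45)

def overdue_tubes(received, resulted, now):
    """Return the IDs of tubes received more than TURNAROUND ago that
    still have no result. `received` maps tube ID -> receipt time;
    `resulted` is the set of tube IDs that already have results."""
    return sorted(
        tube for tube, when in received.items()
        if tube not in resulted and now - when > TURNAROUND
    )
```

The point is not the code, which is trivial, but the trust: once the users believed the overdue list, they stopped walking over to recheck the racks.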

Back to the observation phase. This time, we found that slides were the issue. Problematic specimens are often put on slides for human review, and review takes place somewhere else. Since it was impossible to know that a slide was waiting, the users were either interrupted or interrupted themselves to go check for slides. In order to get them to stop interrupting themselves, we added notification of pending slide-review requests, so they could stay at the machine with confidence. Now we saw the improvement we expected, and then some.

But when we asked for the glowing review we knew we had earned, there was some abashed resistance: now that the process was so streamlined, the regulatory requirement to audit the interface's operation seemed...onerous. We added an automatic audit feature which produced an audit report and logged the electronic signature of the reviewer. NOW the users were completely happy.

Was this a case of needing to do the entire job, or a case of poorly managed project scope? We would argue the former. Was this a failure of requirements gathering, or a case of "I don't know what I want until I see what I have"? We would argue the latter.

To quote Tolkien's The Fellowship of the Ring, "Not all those who wander are lost"--and not all those who expand project scopes are unethical consultants.

Wednesday, September 5, 2012

Hope Springs Eternal: the Upgrade Treadmill

For my text today, I take this famous snippet from Alexander Pope:

Hope springs eternal in the human breast;
Man never Is, but always To be blest.
The soul, uneasy, and confin'd from home,
Rests and expatiates in a life to come.

I found the quotation, and this excellent explanation of it, online:

It means that no matter the circumstances, man will always hope for the best - thinks that better things will come down the road. We may not always act our best, but we have the potential to be better in the future. No matter how bad things have been, they can always get better. 

While I believe in the sentiment that things can always get better, I do not believe that all things always get better, or even that most things get somewhat better.

Specifically, I am appalled by what I call "the upgrade treadmill": the tendency to hope that a major upgrade to a large software package will fix its many inadequacies. To me, this hope can be restated like this:

Perhaps this time the same people and corporate culture which produced bad software last time will produce better software, because trying to fix issues in a large code base is somehow easier or more likely to succeed than getting it right the first time.

We see our clients jogging along on the upgrade treadmill, mortgaging the present for the hope of a better future, frantically moving forward on the treadmill but not in any other sense.

In some ways this is inspiring and touching: the triumph of hope over experience.

In other ways, it is frustrating and tiresome, because the treadmill tends to slip into a one-size-fits-all excuse: sure, I have a problem in the present, but I do not need to address this problem because this problem might be fixed in a future update. Who knows? It might. It might not. The track record of updates usually does not inspire confidence.

So keep hope alive, because what is life without hope? But don't let the upgrade treadmill distract you from realistic assessment of your situation and periodic review: sometimes the only way to make progress is to stop the mindless forward motion and step off the treadmill.