Wednesday, October 3, 2012

Logical and Stupid

I am a big fan of logic of many types. I understand predicate logic, boolean logic, and the rigorous application of value functions to alternatives, all of which give logic a solid place in decision-making.

As big a fan of logic as I am, I always apply the final test: "the proof of the pudding is in the eating." In other words, it matters where you end up, even if your chain of reasoning is flawless. If you end up with a stupid conclusion, it matters little how you reached that stupid conclusion.

This is on my mind because I keep running into a stupid conclusion that was arrived at logically. Mercifully, this particular stupidity rarely ruins my day, but it certainly ruins the days of many others.

The day-ruining takes the form of endlessly refusing to address any immediate issues, because only big issues can be taken up the chain and local power seems to be non-existent. When I pushed back, in different contexts at different organizations, here is the reasoning that was offered as an explanation:
  1. Premise: the view from 50,000 feet/big picture is always clearer
  2. Therefore all decisions should be made by senior staff
  3. But senior staff cannot get bogged down in details
  4. So only big decisions are worth making
  5. Therefore a collection of smaller decisions, even if that collection has an enormous collective impact, should be ignored forever
I can see how the recent recession and shortness of money have caused organizations to tighten purse strings. I realize that pushing decisions up a level is chic. I know all too well that many low-level managers do not have great decision-making skills. But ending up in a place where nothing is possible, and where ever-more out-of-touch senior staff make an ever-higher percentage of decisions, does not appear to be going well.

On the bright side, the resulting not-to-be-admitted disasters seem to offer decent opportunities for a consultant such as myself, so perhaps I should just keep my big mouth shut.

References for the interested:
  • http://www.phrases.org.uk/meanings/proof-of-the-pudding.html
  • http://en.wikipedia.org/wiki/Hypothetical_syllogism
  • http://en.wikipedia.org/wiki/Predicate_logic
  • http://en.wikipedia.org/wiki/Boolean_logic

Wednesday, September 26, 2012

Horse Trading, Deadlines & Budgets

Here is another take on my philosophy that the classical software engineering model is a poor fit for business process automation. This week's topic: horse trading, i.e. "you give me something and I'll give you something else."

As I understand it, this is the classical software engineering model:
  1. Encounter a problem which can be solved or ameliorated with IT
  2. Do extensive requirements gathering
  3. Define a clear and fixed scope-of-work
  4. Write a detailed design of an implementation which meets the requirements
  5. Create an implementation plan with dates and milestones
  6. Develop, debug, deploy
This is fine for many situations, especially situations which require the creation of something new that is complex and a team effort. I have participated in projects using this model in nearly every role and at nearly every level.

But I find that this model is poorly suited to business process automation, partly because this model is good at building something from scratch, while business process automation is almost always a renovation, not new construction.

A key way in which this model fails to suit business process automation is its lack of accounting for horse trading, for the give-and-take required to deploy a solution in a working business.

(Horse trading also makes the apparent scope merely the tip of the iceberg, but we will cover that indirectly, below.)

Rather than define horse trading in the abstract, I will take the easy way out and give an example.

My firm specializes in medical information systems, particularly interfaces between medical information systems. Our client had an unusual situation: in addition to their Main Lab, they have a collection of loosely affiliated labs. To the outside world, this distinction is confusing and uninteresting, so this situation created a PR problem: why were some orders slower to process and harder to get results from?

The answer was the invisible divisions between the Main Lab and the Other Labs, but no one really wanted to know why; they just wanted better service. Once the customer grumpiness reached critical levels, the project became a crisis. Since we specialize in the difficult and the critical, we were called in because decades of previous attempts had failed.

We were able to connect the three most important Other Labs, one at a time, quickly and cheaply. The secret? Horse trading and the flexibility horse trading requires.

In theory, the job is easy:
  1. encode Other Lab results in an industry-standard format (HL7)
  2. create a simple TCP/IP connection between us and the Main Lab
  3. send the HL7 messages over the TCP/IP connection
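To make steps 2 and 3 concrete, here is a minimal sketch in Python of what "send an HL7 message over a TCP/IP connection" amounts to, using the standard MLLP framing bytes. The endpoint and the message contents are illustrative placeholders, not the client's actual values:

    import socket

    # MLLP framing for HL7 over TCP: <VT> message <FS><CR>
    VT, FS, CR = b"\x0b", b"\x1c", b"\x0d"

    def send_hl7(host, port, message):
        """Send one HL7 v2 message over a TCP/MLLP connection; return the raw ACK."""
        with socket.create_connection((host, port), timeout=10) as sock:
            sock.sendall(VT + message.encode("ascii") + FS + CR)
            return sock.recv(65536)  # the Main Lab's ACK (or NAK)

    # A hypothetical ORU (result) message; real segment values would come
    # from the Other Lab's data and the endpoint from the Main Lab.
    oru = "\r".join([
        "MSH|^~\\&|OTHERLAB|LABA|MAINLAB|HOSP|201209260800||ORU^R01|MSG0001|P|2.3",
        "PID|1||123456||DOE^JANE",
        "OBR|1||ORD789|CBC^COMPLETE BLOOD COUNT",
        "OBX|1|NM|WBC^WHITE CELL COUNT||7.2|10*3/uL|4.0-11.0|N|||F",
    ])
    print(send_hl7("mainlab.example.org", 2575, oru))

That part really is about a dozen lines. Everything else in this story is the horse trading around it.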
Our competitors bid about $50K per connection. We could have done this part for $17K for all connections and still made money, but we knew that the job was not this simple, or it would have already been done. We bid a base of $17K and about the same for each connection. We made money and got the job done, but here is how the project unfolded:

Lab A only processed orders from the Main Lab's parent organization and they had no internal computer system, so they wanted a simple work-flow support app in exchange for going along with the computerization. So in addition to the straight interface, we created app A, which is driven entirely by Main Lab orders for this lab and which lets Lab A see upcoming orders, confirm receipt of specimens, enter results and create local reports in case their customers want paper.

Lab B processed mostly orders from the Main Lab's parent organization and they also had no internal computer system, but they had automated analysers from which they wanted results automatically entered. So in addition to the straight interface, we created app B, which is driven by the analyser output, has different reports and pulls patient demographics from the Main Lab to flesh out the reports.

Lab C processed mostly orders from outside the Main Lab's parent organization. As a research-oriented organization, they had their own database of results kept in a homegrown app. They wanted to subcontract for a better internal app and database, which we built. We also created app C, a much simpler app mostly concerned with importing results from their new internal app, allowing for verification and electronic signing of the results, which were then passed along to the straight interface.

The project ended up giving everyone what they needed and being cheaper to boot. But the way the project played out does not fit the classical model at all.

Would a classical requirements-gathering process have given us the true scope beforehand? I doubt it, since the outside players have no incentive to reveal their agendas until they are sure that they will get what they want. Often players in business processes do not know what they want until the new situation becomes clearer, nor do they know what is actually possible until they see some progress.

So remember: not all project scope changes are caused by unethical consultants or incompetent engineers. Bear in mind that business process automation is almost always renovation and that renovation is almost always a compromise. Renovation often means working around outdated decisions and respecting the need for on-going operations.

Some jobs are bigger than you can see and some situations need a little good-faith horse trading in order to arrive at a mutually beneficial solution.

Wednesday, September 19, 2012

Getting There...Eventually

Something happened recently in my consulting practice that used to drive me crazy. But thanks to experience, ennui, or both, I don't object any more.

The issue arose in an interaction with a client. The situation is all too familiar: I am creating a tool for the client and he wants it good, fast & cheap, which is what clients seem to think "agile" means.

As part of this speed and lower cost, he keeps choosing the minimal solution to every problem. He isn't really interested in domain expertise or 30 years' experience creating these kinds of apps: he knows what he needs and he knows what he wants.

Specifically, he knew that he didn't need help choosing catalogue items on which to report, even though I have had to create multiple look-up tools in the past because choosing the exact catalogue item you mean is complex and difficult. Part of the complication is that the data set contains orders from the previous ordering system, which means that one needs to know the appropriate ordering system, based on order date.
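To illustrate why the order date matters, here is a minimal sketch of the routing any such look-up must do before it can even search; the cutover date and table shapes are assumptions for illustration, not the client's actual data:

    from datetime import date

    # Hypothetical cutover between the legacy and current ordering systems.
    CUTOVER = date(2010, 1, 1)

    def catalogue_for(order_date):
        """Pick which catalogue a code must be resolved against."""
        return "legacy" if order_date < CUTOVER else "current"

    def lookup(code, order_date, legacy, current):
        table = legacy if catalogue_for(order_date) == "legacy" else current
        return table.get(code)  # None: the code may belong to the other era

A report that spans the cutover has to run the look-up against both catalogues and reconcile the results, which is where the real complexity lives.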

However, he knew that he did not want any bells or whistles, so version 0 was a hypertext link to the legacy web version of their catalogue. After all, he and his colleagues are familiar with the current catalogue, thank you very much.

He was able to find at least some legacy catalogue numbers, but with difficulty. Turns out he didn't know the current catalogue as well as he thought: version 1 added a link to the current web version of their catalogue.

As beta-testing went on, he complained about how hard it was to find the right items, especially if one wanted to report on dates which span the two ordering systems. I showed him one of my interactive look-up tools and he found it usable, but wanting. So version 2 is adding a revamped version of my tool to the prototype.

Will we get to where I started, a deeply expert, interactive look-up tool? Yes, I think that we will. Is this frustrating? No, not this time around: the client is happy because he is driving the development and is confident that he is not getting more programming than he needs or wants. I am happy because this is relatively untaxing work in a down economy; in fact, I suspect that this indirect route will end up paying me more than letting me do my thing would have.

Younger me would have chafed at the lack of respect and the taking of credit for my ideas and my work. Current me says: relax, keep your eyes on the prize and get your affirmation somewhere else. IT consulting is unlikely to be a good source of affirmation, at least in my experience.

Wednesday, September 12, 2012

Mission Creep in IT Consulting

I am an IT consultant specializing in the Health Care domain. When I say "IT consultant," many people think of this:

[Image: a Despair.com demotivational poster mocking consultants]

This amusing poster comes from http://www.despair.com, whose posters generally delight me, except when I am part of the group being mocked. Stupid asymmetrical human psyche.

This poster is not only amusing, it is also somewhat apropos: sometimes consultants seem to embed themselves in their clients like ticks on a dog, staying on long after the original engagement and doing no-one-seems-sure-what.

(I would argue that when this happens, the client is mostly to blame, but then I am hardly objective on this issue.)

"Mission creep" is military jargon for gradually and unintentionally taking on more and more responsibility until you are stuck in a much larger job than expected. This often results having too little time and too few resources, since the time line and budget were established for the original mission, not the expanded mission.

(I am not a huge fan of using military jargon unless one is on active duty, as I do not want to trivialize what the active military goes through.)

I agree that letting the scope of a project balloon is a common problem and I agree that IT projects, especially ones run by consultants, are prone to this problem. But I want to point out that not all project expansion is bloat and not all consultants are maximizing their billable hours without regard to value or need.

In fact, I find that many of our projects involve horse-trading and, in order to succeed, the scope needs to expand.

In part, this is because there is a boolean aspect to success: either a software solution does the job (automates the process, etc) or it doesn't. It is often not very helpful to partially automate a process. For example, if you can look up the code on the computer, but then have to go to another computer to enter that code, you are only a bit better off than if you had to look up the code in a book, and worse off than if you had memorized a few codes that would work.

In part, this is because requirements gathering is often obstructed or impossible. Often we do not get complete answers to our requirements gathering questions because those questions are asked in a vacuum of sorts (the software does not exist yet) or because those questions expose the answerer to potential grief from their boss.

Consider a prime example of project scope expansion from our practice: some years ago, we created a medical instrument results review interface for a client. It was a glorious success. We had estimated improvements in productivity and after a few weeks of operation, we examined the data to verify those gains. Our examination showed us no real gains.

So we observed the users for a few days and found out that they were still spending the bulk of their time fussing with the input to the machine. When we asked them why, they answered that the intake was problematic: tubes got stuck, or lost, or put in the wrong rack, etc. So instead of just reviewing the results in our software, they checked and rechecked the racks. In order to get them to stop, we added an "overdue" component which alerted them to late or missing tubes. Once they felt that our overdue module had proved itself, they trusted it enough to rely on it. We examined the logs to see productivity gains and saw about half of what we expected.
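The overdue check itself is simple enough to sketch. Here is a minimal illustration in Python; the field names and the 45-minute window are assumptions for the sake of the example, not the actual implementation:

    from datetime import datetime, timedelta

    # Assumed window after which a tube without a result is suspect.
    EXPECTED = timedelta(minutes=45)

    def overdue_tubes(pending, now):
        """Return tubes received more than EXPECTED ago with no result yet."""
        return [t for t in pending
                if t.get("result") is None
                and now - t["received_at"] > EXPECTED]

With an alert like this on screen, the users could stop rechecking racks and let the system tell them when a tube needed attention.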

Back to the observation phase. This time, we found out that slides were the issue. Problematic specimens are often put on slides for human review. Review takes place somewhere else. Since it was impossible to know that a slide awaited, the users were either interrupted or interrupted themselves to go check for slides. In order to get them to stop interrupting themselves, we added notification of pending slide review requests, so they could stay at the machine with confidence. Now we saw the improvement we expected, and then some.

But when we asked for the glowing review we knew we had earned, there was some abashed resistance: now that the process was so streamlined, the regulatory requirement to audit the interface's operation seemed...onerous. We added an automatic audit feature which produced an audit report and logged the electronic signature of the reviewer. NOW the users were completely happy.

Was this a case of needing to do the entire job, or a case of poorly managed project scope? We would argue the former. Was this a failure of requirements gathering, or a case of "I don't know what I want until I see what I have"? We would argue the latter.

To quote Tolkien, "Not all those who wander are lost" (The Fellowship of the Ring)--and not all those who expand project scopes are unethical consultants.

Wednesday, September 5, 2012

Hope Springs Eternal: the Upgrade Treadmill

For my text today, I take this famous snippet from Alexander Pope:

Hope springs eternal in the human breast;
Man never Is, but always To be blest.
The soul, uneasy and confin'd from home,
Rests and expatiates in a life to come.

I found this quotation here:
http://www.goodreads.com/quotes/10692-hope-springs-eternal-in-the-human-breast-man-never-is

I found this excellent explanation of the quotation here:
http://answers.yahoo.com/question/index?qid=20060809101312AANcoOn

"It means that no matter the circumstances, man will always hope for the best - thinks that better things will come down the road. We may not always act our best, but we have the potential to be better in the future. No matter how bad things have been, they can always get better."

While I believe in the sentiment, that things can always get better, I do not believe that all things always get better, or even that most things get somewhat better.

Specifically, I am appalled by what I call "the upgrade treadmill." By this, I mean the tendency to hope that a major upgrade to a large software package will fix its many inadequacies. To me, this hope can be restated like this:

Perhaps this time, the same people and corporate culture which produced bad software last time will produce better software, because fixing issues in a large code base is somehow easier, or more likely to succeed, than getting it right the first time was.

We see our clients jogging along on the upgrade treadmill, mortgaging the present for the hope of a better future, frantically moving forward on the treadmill but not in any other sense.

In some ways this is inspiring and touching: the triumph of hope over experience.

In other ways, it is frustrating and tiresome, because the treadmill tends to slip into a one-size-fits-all excuse: sure, I have a problem in the present, but I do not need to address it because it might be fixed in a future update. Who knows? It might. It might not. The track record of updates usually does not inspire confidence.

So keep hope alive, because what is life without hope? But don't let the upgrade treadmill distract you from realistic assessment of your situation and periodic review: sometimes the only way to make progress is to stop the mindless forward motion and step off the treadmill.

Wednesday, August 29, 2012

Why Is Interfacing So Hard?

In theory, we provide high-end, sophisticated data analysis and reporting tools. In practice, we provide system-to-system interfaces with data analysis attached. This is because system-to-system interfacing is really hard and is usually done really badly.

We are often asked "why is this so hard?" (sometimes asked as "why were you able to do that when our people could not?" and "why did that take so long?"). So this post is all about why interfacing is hard.

Conceptually, interfacing is easy: one system ("the source") exports a datagram (a complete unit of information) and the other system ("the destination") imports the datagram. In theory, there are many "standard" formats for electronic data interchange; if both systems support at least one common format, all should be well.

But all is rarely well in the interface domain. Why not?

The most common source of pain is the mismatch of data models. This sounds complicated, but it isn't and is easily explained with an example:

Consider two medical information systems which we will call A and B. System A is patient-based: the patient is the primary index and activity hangs off patients. System B, on the other hand, is visit-based: the visit is the primary index and activity hangs off visits.

System A: patient->visit->activity

System B: visit->patient->activity

The patient-centered data model is better for patient-centered analysis: it is easy to get all the data associated with a given patient for a particular time period. But the visit-centered model is better for keeping track of the current state of the system.

Regardless of which model is "better," the interfacing is complicated by the fact that the frames of reference are different. Let us work through an example.

  1. System A exports a datagram which contains a visit. System A is patient-based, so the visit is exported as a small part of the patient's data.
  2. System B imports the datagram
    1. translates the datagram into its native System B format
    2. inserts the contents of the datagram into its database
The tricky part is how to resolve ambiguity: System B is visit-based, so if there are issues with the patient demographics, who cares? We can insert the visit and worry about the patient later. But if there are ambiguities with the visit, that is fatal to System B. However, System A has the opposite worldview: so long as the patient is properly identified, the visit can ride along. So System A may have bogus visits (duplicates, unmerged visits, etc) which will choke System B, while System B may have bogus patients (duplicates, patients identified as both Bill and William, etc) which will choke System A.
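To make the mismatch concrete, here is a minimal sketch of the kind of re-keying an intermediate system has to do: pivoting a patient-keyed export into visit-keyed records and refusing to pass along ambiguous visits. The record shapes and field names are assumptions for illustration, not any vendor's actual format:

    def pivot_patient_to_visit(patient_record):
        """Re-key one System A (patient-based) export into System B
        (visit-based) records, rejecting ambiguous visits up front."""
        out, seen = [], set()
        for visit in patient_record.get("visits", []):
            vid = visit.get("visit_id")
            if not vid or vid in seen:
                # Fatal for a visit-based system: park the record for
                # manual review rather than insert a bogus visit.
                raise ValueError("ambiguous visit for patient %s: %r"
                                 % (patient_record["patient_id"], vid))
            seen.add(vid)
            out.append({
                "visit_id": vid,
                # Demographics ride along; System B can clean them up later.
                "patient": {"patient_id": patient_record["patient_id"],
                            "name": patient_record.get("name")},
                "activity": visit.get("activity", []),
            })
        return out

The loop is not the hard part; the hard part is deciding, for each kind of ambiguity, which end of the interface gets to treat it as fatal.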

When the interfacing goes awry, both vendors will be huffy: our system is fine, our export-or-import is correct, the problem must be on the other end. And they are both sort of right.

In our experience, this is where interfacing often goes off the rails: when a mismatch between the two systems would require one end or the other to bend. Not only do most vendors defend their model to the death, but many vendors can only imagine their own model: any other model is "wrong." The customer gets caught in the middle.

This is where we come in. If all goes smoothly, an interface is no more than a way to convey an export from one system to be imported into another system. However, in the real world, we have often had to build intermediate systems which take in data with one model, transform that data into another model and export from the transformed model.

So if your systems "don't talk to each other," data model mismatch is a good bet as the culprit. And if your internal group or consultant or vendor quotes a larger-than-expected cost to fix the issue, there is a chance that you are not being ripped off; instead, if you are lucky, you are being forced to pay to bridge the models.

Wednesday, August 22, 2012

Paperless, ha! or, Your Seams are Showing

One of our clients recently had to back off yet another paperless initiative. We are often asked about this very thing: why is it so hard to go paperless, even here in the early days of the 21st century? We thought that this particular instance is a good illustration of a general problem.

In this case, there are many computer systems involved. There is either no integration between these systems or rather modest integration. Here is the process in question:

  1. The customer either uses a third-party app to create an order, which is printed out, or writes directly on a piece of paper, or uses the preferred app, which creates an order, transmits it electronically and prints out the paper.
  2. The paper order is brought to an intake center. In theory, the paper is just a security blanket and the order ID brings up an electronic version of the order. The intake is done, which creates an order in the system used to process the order.
  3. The results of intake and the electronic order arrive in the processing system. The order is processed. The resulting status is sent to yet another system, the one which supports status lookup.
Sounds a bit more complicated than one would like. There are more and different systems than one might expect. But historical accident and practical considerations make the process a bit more complex than theory would dictate.

In theory, the integration across systems should be real-time and seamless, which would make the number of actual systems in play almost irrelevant, so long as they all work together. Poor theory, it never gets its way in the IT world.

So why does the paper still exist? Well, consider a real-life scenario from the very recent past:
  • The customer creates an order for items I1, I2, I3, I4, I5
  • The intake system does not understand I5, leaving I1, I2, I3 & I4
  • The processing system does not understand I4, leaving I1, I2 & I3
  • The status system proudly displays an order for I1, I2 & I3
  • Irate customer calls demanding to know what happened to I4 & I5
  • Baffled customer support has no idea those items were ever ordered
  • The customer is very dissatisfied
  • We are called in to help investigate and find item rejections in logs
  • The client sees how this is happening and re-instates the paper order
So the paper copy of the order is once again riding along with the rest of the material. Now when the same thing happens, the process can compensate: instead of an entry in an interface log somewhere, the intake clerk can see that item I5 is an old code for which there is a new code, and the receiving clerk can see that I4 conflicts with I3 and that the customer must have meant to order something else entirely.

In this case, the paper allows the original intent to be accessible at every stage to all the interested parties, in a form everyone can read. If the integration of the various systems were perfect, were "seamless," then there would be no job for the paper to do.

Alas, the integration is imperfect and the seams are very much in evidence. So the paper stays, for now. But pity the poor systems implementer who is supposed to fix this problem, because that job is very hard. Electronic orders arrive in the background; there is no user interface for system-to-system interfaces, and there is no user looking when the order is being processed. So how best to flag orders with issues? Whose responsibility is it to fix the issues?

In our practice, we find that logging is good and flagging is appreciated, but documentation is king: we put the error message as a comment on the order so that, at every stage, everyone can see what happened. Specifically, the users who understand the process can see the messages, instead of sys admins who can rarely do anything about business process issues.
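As a minimal sketch of that idea, here is how rejections might become comments that travel with the order instead of log entries no one reads; the order shape and catalogue are illustrative assumptions:

    def process_items(order, catalogue):
        """Pass known items through; turn each rejection into a
        human-readable comment on the order itself."""
        kept, comments = [], list(order.get("comments", []))
        for item in order["items"]:
            if item in catalogue:
                kept.append(item)
            else:
                comments.append("Item %s rejected at intake: unknown or "
                                "retired code; please verify intent." % item)
        return {**order, "items": kept, "comments": comments}

    order = {"id": "ORD-42", "items": ["I1", "I2", "I3", "I4", "I5"]}
    print(process_items(order, catalogue={"I1", "I2", "I3"}))

Run against the scenario above, the comments for I4 and I5 ride along with the order, so the intake clerk and customer support see the same story the interface saw.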

We suspect that so long as system integration has seams, critical processes will either have paper or regular failure. So don't sell that paper company stock just yet.