
Wednesday, October 3, 2012

Logical and Stupid

I am a big fan of logic of many types. I understand predicate logic, boolean logic and the rigorous application of value functions to alternatives, giving logic a solid place in decision-making.

As big a fan of logic as I am, I always apply the final test: "the proof of the pudding is in the eating." In other words, it matters where you end up, even if your chain of reasoning is flawless. If you end up with a stupid conclusion, it matters little how you reached that stupid conclusion.

This is on my mind because I keep running into a stupid conclusion that was arrived at logically. Mercifully, this particular stupidity rarely ruins my day, but it certainly ruins the days of many others.

The day-ruining takes the form of an endless refusal to address any immediate issue, because only big issues can be taken up the chain and local power seems to be non-existent. When I pushed back, in different contexts in different organizations, here is the reasoning that was offered as an explanation:
  1. Premise: the view from 50,000 feet/big picture is always clearer
  2. Therefore all decisions should be made by senior staff
  3. But senior staff cannot get bogged down in details
  4. So only big decisions are worth making
  5. Therefore a collection of smaller decisions, even if that collection has an enormous collective impact, should be ignored forever
I can see how the recent recession and the shortness of money have caused organizations to tighten their purse strings. I realize that pushing decisions up a level is chic. I know all too well that many low-level managers do not have great decision-making skills. But ending up in a place where nothing is possible and ever-more out-of-touch senior staff make an ever-higher percentage of decisions does not appear to be working out well.

On the bright side, the resulting not-to-be-admitted disasters seem to offer decent opportunities for a consultant such as myself, so perhaps I should just keep my big mouth shut.

References for the interested:
  • http://www.phrases.org.uk/meanings/proof-of-the-pudding.html
  • http://en.wikipedia.org/wiki/Hypothetical_syllogism
  • http://en.wikipedia.org/wiki/Predicate_logic
  • http://en.wikipedia.org/wiki/Boolean_logic

Wednesday, September 26, 2012

Horse Trading, Deadlines & Budgets

Here is another take on my philosophy that the classical software engineering model is a poor fit for business process automation. This week's topic: horse trading, i.e., "you give me something and I'll give you something else."

As I understand it, this is the classical software engineering model:
  1. Encounter a problem which can be solved or ameliorated with IT
  2. Do extensive requirements gathering
  3. Define a clear and fixed scope-of-work
  4. Write a detailed design of an implementation which meets the requirements
  5. Create an implementation plan with dates and milestones
  6. Develop, debug, deploy
This is fine for many situations, especially situations which require the creation of something new that is complex and a team effort. I have participated in projects using this model in nearly every role and at nearly every level.

But I have found that this model is poorly suited to business process automation, partly because this model is good at building something from scratch, while business process automation is almost always a renovation, not new construction.

A key way in which this model does not suit business process automation is its lack of accounting for horse trading, for the give-and-take required to deploy a solution in a working business.

(Horse trading also makes the apparent scope merely the tip of the iceberg, but we will cover that indirectly, below.)

Rather than define horse trading in the abstract, I will take the easy way out and give an example.

My firm specializes in medical information systems, particularly interfaces between medical information systems. Our client had an unusual situation: in addition to their Main Lab, they have a collection of loosely affiliated labs. To the outside world, this distinction is confusing and uninteresting, so this situation created a PR problem: why were some orders slower to process and harder to get results from?

The answer was the invisible divisions between the Main Lab and the Other Labs, but no one really wanted to know why; they just wanted better service. Once the customer grumpiness reached critical levels, the project became a crisis. Since we specialize in the difficult and the critical, we were called in after decades of previous attempts had failed.

We were able to connect the three most important Other Labs, one at a time, quickly and cheaply. The secret? Horse trading and the flexibility horse trading requires.

In theory, the job is easy (see the sketch after this list):
  1. encode Other Lab results in an industry-standard format (HL7)
  2. create a simple TCP/IP connection between us and Main Lab
  3. send message in HL7 over TCP/IP connection
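For the curious, here is roughly what step 3 looks like in code. This is a minimal sketch in Python, assuming the common MLLP framing for HL7 over TCP/IP; the host, port and sample result message are made up for illustration and are not the client's actual configuration.

import socket

# MLLP (Minimal Lower Layer Protocol) framing bytes commonly used for HL7 over TCP/IP.
MLLP_START = b"\x0b"
MLLP_END = b"\x1c\x0d"

def send_hl7(message, host, port):
    """Wrap an HL7 v2 message in MLLP framing, send it, and return the raw acknowledgement."""
    framed = MLLP_START + message.replace("\n", "\r").encode("utf-8") + MLLP_END
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(framed)
        ack = sock.recv(4096)  # the receiving system replies with an HL7 ACK message
    return ack.strip(MLLP_START + MLLP_END)

# Illustrative ORU (result) message; the segments and values are made up.
sample_result = "\r".join([
    "MSH|^~\\&|OTHERLAB|LABA|MAINLAB|HOSP|20120926120000||ORU^R01|MSG0001|P|2.3",
    "PID|1||123456^^^MRN||DOE^JANE",
    "OBR|1|||CBC^COMPLETE BLOOD COUNT",
    "OBX|1|NM|WBC^White Blood Cells||6.1|10*3/uL|4.0-11.0|N|||F",
])

if __name__ == "__main__":
    print(send_hl7(sample_result, "mainlab.example.org", 2575))  # placeholder host and port

The point of showing it is how small it is: the straight interface really is the easy part, which is exactly why it is never the whole job.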
Our competitors bid about $50K per connection. We could do this part for $17K for all connections and still have made money, but we knew that the job was not this simple, or it would have already been done. We bid a base of $17K and about the same for each connection. We made money and got the job done, but here is how the project unfolded:

Lab A only processed orders from the Main Lab's parent organization and they had no internal computer system, so they wanted a simple workflow-support app in exchange for going along with the computerization. So in addition to the straight interface, we created app A, which is driven entirely from Main Lab orders for this lab and which lets Lab A see upcoming orders, confirm receipt of specimens, enter results and create local reports in case their customers want paper.

Lab B processed mostly orders from the Main Lab's parent organization and they also had no internal computer system, but they had automated analysers from which they wanted results automatically entered. So in addition to the straight interface, we created app B which is driven off of the analyser output, has different reports and pulls patient demographics from the Main Lab to flesh out the reports.

Lab C processed mostly orders from outside the Main Lab's parent organization. As a research-oriented organization, they had their own database of results kept in a homegrown app. They wanted to subcontract for a better internal app and database, which we did. We also created app C, a much simpler app mostly about importing results from their new internal app, allowing for verification and electronic signing of the results which were then passed along to the straight interface.

The project ended up giving everyone what they needed and being cheaper to boot. But the way in which the project played out does not fit the classical model at all.

Would a classical requirements-gathering process have given us the true scope beforehand? I doubt it, since the outside players have no incentive to reveal their agendas until they are sure that they will get what they want. Often players in business processes do not know what they want until the new situation becomes clearer, nor do they know what is actually possible until they see some progress.

So remember: not all project scope changes are caused by unethical consultants or incompetent engineers. Bear in mind that business process automation is almost always renovation and that renovation is almost always a compromise. Renovation often means working around outdated decisions and respecting the need for on-going operations.

Some jobs are bigger than you can see and some situations need a little good-faith horse trading in order to arrive at a mutually beneficial solution.

Wednesday, September 19, 2012

Getting There...Eventually

Something happened recently in my consulting practice that used to drive me crazy. But thanks to experience, ennui, or both, I don't object any more.

The issue was an interaction with a client. The situation is all too familiar: I am creating a tool for the client and he wants it good, fast & cheap, which is what clients seem to think "agile" means.

As part of this speed and lower cost, he keeps choosing the minimal solution to every problem. He isn't really interested in domain expertise or 30 years' experience creating these kinds of apps: he knows what he needs and he knows what he wants.

Specifically, he knew that he didn't need help choosing catalogue items on which to report, even though I have had to create multiple look-up tools in the past, because choosing the exact catalogue item you mean is complex and difficult. Part of the complication is that the data set contains orders from the previous ordering system, which means that one needs to know which ordering system applies, based on the order date.
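To make the date problem concrete, here is a minimal sketch of the kind of logic a look-up tool needs; the cutover date, catalogue contents and function names are hypothetical, not the client's actual systems.

from datetime import date

# Hypothetical cutover date between the legacy and current ordering systems.
CUTOVER_DATE = date(2010, 7, 1)

# Toy catalogues: code -> description. Real catalogues hold thousands of items.
LEGACY_CATALOGUE = {"L-1001": "Basic metabolic panel (legacy code)"}
CURRENT_CATALOGUE = {"C-2001": "Basic metabolic panel"}

def catalogue_for(order_date):
    """Pick the catalogue that was in force on the order date."""
    return LEGACY_CATALOGUE if order_date < CUTOVER_DATE else CURRENT_CATALOGUE

def lookup(code, order_date):
    cat = catalogue_for(order_date)
    try:
        return cat[code]
    except KeyError:
        # The most common user error: a current code used against a legacy-era order, or vice versa.
        raise KeyError(f"{code} is not valid for orders placed on {order_date}; check the other catalogue")

print(lookup("L-1001", date(2009, 3, 15)))  # resolves against the legacy catalogue

A report that spans the cutover needs both catalogues, which is why "just pick the code you mean" is harder than it sounds.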

However, he knew that he did not want any bells or whistles, so version 0 was a hypertext link to the legacy web version of their catalogue. After all, he and his colleagues are familiar with the current catalogue, thank you very much.

He was able to find at least some legacy catalogue numbers, but with difficulty. Turns out he didn't know the current catalogue as well as he thought: version 1 added a link to the current web version of their catalogue.

As beta-testing went on, he complained about how hard it was to find the right items, especially if one wanted to report on dates which span the two ordering systems. I showed him one of my interactive look-up tools and he found it usable, but wanting. So version 2 is adding a revamped version of my tool to the prototype.

Will we get to where I started, an interactive look-up tool with a lot of embedded expertise? Yes, I think that we will. Is this frustrating? No, not this time around: the client is happy because he is driving the development and is confident that he is not getting more programming than he needs or wants. I am happy because this is relatively untaxing work in a down economy; in fact, I suspect that this indirect route will end up paying me more than letting me do my thing would.

Younger me would have chafed at the lack of respect and the taking of credit for my ideas and my work. Current me says: relax, keep your eyes on the prize and get your affirmation somewhere else, because IT consulting is unlikely to be a good source of affirmation, at least in my experience.

Wednesday, September 12, 2012

Mission Creep in IT Consulting

I am an IT consultant specializing in the Health Care domain. When I say "IT consultant," many people think of this:


This amusing poster comes from http://www.despair.com, whose posters generally delight me, except when I am part of the group being mocked. Stupid asymmetrical human psyche.

This poster is not only amusing, it is also somewhat apropos: sometimes consultants seem to embed themselves into their clients like ticks on a dog, staying on long after the original engagement and doing no-one-seems-sure-what.

(I would argue that when this happens, the client is mostly to blame, but then I am hardly objective on this issue.)

"Mission creep" is military jargon for gradually and unintentionally taking on more and more responsibility until you are stuck in a much larger job than expected. This often results having too little time and too few resources, since the time line and budget were established for the original mission, not the expanded mission.

(I am not a huge fan of using military jargon unless one is on active duty, as I do not want to trivialize what the active military goes through.)

I agree that letting the scope of a project balloon is a common problem and I agree that IT projects, especially ones run by consultants, are prone to this problem. But I want to point out that not all project expansion is bloat and not all consultants are maximizing their billable hours without regard to value or need.

In fact, I find that many of our projects involve horse-trading and, in order to succeed, the scope needs to expand.

In part, this is because there is a boolean aspect to success: either a software solution does the job (automates the process, etc) or it doesn't. It is often not very helpful to partially automate a process. For example, if you can look up the code on the computer, but then you have to go to another computer to enter that code, you are only a bit better off than if you had to look up the code in a book and worse off than if you had memorized a few codes that will work.

In part, this is because requirements gathering is often obstructed or impossible. Often we do not get complete answers to our requirements gathering questions because those questions are asked in a vacuum of sorts (the software does not exist yet) or because those questions expose the answerer to potential grief from their boss.

Consider a prime example of project scope expansion from our practice: some years ago, we created a medical instrument results review interface for a client. It was a glorious success. We had estimated improvements in productivity and after a few weeks of operation, we examined the data to verify those gains. Our examination showed us no real gains.

So we observed the users for a few days and found that they were still spending the bulk of their time fussing with the input to the machine. When we asked them why, they answered that the intake was problematic: tubes got stuck, or lost, or put in the wrong rack, etc. So instead of just reviewing the results in our software, they checked and rechecked the racks. To get them to stop, we added an "overdue" component which alerted them to late or missing tubes. Once they felt that our overdue module had proved itself, they trusted it enough to rely on it. We examined the logs again and saw about half of the productivity gains we expected.
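The overdue component boiled down to a simple rule: if a specimen has been received but no result has appeared within the expected turnaround time, flag it. A minimal sketch, with made-up field names and a made-up threshold:

from datetime import datetime, timedelta

# Hypothetical expected turnaround; the real threshold varied by test type.
EXPECTED_TURNAROUND = timedelta(minutes=45)

def overdue_tubes(specimens, results, now=None):
    """Return specimen IDs that were received but have no result after the expected turnaround.

    specimens: dict of specimen_id -> received_at datetime
    results:   set of specimen_ids that already have results
    """
    now = now or datetime.now()
    return [
        spec_id
        for spec_id, received_at in specimens.items()
        if spec_id not in results and now - received_at > EXPECTED_TURNAROUND
    ]

# Tiny worked example.
received = {
    "T-100": datetime(2012, 9, 12, 8, 0),
    "T-101": datetime(2012, 9, 12, 9, 40),
}
resulted = {"T-101"}
print(overdue_tubes(received, resulted, now=datetime(2012, 9, 12, 10, 0)))  # -> ['T-100']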

Back to the observation phase. This time, we found that slides were the issue. Problematic specimens are often put on slides for human review, and that review takes place somewhere else. Since it was impossible to know that a slide was waiting, the users were either interrupted or interrupted themselves to go check for slides. To get them to stop interrupting themselves, we added notification of pending slide review requests, so they could stay at the machine with confidence. Now we saw the improvement we expected, and then some.

But when we asked for the glowing review we knew we had earned, there was some abashed resistance: now that the process was so streamlined, the regulatory requirement to audit the interface's operation seemed...onerous. We added an automatic audit feature which produced an audit report and logged the electronic signature of the reviewer. NOW the users were completely happy.

Was this a case of needing to do the entire job, or a case of poorly managed project scope? We would argue the former. Was this a failure of requirements gathering, or a case of "I don't know what I want until I see what I have"? We would argue the latter.

To quote Tolkien, "Not all those who wander are lost" (The Fellowship of the Ring)--and not all those who expand project scopes are unethical consultants.

Wednesday, September 5, 2012

Hope Springs Eternal: the Upgrade Treadmill

For my text today, I take this famous snippet from Alexander Pope:

Hope springs eternal in the human breast;
Man never Is, but always To be blest.
The soul, uneasy, and confin'd from home,
Rests and expatiates in a life to come.



I found this quotation here:
http://www.goodreads.com/quotes/10692-hope-springs-eternal-in-the-human-breast-man-never-is

I found this excellent explanation of the quotation here:


It means that no matter the circumstances, man will always hope for the best - thinks that better things will come down the road. We may not always act our best, but we have the potential to be better in the future. No matter how bad things have been, they can always get better.

http://answers.yahoo.com/question/index?qid=20060809101312AANcoOn 

While I believe in the sentiment, that things can always get better, I do not believe that all things always get better, or even that most things get somewhat better.

Specifically, I am appalled by what I call "the upgrade treadmill." By this, I mean the tendency to hope that a major upgrade to a large software package will fix its many inadequacies. To me, this hope can be restated like this:

Perhaps this time the same people and corporate culture which produced bad software last time will produce better software, because fixing issues in a large code base is somehow easier, or more likely to succeed, than getting it right the first time.

We see our clients jogging along on the upgrade treadmill, mortgaging the present for the hope of a better future, frantically moving forward on the treadmill but not in any other sense.

In some ways this is inspiring and touching: the triumph of hope over experience.

In other ways, it is frustrating and tiresome, because the treadmill tends to slip into a one-size-fits-all excuse: sure, I have a problem in the present, but I do not need to address this problem because this problem might be fixed in a future update. Who knows? It might. It might not. The track record of updates usually does not inspire confidence.

So keep hope alive, because what is life without hope? But don't let the upgrade treadmill distract you from realistic assessment of your situation and periodic review: sometimes the only way to make progress is to stop the mindless forward motion and step off the treadmill.

Wednesday, August 29, 2012

Why Is Interfacing So Hard?

In theory, we provide high-end, sophisticated data analysis and reporting tools. In practice, we provide system-to-system interfaces with data analysis attached. This is because system-to-system interfacing is really hard and is usually done really badly.

We are often asked "why is this so hard?" (sometimes asked as "why were you able to do that when our people could not?" and "why did that take so long?"). So this post is all about why interfacing is hard.

Conceptually, interfacing is easy: one system ("the source") exports a datagram (a complete unit of information) and the other system ("the destination") imports the datagram. In theory, there are many "standard" formats for electronic data interchange; if both systems support at least one common format, all should be well.

But all is rarely well in the interface domain. Why not?

The most common source of pain is the mismatch of data models. This sounds complicated, but it isn't and is easily explained with an example:

Consider two medical information systems which we will call A and B. System A is patient-based: the patient is the primary index and activity hangs off patients. System B, on the other hand, is visit-based: the visit is the primary index and activity hangs off visits.

System A: patient->visit->activity

System B: visit->patient->activity

The patient-centered data model is better for patient-centered analysis: it is easy to get all the data associated with a given patient for a particular time period. But the visit-centered model is better for keeping track of the current state of the system.

Regardless of which model is "better," the interfacing is complicated by the fact that the frames of reference differ. Let us work through an example (a code sketch follows the steps below).

  1. System A exports a datagram which contains a visit. System A is patient-based, so the visit is exported as a small part of the patient's data.
  2. System B imports the datagram
    1. translates the datagram into its native System B format
    2. inserts the contents of the datagram into its database
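Here is a minimal sketch of the translation step in the example above, using made-up dictionaries rather than real HL7 segments: System A nests visits under the patient, while System B wants the visit on top with the patient hanging off it. All field names are illustrative assumptions.

def a_to_b(patient_datagram):
    """Re-pivot a patient-centred datagram (System A) into visit-centred records (System B).

    System A: patient -> visits -> activity
    System B: visit -> patient -> activity
    """
    patient = {k: v for k, v in patient_datagram.items() if k != "visits"}
    records = []
    for visit in patient_datagram.get("visits", []):
        if not visit.get("visit_id"):
            # Fatal for System B: a visit without an unambiguous identifier cannot be filed.
            raise ValueError(f"visit without visit_id for patient {patient.get('patient_id')}")
        records.append({
            "visit_id": visit["visit_id"],
            "patient": patient,                 # demographics ride along; B tolerates gaps here
            "activity": visit.get("activity", []),
        })
    return records

# Illustrative System A export.
export_from_a = {
    "patient_id": "MRN-42",
    "name": "DOE, WILLIAM",
    "visits": [{"visit_id": "V-2012-0815", "activity": ["CBC ordered", "CBC resulted"]}],
}
for rec in a_to_b(export_from_a):
    print(rec["visit_id"], rec["patient"]["name"], rec["activity"])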
The tricky part is how to resolve ambiguity: System B is visit-based, so if there are issues with the patient demographics, who cares? We can insert the visit and worry about the patient later. But if there are ambiguities with the visit, that is fatal to System B. However, System A has the opposite worldview: so long as the patient is properly identified, the visit can ride along. So System A may have bogus visits (duplicates, unmerged visits, etc) which will choke System B, while System B may have bogus patients (duplicates, patients identified as both Bill and William, etc) which will choke System A.

When the interfacing goes awry, both vendors will be huffy: our system is fine, our export-or-import is correct, the problem must be on the other end. And they are both sort of right.

In our experience, this is where interfacing often goes off the rails: when a mismatch between the two systems would require one or the other end to bend. Not only do most vendors defend their model to the death, but many vendors can only imagine their own model: any other model is "wrong." The customer gets caught in the middle.

This is where we come in. If all goes smoothly, an interface is no more than a way to convey an export from one system to be imported into another system. However, in the real world, we have often had to build intermediate systems which take in data with one model, transform that data into another model and export from the transformed model.

So if your systems "don't talk to each other," a data model mismatch is a good bet as the culprit. And if your internal group or consultant or vendor quotes a larger-than-expected cost to fix the issue, there is a chance that you are not being ripped off; instead, if you are lucky, you are being forced to pay to bridge the models.

Wednesday, August 22, 2012

Paperless, ha! or, Your Seams are Showing

One of our clients recently had to back off yet another paperless initiative. We are often asked about this very thing: why is it so hard to go paperless, even here in the early days of the 21st century? We thought that this particular instance is a good illustration of a general problem.






In this case, there are many computer systems involved. There is either no integration between these systems or rather modest integration. Here is the process in question:

  1. The customer either uses a third-party app to create an order, which is printed out, or writes directly on a piece of paper, or uses the preferred app, which creates an order, transmits it electronically and prints out the paper.


  2. The paper order is brought to an intake center. In theory, the paper is just a security blanket and the order ID brings up an electronic version of the order. The intake is done, which creates an order in the system used to process the order.
  3. The results of intake and the electronic order arrive in the processing system. The order is processed. The resulting status is sent to yet another system, the one which supports status lookup.
Sounds a bit more complicated than one would like. There are more and different systems than one might expect. But historical accident and practical considerations make the process a bit more complex than theory would dictate.

In theory, the integration across systems should be real-time and seamless, which would make the number of actual systems in play almost irrelevant, so long as they all work together. Poor theory, it never gets its way in the IT world.

So why does the paper still exist? Well, consider a real-life scenario from the very recent past:
  • The customer creates an order for items I1, I2, I3, I4, I5
  • The intake system does not understand I5, leaving I1, I2, I3 & I4
  • The processing system does not understand I4, leaving I1, I2 & I3
  • The status system proudly displays an order for I1, I2 & I3
  • Irate customer calls demanding to know what happened to I4 & I5
  • Baffled customer support has no idea those items ever were ordered
  • The customer is very dissatisfied
  • We are called in to help investigate and find item rejections in logs
  • The client sees how this is happening, re-instates the paper order
So the paper copy of the order is once again riding along with the rest of the material. Now the same thing happens, but the process can compensate: instead of an entry in an interface log somewhere, the intake clerk can see that item I5 is an old code for which there is a new code, and the receiving clerk can see that I4 conflicts with I3 and that the customer must have meant to order something else entirely.

In this case, the paper allows the original intent to be accessible at every stage to all the interested parties, in a form everyone can read. If the integration of the various systems was perfect, was "seamless," then there would be no job for the paper to do.

Alas, the integration is imperfect and the seams are very much in evidence. So the paper stays, for now. But pity the poor systems implementer who is supposed to fix this problem, because that job is very hard. Electronic orders arrive in the background; there is no User Interface for systems interfaces and there is no user looking when the order is being processed. So how best to flag orders with issues? Whose responsibility is it to fix the issues?

In our practice, we find that logging is good, flagging is appreciated, but documentation is king: we put the error message as a comment on the order so that, at every stage, everyone can see what happened. Specifically, the users who understand the process can see the messages, instead of the sys admins, who rarely can do anything about a business process issue.
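A minimal sketch of this "documentation is king" approach: when an item is rejected, log it, flag the order, and, most importantly, append a human-readable comment to the order itself so every later stage can see what happened. The field names and reject reason are made up for illustration.

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("intake")

def reject_item(order, item_code, reason):
    """Drop an unrecognized item, but leave a trail the downstream users can actually see."""
    order["items"].remove(item_code)
    order.setdefault("flags", []).append("ITEM_REJECTED")    # flagging is appreciated
    order.setdefault("comments", []).append(                  # documentation is king
        f"Item {item_code} rejected at intake: {reason}"
    )
    log.warning("order %s: item %s rejected (%s)", order["order_id"], item_code, reason)

order = {"order_id": "ORD-7781", "items": ["I1", "I2", "I3", "I4", "I5"]}
reject_item(order, "I5", "obsolete code; superseded by a newer catalogue entry")
print(order["comments"])   # visible to the intake clerk, the processing lab and customer support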

We suspect that so long as system integration has seams, critical processes will either have paper or regular failure. So don't sell that paper company stock just yet.

Wednesday, August 15, 2012

Managing Up

Today's topic is ticklish: managing up.

As I understand it, this term means trying to influence one's superior's expectations and behavior.

I suppose this term should mean "treating one's superiors as though they were subordinates" but that is overly literal. However, there is a high-level truth to the exaggeration: one is taking responsibility for the attitudes and actions of one's boss(es).

When I first encountered this concept, many moons ago, I found it silly. I was quite comfortable with the idea that those below me should do as I bid and those above me should bid me do whatever it was that I was supposed to do.

Over time, I understood that there was a legitimate "managing up" need: my superiors required context and feedback if they were going to make good decisions. So I grudgingly took on the task of keeping them informed, over and above what was strictly necessary.

Alas, what I see very frequently now in IT is feast or famine: either mid-level IT professionals do nothing but "manage up," leaving their direct reports wandering in the dark, or they do no managing up, leaving their superiors with ever-more-inaccurate world views.

Worse, many newish mid-level IT folks confuse "sucking up" with "managing up." They go out of their way to confirm their bosses' prejudices and to avoid correcting mistaken impressions. This turns even well-meaning, talented senior managers into blundering idiots. If you don't know what works and what does not, and if you think the issues are different than they are, you will have great difficulty choosing an effective solution to the most pressing problem.

A sure sign that managing up is not working or not being done is this: there are resources and good staff working hard to accomplish their goals, but things just don't get any better over time. In fact, they get worse, although reorganizations and new software rollouts neither improve nor degrade matters.

If you are working hard with decent people but the steady and amazing march of technology is not making your job easier or better, then you may have a managing up problem. If your superiors are reasonable people, start treating them as such. If your superiors punish bearers of bad news, then managing up is out and sucking up is in. May commerce have mercy on your career.

Wednesday, August 8, 2012

I Don't Get No Respect

WARNING: THE POST YOU ARE ABOUT TO READ IS QUITE BITTER

I have been struck recently by the general decline in the status of the techno-guru and the particular decline of my status as a techno-guru.

Once upon a time, email hit the general public; for me, that was in the early-to-mid 1990s. I had started using email about a decade before, so email was a bit old hat to me by the time it reached my mother and other trailing-edge technology users. I found myself frequently asked about this new email thing by people who knew me as a technology expert. While answering naive questions is not my favourite pastime, I sighed deeply and answered with all the patience I could muster.

(Had I known that the tsunami of users would destroy the quiet, thoughtful thing that I knew as email, I would not have been so helpful or reassuring. In those ancient times, I received only a few emails and each one was welcome. I corresponded with just about anyone I cared to in the ANSI C/Unix world. I remember clearly when I started getting automated "I can't read all my email, I am only accepting email from colleagues" replies from the big names in the field. I shortly had to do something similar as even minor players such as myself were overwhelmed by the flood of spam.)

More recently, Facebook hit the scene. This time, the situation was exactly reversed. I found that absolutely ignorant people were comfortable lecturing me about Facebook's pitfalls (always some vague concern about privacy or wasted time) or exhorting me to enjoy Facebook's benefits (usually something about the trivia of the lives of mutual acquaintances).

Whether one is pro-Facebook or anti-Facebook is beside my point, which is this: when did technical expertise become utterly irrelevant to technical discussions? Whenever I pushed back on the lectures, I found that the lecturers were happy to dismiss my requests for more detail, for evidence, for explanation. They waved their hands elegantly: they were not computer people, after all, and we were talking about the technical attributes of a piece of software, a distributed information system.

(For a humorous take on the Facebook privacy issue, watch the satirical Onion look at Facebook: http://www.youtube.com/watch?v=cqggW08BWO0&feature=youtube_gdata_player)

Would I have been interested in their personal experience as a user or non-user of social media? Yes, I would have been. That would have been valid and interesting. But no one wanted to talk to me about their personal experience; instead, they all wanted to talk to me about either the pitfalls of the software's design and policies, or the benefits of using this technology without regard to the risks.

Sadly for me, this phenomenon is not confined to parties or family gatherings; I encounter it in the professional arena as well. I find myself being lectured on software choice and deployment by MBAs who are proud of their technical ignorance and their non-technical backgrounds. Somehow, they have convinced themselves that those of us who get our hands dirty in IT are less capable decision makers. We lack the business perspective. We are like children in a toy store, grabbing the shiny technology without regard to how much it costs or how long we will want to play with it.
I suppose that there are IT experts who can't grasp the concept of a budget. There may be IT experts who want to play with shiny new technologies just because the new technologies are new, without regard to the problems to be solved. But I am not such an IT expert and I don't think that I have ever met one.

But until I figure out how to overcome the disadvantage of years of experience and a track record of successful deployment, it won't matter: I won't be part of the conversation. Maybe I need to get an on-line MBA and lie about the last thirty years of my career.

(Why lecture an expert with only ignorance as raw material? When did this become a good idea? Do people now lecture their doctors, their dentists, their accountants, their garage mechanics? From my very limited survey, the general public does lecture experts with confidence. So perhaps this is not limited to me or to my profession. Sigh.)


Wednesday, August 1, 2012

Simplicity: Too Much of a Good Thing?

Like many engineering types, I strive for simplicity. I seek to avoid excess complication. I rely on analysis to tame complexity and I rely on rigour to help me manage it.

That said, I am very cranky about simplistic people and their simplistic thinking and their simplistic solutions to complex problems.

To be concise, simple solutions lack complexity while simplistic solutions suffer from being too simple, from being insufficiently complex.

In practice, I find that simplistic thinking is the result of poor analysis, of not digging deeply enough, of accepting apparent simplicity as the whole story.

I feel rather self-indulgent today, so I will put this into doggerel:

Simple is a virtue,
Simplistic is a vice.
Simple might not hurt you,
Simplistic is never nice.

So what is an example of simplistic thinking? Consider patient identification; one of our systems is used to support dealing with the public, which requires that it support patient identification. At first blush, this is simple:
  • require that the patient give you their unique patient ID, probably on some kind of card you gave them;
or
  • require a photo ID with name, date of birth and sex
Since the client organization already has a patient registration system, they objected to our claim that we needed to build a patient identification support module. We had to do it against their objections and then only bill for it when they saw its value.

What complicates this seemingly simple task? The actual answer is horrifically long and complicated, so I will give an abbreviated reply which will still make the point (a small code sketch follows the list):
  • Women often change their names when they get married; their pre-existing paper work, orders, etc, do not magically change with them, so it is helpful to keep previous versions of names, marked as not canonical, to help ensure that the driver's license / printed report / old information mismatches are not evidence of error.
  • Children do not have photo IDs and are often known by unofficial names. Is young Billy Doe actually Ralph W. Doe? R. William Doe? Bill Doe Jr? Is this the former Babyboy Doe? Perhaps he is the former Boytwin1 Doe--or is he the former Boytwin2 Doe? By keeping a history of these transactions we can support confirming that you have the correct child, even if the caregiver can't remember which kid had the allergy test in July, as opposed to the kid who had the allergy test in June.
  • Alas, people make mistakes and sometimes these mistakes line up with real data. There can be two John Does, one born 4/1/1941 and one born 1/4/1961 and if you mistype and have to search, you might pick the wrong one, in which case a custom module which says "this patient does not have a relevant order, but the other one does: are you sure?" will really save on dangerous and expensive patient identification mistakes, no matter how plausible those mistakes might be.
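As an illustration of that last point, here is a minimal sketch of the kind of check an identification support module can perform; the patient records and function names are made up and this is not our actual product.

from datetime import date

def check_selection(selected, all_patients):
    """If the selected patient has no relevant order but a same-named patient does,
    warn before proceeding with a plausible-but-wrong identification."""
    if selected["has_order"]:
        return None
    lookalikes = [
        p for p in all_patients
        if p is not selected and p["name"] == selected["name"] and p["has_order"]
    ]
    if not lookalikes:
        return None
    other_dobs = ", ".join(p["dob"].isoformat() for p in lookalikes)
    return (f"{selected['name']} (born {selected['dob'].isoformat()}) has no relevant order, "
            f"but a patient with the same name (born {other_dobs}) does: are you sure?")

# The two John Does from the example above; a mistyped search picks the wrong one.
patients = [
    {"name": "DOE, JOHN", "dob": date(1941, 4, 1), "has_order": False},
    {"name": "DOE, JOHN", "dob": date(1961, 1, 4), "has_order": True},
]
print(check_selection(patients[0], patients))

That one extra question, asked at the right moment, is the difference between a near-miss and a dangerous, expensive identification mistake.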
As a vendor of custom solutions for medical information process automation (yikes! what a mouthful), I am rarely called in because a situation is simple; if the problem had a simple, obvious, excellent solution, someone would have implemented that solution. I am only called in when things are complicated, hairy, ugly or all of the above. While I do what I can to make my designs as simple as I can, some problems can only be solved with involved and complex solutions.
 
Recently this pet peeve of mine has been elevated to a business issue as our firm more frequently encounters simplistic thinking in our customers' senior management. These managers, from their lofty perches, are issuing clean, simplistic edicts which do not match the reality in which we find ourselves.

Worse, we are losing sales to less ethical competitors who promise clean, simple solutions to murky, complex problems. These solutions don't work, of course, but by the time this is well-established, the situation is the new normal and everybody loses.

So strive for all the simplicity you can muster, but shun the simplistic with all your might.

Wednesday, July 25, 2012

The Decline and Fall of Some IT Groups


We are information systems consultants. As part of our typical engagement we work with our clients' internal IT groups. For this reason, our clients' end users consider us to be experts on their organization's IT groups. Recently a depressingly large number of our clients' end users have come to us to ask why their internal IT groups are doing such a bad job. They ask us this question because it is not an easy question to ask of a colleague; it is easier to ask an outsider.

This barrage of questions has forced us to consider this question, although we usually try to stay out of internal IT politics as much as possible.

When we took a step back to consider the different IT groups in different organizations, a pattern began to emerge. We are seeing an alarming number of them in the following situation:

  1. They are being asked to "do more with less"
  2. Positions are being filled only after long delays if at all
  3. Functions are being taken in-house to save money
The first point is a function of the recession: everyone is trimming their budgets. Pity the poor IT group: unless it has technically savvy senior management, the right things are not being cut. We see this directly, all the depressing time: budget cutting by the size of the line item and not by return on investment or the marginal utility of the line item.

The second point is a function of the recession as well: there are so many applications for any given job that managers are tempted to wait for the PERFECT candidate. James Surowiecki covered this well in his reliably interesting and useful New Yorker column.

The third point is a function of muddled managing: if you have cut your IT group's budget, and failed to hire new people, then adding work to their plate is very unlikely to produce good results. The most common outcome we see is a decline in service to the users and a decline in morale within the IT group.

So if your formerly-good IT group is in steep decline, you might want to lobby for their budget to be increased or their headcount to go up. But if that is not an option, then perhaps you can temper your irritation at the decline in service with pity for the IT group itself: it can't be any fun to be on an ever-sinking ship.

Wednesday, July 18, 2012

Managing Consultants

I am currently paid on a consulting basis.

I put it this way, rather than "I am a consultant" because I do not have any great attachment to the business rules under which I get paid. I want to do good work for reasonable people and in return I want to be paid a reasonable amount.

For many reasons, many of them both tedious and beyond my control, consultants are being demonized; we are the new lawyers: a professional group we all love to hate and can agree to mock. For example, consider this poster from the often-hilarious http://www.despair.com:




While I agree that lawyers are often used ineffectively, I do not agree that lawyers are useless busybodies, just as I don't think that HR exists to make hiring difficult or facilities management exists to stop you from having nice office furniture.

(I blush to admit that I was going to use "first thing we do, let's kill all the lawyers" until I looked into it and found that I had that backwards: http://www.spectacle.org/797/finkel.html. Sigh. Another blow to my feelings of omniscience.)

Leaving aside the inherent problems with demonizing any group, I have a problem with demonizing consultants. To me, the big difference between generalizing about lawyers and generalizing about consultants is that lawyers are a fairly well-defined group providing a fairly well-defined service, while consultants are a ridiculously varied group with only a compensation methodology in common.

Mercifully, to make my point today I can leave aside the question of whether or not consultants are evil, because it does not really matter: at this point, almost no medium-sized or large companies can do without them.

(In my experience, small firms benefit most from consultants, but small firms seem better adjusted about this issue: happier and more accepting.)

So pick a side: consultants are a useful tool because you can rent expertise that you need for only as long as you need it, or consultants are parasites whose only goal is to addict you to their services. Either way, there they are.

I assert that in this day and age, managing consultants is a key skill, just as managing women was forty years ago, when women started showing up outside the steno pool.

I accept that consultants are not a panacea: institutional memory is good and employee loyalty is good.

I agree that consulting engagements need to be reviewed periodically to ensure that renting expertise is still the best way to accomplish whatever goal toward which your consultants are working.

What I cannot abide is the current mindless swing toward "No Consultants!" as a policy. This is so rarely practical in today's business environment. I say this not because such policies cut into my paycheck: hilariously, they don't cut into my paycheck, at least not so far. What they do is drive up my paycheck as I stand aside and then wait for the frantic emergency calls that always come--unless I hear, instead, the sickly silence of despair as organizations are caught in the death spiral of "doing more with less" until they go out of business or fail.

Love them, hate them, but make sensible use of consultants. Don't ban them and don't cling to them forever. Instead, assess your needs, consider both cost and benefit and do what makes sense. That is more work than issuing clear, simple edicts such as "no consultants!" but that is why managers get paid the big bucks, isn't it?

Wednesday, July 11, 2012

In Praise of a Touch of Humility

I am not a humble person. I don't think that I ever have been. So at first blush it may seem strange that I would be praising humility. In fact, I claim that this is a natural situation: I pride myself on being effective and accurate. In order to be effective and accurate, I need good information and feedback. In order to get good information and feedback, I need to seek it, which requires embracing the fact that I make mistakes and therefore trying to avoid future mistakes and fix past mistakes.

I hasten to point out that I do not think that I suffer from the over-confidence that infects so many leaders in so many workplaces. (I have already ranted about the childish pretence of endless certainty in a previous post but this is a new, slightly different rant.)

As a practical matter, this means that I approach software development in a specific way: I do things more than once. I write "watch dog" programs to monitor processes. I write and use consistency checks. On the plus side, many of my mistakes are caught. On the minus side, many of my mistakes are exposed to public view. However, I sleep better at night knowing that my confidence in my work is based on more than my high self-regard.
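By way of illustration, here is a minimal sketch of those two habits: a watchdog that checks whether a monitored process has shown signs of life recently, and a consistency check that compares two independently derived counts. The file path, threshold and counts are hypothetical.

import os
import time

# Hypothetical: the monitored interface is expected to touch this file at least every 10 minutes.
HEARTBEAT_FILE = "/var/tmp/interface.heartbeat"
MAX_SILENCE_SECONDS = 600

def watchdog_ok(path=HEARTBEAT_FILE, max_silence=MAX_SILENCE_SECONDS):
    """Return True if the monitored process has shown signs of life recently."""
    try:
        age = time.time() - os.path.getmtime(path)
    except OSError:
        return False          # no heartbeat file at all: treat as dead
    return age <= max_silence

def consistency_ok(messages_sent, messages_filed):
    """A consistency check: every message the sender logged should have been filed downstream."""
    return messages_sent == messages_filed

if __name__ == "__main__":
    if not watchdog_ok():
        print("ALERT: interface heartbeat is stale or missing")
    if not consistency_ok(messages_sent=1204, messages_filed=1201):   # illustrative counts
        print("ALERT: sent and filed message counts disagree; investigate before someone else notices")

Neither check is clever; the point is that they run whether or not I feel confident that day.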

As a theoretical matter, consider this interesting Harvard Business Review blog entry: Less-Confident People Are More Successful by Tomas Chamorro-Premuzic. His thesis is this:

There is no bigger cliché in business psychology than the idea that high self-confidence is key to career success. It is time to debunk this myth. In fact, low self-confidence is more likely to make you successful.
After many years of researching and consulting on talent, I've come to the conclusion that self-confidence is only helpful when it's low. Sure, extremely low confidence is not helpful: it inhibits performance by inducing fear, worry, and stress, which may drive people to give up sooner or later. But just-low-enough confidence can help you recalibrate your goals so they are (a) more realistic and (b) attainable. Is that really a problem? Not everyone can be CEO of Coca Cola or the next Steve Jobs.
As a side note, I point out that a friend of mine offered a gloss on this article for me: he would say that we are talking about humility here, not clinically low self-esteem. I would define humility as the trait of being willing to consider negative feedback; I suspect that there are many working definitions. The dictionary definition I found at www.dictionary.com is this:


 modest opinion or estimate of one's own importance, rank, etc.


I would say "unexaggerated" is a key component of this concept, but perhaps that is only my personal sense.

Regardless of the fine print on the definition of the opposite of over-confidence, I fervently hope Chamorro-Premuzic is right in his assertion that not-overly-confident people are more successful. In my bitter experience, boundless and baseless confidence is richly rewarded, without regard to the outcomes of those manly, clear-cut and confident decisions when complex, nuanced, multi-stage decisions would seem to be required. (With lots of checking to make sure that the path along which we are all running is the right path.)

It would be fabulous to put the Era of Empty Assertion behind us and move on to a more results-based, fact-based, reality-based, merit-based workplace, at least in IT. May it be so.

Wednesday, July 4, 2012

Broke, Broker, Brokest

In response to my last post an old friend who is an organizational psychologist pointed out something that has been on my mind for a while: the tendency of many people to live the dictum "if it ain't broke, don't fix it."

(By the way, the history of this phrase really surprised me: check it out on Wikipedia if you are so inclined.)

I can see why this dictum is useful in large organizations: endless thrashing and reorganization is often the bane of their existence and they need some way to make sure that "fixes" are actually needed. (They also need to follow up those fixes to make sure that they worked and continue to work: see an earlier post.)

But I am seeing many instances of throwing out the baby with the bathwater in this regard. I see simple-minded adherence to this principle which eliminates all of the constant small decisions and leaves only the rare big decisions.

In a nutshell, there is more to business decision evaluation than "broken" vs "not broken". Very few processes either work unacceptably badly or perfectly. Surely we can do better than just these two gradations?

The determination of whether or not an issue or set of issues is worth addressing involves some possibly subtle cost versus benefit analyses. I assert that the deciding test of "Is this process hopelessly broken?" is simply not good enough to produce an effective or efficient organization.

I hear from managers that there is an additional complication: their organizations have tightened the purse strings so much that the level of effort required to push through a small investment seems too great: better to save your strength and only push through large projects. Since the projects are large, the scrutiny is great, so the projects have to be "no-brainers" so better to wait until your department is on the verge of collapse and then propose a giant, sweeping solution.  I do not have a good answer to this logic: if the premises are true, if the organization is that badly out of balance with respect to its decision-making, then perhaps only doing the big, dire things is the way to go.


And yet.... Especially in these recessionary times, with staffing levels so far below ideal, don't we need to be as efficient as possible? Mindlessly trying to avoid expenditure while assuming that fixed labor costs will produce greater output under the magic incantation "do more with less!" will produce nothing but burnout and turnover.

So the next time someone suggests incremental refinement and modest investment, try to imagine a future that gets slowly better and better with modest cost and reasonable return.

Wednesday, June 27, 2012

IT Ecosystems or Many Small Benefits

I find my smart phone to be a terrific tool and a significant advance over what went before. I have been contemplating this recently because I see a similarity between trying to explain *why* I am happy with my smart phone to my father-in-law and trying to explain why systemic IT improvements make sense to some of my clients.

In both cases, the audience does not share my basic premises. In both cases, there is no "killer app" and no clear analogy from what they know to what they do not know. In both cases, I am asking for a leap of faith.

For my aged father-in-law, who does not particularly want people to be able to interrupt his day with their inane chatter, the mobile phone concept never made sense. Now, when I try to explain that telephony is the minority of what I want my smart phone to do, he chortles with delighted smugness: an expensive phone that isn't even mostly a phone! Imagine the kind of technocratic, free-spending fathead one would have to be to want that! He is becoming a little uncomfortable with the ever-growing percentage of the general population in his native city who have joined me in carrying a smart phone, but he is still happy to soldier on.

However, my point is not his out-of-stepness; rather, my point is my inability to explain to him what he is rejecting. He is rejecting it and then filling in the reasons later. I have failed utterly to convey that the added value is holistic and lies in the totality of the ecosystem. His eyes glaze over when I list the rather large number of functions the "phone" routinely performs for me: check bank balance? Get driving directions? Surf the web to check the hours of a museum? Share links with friends in text messages from which they can go to the same web page? Track my billable hours and expenses? Take photos of friends and family? Take photos of computer parts I need to purchase? Take notes on the train for later access on my desktop? Listen to music? Buy music? Shop for apps for work? Download apps to help me enjoy the museum in which I am standing? It is all too much and too bizarre for him.

So I feel stuck: what benefits he will allow me to present are not enough to explain the actual value of the device+apps+network yet he will not sit through the (admittedly very boring and somewhat peculiar-to-me) actual list. So we have resolved it this way: he is certain that smart phones are expensive toys and that I own one because I like expensive toys, while I shake my head in wonder at what he is missing.

He is not alone in this situation, even in my experience. I have the exact same feeling when I try to explain to IT managers why a given subsystem or middleware is worth building or improving: an endless list of small benefits just does not seem to move people who are not already intimately involved with the given business process. Like a giant jar of small change, which actually has significant value, proposals of myriad incremental improvements sit in the corner, gathering dust.

Wednesday, June 20, 2012

What You Want To Hear

When I entered the field of computer programming, I had some illusions about knowability, about absolute truth, about right and wrong. I thought that there were real answers to vague questions such as "which of these approaches is better?" "Who is the better coder?" "What should we do to solve this problem?"

Instead of fact-based, rational discussion I find myself often mired in religious and political muck. Specifically, I am watching a client move a data repository from a lovingly handcrafted environment to a standard environment, which standard environment blows up every time.

How did we get here? The data repository is several years old and has been a spectacular success. It was custom built for its purpose and its environment. No radical changes were indicated; so why are we watching them try to move to a dramatically different implementation?

I do not object to the idea of change; I have some sympathy for the desire to update this implementation:

  • The custom environment is getting long in the tooth--almost 6 years old as of this writing--so I find it reasonable to consider replacing it.
  • The custom environment runs Linux which is not the Unix of choice in this particular shop, so I understand their desire to port to their standard, AIX.
  • The custom environment is based on ReiserFS, which is a bit out of the mainstream, so I can see reevaluating this implementation.
What strikes me as sad is how the next implementation was chosen: the proposed implementation is what they have lying around. While the current implementation was carefully researched, painstakingly prototyped and carefully constructed for just this purpose, the proposed implementation was none of these things.

So far as I can tell, this is how the technical design decisions were made:

  • The current system is old, so it must be outdated.
  • The proposed system is from IBM, so it must be reliable.
  • The proposed system is expensive, so it must be capable.
Conclusion: what they already bought is just the ticket. With this conclusion, everyone's judgement is validated: the techs picked the right things, the execs paid for the right things, everyone is right. Moving from the custom environment to the standard environment becomes a no-brainer: cheaper, faster, better. As I mentioned before, everyone loves an easy decision, right?

Well, there is a problem: ReiserFS is great at what it does. AIX's standard offerings have not caught up. Neither has whatever their Network Attached Storage (NAS) is running. So they can't get to step 1: copy *most* of the data to the new home. We are working on our third attempt here.

Is it a surprise that the result-oriented process yielded better results than the blind assertion of capability? No; the surprise is that anyone thought that simply declaring what was in hand to be the best solution was a good idea.

When will they admit that this proposed implementation is unworkable? I am guessing that they will never admit that and nothing we say seems to make any difference: it just isn't what they want to hear.

Wednesday, June 13, 2012

Smart Planning & Dumb Luck

Managing software projects requires that one make estimates about the level of effort real people will require to solve abstract problems with concrete software.

There are many boundaries in there: abstraction, concreteness, dates, level of effort, human actors and ideas. Business, and much of academia, abhors honest ignorance. Business requires staffing levels, target dates, deliverables, milestones and budgets. The most common approach I see to resolving the abstract nature of programming with the concrete nature of business is reduction-to-known: how is this project like other (preferably successful) similar projects?

In my experience, this approach is good when there is little novelty involved and is more-or-less a crap shoot when the project is highly novel. Many of our competitors handle novelty by assuming the worst and jacking up the price to cover that worst case scenario. We try to break down the project into novel and known and to do basic research so that we can turn the novel into the known as quickly and safely as possible.

But rarely does anyone, including us, talk about luck. Dumb luck. Pure chance. The sad truth is that sometimes you get lucky, sometimes you don't and sometimes you get unlucky.


(A related topic is inspiration: sometimes divine inspiration strikes, making the difficult simple and the giant task almost easy. But I prefer to think that I am divinely inspired sometimes because (spiritual version) God loves me more than He loves other programmers or (secular version) my superior neurology works this way, so I am reluctant to lump this in with luck. Although I understand that lots of other people do lump inspiration in with luck. I suspect that they are not often inspired, but that is another topic.)

I am aware of the "Luck is the residue of design" school of thought (nice context-setting here), and I mostly agree with it, but sadly sometimes I get lucky without regard to my planning or ability or anything else. Sometimes stuff just works out.

For example: sometimes I mention to a friend over a beer that I am stuck on a particular technical problem and he casually points me to just the right person to ask. Sometimes I find that I need to accomplish something and quickly and easily find just the right technical documentation on the Internet. Sometimes I find that an orphaned AC adapter fits a crucial piece of equipment. Sometimes the device driver I need only runs on an old kernel that I happen to still be running somewhere. Sometimes you get lucky: you crush the project, you beat the deadline, you come in under budget, the users are happy and the client is happy to write that final check.

Sometimes you don't get lucky: things proceed mostly as one would expect, with easy parts and hard parts and unexpected gains mostly offset by unexpected losses. These projects plod along and are delivered on time and get paid for and contribute to the illusion that luck is not a concern.

Sometimes you get unlucky: critical personnel have family emergencies, supposedly supported devices don't work in *your* server, someone else's piece does not work as advertised or is late, your design is thwarted by the particular deployment environment, the client's server is plagued by gremlins, etc. These projects fray nerves, eat profit margins, sully reputations. We would all avoid them if we could.

(Beware, though, the development team that never has a bad day, or an unlucky project: it is quite possible that they pad their estimates or grossly underpromise. The only way I know to have a perfect track record is to be chronically unambitious.)

How a team or group or organization handles bad luck is very telling. The first hurdle is recognizing when you hit some bad luck and when you didn't. Very few seem able to distinguish between a failure of their team or process and bad luck.

I am not saying that blithely accepting "we were unlucky" is a good way to deal with failure. I am not defending failures of team members or of management or of process: such failures should be evaluated and corrected, either with retraining or retasking people or refining processes.

Neither do I subscribe to the position that good people never have bad luck. It would be nice if that were true, since it would greatly simplify the task of evaluating failure: there would always be blame.

I am saying that realistic appraisal is critical: neither of these simplistic approaches is a good idea.

Having determined that you got unlucky, what then?

Often the response is to avoid repeating this horrible experience by controlling variability. When practical, controlling variability in the software development process is nice, but it is not a good primary goal. This is because controlling variability works best with respect to codified processes expected to produce a consistent result. It is a good way to brew beer or make steel. It is not a good way to solve problems in creative or novel ways.

Similarly, if you turn to scheduling as your protection against bad luck, you may be disappointed: if you prize on-time over excellent, you will get on-time, not excellent.

Sometimes you get unlucky. All you can do in that case, in my opinion, is pat your team on the back and let them know you feel their pain--but that you expect a return to excellence the next time.

Wednesday, June 6, 2012

No Substitute for User Input

Whenever I am going to deliver interactive software, I assume that there will be a significant amount of feedback and so I reserve a big chunk of time and energy to deal with it.

I was horrified recently to discover that some programmers feel that I am remiss in gathering requirements, slapdash in my implementation and lacking in my QA. I consider myself careful and thorough, and I have a high user satisfaction rating.

When I worked through the other programmers' complaints, I discovered that their ideal software development model differed greatly from mine. (And my user satisfaction rating was much higher than theirs, from the same users no less.)

I think that it is instructive to compare and contrast the two models, so that is what I shall do.

I summarize their approach as the classic carpentry mantra, "measure twice, cut once." I summarize my approach as "get feedback early and often."

Their model, which is fairly common:
  1. Start with a long and thorough requirements gathering phase. The game is lost or won here.
  2. Write up the requirements into a technical specification. Circulate that spec widely and solicit input.
  3. Code up the spec as best you can. Write detailed user documentation.
  4. Test the implementation. Run it through QA.
  5. Deliver the software to the waiting users.
  6. Handle the resulting bug reports ASAP.
  7. Consider the job done and done well. Begin a schedule of releases and updates.
My model, which is certainly not unique to me:
  1. Have a longish conversation with the users, mostly centered around them telling me what their ideal solution looks like.
  2. Create a working prototype as quickly as possible with basic testing only and get feedback. Do a Quick Start Guide as user documentation.
  3. Process the feedback into work to be done on the prototype. Accept sweeping changes to the requirements or specs if those changes move you closer to the ideal solution.
  4. As the prototype iterates through revisions, it becomes the first version.
  5. Over time, issues arise and so there is recoding through the same process.
My basic premises are these:
  • Software development is a conversation between you and the users; long pauses will kill that conversation.
  • No requirements gathering will ever get you more than 80% of the requirements, so plan for real revision at the outset.
  • Users will happily engage with a dynamic process that produces tangible results.
Upon reflection, I could see why my work looks sloppy to them: I put out many versions, I often change my design and implementation, and I don't put effort into refining the coding of modules until I am sure that those modules are of interest to someone. I don't tend to say "that wasn't in the spec" and I don't complain if things need to change, which may seem like weakness or uncertainty to them.

On the other hand, I don't have users who complain that they wait forever for software that doesn't actually do what they need or want it to do.

As is so often the case, each side is not failing at the other's model, each side is succeeding at their own model. I am not impartial, however: I am highly confident that it is better to be one of my users.

Wednesday, May 30, 2012

Often In Error, Never In Doubt

This week I have a personal rant, pure and simple: I am sick of people acting as though all of their choices are perfect, without alternative and obvious.

Long ago and far away, I was introduced to rigorous decision making, a concept invented by the ancient Greeks. As part of this tradition, I was taught to carefully define my premises, apply logic to determine my alternatives and then make value judgments as to which of those alternatives I would support.

In such a methodology, there are facts, which are special because they are objective (universal), and there are observations, suggestions and preferences, all of which are subjective (personal).

In attempting to make a decision, especially a decision involving a group of peers or near-peers, it is therefore important to establish the premises and agree upon the facts. Then alternatives can be generated which are both realistic and which tend to achieve the goal. Every member of the group can consider the alternatives, weigh them as they see fit, and discuss the relative merits of the alternatives and perhaps generate hybrid alternatives which have broad appeal.

Instead, I am constantly finding myself on the wrong side of this cycle:

  1. There is an issue, usually not a critical one from my perspective
  2. Someone senior announces a solution
  3. Many someones junior voice objections
  4. Objections are countered with simple denial
    1. "You are wrong, that is a not a problem"
    2. "Your problem exists, but it will handled by the solution"
  5. The solution fails along the expected dimensions
  6. Juniors are trained to pretend that the seniors were right
  7. I am asked to join in the pretense while providing a non-pretend remedy

I am unclear as to why anyone would want to pretend that all problems have a clear, simple, perfect solution. This pretense is obviously absurd and continually contradicted by everyone's life experience. One of the primary skills of adulthood is the acceptance of imperfection and the pursuit of the as-good-as-possible. Why pretend that at work, suddenly, perfection is possible and always achieved by the right process?

This is particularly frustrating for someone in my position, the position of actually having to make systems work in the real world. Unlike senior management, I do not have the option of simply insisting that my work is perfect and that all criticism of it is invalid.

Now that I write that out, I can see the appeal: how awesome that would be! Except for the constantly failing part, which I would hate, even if I could browbeat people into never mentioning it.

My current working hypothesis is this:

  1. We live in an age of astounding possibilities and bafflingly large numbers of choices
  2. Our culture prizes confidence, at least in men, and rewards the confident whether or not they are correct (Often in error, never in doubt)
  3. In order to be determined confident in complex, uncertain situations, one must be simplistic; keeping it simple is not enough
  4. In order to be simplistic, one has to ignore complicating realities
  5. Voila! We end up where I find myself: listening to falsehood asserted with authority
Of course, I could be wrong; that is part of my point: one can just about always be wrong, which is why one should just about always be willing to engage dissenters and answer their objections. Alas, "Is not!" "Is too!" does not constitute discussion, however often or loudly this interaction is repeated.

Wednesday, May 23, 2012

Mobile Meditations

We are feeling pressure to enter the mobile space. Everyone else is doing it, so we should too.

As a parent, I tell my daughter that "everyone else is doing it" is not a good enough reason to do something. But as an IT business, we find that our customers sometimes cannot evaluate us on our own merits. Instead, they need proxies to help them decide if we are good at what we do.

Proxy evaluation is sometimes hard to avoid: I don't go into the kitchen to assess a restaurant; I look at the menu, I assess the ambiance, I rely on word-of-mouth.

So I understand the well-meaning advice that we have to enter the mobile space, to make some of our customers comfortable that we are keeping current and that we are still good at what we do. I understand that more and more people seem to rate a technology company by acquiring that company's free smart phone app and trying it out.

I have even decided to take this piece of advice and to put a mobile app or two on the docket for calendar 2012. I have also decided that discussing this decision will give insight into how we make decisions about what we do and how we do it. This will also serve as an example of an issue that is common in our practice: when something seems obvious at a high level (create a mobile app!) but is actually quite unclear at the level of action (what kind? how fast? how expensive?).

Specifically, entering this arena raises a number of questions:

  • What is the goal?
    • marketing
    • remove a negative (reassure customers)
    • revenue
    • demonstrate competence in this field
  • What kind of app?
    • client for a server? eg Facebook app is a UI to their server
    • native app? eg native camera app
    • combination? eg some games with a multi-player option
  • What business model?
    • free? (loss leader, eg banking app)
    • freemium? (give away something, charge for more)
    • for-profit? (straight up product)
  • For which platforms?
    • iOS only?
    • Android only?
    • Both iOS and Android?
    • stand-alone, Wi-Fi, 3G/4G, some combination?
    • smart phone, tablet, both?
  • What market?
    • extend our product line to mobile?
    • create a separate mobile app, perhaps an add-on to someone else's product?
    • both?
  • What development environment?
    • native to target platform
    • use a virtualization that runs on multiple environments
As our thought process matures, I will return to this topic to record how we decide and what we decide.

Wednesday, May 16, 2012

In Praise of "Throwaway" Software

A technical person who worked for one of our clients once dismissed my work as "throwaway software." I think that he meant the comment to be dismissive, but I was not insulted or perturbed; I knew exactly what he meant and I was flattered. His comment meant that I had met my goal of providing just what the users needed just when they needed it.

Like me, he had learned to program in the bad old days when development was largely unassisted and laborious. Computer time was the bottleneck, so we conserved it. We labored over "large" chunks of code, trying to find all the bugs and typographical errors and silly mental mistakes before we dared waste precious processor time to compile our code.

When code is hard to create, hard to change and likely to be in service for years, it makes sense to move slowly and carefully, to write for the ages. "Nothing ever gets thrown away," we were told. "Anything you write is likely to be recycled and to live forever, so every line should be immaculate and immortal."

If you don't remember assembling programs into punch card stacks and then putting those stacks into wooden trays so sys admins could load them up, then you are going to have to take my word for it: coding used to be a cottage industry: we lovingly hand-crafted every line. We were thoughtful and careful and only did Important Things with the Mighty Mainframe. We were very aware of our High Priest status and we did not want to anger our computer/deity or waste our precious opportunities to interact with it.

These days, thank God, the balance has shifted dramatically: computing power and disk space are cheap, so development is less about writing fantastic code and more about refining your software so it does exactly what it needs to do. The computer helps with syntax-aware editors, chromacoding, ease of compilation or interpretation, on-line help, etc.

With the advent of "wrong sizing," there are usually too few workers doing too much work. As mentioned before, human attention is at a premium. This includes both the humans writing the code and the humans using the code. This means that developers need to do what makes them productive in terms of writing code, and software needs to do what makes users productive at their jobs.

Often, this means writing special-purpose, "throwaway" tools to help specific users do specific jobs. If you are one of the users, struggling to do an important job, you are not going to like the description of your urgent need as "throwaway." If you need support for "only" a few months, after which the tool will never be used again, so what? You still have the need and computers can still fill it.

(To me, an "app" does a fairly high-level job in a fairly complete way, while a "tool" does a very limited job in a possibly unsophisticated or unpolished way.)

I am struck by the fact that builders don't have this hang up: special purpose scaffolding is not "throwaway building." No one says "what is the point of building concrete formers when you are just going to tear them down later?"

I watched an expert put up a shelving unit for me using a temporary scaffold, which was eye-opening. He was slight and elderly. I was bigger, stronger and younger. I offered to help him put up the unit. He was polite in his refusal of my help. I stuck around to watch because I was baffled. He eyed the unit and wall space. He threw together a scrap wood frame. He and I put the unit on the frame, at which point the unit was right where it needed to be. He screwed it to the studs in the wall with two cordless drills: the left hand drilled pilot holes and the right hand used a screwdriver drill bit to sink the screws. With astonishing efficiency, the unit was up. He then removed the frame and, with the screwdriver drill running in reverse, disassembled it. Voila! A throwaway frame made the job almost trivial--assuming that one is skilled enough to throw together such a thing, which I am not.

I recently created such a tool to help one of our clients debug an interface. I knew going in that once the interface was working properly, the tool would be of limited use or interest. I also knew that without the tool, without the window into what was going on, the cycle of "we never got it!" being answered with "well, we sent it!" would go on forever.

As is so often the case, in the course of refining the tool and chasing down what the tool revealed, we found that there were many minor issues which appeared (falsely) to be a single, serious issue. Everyone was mostly right. No one was evil or stupid or uncaring. Once the tool gave us a shared context in which to have conversations, months of bickering were settled in days of rational, shared-reality-based email exchanges.
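The actual tool belongs to the client, so here is a hypothetical sketch in Python of the general shape of such a throwaway tool: a script that watches the drop directory on one side of a file-based interface and keeps a timestamped, checksummed log of everything that arrives. The paths and the polling interval are placeholders, not details from the real engagement, but a few dozen lines like these are enough to replace "we never got it!" versus "well, we sent it!" with a list both sides can read.

    #!/usr/bin/env python3
    # Hypothetical throwaway interface watcher. All paths are placeholders.
    import hashlib
    import time
    from pathlib import Path

    DROP_DIR = Path("/data/interface/incoming")   # where the other side drops files
    LOG_FILE = Path("/data/interface/receipts.log")

    seen = {}  # filename -> checksum of the version already logged

    def checksum(path: Path) -> str:
        return hashlib.md5(path.read_bytes()).hexdigest()

    while True:
        for f in sorted(DROP_DIR.glob("*")):
            if not f.is_file():
                continue
            digest = checksum(f)
            if seen.get(f.name) == digest:
                continue                          # this version is already on record
            seen[f.name] = digest
            stamp = time.strftime("%Y-%m-%d %H:%M:%S")
            with LOG_FILE.open("a") as log:
                log.write(f"{stamp}  {f.name}  {f.stat().st_size} bytes  md5={digest}\n")
        time.sleep(30)                            # crude polling is fine for a disposable tool

Once a receipts log like that exists, "did it arrive?" becomes a question with a checkable answer instead of an argument.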

Is the tool "throwaway" software? Perhaps. Was it quick and easy to create? Yes. Does it provide excellent value for money? Absolutely. In this day and age, is "throwaway software" an insult? I would argue only for those living in the past.

(Note that I am not advocating writing buggy, sloppy code. Quite the contrary: weighting the effort toward refinement and debugging, in my experience, produces a lower bug count and a better user experience.)

My advice is to get out there and start whipping off solutions to small- and moderate-sized problems. Your users will thank you for it, even if your software only lives as long as it has to.

Wednesday, May 9, 2012

Painless Tech Transition


I just noticed my Palm Tungsten E gathering dust on my desk. For years, it was my trusty and trusted PDA, worthy companion to my various non-smart phones.

I have been trying to tidy up my workspace and overcome my tendency to hang on to technology long past its expiration date--which reminds me, anyone need serial cables or an i-Opener?

The fact of the matter is that I haven't used my Palm in ages. Our consultancy made an utterly painless transition off of the Palm OS platform and onto the iOS platform. In other words, we swapped out our various Palms and swapped in iPhones and even an iPod Touch.

I am struck by how well that tech transition went, especially given that so many of our clients experience tech transition as slow, expensive and painful. So why did ours go so well?

I argue that our transition was painless because it was also patient and precise. We made a list of what we needed, what we had in the Palm + phone era and therefore what we expected to have in the iPhone era. Then we waited until we could get everything that we used to have. Simply put, we wanted to stop carrying three devices and start carrying one.

(Some of us had PDA + phone + pager; others had PDA + phone + MP3 player. Either way, it was two devices too many.)

Our requirements were not very exotic:

  • ability to make and receive phone calls
  • ability to send and receive text messages
  • support for time-and-materials billing
  • contact support
  • ability to send and receive emails
  • ability to make and share notes
  • access to a shared calendar as well as a personal calendar

We wanted to cut over without leaping into the void, so we didn't wait until our Palms died, although they were clearly wearing out. We started by having one of us get an iPod Touch and use it for a while. It took over a year before all the pieces we wanted were there and validated.

Once the iPod Touch user was up and running, moving to the iPhone was a no-brainer: the cost was significant, but well within reason. Our transition was painless in part because we had had a high degree of uniformity to start with. Her experience in moving from the Palm to the iPhone paved the way.

Originally, I had wanted to make sure that we had at least one Android user, one Blackberry user and one iPhone user. But the lengthy discovery process (largely me striking up conversations with fellow business travelers on trains and airplanes) indicated that supporting multiple platforms internally is more resource intensive than I expected or could tolerate. To my surprise, the consensus was that the iPhone ecosystem was more mature and more business-oriented.

While the recent Windows phones are well-reviewed, at the time the Windows phones were so bad that I did not consider them for long.

I gather that Android is catching up and might be more of a contender now, but I don't really know much about it.

What I do know is that the iOS platform has met and exceeded our expectations. The range of applications available is stunning and many of them are actually useful, instead of being merely impressive or amusing. I am shocked that I use the web browsing, mapping, on-line banking, built-in camera, music player and e-book reader as much as I do. I am not much of a gamer, but I even play games on it occasionally, which beats staring into space when I am too tired to read or work.

So a large part of the painless transition is the superior technology to which we transitioned, although I am constantly reminded of a warning given to me by an Android-oriented colleague: the iPhone is a lifestyle choice and one is likely happier if one embraces that. Although I am a Linux guy, I run a VirtualBox virtual machine with Windows in it mostly to support iTunes and to interface with my phone. So far, this has been a small price to pay.

But more than the quality of the target technology, I attribute our success to the lack of artificial deadlines and to the existence of clear, precise definitions of success. The only drawback I see so far is the assumption by many of our clients that we got iPhones so we could be part of the techno-cool-kid herd. That way lies madness. Instead, shoot for making transitions that land you in a place at least as good as where you started. You would be surprised by how many people settle for less.

Wednesday, May 2, 2012

Rigid Engineers, Obsessive MBAs

I feel a good, old-fashioned rant coming on. Just for balance, I will combine two rants: Rigid engineers and obsessive MBAs.

RIGID ENGINEERS

Engineering is a great discipline. I like it. I try to adhere to its basic tenets. I feel that engineering has the following strengths: it is rigorous, it is regular and it focuses on the relevant.

Bridges tend to stay up. Planes tend to fly. Ships tend to float. If you have a house or a car or a smart phone, you owe the engineering discipline thanks.

Software engineering is a little bit different: our building materials are abstract and our rules are a bit fuzzy and our adherence to our rules is not guaranteed.

Weinberg's Second Law: If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization.

However, my rant today is not about loosey-goosey software engineering, but rather its opposite: inflexible software engineering. The discipline of engineering serves poorly when there is no good rule to apply and the software's usage requirements create a very sloppy execution path. Instead of a measured, graduated response to a complex set of requirements, instead of a long chain of events from messy reality to a complex model of that messy reality, the rigid engineer gives us a clean abstraction, even if the clean abstraction does not, in fact, support the actual business process to be automated.

I have had to deal with engineers who actually raised the following objections to proposed designs or changes to their beautiful cathedrals of structured code:

  • "That would break my parser"
  • "That would require a goto"
  • "That would importing a large, messy library"
  • "My structure can't do that"
The subtext is always the same: "I want reality to be changed to suit my model." This is often expressed as "the user can do {long complex sequence of instructions} to get the same result. There is no need to change my code."

To me, this is the tail wagging the dog: software should do what it needs to do, first and foremost. It is the job of the engineer to make that happen. It is not the job of the engineer to change the scope of the task to make the software simpler, unless the scope is beyond the ability of the implementor. In that case, the implementation folks should say that and try to work with the users to find a compromise. Decreeing that some functions are a bad idea is not what I mean by "compromise."

This belief, that if your implementation cannot support it then the request is bad and the implementation is good, is related to the "great code is not great software" idea I touched on before, here.

OBSESSIVE MBAs

A colleague drew my attention to this great article:

http://www.forbes.com/sites/adamhartung/2012/04/20/sayonara-sony-how-industrial-mba-style-leadership-killed-once-great-company/

This article is a study in how classical MBA management has hurt Sony. The article describes a situation that I find increasingly common not just in consumer electronics, but also in IT. (I have ranted about other manifestations of this, notably here.)

This article describes the typical MBA focus on operating efficiency, a la Deming. This focus is great in the short term: if what you are doing today is what you need to be doing tomorrow, then finding better ways to do it is a great idea.

But in technology, figuring out what you should be doing tomorrow is often very important. Operating efficiencies gained with obsolete technology are often smaller than operating efficiencies to be gained by using current or even leading-edge technology.

Taking chances and taking risks as you figure out what it all means to you pays real rewards.

But classical MBA management teaches managers to be wary of the new and to concentrate on the current. In an industry where 18 months is a typical development cycle, a three year plan to increase efficiency will be two cycles out of date by the time it is finished.

I understand that operating efficiency makes sense. I realize that most organizations cannot handle constant flux. I appreciate the need to plan and to budget. But I am horrified at the ever-increasing tendency of the managers we meet to assume that all technology is pretty much the same and that three-to-five year roll-out plans make sense for software, with "freezes" while the implementation is done and evaluated. No wonder so many companies are out of date and out of touch with IT.

Wednesday, April 25, 2012

Programming by Any Other Name

These days it is hard to find even moderately sized systems which do not claim to offer rule-based options for configuration. These systems claim to provide the flexibility of encoding business rules without the hassles or requirements of actual programming.

Ha! say I. Call it what you like, computer programming is computer programming.
Juliet:
"What's in a name? That which we call a rose
By any other name would smell as sweet."
Romeo and Juliet (II, ii)

Inspired by the great Shakespeare, I say that programming, by any other name, still smells of sweat.

Note that I am not saying that rule-based systems are not useful, or powerful, or neat-o. I am saying that translating real-world policies into absolute statements in some rule-definition language is not a common skill and is so similar to most kinds of computer programming as to be indistinguishable.

Dress it up however you like: if you are not good at programming you likely will not be good at writing rules, let alone debugging rule implementations or grasping the cumulative effect of a large rule set on a large and complex application.

I suppose one could divide up computer programming into "predicate logic" and "computer stuff" (RAM, file I/O, databases, communications, etc). Using this division, I can imagine a person who excels at predicate logic but has no feel for the computer stuff, making this person uninterested in computer programming.

If such a person ended up as a business executive instead of a formal logic academic (or APL programmer), such a person might be quite good at writing rules. That person might even be able to debug her rules, to cope with the often-wide chasm between the abstraction presented by such systems and the concrete reality presented by the working environment. But how many such people can there be in the world? Certainly not several in every moderately sized company.

So when sales people tell you that, at last! you are free of the tedium of dealing with IT professionals, finally free of the tyranny of computer programmers, stop and ask yourself this: are you really ready to eat your own cooking?

As every programmer, even a junior one, knows, predicate logic in action is rife with unintended consequences: the inevitable-but-unforeseen results of rules as applied to the real world.

My personal favorite example of this involves lateness and snowstorms. Most or all of our clients above moderate size use Kronos Time & Attendance products. Most or all of our clients, in the frenzied enthusiasm of the original roll-out, go overboard with the rules, often with amusing unintended consequences. One client decided to implement a "better never than late" policy: if its employees were more than one hour late for work, they could not swipe in at all. This was intended to enforce punctuality. I do not know how it worked in that regard, but I do know that when I arrived mid-morning after a large snowstorm, I was greeted with a stream of employees leaving work: the snow had delayed them and they could not swipe in; unsure that they would be paid for the day, they were going home. Not much got done that day, but what did get done was mostly done by upper management, who were exempt from that rule.
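I was not privy to how that rule was actually configured inside Kronos, so the sketch below is my own reconstruction in Python, with invented shift times and an invented exemption flag; it is the policy as stated, not the vendor's rule language. Written out this way, the gap is easy to see: the rule knows about individual lateness but has no notion of a site-wide event like a snowstorm.

    # "Better never than late," written out as code. All details are invented
    # for illustration; this is not how Kronos rules are actually expressed.
    from datetime import datetime, timedelta

    SHIFT_START = datetime(2012, 2, 1, 8, 0)    # hypothetical 8:00 am shift
    GRACE = timedelta(hours=1)                  # more than one hour late: no swipe

    def may_swipe_in(punch_time, is_exempt):
        if is_exempt:                           # upper management was exempt
            return True
        return punch_time <= SHIFT_START + GRACE

    # The intended case: a habitually late employee is locked out.
    print(may_swipe_in(datetime(2012, 2, 1, 9, 30), is_exempt=False))   # False

    # The unintended case: a snowstorm delays the whole workforce by two hours,
    # the rule cannot tell the difference, and everyone goes home unpaid.
    print(may_swipe_in(datetime(2012, 2, 1, 10, 0), is_exempt=False))   # False

The rule did exactly what it was told to do; the policy behind it simply never imagined the day it would be applied to.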

In this case in particular, I like the "silver bullet" trope because even though there is no magic solution to the problem of translating human policy into machine-readable form, the problem can, werewolf-like, get very ugly very quickly under the right circumstances.

So see a doctor for medical problems, go to an accountant for accounting and get your rules from a machine-readable logic expert. Or don't, but at least go into your decision with your eyes open.

Wednesday, April 18, 2012

The Utility of Bad Examples

Today I am reflecting on lessons I have learned in my long career and how I learned them. The obvious approach is to remember the exceptional individuals who were great at their jobs while being pleasant and consistent and generally a pleasure to know. But who needs to be reminded that we can learn from paragons of excellence? Instead, I am contemplating what I learned from the deeply incompetent, the arrogant and the people I've met who cannot seem to learn from their negative experiences.

Many of us really love a good story about some fathead and his or her hideous blunders, but that is not what I present here. Like the poor, fatheads will always be with us. Avoiding them is the best course of action, if you have that luxury. Failing that, there are strategies for mitigating the ill effects of fatheadedness, but that is an entirely different story. Besides, we often have no choice but to let fatheadedness run its course. In those all-too-frequent cases, what is the benefit?

The benefit is experience, precious useful experience, and the attendant opportunity to learn from someone else's mistakes. In this post, I contemplate the utility of bad examples: what do the actions of fatheads teach us, to make us better at what we do?

In my experience, the truly spectacular fathead (SFH) is arrogant: so arrogant that he or she (usually a he, so I will use that pronoun for convenience) is above mere conventional wisdom. Sometimes "thinking outside the box" is valuable, but most of the time thinking outside the box is a waste of time or worse: for conventional situations, conventional wisdom usually suffices. In order to be worth it, thinking outside the box must present some clear additional benefit to compensate for the greater effort. I try to keep in mind that in today's IT environment, human attention is the most precious resource. Use it wisely.

(So why the endless praise of out-of-the-box thinking? Because it is the rare employment situation that gives kudos for doing the obvious, even if the obvious is the right choice. And, every once in a great while, thinking outside of the box saves the day. The trick is to pay attention so that you notice when your situation is abnormal, but that is another post.)

Since the SFH is arrogant, he is often an object lesson in understanding conventional wisdom: why do we never do that? Oh, THAT's why. Rather than recount amusing examples of stupid people doing stupid things, I offer this suggestion: keep track of instances of bold, innovative thinking. Follow up months or years later: did the bold innovation do better or worse than conventional wisdom expects? Why? I have learned much about why certain rules of thumb exist this way--mostly as I sat amid the smoking wreckage of some IT disaster, but at least I had something useful to do while I was sitting there.

Often the SFH makes the same kinds of mistakes over and over again; after all, it is almost a requirement that the SFH be unable to learn from his mistakes if he is to remain the SFH. Once you see the pattern, the SFH is useful as the embodiment of a certain kind of habitual error: when faced with certain kinds of decisions, ask yourself what the SFH would do and then DO SOMETHING ELSE. Ideally, do something that makes sense in light of what your SFH has inadvertently taught you.

Now we come to the most painful methodology: seeking useful feedback. I find that a simple description of an SFH strategy, preferably to someone outside the SFH's organization to avoid accidental embarrassment, often nets useful results. A very good outcome of bouncing your observations off of someone you respect is that they may reveal levels of the problem you had not seen. The most useful, but least pleasant, outcome of this exercise is the casual observation from your respected sounding board that YOU are guilty of the same bad judgment. If true, this feedback is invaluable in improving yourself.

I say "if true" because some people feel that it is either appropriate or polite to accuse the speaker of any fault the speaker finds in others. But this is not a get-out-of-jail card: not all unpleasant feedback is some kind of tit-for-tat reflex. This means that really have to think about that you said, what they responded and how their insight improves your understanding of your situation. If your understanding is not improved, then discard the feedback--but discard it discreetly and politely. You might need feedback in the future and remember: most people won't be willing to give it at all, so sometimes reflexly negative feedback is better than no feedback at all.

Ignorance is bliss--until it isn't. Think about why bad decisions turned out to be bad and you are on the path to making better decisions in the future.