Wednesday, September 28, 2011

Technology Audits, RIP

Once upon a time, when computers were not yet "personal" and phones were dumber than rocks, there were technology audits. If you are under 45, you may never even have heard of them, so rare are they now.

What I mean by "technology audit" is a formal, independent review of a piece of technology. These reviews usually consisted of running a predefined set of inputs through the technology, capturing the resulting outputs, and matching the actual outputs against the expected outputs.
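
Here is the sort of harness I mean, sketched in Python. Everything in it--the case format, the toy fee rule, the names--is invented for illustration; a real audit would run the client's actual technology, not my stand-in.

```python
# A minimal sketch of an audit harness: feed predefined inputs to the
# system under test and compare actual outputs against expected ones.
# The case format and the fee calculator are illustrative inventions.

def run_audit(cases, system_under_test):
    """Each case is a (case_id, input_value, expected_output) tuple."""
    failures = []
    for case_id, input_value, expected in cases:
        actual = system_under_test(input_value)
        if actual != expected:
            failures.append((case_id, input_value, expected, actual))
    return failures

if __name__ == "__main__":
    # Toy system under test: a shipping-fee calculator.
    def fee_calculator(order_total):
        return 0.0 if order_total >= 100 else 9.95

    cases = [
        ("at-free-shipping-threshold", 100, 0.0),
        ("just-below-threshold", 99, 9.95),
        ("zero-order", 0, 9.95),
    ]
    for failure in run_audit(cases, fee_calculator):
        print("MISMATCH:", failure)
```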

Sometimes, if running a test suite through the technology was impractical, we went through the logs and gathered up the actual inputs and the actual outputs for a day, or week, or month, and reviewed the outputs to see if they were within expectations. We often re-implemented business rules or other logic to help us confirm that the outputs were as expected.
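
The log-based variant looks much the same. Here is a sketch that assumes a day's traffic can be exported as a CSV of inputs and charged outputs; the column names and the re-implemented rule are, again, assumptions for illustration.

```python
import csv

# Sketch of a log-replay audit: harvest actual inputs and outputs
# from a log, then check the outputs against an independently
# re-implemented business rule.

def reference_fee(order_total):
    # Deliberately re-implemented from the documented policy,
    # not copied from the production code being audited.
    return 0.0 if order_total >= 100 else 9.95

def audit_log(path):
    discrepancies = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            total = float(row["order_total"])
            charged = float(row["charged_fee"])
            expected = reference_fee(total)
            if abs(charged - expected) > 0.005:  # tolerate rounding
                discrepancies.append((row, expected))
    return discrepancies
```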

Whatever the name and whatever the precise methodology, this concept seems to have become extinct.

It may be that I am wrong. It may be that this process still exists but has a new name, or is done by a specialized sub-industry. But I don't think so: I think that this concept is dead.

One factor, certainly, is the declining cost of much software: if it was cheap or free to acquire, perhaps people feel that it is not cost-effective to audit its operation. I cringe at this explanation because the price tag is not a very good proxy for value to the company and because the cost of malfunction is often very high--especially if one uses human beings to compensate for faulty technology.

Another factor, I think, is the popularity of the notion that self-checking is good enough: software checks itself, sys admins check their own work, etc. I cringe at this explanation because I am all too aware of the blind spots we all have for our own work, our own worldview and our own assumptions.

In the interests of full disclosure, I should note that I am a consultant who, in theory, is losing business because I am not being hired to do these audits. I would claim that I suffer more from working in auditless environments, where many key pieces of technology are not working correctly, but this may be an example of a blind spot.

Many of the business folks and IT folks I encounter are not even clear on what the point of the exercise would be; after all, if their core technology were grossly flawed, they would already know it and already have fixed it, no?

But the usual benefit of a technology audit is not documenting gross failure, obvious misconfiguration or currently out-of-use pathways (although, alas! these all happen sometimes). Rather, the usual benefit of a technology audit is the revelation of assumptions, the quantification of known undesired situations and the utility of having people looking afresh at something critical, while all sharing the same context.

"Wait!" I hear you cry, "can't one do one's own audit of one's own technical context?" Well, yes, one can, but the benefit is often much less. Just as self-inspections have their place but are not accepted in place of outside inspections, so self-audits are useful but not really the same thing.

(I do often build self-audits into my designs, because I want to avoid the avoidable and to make explicit my assumptions about what "success" means in every context. But self-audits are not a complete solution to this problem. And my clients often scratch their heads at the self-audit, which seems to do nothing but catch mistakes, many of them my own mistakes. Which is precisely what they are supposed to do.)
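
For the curious, here is the shape of a self-audit I might wire into a batch job. The specific checks--one result per input, no negative fees--are invented examples; the point is that the assumptions about "success" are written down and enforced rather than left implicit.

```python
# Sketch of a self-audit built into a batch job: after processing,
# verify the output against explicit assumptions about what
# "success" means for this run. The checks below are illustrative.

def process_orders(orders):
    return [{"id": o["id"], "fee": 0.0 if o["total"] >= 100 else 9.95}
            for o in orders]

def self_audit(orders, results):
    problems = []
    if len(results) != len(orders):
        problems.append(f"expected {len(orders)} results, got {len(results)}")
    for r in results:
        if r["fee"] < 0:
            problems.append(f"negative fee for order {r['id']}")
    return problems

orders = [{"id": 1, "total": 120}, {"id": 2, "total": 40}]
results = process_orders(orders)
for problem in self_audit(orders, results):
    print("SELF-AUDIT:", problem)  # catches mistakes, many of them mine
```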

When I am trying to deploy a piece of technology, the lack of auditing means that when someone else's piece of technology does not work as I expect, I am often without any way of knowing why. For instance, which of the following explanations best fits the situation?

  • Option A: I misunderstand what is supposed to happen; this means that I will have to change once I figure out that I am the problem.

  • Option B: I am in a gray area; one could argue that my expectations are reasonable or that the observed behavior is reasonable. I will likely have to change, but I have a chance to make a case to the other side.

  • Option C: the undesirable behavior is clearly and definitively wrong or not-to-spec. In theory, I should be able to inform the other side and have them fix their stuff.
But without audits, and the awareness of what one should expect from which systems, I am wandering alone in the desert. All too often, the client's in-house staff just shrugs and I am left to work around the issue. More often than one would expect, the undesired behavior is "fixed" in an upgrade or new release and now, suddenly, my piece is "broken". This is not good for the client and, contrary to popular belief, is not really good for me. Getting paid over and over again to keep the same piece of technology running is not only unrewarding professionally, but it is a terrible advertisement for our services: Buy now! Our stuff works for a while.

A dark fear I have is that audits are out of favor because executives just don't want to know: they don't want to hear that the expensive, hard to buy, hard to deploy, hard to justify technology investment is not working properly, because they don't know how to fix it. I have had more than one manager tell me "don't bring me problems, bring me solutions" which sounds really manly, but is not very useful. How do I know what solutions to bring if I don't know what is wrong? Do I conceal all issues for which the solution is not obvious to me? In theory, once a problem is identified, the whole team can help solve it.

As always, I would love to know if this lack of audits only exists in companies whose focus is not high tech, or if it now has a cooler name or has been folded into something else, such as management consulting or business process engineering.

Wednesday, September 21, 2011

Iterative UI Design

I am a big fan of the traditional software engineering model, which I define as going through the following steps:

  1. Gather requirements
  2. Produce a functional spec
  3. Design a solution
  4. Produce a technical spec
  5. Implement
  6. Test
  7. Document
  8. Deploy
I think that this model is excellent for system-to-system interfaces and other projects without an important User Interface (UI) component. However, I find that this model does not work well for User Interfaces. In fact, it tends to pit the implementers against the users, which is never a good thing.

The problem I encounter is this: large amounts of work go into the pre-design, design and implementation stages; then users get a look at the prototype, or early version, or whatever, and they want changes. In the old days, we called this phenomenon "it is just what I asked for, but not what I want."

In this scenario, the development team gets defensive: they did what they were supposed to do, and they don't have the resources to re-do large parts of the project. In fact, they don't feel that they have the resources to re-do much of anything: they are planning on fixing only bugs (and probably only major bugs at that). Often there are recriminations from development toward the users: "you didn't tell us that" etc. Often the users are frustrated: no one asked them these questions.

I maintain that this is predictable and avoidable by the simple expedient of assuming that UIs are different and that they require iterations of a functional spec / implementation / user feedback cycle. Get user feedback early and often. Don't plan on a one-and-done implementation.

If you accept the fact that very few people can fully imagine what a given UI would be like to use in the real world, then you won't feel like a failure if you (or your team or your users) can't do it either.

Of course, simply drawing out the development process is inefficient and ineffective: I am not proposing a longer development schedule. Instead, I am proposing a development schedule with less design time and more user-feedback sessions.

Ideally, the UI development process is a conversation between the users and the developers, which means that the feedback / change / review cycle has to be short enough that the conversation stays alive. And someone has to moderate the changes so that the process moves forward, not around in circles. You know you are off track when you start getting feedback of the form "I think I liked it better a few iterations ago."

If you try the traditional engineering model, you are likely to find that the UI is not what the users want, even though it is exactly what the developers set out to create, which leads to a round of blame when the UI either has to be redone or has to remain inadequate.

On the other hand, if you iterate properly, you are likely to find that your users are happier and your developers are less grumpy, without having to spend substantially more time on the process. Instead, you spend the time in different ways.

Wednesday, September 14, 2011

False Consensus

As an information systems designer and developer, I specialize in bridging the gaps between existing technologies. As a consequence, my projects often involve more than one group inside the client organization.

Sometimes when I am dealing with multiple groups, I find myself stymied in the requirements-gathering stage of a project. I keep hearing a nice, shallow, consistent story from all the parties and I am highly confident that near-perfect consensus has not actually been achieved.

I should state that I do not worship at the altar of consensus. I prefer consensus. I strive for consensus. But I do not insist on it, or pretend that it is always worth the effort. Sometimes people within an organization have different views or goals and sometimes one side needs to win and the other side needs to lose so that the organization as a whole can win.

Alas, apparent near-perfect agreement is, in my experience, most likely to be based on miscommunication and often deliberate miscommunication. Since so many organizations do worship consensus, it is often seen as unprofessional to disagree. To avoid seeming unprofessional, or antagonizing the boss, people bend their words to simulate agreement. I call this "false consensus" because all parties only seem to be in agreement.

Without public and acknowledged disagreement, there is little chance of conflict resolution. (I do not mean to imply that rudeness or aggression are called for; instead I mean to imply that resolving a conflict usually requires that the conflict first be acknowledged and explored.)

As long as we are in the realm of words, and not in the realm of action, this word-bending works, or at least appears to work. But once action is called for, as in the implementation of a piece of software, the false consensus is exposed. Often it is not recognized for what it is and is instead reported as a software misfeature or bug, which it is not.

For example, some years ago I worked on a project to create an HTML document for a customer. There were some real issues to be decided about the actual purpose of this document:
  • was it supposed to help customers place orders?
  • was it supposed to help customers understand the response to their order?
  • was it supposed to help internal users process the orders?
The answer was "yes". This wonder-document was supposed to serve all three very different audiences at once. I was highly confident that this was impossible, so I suggested that we create three documents instead, but that was deemed unnecessary, and just the sort of thing that consultants do to pump up our fees.

I tried a different approach: which items from the product catalog were to be included? This got an easy answer: all of them; why would any items ever be left out? So I gave up and did as I was asked.

At the first review of the document, there was horror: what were the internal items doing in the document? The folks who dealt with orders wanted the non-ordering items removed. The folks who dealt with post-processing wanted the order-only items removed. While we all agreed that "everything" should be in the document, there was no agreement on what "everything" meant. When I pointed out that "everything" generally means "all of the members of a given set" there was eye-rolling and muttering about the difficulty of communicating with computer people.

Eventually, at the end of the project, when it was most expensive and stressful to do so, we hashed out answers to the questions that the false consensus had obscured.

I still do not have a good strategy for coping with this issue, because there are often powerful internal political and social forces driving the people working for the client to pretend to agree utterly on utterly everything. All I can do is try to estimate the impact of this practice on my projects and charge accordingly.

I would love to hear about ways to get around this and I would love to know if other IT professionals encounter the same issue; maybe I need to get out more.

Tuesday, September 13, 2011

Introduction

I want a soapbox from which to rant about technical matters I encounter in my day job, as a medical information systems creator.

Ideally, from my perspective, this soapbox would be free and easy to use and would not involve social media. After all, if my friends were interested in my ranting, I would not need this outlet in the first place.

Ideally, from your perspective, these rants would be interesting, useful, amusing or--at least theoretically--all three.

This being a soapbox, I would not be required to be formal, businesslike or even fair. You, the reader, would understand that I am not engaging in formal discourse--although this is public discourse, which is a bit strange. I am hoping that this blog on venerable old Blogspot will fit the bill.

So what do I use for a title? Well, I am certainly inclined toward grumpiness, at least professionally; by tech standards I am rather aged (I remember when these were called "web logs" before that became just too much work to say); I work in the business world and most of my work is in information technology, so I am going with "Grumpy Old Business IT".

My intention is to generate a weekly post on Wednesdays. We will see whether, in a couple of years, I have actually made it past the first month.