
Thursday, October 27, 2011

No Small Software Purchases, Revisited

Here is a post outside the usual Wednesday cycle: it is a follow-up to a previous post, which can be found here.

An experienced executive working in a large American corporation had this reaction to that previous post:

Say your point is true: local solutions give corporate IT heartburn. What are they worried about?

  1. Data security/access control/access auditability - ad hoc solutions can be given high levels of data access, but the user-level access controls within the solution may not be so robust.
  2. Business continuity planning (disaster recovery) - making sure that the local solutions are covered by the disaster recovery plan. (data and software backup)
  3. Training (ISO 9001 audits of training, etc.) - making sure that the local entities have included their customizations in their training efforts in an auditable fashion, and that the global quality documents are written in a way that allows for local solutions (while being usefully specific about the global solution where no local solution exists)
  4. Financial controls - making sure that, if there are financial transactions in the local solution, they have adequate controls.
  5. Maintainability - does it break when you update your global software?

Accounting for all of these things ad hoc in a customized solution makes it expensive, unless the central software provides a framework for dealing with them all in a consistent manner.

These are all valid and interesting points. Here is how I answer them:

Data Security & Access Control
Security issues always come down to policy and enforcement, so I would phrase the question this way:

Can local solutions be trusted to implement global data security policy?

I claim that the answer is "yes." Implementing data security policies should be a dimension along which the local solution is designed, coded and tested.

In practice, I find that a local solution is not likely to be better than the prevailing adherence to data security policy and, admittedly, is often worse. For example, we recently installed a suite of apps for a client and provided support for using their global Active Directory (AD) as the source of credentials and of the permissions associated with those credentials. But it was an uphill battle to get the local users to communicate with the global AD keepers about whose account was allowed to do what. It was not easy even to establish which existing permissions the app's functions should map to.
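
In code terms, "implementing global policy" usually means deferring to the central directory rather than inventing local accounts. Here is a minimal sketch of that pattern, assuming the Python ldap3 library; the server, base DN, service account and group names are all hypothetical:

    # Check group membership against the corporate AD over LDAP, so the
    # local app's roles map onto centrally managed permissions.
    # Hostnames, DNs and group names below are made up for illustration.
    from ldap3 import Server, Connection, ALL, NTLM
    from ldap3.utils.conv import escape_filter_chars

    server = Server("ldap://ad.example.corp", get_info=ALL)
    conn = Connection(server,
                      user="EXAMPLE\\svc_localapp",  # hypothetical service account
                      password="...",                # fetched from a vault, not hard-coded
                      authentication=NTLM,
                      auto_bind=True)

    def user_groups(sam_account_name):
        """Return the AD groups a user belongs to."""
        safe = escape_filter_chars(sam_account_name)
        conn.search(search_base="dc=example,dc=corp",
                    search_filter=f"(sAMAccountName={safe})",
                    attributes=["memberOf"])
        return [str(g) for g in conn.entries[0].memberOf] if conn.entries else []

    def may_approve_orders(user):
        # The local app grants this function only to an existing, centrally
        # managed group -- no new, locally invented permission scheme.
        return any(g.startswith("CN=Order-Approvers,") for g in user_groups(user))

The point of the sketch is that the hard part is not the code; it is the conversation with the AD keepers about which existing group means "may approve orders."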

Given how security policy implementation in large organizations usually plays out, I can sympathize with this concern: is this new way to access data consistent with our policies? But there is no reason that the answer shouldn't be "yes" unless your policies are poorly understood or your infrastructure is closed or poorly configured.

Disaster Recovery
DR is an activity which, like prayer or the tennis lob, is rarely rehearsed and usually called upon only in times of great distress. Can a local solution be fit into the DR plan? Almost always. Is this question usually asked? No, it is not. The good news is that, more and more, DR is more-or-less automatic: SANs, clusters and remote data centers have relieved many local entities of this responsibility.

To be clear, I understand DR to protect against three specific kinds of data or service loss:
  1. Human error: someone pushed the button and wiped out lots of data.
  2. Hardware failure: a key component, such as a disk drive, has failed.
  3. Disaster: the building/city/state is flooded/burning/radioactive.
There are good remedies for each of these risks, and local IT solutions should be capable of adapting to nearly any of them.
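
As a minimal sketch of what "adapting" can look like, here is the kind of backup hook a local solution can expose so that the global DR machinery covers it; the SQLite store and the network-share path are hypothetical:

    # Nightly snapshot hook: produce a consistent copy of the local data
    # and drop it where the corporate DR plan (SAN, remote data center)
    # already picks things up. Paths below are made up for illustration.
    import shutil
    import sqlite3
    from datetime import date
    from pathlib import Path

    LOCAL_DB = Path("/srv/localapp/app.db")
    DR_DROP  = Path("/mnt/dr-share/localapp")  # already covered by corporate DR

    def snapshot():
        DR_DROP.mkdir(parents=True, exist_ok=True)
        target = DR_DROP / f"app-{date.today().isoformat()}.db"
        src = sqlite3.connect(LOCAL_DB)
        dst = sqlite3.connect(target)
        try:
            # sqlite3's online backup API yields a consistent copy even mid-write.
            src.backup(dst)
        finally:
            src.close()
            dst.close()
        return target

    def restore(snapshot_path):
        # One routine covers all three cases: fat finger, dead disk, dead building.
        shutil.copy2(snapshot_path, LOCAL_DB)

    if __name__ == "__main__":
        print("wrote", snapshot())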

Training
We have often found that getting onto the training docket is not easy, but we have never had a problem fitting into any existing training and documentation regimen. However, there are many issues:
  • Politically, is it safe to expose the local solution to review?
  • Practically, can one provide training materials which do not rely on local experience?
  • Who will keep the training materials up-to-date?
Again, I see no reason that a local solution, per se, could not comply, but I often see local solutions which do not. Often the reason is simply that the hoops are large: an expensive global solution can afford a real budget for documenting training, while local solutions feel that they cannot.

Financial Controls
In theory, financial transactions are covered by whatever data security policies are in place. In practice, this is often not the case.

Finance is a tricky area: giving local consultants information about how financial transactions are constructed, verified and secured really goes against the financial grain. It is an issue for which we have no good solution, because we cannot find a way out of the following conundrum: the financial controls are so tight that we cannot see what we are doing, so we are forced to have a backdoor into the data stream, which of course is only present in the development environment.
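
For what it is worth, here is a minimal sketch of that uncomfortable workaround, with hypothetical names: a data-stream tap that exists only in the development environment, because the production controls will not let us see the real thing:

    # Dev-only visibility into an otherwise opaque financial data stream.
    # The environment variable and function names are made up for illustration.
    import json
    import os

    IS_DEV = os.environ.get("APP_ENV", "prod") == "dev"

    def post_to_ledger(txn):
        """Stand-in for the real, tightly controlled posting routine."""
        pass

    def process_transaction(txn):
        if IS_DEV:
            # The "backdoor": dump the transaction so we can see what we are
            # doing. This branch is unreachable in production by policy.
            print(json.dumps(txn, indent=2, sort_keys=True))
        post_to_ledger(txn)  # the controlled path, dev and prod alike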

Maintainability
"Does it break when you update your global software?" All too often, the answer is "yes" because your global staff does not know or does not care about local dependencies. But in my experience, the full answer is "yes, probably, but with decent communication your service outages should be infrequent and brief." I say infrequent, because there is enormous pressure on global infrastructure to remain constant. I say brief because your local solution should be aware of its dependencies and adapting to changing infrastructure should have been built in.

In our experience, the ramifications of global change are so great that we suffer from the inverse: we are constantly informed of far-off global changes which do not affect us at all. But better that than the alternative.

Wednesday, October 26, 2011

IT Zombies: SEP In Corporate IT

(SEP stands for Someone Else's Problem.)

Once upon a time, corporate IT people usually had only "the System" to worry about. The distinction between hardware, operating system (O/S) and application (app) was not so stark. Often your mainframe ran only one visible app, which made heavy use of O/S-specific features, and the O/S relied heavily on particular features of its hardware.

Whether the System was largely homegrown, or bought commercially, it was around for a while and its minions learned to tend it faithfully. There were even gurus who actually knew just about everything there was to know and who could extend the system if that were required.

As time has gone on, the typical corporate IT infrastructure has become vastly more complex and vastly less stable. It is less stable in both senses of the word: it changes rather frequently and it often behaves in ways one would not have chosen. By this I mean that its behavior is the sum total of all those parts, and the net effect was not planned or specified; it just happened.

Because it is often my job to deploy custom-made technology to bridge gaps between existing pieces of the infrastructure, I am frequently in the position of exercising existing pieces in new ways. This is rarely a happy place to be: current IT comes in chunks so big and so complicated that it is rare to find that anyone knows what will happen when configurations are changed and previously inactive features are turned on.

When things do not go as planned, I am frequently in the position of asking in-house staff to investigate issues and perhaps even resolve those issues. There are two happy endings to this story and one unhappy ending.

  • The first happy ending: the in-house staff are expert in the particular technology, and they either get it to work or tell me how better to interact with it.
  • The second happy ending: the in-house staff have a good relationship with the vendor, the vendor is competent to support its own products, and the vendor either gets it to work or tells me how better to interact with its product.
  • The unhappy ending: the in-house staff do not feel ownership of the offending piece of IT and just shrug when I ask. In other words, they feel that the malfunctioning infrastructure is someone else's problem (SEP).
Once we feel that something is SEP, we are content to just shrug or, almost as irritating, sympathize but do nothing else.

I suspect that the SEP effect is, at least in part, generated by the Big Purchase phenomenon which I have discussed in a previous posting.

I am pretty sure that "wrong sizing," whereby a workforce is reduced below feasible levels and people are told to 'do more with less,' is also a major cause of the SEP attitude. Unclear lines of responsibility, overwork and underappreciation certainly do not help.

But I am not sure and I am not sure that it matters. Once an environment is permeated with SEP, it is very hard to get anything done. Specifically, SEP leads to the following situations:

  • The in-house staff, unwilling or unable to contact the vendor, bombards me with half-baked theories and half-baked solutions, hoping to get me to go away.

  • The in-house staff points me toward some documentation or irrelevant email from the vendor. See, here's your useless answer, now go away.

  • The vendor invites me to a conference call during which they try to convince me that I should not be doing whatever it is that I am doing. This is like the old joke about lazy doctors: Patient: Doctor, it hurts when I chew on the left side of my mouth, what should I do? Doctor: Don't chew on the left side of your mouth.

Once I find myself in any of these situations, I have only one strategy that works for me: relentless but polite pursuit of well-defined, simple goals. In my case, this often means taking up medieval quests:
  • actually wandering around an organization to follow up emails
  • camping out in people's cubes
  • talking to anyone upon whom I am fobbed off until they send me further down the line
  • filling out whatever forms need filling out
  • taking whatever conference calls I need to take
  • disproving whatever theories I have to disprove
I find that so few people these days actually follow up and are willing to go over even low obstacles that I am often able to get what I need. If Mr. Smith is out, I certainly can come back later. I am not above the implied threat either: I am willing to tell people that I am stuck until this issue is resolved, so I have nothing else to do until it is resolved, so I can come back just about any time today. Or I can wait here. Or I can come back tomorrow. Really. It is not a problem.

Even though I can cope with it, there is an aspect of this situation that I find tragic: I am watching formerly motivated, useful IT types become deadwood as SEP seeps into their careers. As they become comfortable saying the following tired phrases, they don't realize that their coworkers are writing them off as useless:

  • "I don't know anything about that and I don't know who to ask."
  • "We just set up the database, I don't know how the data gets in there."
  • "We used to do it all ourselves but now the Networking / Security / Server / Database group does that."
I do not think that every corporate IT situation is dismal. I am not in IT despair. But I am on the lookout for the issues in deploying technology that are due solely to IT zombies who have lost their will to chase down issues. They will eat your brain (and your motivation) if you let them.

Wednesday, October 19, 2011

Elaborate UI, Feeble App

A pet peeve of mine is the tendency of apps to have elaborate User Interfaces (UIs) but feeble capability.

I call the UI "the front end" and the other part "the back end." Once upon a time, the back end was where all the magic happened and where we put our effort. Now the pendulum has swung the other way and all the work seems to go into the front ends.

(There is an analogy to our celebrity culture in there somewhere.)

Specifically, I find many apps with elaborate UIs which just don't do all the things one would expect such an app to do. Generally users just shrug: it would be great if it worked that way, they sigh, but it doesn't.

For example, I was recently debugging an interface to a medical information system. I asked the users to verify the data for a given patient on a given day. They asked if I couldn't provide encounter IDs instead. Couldn't they use the app to search for the data in question? No, they could not. Instead, they could use an elaborate GUI to laboriously click on all the little icons which represented encounters, scan the cramped screen for the data, and repeat the process until they found what I was looking for. Blech.

While the apps lack functionality, the UI for each is full of bells and whistles: backgrounds that change color when a control has focus, lots of different fonts, dynamically created drop-down list boxes, etc. In fact, one requires a mighty desktop just to run the UI.

My theory is that today's typical GUI is so much work to build that development teams just run out of steam before tackling the functionality.

I have noticed that many current development environments encourage this: Visual C++, for instance, has a development environment that is a GUI, is about GUIs and is not so great for doing other kinds of programming, such as bit-bashing or file manipulation.

I understand how one might arrive at this point: using X-windows and ANSI C in the 1980s was exactly the opposite: the environment did little to support GUI generation and putting GUIs together in that era was painful. So we tended to be modest in our GUI ambitions.

With this generation of development environments has come a melding of front and back ends: you write the front end and the back end gets written in bits and pieces to drive the front end. This makes the distinction between front and back end difficult to maintain, which in turn leads to back ends which are amorphous--and difficult to test (how do you run 1,000 cases?) or to use automatically (how do you call the app automatically from the background?).

In fact, there is rarely more than one front end for any app: if you don't want to use the app interactively in a graphical environment, you are usually out of luck.

The only exception I see to this trend is the plastering of a thin GUI wrapper on something mighty, such as phpMyAdmin for MySQL, or GhostView for Ghostscript. I rather like these apps, but they could not avoid what I consider to be a solid architecture: there is a clean division between the front and the back because the front was done after the fact, by different people at a different time.

This front-end/back-end split gives the user real flexibility: one can use a command-line interface or choose from a variety of GUIs. Running through many test cases is easy. Running automatically is easy. Why don't more development teams build apps this way?
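
To make the shape concrete, here is a minimal sketch, with hypothetical names, of the division I am advocating: all the capability in an importable back end, and a front end thin enough that a test harness, a cron job or a GUI could each drive the same code:

    # Back-end logic: no knowledge of any UI lives here.
    def find_encounters(records, patient_id, day):
        """Return the encounters for one patient on one day."""
        return [r for r in records
                if r["patient_id"] == patient_id and r["date"] == day]

    # Thin command-line front end: one of potentially many.
    import argparse
    import json
    import sys

    def main():
        parser = argparse.ArgumentParser(description="Search encounters.")
        parser.add_argument("patient_id")
        parser.add_argument("day")
        parser.add_argument("--records", type=argparse.FileType("r"),
                            default=sys.stdin, help="JSON records file")
        args = parser.parse_args()
        for enc in find_encounters(json.load(args.records),
                                   args.patient_id, args.day):
            print(enc["encounter_id"])

    if __name__ == "__main__":
        main()

Running 1,000 test cases is now a loop over find_encounters; running from the background is a shell one-liner; and a GUI, should anyone want one, is just another caller.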

Wednesday, October 12, 2011

No Small Software Purchases

We seem to have entered a new age of Large Software Purchases.

When I started in this business, the mainframe era was ending and the super mini computer (yes, our terminology was lame even then) was on the rise. Departmental servers, in the shadow of the mighty core computer, sprang up everywhere. The hardware was expensive ($20K-$50K) and the software generally quite expensive too.

The computers started becoming personal and the range of prices for software used by businesses grew much, much wider than it had been before. There was even overlap between what people bought for home use and what was in use at the office, which surprised all of us, even Microsoft.

Along came Linux and Open Source and the price landscape changed again, for those daring enough or strapped for cash enough to take the plunge.

But recently I see a strong swing back to centralized purchases, central servers and corporate software. I see executives asserting themselves at the senior level, making large software purchases from their 50,000-foot perspective.

This shift is not inherently bad, but this central decision-making is often accompanied by a very unfortunate policy of "no support for local requirements." HQ buys it, you use it. HQ did not want to get mired in the details, so they do not know what, if any, local requirements there are. They even hint that local requirements are a function of local management incompetence.

I have seen a proliferation of large systems which are not well suited to the task at hand, or systems which are not properly configured for local needs, or systems which are both.

This trend has been explained to me as an attempt to conserve management time, which is often a company's most expensive resource. (I find that fact disturbing and there is probably a rant in there somewhere.) It is more efficient for there to be fewer, larger IT decisions, and it makes sense for those few, large decisions to be made by the decision-making experts, without an undue focus on minutiae.

I can understand that the sort of process problem that I make my living solving just isn't important enough to warrant senior executive attention. But I think that something has gone wrong when large swaths of the organization are making do with inappropriate or misconfigured "enterprise solutions."

I can grasp that sometimes a local problem should be left unsolved for the global good. But I think that such calculations should be explicit and revisited; instead, I see local issues being ignored in such large numbers that it is hard to believe that the sum total of all those procedural losses does not outweigh the cost of acting to prevent at least some of them.

I realize that economy of scale does come into play sometimes and that having a moderately sized central department can be more effective for some purposes than myriad local departments. But I am suspicious of this model when so often there are local needs dictated by regulation, competition or regional variation in the customer base.

I can see that large companies are complex and require some metaphoric distance before you can see the whole picture. But I suspect that this very complexity argues against the "one size fits all" philosophy. This feels to me like the old trap of trying to solve personnel problems or process problems with software. That does not work and I suspect that we will find that large software systems imposed from afar won't work either.

Wednesday, October 5, 2011

Creating GUIs Requires Some Artistic Talent

Prologue: GUIs vs TUIs
There are two main kinds of UI, following two different basic human cognitive models: recognize-and-point versus remember-and-verbalize. The underlying cognitive models follow human mental development: infants and young children recognize and point, while slightly older children up through adults acquire language and use that instead. If one is lucky enough to live that long, one may start to lose language in one's dotage and end up back at recognizing and pointing.

GUIs are all about recognize-and-point: you recognize the icon or button and you use the pointer to "shift focus" to the desired object. Text-based UIs, which we used to call simply "UIs" but which I will call here "TUIs," are all about remember-and-type: you remember the right command in this context and you type it.

GUIs should have an underlying metaphor: a physical desktop for operating systems, a physical piece of paper for word processors, a physical scroll of names for music players, etc.

GUIs are really great for two kinds of job:
  1. Software which one does not use often enough to memorize the commands
  2. Software which is aimed at people who do not use software well
TUIs are really great for the inverse:
  1. Software which one uses so often that typing is faster than pointing
  2. Software which is aimed at domain experts who want power over support
TUIs have a built-in metaphor which used to be distinctly non-metaphoric: a glass TTY terminal attached to a keyboard.

Of course, the GUI model is not purely graphical: there is a text-handling component to it because nearly every human/computer interaction, at some point, needs the possibility of text entry and text viewing.

If done well, as is the iTunes interface on an iDevice (iPod, iPad, iPhone), this text entry component is so natural that you don't think about it.

If done badly, as is the SQL window in MS-Access, this text entry component is jarring and much worse than a TUI would be.

Actual Rant
GUIs in general provide many options and lots of power. I have a problem with most of the actual GUIs I encounter: they are hard to use. In fact, most of them suck.

While the world seems to have gone completely graphical, we also seem to have decided that terrible GUIs are just a fact of life. Dilbert feels my pain:

[Dilbert comic from Dilbert.com]

While graphical environments are rife with possibility and provide all kinds of pre-made widgets, they do not provide, inherently, any of the following:
  • a clean, consistent layout
  • a useful visual metaphor for your particular context
  • guidelines for the scope or direction of the guts beneath them
If you want these properties for your GUI, you have to come up with them yourself. If you are a solid software engineer without any great innate graphic arts talent, then you need to get help with the art part and stick to the engineering. Really. We can't all be Michelangelo, Thomas Jefferson or even Edward Tufte. Better to admit that you need help than to soldier on, creating awful GUIs.

As far as I can tell, many GUIs are designed by analogy: "let's do this the way Microsoft Office would do it, even if we are trying to identify patients who have a particular condition, and Microsoft Office is trying to automate traditional clerical work." Close enough, right? So let's say that the patients are files and their surgeries are documents and maybe their test results are also documents, and soon the screen is a maze of tiny controls, all basically alike, leaving the user to remember what the current set of "documents" actually represents.

The fact that you can learn to navigate a bad metaphor doesn't change the fact that it is a bad metaphor.

And yet, the sad fact is that making all your bad GUIs look like Microsoft Word is better than making them bad in their own way. If your users expect a "File" menu bar at the top, whether or not they are dealing with "files," then finding one there is comforting. If you cannot provide a clean, consistent look-and-feel yourself, I suppose copying a popular one is better than creating an original, bad-but-novel UI.

Not all GUIs are terrible: I have watched die-hard Windows users interact with iTunes for Windows, which makes no effort to be Microsoft Office-like, and those die-hard Windows users find iTunes a delight.

So go ahead and create GUIs, if that is your job, but for God's sake do the graphical part, or get someone with graphical talent to do that part for you. Please. Your users will thank you--or at least, curse your name less.