I recently had to review the operation of a fax-based system I created to deliver clinical reports to doctors. Someone claimed that a report had gone astray and in response, I had to dig through the logs and the database rows to confirm that the report had been delivered.
This left us at a very faxly impasse: I say the report arrived and they say that it did not.
This somewhat tiresome trip down memory lane made me reconsider the place of fax technology in the current environment.
Once upon a time I liked facsimile technology. It was the best way to deliver nicely formatted reports reliably to a specific location with a low chance of eavesdropping.
(Once the paper report came out of the fax machine, security was an open issue, but it was also someone else's problem.)
I still like the maturity of the technology and the fact that I have lots of mature code that does cool fax-related things.
What I do not like is the usual list of issues with a waning technology as well as some fax-specific issues.
The usual waning-technology issues are these:
- The infrastructure (POTS) is shrinking; in fact, since our office went to VoIP, I can no longer debug faxing in-house.
- The hardware is harder to come by; I am having to hoard fax modems to ensure that I have spares.
- The system software is no longer common; it is not installed on servers by default and it is not easy to integrate serial lines into the clustering environment.
The specifically fax issues fall into one of two categories: inherent and acquired. By "inherent" I mean that these issues are a part of the faxing technology itself. By "acquired" I mean that these issues have arisen because our environments and expectations have changed, making faxes seem degraded by comparison with prevailing norms.
The inherent issues are the unreliable delivery and the degradation of retransmission; a fax of a fax is often pretty hard to read. The unreliable delivery is more of a problem: paper jams, ink runs out, fax machines get turned off and phone lines are sometimes busy. I refer to the protocol jargon meaning of unreliable: it may work most of the time, but I cannot really tell whether it worked, at least without calling and asking.
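To make "unreliable" concrete, below is a minimal Python sketch, with an invented table layout and invented status values, of the kind of delivery-log check I end up doing when someone claims a report never arrived. Note what it cannot tell you: "sent" only means the sending side completed a transfer, not that a readable page ended up in the right hands.

```python
# Minimal sketch with a hypothetical schema: look up the last logged delivery
# attempt for a report. "sent" means the sending side completed the transfer;
# it says nothing about paper jams, empty toner or who picked the page up.
import sqlite3

def last_delivery_status(db_path: str, report_id: str) -> str:
    """Return the most recent logged status for a report, from the sender's point of view."""
    conn = sqlite3.connect(db_path)
    try:
        row = conn.execute(
            "SELECT status, attempted_at FROM fax_deliveries "
            "WHERE report_id = ? ORDER BY attempted_at DESC LIMIT 1",
            (report_id,),
        ).fetchone()
    finally:
        conn.close()
    if row is None:
        return "no delivery attempt logged"
    status, attempted_at = row
    return f"{status} at {attempted_at} (sender-side view only)"

# print(last_delivery_status("faxlog.db", "RPT-2011-1207"))
```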
The ways in which our expectations have left faxes behind are these:
- The transfer speed is now rather low.
- The data is not integrated into anything else: the report lands on paper and stays there.
- The report arrives at a fixed physical location but more and more we move around when we work.
- The security is now rather lacking; back in the day, the point-to-point nature of POTS was pretty secure. Now, the lack of passwords and access logging is pretty lame.
My investigation ended with my system claiming to have delivered the report and the user claiming that the fax never arrived. Finally someone in the target office found the paper and all was once again right with the world.
All's well that ends well, but I must confess that I am looking forward to the day that doctors find something to replace faxing. Soon I hope.
Wednesday, December 21, 2011
Lack of Feedback = Madness
A mental health professional I know once defined insanity as the state in which the model of the world inside one's head is sufficiently out of alignment with the world outside one's head. I found this to be rather uninspiring at the time, but the older I get, the better this definition seems.
I note that there are at least two ways to end up in the "insane" category: either through some organic problem, ie a malfunctioning body, or through bad input. It is this second category that interests me today because I see a parallel to a common workplace situation.
The common workplace situation is as follows:
- A manager makes a strong statement, such as "everyone needs to be using System X by Some Date."
- The rank-and-file try to convert to System X and find issues; they bring the issues to their manager, who punishes them for their failure.
- Now that they know that "failure is not an option," the rank-and-file claim to be fully converted to System X by Some Date--perhaps even earlier.
- In fact, System X is imperfect (as is every system) and there are myriad hidden workarounds in place.
- Officially, the manager's decree is in full effect and all is well; actually, things are very different.
- The manager's model of the situation diverges ever farther from reality; in effect, the manager is going crazy.
- At some point, there is a crisis; my favorite is the crisis of cutting off funding to the consultants running workarounds or to the maintainers of systems other than System X. This crisis has measurable, undeniable consequences.
- The manager comes to the painful realization that all is not as he or she thought. He or she feels betrayed and blindsided. The members of his or her organization feel that his or her ignorance (read: insanity) is his or her own fault. Everybody loses.
Saturday, December 10, 2011
How Much Technology Is Enough?
I am generally a minimalist, at least when I am designing or implementing information systems. To me, minimalism means doing it all, but not to excess.
So when I go out to dinner, I will have a steak, a baked potato, a spinach salad and some red wine, but it will be a small steak and a reasonable amount of wine.
I realize that many people would quibble with that definition of minimalism, so instead in this post I will use the term "optimal" to mean "everything you need but nothing that you don't need."
While I feel that the finished product should be optimal, I do not feel that a project's resources should be only just enough. Instead, I believe in what a colleague once referred to as "cheap insurance." By this I mean that I believe in providing an excess of whatever critical resources can be cheaply and easily procured.
For example, I often buy a USB external disk (which we call a "can") or two just for the project to make sure that disk space and backup will not be a problem. USB cans are cheaper than a project delay or a project disaster. Similarly, I keep a few unused computer system units around because you never know when you will need a special-purpose machine or want to provide a contractor with a machine.
All this keeping of stuff makes me a bit of a techno pack rat, although the difference between techno pack rat and project savior is often timing or dumb luck. And shelving is cheaper than failure.
Since we often help customers get off of old platforms, I have another reason (or, according to some people, rationalization) for keeping dated technology around: the ridiculous pace at which the high tech world changes.
Even applied technology tends to be very much of its time: often the dated peripheral has to match the dated system unit and match the dated driver running under the dated operating system. All too often trying to use a shiny new peripheral on a mature system simply does not work; worse, sometimes it works to some extent, sucking up your time as you fiddle in a vain attempt to get full functionality.
Another reason that I hoard old tech is that I have a deep respect for working systems. I know from painful experience that every time you fiddle with a working system you risk having a system that will never work again. Am I a nervous Nellie or a scarred veteran? Opinions vary, but I will stick to my cautious ways. I like taking chances just fine if I have a fall-back position.
Not only do I fear that working systems are only working until you fiddle with them: I also fear that projects of more than trivial scope never go as planned. I know that contingency planning is good but not enough, so I want more: I want options. I want flexibility. So I need lots of extra stuff in order to be able to suddenly decide that the system would work better if there were two servers instead of one, etc.
I find that debugging distributed information systems sometimes requires creating a parallel universe. If the bug or issue or misconfiguration or mismatch is deep enough, you need a testbed in which to try changing fundamental aspects of the system. Sometimes I need a consult from an external expert, in which case I want to be able to deliver a working prototype to them while still having a working prototype in house.
So I find that in order to deliver enough product, I need way more support than I expect. However, I notice that the same people who tell me I am going overboard on the technology resources for projects are often also the people whose output is lacking that final polish, those extra features that distinguish adequate from good and good from great. Shelving is relatively cheap: I am going to keep pushing for great.
(Although even I have my limits: I have 8 bit NICs and CGA video cards I would be willing to part with at a great price.)
Wednesday, December 7, 2011
In Praise of "Pivot Technologies"
Our clients are often pushed to adopt new technologies to accommodate political or financial time lines. More and more, we find that our clients are pushed to move from one paradigm, system or methodology to the next one in a single, terrifying leap.
In fact, I wrote a post about that a little while ago which you can find here if you are so inclined. The executive summary in this context is simply that making technology transitions can be done faster, safer and better with evolution instead of revolution, with smaller, well-defined steps instead of jumping out of the airplane, pulling the rip cord and praying for a soft landing.
So if large leaps of faith make me queasy, what do I propose instead? I propose what I call "pivot technologies." To me, this term means technology which can operate in more than one mode. In practical terms, I mean stepping stones from one technology or methodology, which is being phased out, to a replacement technology or methodology.
For example, in our consulting practice we often provide on-the-fly translation from one set of codes to another set of codes. This allows users to enter transactions using the old set of codes into the new information system.
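What that translation layer amounts to is not complicated. Here is a stripped-down Python sketch, with invented codes and field names, just to show the shape of the thing:

```python
# A stripped-down sketch of the "pivot" idea: translate legacy codes on the fly
# so users can keep entering what they know while the new system sees only the
# new vocabulary. The mapping and code values here are invented for illustration.
LEGACY_TO_NEW = {
    "CBC": "LAB-0001",   # hypothetical legacy test code -> new catalog code
    "BMP": "LAB-0002",
    "LFT": "LAB-0003",
}

def translate_transaction(txn: dict) -> dict:
    """Rewrite a transaction's code field, flagging anything we cannot map."""
    new_txn = dict(txn)
    code = txn["code"]
    if code in LEGACY_TO_NEW:
        new_txn["code"] = LEGACY_TO_NEW[code]
    else:
        # Unmapped codes are routed to a review queue rather than silently dropped.
        new_txn["needs_review"] = True
    return new_txn

print(translate_transaction({"code": "CBC", "qty": 1}))
# {'code': 'LAB-0001', 'qty': 1}
```

The design point worth noticing is the else branch: anything the pivot cannot map gets flagged for review rather than quietly discarded, which is what keeps the stepping stone from corrupting the new system.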
Until recently, getting approval for pivot technology projects was relatively easy; technology shifts were common and everyone understood that pivot technologies were cheap insurance against painful transitions or even failed transitions.
Recently we have run into a new conventional wisdom. Now we hear that pivot technology projects are a bad idea for two reasons: they are a crutch and they never go away.
Executives say pivot technologies are a crutch because they enable users to avoid making the mandated transition. I am not clear about how easing a transition is the same as impeding it, but there you are.
Executives say that pivot technologies never go away; I assume that the point is that one ends up with an environment cluttered with no-longer-needed pivot technologies which are never retired and cleanly removed.
It is true that I have seen that vicious cycle: a pivot technology is rolled out as a crutch, then the transition to the next technology falters, or is even held up by people clinging to the pivot, and then you have the pivot but not the transition. But that scenario is not inevitable; indeed, it should not even be likely. After all, the whole point of deploying a pivot technology is to have greater control over the transition, not less control or none at all. So make a plan and stick to it. And cover your bets with a pivot: you might be able to jump the stream in a single bound, but why take the chance?
Wednesday, November 30, 2011
Leaping Into The VOIP
Recently we shifted our company from Plain Old Telephone Service (POTS) to Voice over IP (VoIP). To my surprise, even though designing, creating and deploying new technology is our business, I was a bit apprehensive. I take this as a valuable reminder of what it is like to be on the other side of the table.
POTS is what telephone service has been since it was conceived: copper wires carrying signals from one telephone handset to another, with lots of switching and amplification in between.
I was comfortable with POTS, with faxes, with modems, with copper wires and splitters and the like. When we built new offices some 15 years ago, I had made sure that there were plenty of phone lines coming into our office: some for our internal PBX (that thing known to most as "the phone system"), some for our fax machines, some for our modems.
I was so comfortable with POTS that this transition was put off for a really long time and much of it happened when I wasn't looking.
Over the years, faxing migrated from special-purpose fax machine hardware to fax servers and the like (Open Source fans that we are, we used HylaFax). We still needed fax lines, but not so many.
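For the curious, the sending side of a fax server boils down to handing the rendered report to the queue and letting the server worry about dialing and retries. Here is a rough Python wrapper around HylaFax's sendfax command; the number, path and flags are illustrative, so check sendfax(1) on your own system rather than trusting my memory of the options.

```python
# Rough sketch: submit a rendered report to the HylaFax queue via sendfax and
# let the server own dialing, queuing and retries. The destination, path and
# flags below are illustrative only; consult sendfax(1) on your own system.
import subprocess

def queue_fax(destination: str, report_path: str) -> int:
    """Hand a report to sendfax; return its exit status (0 means 'queued', not 'delivered')."""
    result = subprocess.run(
        ["sendfax", "-n", "-d", destination, report_path],  # -n: no cover page
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        print("sendfax failed:", result.stderr.strip())
    return result.returncode

# queue_fax("15555551234", "/var/reports/clinic-2011-12.ps")
```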
Broadband Internet access and the rise of Virtual Private Networks (VPNs) pretty much killed the need for modems as a way to access client resources from afar, so we stopped using them, even from the road.
The jump to smart phones had been carefully planned: we waited until we felt that they were mature enough to allow us to abandon our Personal Digital Assistants or PDAs (we were using Palm devices) to hold our contacts, handle our calendars, keep our secrets and track our billable time. Once smart phones could do all that, we went to smart phones instead of mobile phone + PDA.
When we finally adopted smart phones (we decided to go with iPhones for a variety of reasons), we suddenly found our desk-bound phones to be kind of a drag. We wanted to deal with only one phone system, but we didn't want to give out our mobile phone numbers, nor did we want to lose the basic business functions of having a central phone system.
So we decided to go with a virtual PBX from RingCentral. This gives us the PBX features, but we can also run their smart phone app and use our smart phones as business handsets as well. So our business calls follow us around (during business hours) and I have reclaimed some precious desk space from the hulking handset. Our faxing is also handled through our virtual PBX.
It has been a couple of months now and we are very happy with the lower overhead, the feature set, the convenience and the greater access to our voice mail and our faxes. The computer room is much tidier without all those phone connections, the old hardware PBX, the line conditioners and so on.
So why was I apprehensive? It was not the change to VoIP: I know that technology inside and out. In a previous incarnation, over ten years ago, I got Net2Phone's protocol up and running on a VoIP phone we were developing; the tech folks to whom I demonstrated it were very impressed, as I was the first external person to get their protocol working on a non-computer. I spent hours making phone calls using VoIP and writing software to support VoIP. I am comfortable that the technology is even better now.
It was not the cost-benefit analysis, which our operations person did and which clearly showed that it was time.
Upon reflection, I think that it was inertia and simple unreasoning fear of change. Exactly the sort of thing that I have to fight in my clients when I create new technology for them to streamline their processes and better integrate their systems.
I respect fear of change: careful consideration is a good idea before every big decision. I understand better than most that not all change is good and that tearing apart the present does not guarantee that you will assemble a better future.
But I don't respect procrastination or paralysis by analysis: may God grant me the wisdom to know the difference.
Wednesday, November 23, 2011
Doing Without For Now
In these fiscally strapped times I am seeing a resurgence of an old financial misunderstanding: that not spending money is the same thing as saving money.
I understand that one should not spend money that one does not have. I am not advocating deficit spending, accounting magic or blowing your budget. Instead, I am advocating responsible investment.
Too many managers I encounter are rewarded for simply failing to spend money, or fear being penalized for spending money. Being "under budget" implies that you met the organization's goals by spending less than expected. This means either that you are good at getting results or bad at making budgets. "Failing to spend money" is not the same as being under budget: if you don't accomplish your goals, then you are a failure who at least didn't waste money while failing.
But simply avoiding investment for the sake of not spending money is not fiscally responsible: it is the opposite of fiscally responsible. To make decisions without regard to future benefit is a mistake. If you want to go down this path, I can save you some time: this analysis leads to paralysis. Using this philosophy, you should never spend any money or take any action: not spending money will leave you with more immediate cash and not taking action will avoid mistakes. Just ignore the fact that not spending money can lead to lack of future money and not taking action can lead to not having a job.
To take a rather tired example from home ownership, not repairing a leaky roof gives you a short-term benefit (your bank account retains the money that you would have spent on the roof) and a long-term liability (your bank account will be hit much harder when the roof and collateral damage become so great you can no longer ignore them.) By the same token, if you don't choose a contractor, you avoid choosing the wrong contractor. So you have that consolation as your roof falls in.
I run into this same behavior in business fairly frequently, in the guise of the following sometimes reasonable statement: "we know we need X, but we’ll get it in the next release / next version / next purchase, so we will do without it right now."
The useful life of most of the systems I encounter is between three and five years. If you put off the decision for a year, you have lost much of the benefit the system can be expected to provide. If you put off the decision for two years, your potential loss is that much greater.
If your investment is a reasonable investment, you are missing the return on that investment every year you defer. In real terms, not spending money, if you spend it wisely, is actually costing you money.
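To put made-up numbers on that claim: suppose a system costs $50,000, has a five-year useful life and returns $30,000 a year in productivity. The figures are purely hypothetical, but the arithmetic is the point.

```python
# Back-of-the-envelope arithmetic with hypothetical numbers: if the investment
# genuinely pays for itself, every year of deferral forgoes roughly one year
# of net benefit.
SYSTEM_COST = 50_000          # hypothetical purchase and rollout cost
ANNUAL_BENEFIT = 30_000       # hypothetical yearly productivity/automation benefit
USEFUL_LIFE_YEARS = 5

annualized_cost = SYSTEM_COST / USEFUL_LIFE_YEARS      # $10,000 per year
annual_net_benefit = ANNUAL_BENEFIT - annualized_cost  # $20,000 per year

for years_deferred in (1, 2):
    lost = annual_net_benefit * years_deferred
    print(f"defer {years_deferred} year(s): roughly ${lost:,.0f} of net benefit never realized")

# defer 1 year(s): roughly $20,000 of net benefit never realized
# defer 2 year(s): roughly $40,000 of net benefit never realized
```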
When we speak of IT investments, we speak of more than dollars: IT can provide automation which makes your personnel more productive and less harried. With the time not spent in drudgery that should be done by a machine, your people can actually think about their jobs and improve their situation.
There is also the experience factor: if you make investments early and often, you can often make smaller, incremental investments and be guided by actual experience as you move toward your goal, instead of being guided by marketing literature as you wait until the last possible second and then leap onto a bandwagon.
I have heard a cogent counter-argument from reasonable people, which runs something like this:
- IT transitions are risky, even if done well
- There will be less risk if there are fewer transitions
- Fewer transitions means bigger ones farther apart
- The individual transitions might be more painful, but the ultimate time line is shorter and the new systems end up with a bigger footprint
I sympathize with the claim that larger organizations have such deep tendencies toward inertia that multi-stage plans are scary. But leadership requires actually taking reasonable risks instead of simply avoiding blame.
Wednesday, November 16, 2011
Hard vs Soft Mastery
Some years ago, as part of a discussion of computer programming styles, a colleague introduced me to the concept of "hard master" versus "soft master." Be warned: I am emphatically not making any value judgments here: I do not believe that hard masters are smarter, purer, better or more detail-oriented. I do not believe that soft masters are failing at being hard masters; I do not believe that a good hard master can do anything a soft master can do, or vice-versa.
Enough about what I don't mean: what do I mean? A little Google magic got me here, the horse's mouth: Feminism Confronts Technology by Judy Wajcman. Read the selected section for details, but the summary is this: the hard master feels he has to know all the details when using technology while the soft master knows how to get what she wants.
The use of gender-specific pronouns is not accidental: frequently hard masters are male and soft masters are female. Note that while there is a significant pro-male, anti-female bias in computer programming, this is not a bias from which I happen to suffer. I have worked with women for my entire technical career, and have found female technology interaction different but not inferior. In fact, I have found that this discipline has room for, and needs, both kinds of interaction. As a general rule, I want my bit-bashing systems software from an anti-social grumpy guy and I want my application software from an empathetic, flexible and cheerful gal.
I understand that these generalizations are not universal truths, that there are grumpy women who are antisocial and mathematical while there are cheerful men who are sensitive to how technology feels to use, and all possible permutations in between.
But I do not believe that all permutations are equally probable. I find distinct trends: men tend to define technological success and men tend to be hard masters. Women have their own technological approaches and these approaches tend to be deemed inferior by men, mostly men who cannot perform these "soft" tasks themselves.
Given this observation, I was not surprised at a recent recommendation of related reading from another colleague: an article from the Harvard Business Review that considers the assertion that women make your technical team smarter. I am sure that is true, at least for my team.
In the interests of full disclosure, I will admit to being a middle-aged man who thinks of himself as a designer, as something of a non-combatant in this fight. My attitude is that I can code anything I have to, but I am not a coder. Opinions of my coding by coders vary, but mostly I think that both hard and soft masters have reason to despair at my code. So I claim to be of neither camp: I need them both to get my designs realized, unless I end up having to code one or the other kind of task myself. Which happens all too often in these recessionary times.
I find the hard versus soft dichotomy in server software versus client software. For example, configuring a web server is a heavily technical task which requires a deep understanding of networking, process management and security. By contrast, creating a web page is also web-related, but has utterly different requirements, including a flair for graphic art and an understanding of usability.
I also see this division in the back end (data storage & retrieval) area as opposed to the front end (data acquisition through user interaction) area. Figuring out how to store data effectively is a job for a hard master who wants to think about file systems, database formats and the interaction of caching and performance. Figuring out how to get users to enter data quickly and accurately is about understanding how human beings use software.
From my perspective, there is trouble brewing for the hard masters. It is my observation that as abstraction rises mastery softens. When there was only simple hardware, Unix and ANSI C, someone could be a hardware expert, a firmware expert, a Unix system software expert and complete master of their own code base. Now, with server clusters and interpreted languages and database interfaces and web servers and programming environments embedded in web servers, I just don't see that as a viable or desirable goal. How do hard masters get anything done these days? They must have to restrict themselves to rather small areas of expertise.
Not only is abstraction on the rise, but I see a very real trade off between productivity and mastery which abstraction provides. Women who are happy to cruise along at the GUI-to-create-GUI level, such as Visual BASIC, kick my ass in terms of how fast and how custom they can make apps. Who cares how deeply they understand what underpins that abstraction? (Until that abstraction bites them in the butt; more of that anon.)
There has been a shift in the balance of power between apps creators (soft masters) and systems engineers (hard masters) as time has gone by. So far as I can tell, app programmers now rule: users understand what they do (create apps) and economic buyers are willing to pay for what they do. As for systems engineers, I find that users have no idea what they do, economic buyers don't relate to them or their work, and organizations feel that they get the behind-the-scenes stuff for free, as part of buying apps, or smart network hardware, or whatever.
When I started out in the early 1980s, app programmers were the bottom of the ladder, while systems programmers did the real work, got paid the real money, and wrote apps whenever they felt like it. As a designer, I find that I no longer have to start by getting hard masters to buy into my design: in fact, I find that most of my gigs come from soft masters coming to me so that I can create a framework in which they can create the apps that people want. No one is interested in the lowest levels: is this MS-Access to a local database, or MS-Access as a client for a database server? Who cares?
Of course, users should care: bad infrastructure makes for a bad user experience. But users don't care about boring old infrastructure and systems engineering because most of the time they can afford not to.
All this does not bode well for the hard master. These days, I see hard mastery in demand only in those relatively rare instances when the abstraction fails or is incomplete. When you need a DLL to extend the Microsoft desktop, or a shared object to extend the Perl interpreter, you really need that. But how often does that come up? When I started out, we had five or six programmers, all male, two of whom were apps specialists. Over time, we have tended to add apps programmers, who tend to be women, and we have moved to consuming our hard mastery as a pay-as-you-go consulting commodity: we don't need full-time hard masters anymore.
This makes me a bit nervous: will the hard master be there in the future when I need him? But for now, I am busy being part of the problem: I need more soft masters and fewer hard masters, and I don't see that trend reversing anytime soon.
Wednesday, November 9, 2011
Too Good to Fire, Too Old to Hire, Too Young to Retire
I find a disturbingly common situation amongst my cohort in the Information Technology realm: they feel stuck in their current job. As is typical with people who feel stuck in their job, they are not as productive as they should be--or as pleasant to be around as they used to be.
Part of this feeling comes from the current economic downturn, but a large part of it was present even during the heady boom times. That large part seems to be the professional equivalent of the French idea of "a woman of a certain age." Women of a certain age are desirable, but with an expiration date. These professionals feel that they are required in their jobs, but only for the present.
Why are so many good and very good programmers and sys admins and db admins I know languishing in limbo, unmotivated by their current job but unable or unwilling to find another job? I would characterize their plight this way:
- I'm too good to fire
- I'm too old to hire
- I'm too young to retire
Too Good To Fire
From the dawn of the business computer era, non-IT people have lived in fear of what would happen if the computer guru quit, taking all his (it was always a man) experience and special knowledge with him.
In or around 1968, I was a young lad interested in computers. Soon after the dawn of computers, I was saving my first programs onto paper tape in a closet at school, dreaming of some day saving my programs to mylar tape. Trying to be supportive of my unfortunate interest (wouldn't I be happier as a doctor or a lawyer?), my mother brought me into her place of work to meet "the computer guy" (TCG).
It was obvious that at least that TCG in that organization was regarded as strange and as a necessary evil. They would have liked to fire him, but they could not afford to lose him. My mother recounted in tones of awe that TCG had a light next to his bed in his apartment across town that alerted him to problems with The Machine, the mighty mainframe computer. She was impressed with his dedication but also repelled by his lack of boundaries. This was and is a typical response to TCGs the world over. Hence "too good to fire," because it captures both the "we need him" and "we wish he were not here" aspects of many information technology jobs.
Over time, I have come to see that TCG as an archetype: a middle-aged man who is one or two technology waves behind the times, who is still critical to current operations but not part of future planning. He can see only an endless treadmill of doing exactly what he is doing now until he either drops dead in his office, makes it to retirement, or is made obsolete by some technology shift. What a waste: experience and talent turning into sullen bitterness.
Too Old To Hire
Why doesn't TCG just go find another job, in a place more congenial to tech types in general, or to him in particular? That is a good question and one that I have asked various TCGs over the years. The answer is usually "no one will hire me. I've looked."
Is this self-pitying drivel, a reflection of TCG's personality issues, or a prevailing prejudice? I suspect that it is the last one. If you are TCG and you are looking for a new job at a new company, I believe that you have two choices: either the tech-oriented nirvanas such as Google, Apple, Amazon or Microsoft, or a tech-oriented division of a non-tech company.
(In theory, TCG could start or join a start-up, but that is a rather rarefied niche and requires many more personal resources than being a computer guy requires. However, these days, every TCG seems to be an embittered potential entrepreneur: "I could have started Twitter/Facebook/YouTube.")
The first category is only open to the best techs, since there are more applicants than there are jobs. TCG may be the best {fill-in-the-blank} you have ever met, but he might not be great as compared to our entire industry.
The second category really does seem to have a barrier to entry, a distinct ageism. A tragically common theme in our business is that young hirees are best because:
- they know the new and/or current technology
- they don't cost as much
- they are more adaptable
Too Young to Retire
Sadly, TCG is often too young to retire, and that seems to be his only option. The commonly expected arc of a career seems to be this:
- get hired as a bright young current IT footsoldier
- get even better with real-world experience
- consider management
- if "no" to managemet likely stall as you are pigeon holed in what used to be current tech
- if yes to management, get promoted to team leader
- possibly get promoted to area supervisor
- possibly get promoted to manager
- possibly get promoted to VP
Note that if you were a "people person" you quite likely would not have started slinging code in the first place. So you are stuck with some unappetizing choices:
- Limbo
- Being a manager even if you don't like dealing with people
What Does This Mean To Me?
So now that you have read this far, you may be wondering what the point of this diatribe is. Here is the context-specific point:
To the co-workers of poor old TCG, I say this: remember that his sour puss may have more to do with being stuck than with being a misanthrope. More importantly, you are going to be hard-pressed to find a stick that will motivate TCG: his life already sucks. Try to find carrots instead, such as interesting small projects or chocolate or whatever it is that is safe, legal and appealing to TCG.
To the managers of TCG, I say this: you are probably kidding yourself if you think that TCG doesn't know that you plan to jettison him as soon as you can. You might find that sending him to training and showing other interest in his future is a better way to motivate him than pretending that you value him while counting the days until you can retire the system and fire TCG.
To TCG I say: find an interest in the future, even if you feel stuck. Without an interest in the future, you will end up bitter, hard to get along with and unhappy. Either embrace what you see coming or find another track or work your network to see if there is anything out there for you. Even false hope is better than despair.
Wednesday, November 2, 2011
Impure Implementations
I am a big fan of using the right tool for the job. When it comes to carpentry or surgery, this concept seems obvious, well understood and usually followed. But when it comes to large IT projects, I find a real tendency in large organizations toward trying to solve all problems with whatever single tool they have blessed. Let us call this tendency the Silver Bullet Assumption.
Many organizations are Visual BASIC shops, or Visual C++ shops, or Python shops, etc. This baffles me: there are many excellent tools out there, but there is no tool that is great for every aspect of a large project.
I find that large projects usually have most or all of these aspects:
- an inbound interface for acquiring data
- a collection of data processing functions
- a way to store the data
- a user interface (UI) to view the data
- reports and exports to send the processed data down the line
- an outbound interface for sharing data
Even if such a tool existed, who would use it? Someone who understood all those different domains?
In our consultancy, we have a breakdown that I think is pretty common, or used to be: we have systems people, who work mostly under Unix, and apps people, who work mostly under Windows, and Web work lies somewhere in between.
We use MS-Office or web pages to provide UIs on the desktop, web pages and thin clients to provide data entry on the floor, Unix servers to provide print service, file service and web service. It is hard to recall a project of any scope that did not cross these boundaries.
We are constantly asked questions about implementations which assume that everyone does everything: the Unix systems programmer who is supposed to know about MS-Access apps and vice versa. When we push back, we find that many organizations have the notion of "programmer," or even "systems programmer" versus "applications programmer" versus "server guy" but all these programmers are using the same environment: Windows or Unix and it is mostly Windows.
Clear as we are about our design philosophy, even we occasionally have requests for "pure" implementations, with the hope that if the technology under a large project is consistent, that large project will be easier for local IT to understand and support.
But this is often a forlorn hope: if your people do not understand bar code grokking or TCP/IP-based protocols, it very likely won't help if the thing they don't understand is implemented in a familiar technology. At worst, they will have a false confidence that will lead them to fiddle where they should not fiddle.
(I speak from bitter experience, of course. Ah, the support phone calls which start by saying that some of our technology does not work and end with them admitting that they "fixed" something else right before the mysterious new failure began.)
I just don't buy the premise, that being fluent in systems, apps, networking, infrastructure and databases is a reasonable expectation, let alone the usual case. You know that you need network people, desktop support people, server people, etc. Why do you think that they all should be working in the same environment? What does that even mean, when you think about it: how is a desktop app like a print server?
This illusion of the benefits of purity is encouraged by vendors, so I suppose the customers are not really to blame. The first time I laid hands on Oracle, lo! these many moons ago, I was stunned at all the special-purpose configuration file formats and languages I needed to learn in order to tune the installation. But the client thought of themselves as a pure Oracle shop. This is like saying that all of humanity is part of a pure human language shop--we just use different flavors of language.
Very recently, I worked with a system that was billed as all Windows, all the time. Except that when push came to shove and I needed to debug some of its behavior, I came to find out that the core was a Unix server running a ported COBOL app. Egad! Knowing that it was COBOL through which the data was passing made debugging that systems interface much easier, by the way.
Why tell the customer that they are buying a Windows app running on Windows servers, with some kind of remote back end? I don't know: it must be comforting to someone somewhere.
I prefer to be more upfront with my clients: I will use whatever technology will get the job done, with an eye to accuracy and speed. I want to save my time and their money. I try to use technology that they already own, but I cannot guarantee that--unless they want to pay extra; often LOTS extra.
If I have to use MS-Access at the front, FTP in the middle and Oracle on the back end, then so be it. I find the choice between requiring minimal maintenance, but making local IT uncomfortable, and requiring lots of maintenance, but making local IT (probably falsely) confident, an easy one to make.
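For flavor, here is a minimal sketch (in Python) of the kind of glue I mean: it takes a CSV exported from the front end and drops it on an FTP server for the back-end loader to pick up. The host, credentials and file names are invented for illustration, not taken from any real client.

import csv
import ftplib
from pathlib import Path

# Sketch of "impure" glue: ship a CSV exported by the front end (say, MS-Access)
# to an FTP drop box that the back-end loader watches. Host, credentials and
# file names below are hypothetical.
EXPORT = Path("orders_export.csv")       # produced by the front-end app
FTP_HOST = "ftp.example.internal"        # hypothetical middle tier
FTP_USER = "dropbox"
FTP_PASS = "secret"
REMOTE_DIR = "incoming"

def row_count(path):
    # Cheap sanity check before shipping the file anywhere.
    with path.open(newline="") as handle:
        return sum(1 for _ in csv.reader(handle)) - 1   # minus the header row

def push(path):
    with ftplib.FTP(FTP_HOST, FTP_USER, FTP_PASS) as ftp:
        ftp.cwd(REMOTE_DIR)
        with path.open("rb") as handle:
            ftp.storbinary(f"STOR {path.name}", handle)

if __name__ == "__main__":
    print(f"shipping {row_count(EXPORT)} rows")
    push(EXPORT)
    # The Oracle side picks the file up on its own schedule; that loader is a
    # separate, equally small piece of glue.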
Just last month, we shut down a system of ours that had been in continuous operation since early 1984. That's 27 years of service, with an average of under 10 hours of attention per year. This system's impure implementation made local IT nervous, but it also allowed us to adapt to the dramatic infrastructure changes over that time. In the end, it was time to retire it: a 16-bit runtime environment under a 32-bit operating system running on a 64-bit architecture is a bit baroque even for me.
So while nothing lasts forever, I claim that the concept is sound: until there is a single, simple, all-encompassing technology, use what makes sense, even if the final product has multiple technology environments under the hood. There is no silver bullet and there never was.
Thursday, October 27, 2011
No Small Software Purchases, Revisited
Here is a post out of the usual Wednesday cycle. I am posting out of cycle because this post is a follow-up to a previous post. That previous post can be found here.
An experienced executive working in a large American corporation had this reaction to that previous post:
Say your point is true: local solutions give corporate IT heartburn. What are they worried about?
- Data security/access control/access auditability - ad hoc solutions can be given high levels of data access, but the user level access controls within the solution are maybe not so robust.
- Business continuity planning (disaster recovery) - making sure that the local solutions are covered by the disaster recovery plan. (data and software backup)
- Training (ISO 9001 audits of training, etc.) - making sure that the local entities have included their customizations in their training efforts in an auditable fashion, and that the global quality documents are written in a way that allows for local solutions (while being usefully specific about the global solution where no local solution exists)
- Financial controls - making sure that, if there are financial transactions in the local solution, they have adequate controls.
- Maintainability - does it break when you update your global software?
Accounting for all of these things ad hoc in a customized solution makes it expensive, unless the central software provides a framework for dealing with them all in a consistent manner.
These are all valid and interesting points. Here is how I answer them:
Data Security & Access Control
Security issues always come down to policy and enforcement, so I would phrase the question this way:
Can local solutions be trusted to implement global data security policy?
I claim that the answer is "yes." Implementing data security policies should be a dimension along which the local solution is designed, coded and tested.
In practice, I find that a local solution is not likely to be better than the prevailing adherence to data security policy and, admittedly, is often worse. For example, we recently installed a suite of apps for a client, with support for using their global Active Directory (AD) as the source of both credentials and the permissions associated with those credentials. But it was an uphill battle to get the local users to communicate with the global AD keepers about whose account was allowed to do what. It was not easy merely to establish which existing permissions the app's functions should map to.
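To make "use the global AD" concrete, here is a simplified sketch in Python using the third-party ldap3 package; the server, domain, base DN and group name are hypothetical, and a real deployment would add TLS, input escaping and proper error handling.

from ldap3 import ALL, Connection, Server

# Sketch: check a user's credentials and group membership against the global
# Active Directory before the local app does anything. All names are invented.
AD_SERVER = "ad.example.corp"
AD_DOMAIN = "EXAMPLE"
BASE_DN = "dc=example,dc=corp"
REQUIRED_GROUP = "CN=LocalApp-Operators,OU=Groups,dc=example,dc=corp"

def user_may_operate(username, password):
    server = Server(AD_SERVER, get_info=ALL)
    conn = Connection(server, user=f"{AD_DOMAIN}\\{username}", password=password)
    if not conn.bind():               # a failed bind means bad credentials
        return False
    try:
        # Real code should escape the username to avoid LDAP filter injection.
        conn.search(BASE_DN, f"(sAMAccountName={username})",
                    attributes=["memberOf"])
        if not conn.entries:
            return False
        return REQUIRED_GROUP in conn.entries[0].memberOf.values
    finally:
        conn.unbind()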
Given how security policy implementation in large organizations usually plays out, I can sympathize with this concern: is this new way to access data consistent with our policies? But there is no reason that the answer shouldn't be "yes" unless your policies are poorly understood or your infrastructure is closed or poorly configured.
Disaster Recovery
DR is an activity which, like prayer or the tennis lob, is rarely rehearsed and usually called upon only in times of great distress. Can a local solution be fit into the DR plan? Almost always. Is this a question that is usually asked? No, it is not. The good news is that more and more DR is more-or-less automatic: SANs and clusters and remote data centers have relieved many local entities of this responsibility.
To be clear, I understand DR to protect against three specific kinds of data or service loss:
- Human error: someone pushed the button and wiped out lots of data.
- Hardware failure: a key component, such as a disk drive, has failed.
- Disaster: the building/city/state is flooded/burning/radioactive.
Training
We have often found that getting onto the training docket is not easy, but we have never had a problem fitting into any existing training and documentation regimen. However, there are many issues:
- Politically, is it safe to expose the local solution to review?
- Practically, can one provide training materials which do not rely on local experience?
- Who will keep the training materials up-to-date?
Financial Controls
In theory, financial transactions are covered by whatever data security policies are in place. In practice, this is often not the case.
Finance is a tricky area: giving local consultants information about how financial transactions are constructed, verified and secured really goes against the financial grain. It is an issue for which we have no good solution because we cannot figure out a way out of the following conundrum: the financial controls are so tight that we cannot see what we are doing, so we are forced to have a backdoor into the data stream, which of course is only present in the development environment.
Maintainability
"Does it break when you update your global software?" All too often, the answer is "yes" because your global staff does not know or does not care about local dependencies. But in my experience, the full answer is "yes, probably, but with decent communication your service outages should be infrequent and brief." I say infrequent, because there is enormous pressure on global infrastructure to remain constant. I say brief because your local solution should be aware of its dependencies and adapting to changing infrastructure should have been built in.
In our experience, the ramifications of global change are so great that we suffer from the inverse: we are constantly informed of far-off global changes which do not affect us at all. But better that than the alternative.
Wednesday, October 26, 2011
IT Zombies: SEP In Corporate IT
(SEP stands for Someone Else's Problem.)
Once upon a time, corporate IT people usually had only "the System" to worry about. The distinction between hardware, operating system (O/S) and application (app) was not so stark. Often your mainframe only ran one visible app, which made heavy use of O/S-specific features and the O/S relied heavily on particular features of its hardware.
Whether the System was largely homegrown, or bought commercially, it was around for a while and its minions learned to tend it faithfully. There were even gurus who actually knew just about everything there was to know and who could extend the system if that were required.
As time has gone on, the typical corporate IT infrastructure has become vastly more complex and vastly less stable. It is less stable in both senses of the word: it changes rather frequently and it often behaves in ways one would not have chosen. By this, I mean that its behavior is the sum total effect of all those parts and the net effect was not planned or specified, it just happened.
Because it is often my job to deploy custom-made technology to bridge gaps between existing pieces of the infrastructure, I am frequently in the position of exercising existing pieces in new ways. This is rarely a happy place to be: current IT comes in chunks so big and so complicated that it is rare to find that anyone knows what will happen when configurations are changed and previously inactive features are turned on.
When things do not go as planned, I am frequently in the position of asking in-house staff to investigate issues and perhaps even resolve those issues. There are two happy endings to this story and one unhappy ending.
The first happy ending is that the in-house staff are expert in the particular technology and they either get it to work or tell me how better to interact with it.
The second happy ending is that the in-house staff have a good relationship with the vendor and the vendor is competent to support their own products and the vendor either gets it to work or tells me how better to interact with their product.
The unhappy ending is that the in-house staff do not feel ownership of the offending piece of IT and they just shrug when I ask. In other words, they feel that the malfunctioning infrastructure is someone else's problem (SEP).
Once we feel that something is SEP, we are content to just shrug or, almost as irritating, sympathize but do nothing else.
I suspect that the SEP effect is, at least in part, generated by the Big Purchase phenomenon which I have discussed in a previous posting.
I am pretty sure that "wrong sizing," whereby a workforce is reduced below feasible levels and people are told to "do more with less," is also a major cause of the SEP attitude. Unclear lines of responsibility, overwork and underappreciation certainly do not help.
But I am not sure and I am not sure that it matters. Once an environment is permeated with SEP, it is very hard to get anything done. Specifically, SEP leads to the following situations:
- The in-house staff, unwilling or unable to contact the vendor, bombards me with half-baked theories and half-baked solutions, hoping to get me to go away.
- The in-house staff points me toward some documentation or irrelevant email from the vendor. See, here's your useless answer, now go away.
- The vendor invites me to a conference call during which they try to convince me that I should not be doing whatever it is that I am doing. This is like the old joke about lazy doctors: Patient: Doctor, it hurts when I chew on the left side of my mouth, what should I do? Doctor: Don't chew on the left side of your mouth.
Once I find myself in any of these situations, I have only one strategy that works for me: relentless but polite pursuit of well-defined, simple goals. In my case, this often means taking up medieval quests:
- actually wandering around an organization to follow up emails
- camping out in people's cubes
- talking to anyone upon whom I am fobbed off until they send me down the line
- filling out whatever forms need filling out
- taking whatever conference calls I need to take
- disproving whatever theories I have to disprove
Even though I can cope with it, there is an aspect of this situation that I find tragic: I am watching formerly motivated, useful IT types become deadwood as SEP seeps into their careers. As they become comfortable saying the following tired phrases, they don't realize that their coworkers are writing them off as useless:
- "I don't know anything about that and I don't know who to ask."
- "We just set up the database, I don't know how the data gets in there."
- "We used to do it all ourselves but now the Networking / Security / Server / Database group does that."
Wednesday, October 19, 2011
Elaborate UI, Feeble App
A pet peeve of mine is the tendency of apps to have elaborate User Interfaces (UIs) but feeble capability.
I call the UI "the front end" and the other part "the back end." Once upon a time, the back end was where all the magic happened. Where we put our effort. Now, it seems, the pendulum has swung the other way and all the work seems to go into the front ends.
(There is an analogy to our celebrity culture in there somewhere.)
Specifically, I find many apps with elaborate UIs which just don't do all the things one would expect such an app to do. Generally users just shrug: it would be great if it worked that way, they sigh, but it doesn't.
For example, I recently was debugging an interface to a medical information system. I asked the users to verify the data for a given patient on a given day. They asked if I couldn't provide encounter IDs instead. Couldn't they use the app to search for the data in question? No, they could not. Instead, they could use an elaborate GUI to laboriously click on all the little icons which represented encounters, scan the cramped screen for the data, and repeat the process until they found what I was looking for. Blech.
While lacking functionality, the UI for each of these same apps is full of bells and whistles: backgrounds that change color when the control has focus, lots of different fonts, dynamically created drop-down list boxes, etc. In fact, one requires a mighty desktop just to run the UI.
My theory is that today's typical GUI is so much work to build that development teams just run out of steam before tackling the functionality.
I have noticed that many current development environments encourage this: Visual C++, for instance, has a development environment that is a GUI, is about GUIs and is not so great for doing other kinds of programming, such as bit-bashing or file manipulation.
I understand how one might arrive at this point: using X-windows and ANSI C in the 1980s was exactly the opposite: the environment did little to support GUI generation and putting GUIs together in that era was painful. So we tended to be modest in our GUI ambitions.
With this generation of development environments has come a melding of front and back ends: you write the front end and the back end gets written in bits and pieces to drive the front end. This makes the distinction between front and back end difficult to maintain, which in turn leads to back ends which are amorphous--and difficult to test (how do you run 1,000 cases?) or to use automatically (how do you call the app automatically from the background?).
In fact, there is rarely more than one front end for any app: if you don't want to use the app interactively in a graphical environment, you are usually out of luck.
The only exception I see to this trend is the plastering of a thin GUI wrapper on something mighty, such as phpMyAdmin for MySQL, or GhostView for Ghostscript. I rather like these apps, but they could not avoid what I consider to be a solid architecture: there is a clean division between the front and the back because the front was done after the fact, by different people at a different time.
This split between front and back end gives the user real flexibility: one can use a command line interface, or choose from a variety of GUIs. Running through many test cases is easy. Running automatically is easy. Why don't more development teams build apps this way?
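To make the shape of that architecture concrete, here is a minimal Python sketch: the back end is an importable function that knows nothing about presentation, and the command-line front end is a few lines on top of it. A GUI, a web page or a test harness could call the same function. The summarizing function itself is just a stand-in.

# The "back end": plain functions with no UI assumptions.
def summarize(records):
    """Stand-in for the real work: count records per category."""
    totals = {}
    for rec in records:
        totals[rec["category"]] = totals.get(rec["category"], 0) + 1
    return totals

# One thin front end among possibly many: a command-line wrapper.
import argparse
import csv
import json
import sys

def main(argv=None):
    parser = argparse.ArgumentParser(description="Summarize a CSV by category.")
    parser.add_argument("csvfile", help="input file with a 'category' column")
    args = parser.parse_args(argv)
    with open(args.csvfile, newline="") as handle:
        records = list(csv.DictReader(handle))
    json.dump(summarize(records), sys.stdout, indent=2)

if __name__ == "__main__":
    main()

Because the back end is an ordinary function, running 1,000 test cases is a loop, and running from the background is a cron entry.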
Wednesday, October 12, 2011
No Small Software Purchases
We seem to have entered a new age of Large Software Purchases.
When I started in this business, the mainframe era was ending and the super mini computer (yes, our terminology was lame even then) was on the rise. Departmental servers, in the shadow of the mighty core computer, sprang up everywhere. The hardware was expensive ($20K-$50K) and the software generally quite expensive too.
The computers started becoming personal and the range of prices for software used by businesses grew much, much wider than it had been before. There was even overlap between what people bought for home use and what was in use at the office, which surprised all of us, even Microsoft.
Along came Linux and Open Source and the price landscape changed again, for those daring enough or strapped for cash enough to take the plunge.
But recently I see a strong swing back to centralized purchases, central servers and corporate software. I see executives asserting themselves at the senior level, making large software purchases from their 50,000 foot perspective.
This shift is not inherently bad, but this central decision-making is often accompanied by a very unfortunate policy of "no support for local requirements." HQ buys it, you use it. HQ did not want to get mired in the details, so they do not know what, if any, local requirements there are. They even hint that local requirements are a function of local management incompetence.
I have seen a proliferation of large systems which are not well suited to the task at hand, or systems which are not properly configured for local needs, or systems which are both.
This trend has been explained to me as an attempt to conserve management time, which is often a company's most expensive resource. (I find that fact disturbing and there is probably a rant in there somewhere.) It is more efficient for there to be fewer, larger IT decisions, and it makes sense for those few, large decisions to be made by the decision-making experts, without an undue focus on minutiae.
I can understand that the sort of process problem that I make my living solving just isn't important enough to warrant senior executive attention. But I think that something has gone wrong when large swaths of the organization are making do with inappropriate or misconfigured "enterprise solutions."
I can grasp that sometimes a local problem should be left unsolved for the global good. But I think that such calculations should be explicit and revisited; instead, I see local issues being ignored in such large numbers that it is hard to believe that the sum total of all that procedural losing does not outweigh the cost of acting to prevent at least some of it.
I realize that economy of scale does come into play sometimes and that having a moderately sized central department can be more effective for some purposes than myriad local departments. But I am suspicious of this model when so often there are local needs dictated by regulation, competition or regional variation in the customer base.
I can see that large companies are complex and require some metaphoric distance before you can see the whole picture. But I suspect that this very complexity argues against the "one size fits all" philosophy. This feels to me like the old trap of trying to solve personnel problems or process problems with software. That does not work and I suspect that we will find that large software systems imposed from afar won't work either.
Wednesday, October 5, 2011
Creating GUIs Requires Some Artistic Talent
Prologue: GUIs vs TUIs
There are two main kinds of UI, following two different basic human cognitive models: recognize-and-point versus remember-and-verbalize. The underlying cognitive models follow human mental development: infants and young children recognize and point, while slightly older children up through adults acquire language and use that instead. If one is lucky enough to live that long, one may start to lose language in one's dotage and end up back at recognizing and pointing.
GUIs are all about recognize-and-point: you recognize the icon or button and you use the pointer to "shift focus" to the desired object. Text-based UIs, which we used to call simply "UIs" but which I will call here "TUIs," are all about remember-and-type: you remember the right command in this context and you type it.
GUIs should have an underlying metaphor: a physical desktop for operating systems, a physical piece of paper for word processors, a physical scroll of names for music players, etc.
GUIs are really great for two kinds of job:
- Software which one does not use often enough to memorize the commands
- Software which is aimed at people who do not use software well
TUIs, for their part, are really great for the other two kinds of job:
- Software which one uses so often that typing is faster than pointing
- Software which is aimed at domain experts who want power over support
Of course, the GUI model is not purely graphical: there is a text-handling component to it because nearly every human/computer interaction, at some point, needs the possibility of text entry and text viewing.
If done well, as is the iTunes interface on an iDevice (iPod, iPad, iPhone), this text entry component is so natural that you don't think about it.
If done badly, as is the SQL window in MS-Access, this text entry component is jarring and much worse than a TUI would be.
Actual Rant
GUIs in general provide many options and lots of power. But I have a problem with most of the actual GUIs I encounter: they are hard to use. In fact, most of them suck.
While the world seems to have gone completely graphical, we also seem to have decided that terrible GUIs are just a fact of life. Dilbert feels my pain.
While graphical environments are rife with possibility and provide all kinds of pre-made widgets, they do not provide, inherently, any of the following:
- a clean, consistent layout
- a useful visual metaphor for your particular context
- guidelines for the scope or direction of the guts beneath them
As far as I can tell, many GUIs are designed by analogy: "let's do this the way Microsoft Office would do it, even if we are trying to identify patients who have a particular condition, and Microsoft Office is trying to automate traditional clerical work." Close enough, right? So let's say that the patients are files and their surgeries are documents and maybe their test results are also documents, and soon the screen is a maze of tiny controls, all basically alike, leaving the user to remember what the current set of "documents" actually represents.
The fact that you can learn to navigate a bad metaphor doesn't change the fact that it is a bad metaphor.
And yet, the sad fact is that making all your bad GUIs look like Microsoft Word is better than making them bad in their own way. If your users expect a "File" menu at the top, whether or not they are dealing with "files," then finding one there is comforting. If you cannot provide a clean, consistent look-and-feel yourself, I suppose copying a popular one is better than creating an original, bad-but-novel UI.
Not all GUIs are terrible: I have watched die-hard Windows users interact with iTunes for Windows, which makes no effort to be Microsoft Office-like, and those die-hard Windows users find iTunes a delight.
So go ahead and create GUIs, if that is your job, but for God's sake do the graphical part, or get someone with graphical talent to do that part for you. Please. Your users will thank you--or at least, curse your name less.
Wednesday, September 28, 2011
Technology Audits, RIP
Once upon a time, when computers were not yet "personal" and phones were dumber than rocks, there were technology audits. If you are under 45, you may never even have heard of them, so rare are they now.
What I mean by "technology audit" is a formal, independent review of a piece of technology. These reviews usually consisted of running a predefined set of inputs into the technology, capturing the resulting outputs, and matching the actual outputs with the expected outputs.
Sometimes, if running a test suite through the technology was impractical, we went through the logs and gathered up the actual inputs and the actual outputs for a day, or week, or month, and reviewed the outputs to see if they were within expectations. We often re-implemented business rules or other logic to help us confirm that the outputs were as expected.
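In code, the heart of such an audit is little more than the following loop; this is a sketch, with an invented JSON case format and a placeholder for the system under review, because the real work is agreeing on the cases and on what "expected" means.

import json

def run_system(case_input):
    # Placeholder: replace with a call into the technology under audit.
    return case_input

def audit(case_file):
    # The case file is an invented format: a JSON list of
    # {"input": ..., "expected": ...} pairs agreed on ahead of time.
    with open(case_file) as handle:
        cases = json.load(handle)
    mismatches = []
    for number, case in enumerate(cases, start=1):
        actual = run_system(case["input"])
        if actual != case["expected"]:
            mismatches.append((number, case["expected"], actual))
    print(f"{len(cases)} cases, {len(mismatches)} mismatches")
    for number, expected, actual in mismatches:
        print(f"case {number}: expected {expected!r}, got {actual!r}")
    return mismatches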
Whatever the name and whatever the precise methodology, this concept seems to have become extinct.
It may be that I am wrong. It may be that this process still exists but has a new name, or is done by a specialized sub-industry. But I don't think so: I think that this concept is dead.
One factor, certainly, is the declining cost of much software: if it was cheap or free to acquire, perhaps people feel that it is not cost-effective to audit its operation. I cringe at this explanation because the price tag is not a very good proxy for value to the company and because the cost of malfunction is often very high--especially if one uses human beings to compensate for faulty technology.
Another factor, I think, is the popularity of the notion that self-checking is good enough: software checks itself, sys admins check their own work, etc. I cringe at this explanation because I am all too aware of the blind spots we all have for our own work, our own worldview and our own assumptions.
In the interests of full disclosure, I should note that I am a consultant who, in theory, is losing business because I am not being hired to do these audits. I would claim that I suffer more from working in auditless environments, where many key pieces of technology are not working correctly, but this may be an example of a blind spot.
Many of the business folks and IT folks I encounter are not even clear on what the point of the exercise would be; after all, if their core technology were grossly flawed, they would already know it and already have fixed it, no?
But the usual benefit of a technology audit is not documenting gross failure, obvious misconfiguration or currently out-of-use pathways (although, alas! these all happen sometimes). Rather, the usual benefit of a technology audit is the revelation of assumptions, the quantification of known undesired situations and the utility of having people look afresh at something critical, while all sharing the same context.
Wait! I hear you cry, can't one do one's own audit of one's own technical context? Well, yes, one can, but the benefit is often much less. Just as self-inspections have their place, but are not accepted in place of outside inspections, so self-audits are useful but not really the same thing.
(I do often build self-audits into my designs, because I want to avoid the avoidable and to make explicit my assumptions about what "success" means in every context. But self-audits are not a complete solution to this problem. And my clients often scratch their heads at the self-audit, which seems to do nothing but catch mistakes, many of them my own mistakes. Which is precisely what they are supposed to do.)
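A typical built-in self-audit is just a reconciliation. As a sketch (the counts would come from the real system's logs or database; here they are simply parameters):

import logging

log = logging.getLogger("selfaudit")

def self_audit(received, delivered, rejected):
    """Reconcile one run: everything received was either delivered or rejected."""
    unaccounted = received - (delivered + rejected)
    if unaccounted == 0:
        log.info("self-audit OK: %d received = %d delivered + %d rejected",
                 received, delivered, rejected)
        return True
    log.error("self-audit FAILED: %d item(s) unaccounted for "
              "(%d received, %d delivered, %d rejected)",
              unaccounted, received, delivered, rejected)
    return False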
When I am trying to deploy a piece of technology, the lack of auditing means that when someone else's piece of technology does not work as I expect, I am often without any way of knowing why. For instance, which of the following explanations best fits the situation?
- Option A: I misunderstand what is supposed to happen; this means that I will have to change once I figure out that I am the problem.
- Option B: I am in a gray area; one could argue that my expectations are reasonable or that the observed behavior is reasonable. I will likely have to change, but I have a chance to make a case to the other side.
- Option C: the undesirable behavior is clearly and definitively wrong or not-to-spec. In theory, I should be able to inform the other side and have them fix their stuff.
Repeatedly getting the same piece of technology running is not only unrewarding professionally, but it is a terrible advertisement for our services: Buy now! Our stuff works for a while.
A dark fear I have is that audits are out of favor because executives just don't want to know: they don't want to hear that the expensive, hard to buy, hard to deploy, hard to justify technology investment is not working properly, because they don't know how to fix it. I have had more than one manager tell me "don't bring me problems, bring me solutions" which sounds really manly, but is not very useful. How do I know what solutions to bring if I don't know what is wrong? Do I conceal all issues for which the solution is not obvious to me? In theory, once a problem is identified, the whole team can help solve it.
As always, I would love to know if this lack of audits only exists in companies whose focus is not high tech, or if it now has a cooler name or has been folded into something else, such as management consulting or business process engineering.
What I mean by "technology audit" is a formal, independent review of a piece of technology. These reviews usually consisted of running a predefined set of inputs into the technology, capturing the resulting outputs, and matching the actual outputs with the expected outputs.
Sometimes, if running a test suite through the technology was impractical, we went through the logs and gathered up the actual inputs and the actual outputs for a day, or week, or month, and reviewed the outputs to see if they were within expectations. We often re-implemented business rules or other logic to help us confirm that the outputs were as expected.
Whatever the name and whatever the precise methodology, this concept seems to have become extinct.
It may be that I am wrong. It may be that this process still exists but has a new name, or is done by a specialized sub-industry. But I don't think so: I think that this concept is dead.
One factor, certainly, is the declining cost of much software: if it was cheap or free to acquire, perhaps people feel that it is not cost-effective to audit its operation. I cringe at this explanation because the price tag is not a very good proxy for value to the company and because the cost of malfunction is often very high--especially if one uses human beings to compensate for faulty technology.
Another factor, I think, is the popularity of the notion that self-checking is good enough: software checks itself, sys admins check their own work, etc. I cringe at this explanation because I am all too aware of the blind spots we all have for our own work, our own worldview and our own assumptions.
In the interests of full disclosure, I should note that I am a consultant who, in theory, is losing business because I am not being hired to do these audits. I would claim that I suffer more from working in auditless environments, where many key pieces of technology are not working correctly, but this may be an example of a blind spot.
Many of the business folks and IT folks I encounter are not even clear on what the point of the exercise would be; after all, if their core technology were grossly flawed, they would already know it and already have fixed it, no?
But the usual benefit of a technology audit is not documenting gross failure, obvious misconfiguration or currently out-of-use pathways (although, alas! these all happen sometimes). Rather, the usual benefit of a technology audit is the revelation of assumptions, the quantification of known undesired situations and the utility of having people look afresh at something critical, while all sharing the same context.
Wait! I hear you cry, can't one do one's own audit of one's own technical context? Well, yes, one can, but the benefit is often much less. Just as self-inspections have their place, but are not accepted in place of outside inspections, so self-audits are useful but not really the same thing.
(I do often build self-audits into my designs, because I want to avoid the avoidable and to make explicit my assumptions about what "success" means in every context. But self-audits are not a complete solution to this problem. And my clients often scratch their heads at the self-audit, which seems to do nothing but catch mistakes, many of them my own mistakes. Which is precisely what they are supposed to do.)
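For what it is worth, most of my self-audits amount to a scheduled job that restates the definition of "success" in code and complains about anything that does not meet it. A minimal sketch, assuming an invented schema in which every queued report should end up with exactly one confirmed delivery:

    import sqlite3

    def self_audit(db_path: str = "reports.db"):
        """Nightly self-audit: flag every report queued yesterday that does
        not have exactly one delivery record marked 'confirmed'."""
        con = sqlite3.connect(db_path)
        rows = con.execute("""
            SELECT q.report_id, COUNT(d.report_id) AS confirmations
              FROM queued_reports q
              LEFT JOIN deliveries d
                ON d.report_id = q.report_id AND d.status = 'confirmed'
             WHERE q.queued_on = DATE('now', '-1 day')
             GROUP BY q.report_id
            HAVING COUNT(d.report_id) != 1
        """).fetchall()
        con.close()
        for report_id, confirmations in rows:
            print(f"SELF-AUDIT: report {report_id} has {confirmations} "
                  f"confirmed deliveries (expected exactly 1)")
        return rows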
When I am trying to deploy a piece of technology, the lack of auditing means that when someone else's piece of technology does not work as I expect, I am often without any way of knowing why. For instance, which of the following explanations best fits the situation?
- Option A: I misunderstand what is supposed to happen; this means that I will have to change once I figure out that I am the problem.
- Option B: I am in a gray area; one could argue that my expectations are reasonable or that the observed behavior is reasonable. I will likely have to change, but I have a chance to make a case to the other side.
- Option C: the undesirable behavior is clearly and definitively wrong or not-to-spec. In theory, I should be able to inform the other side and have them fix their stuff.
Without an audit, I usually cannot tell which of these it is, so I end up babysitting. Spending my time just keeping the same piece of technology running is not only unrewarding professionally, but it is a terrible advertisement for our services: Buy now! Our stuff works for a while.
A dark fear I have is that audits are out of favor because executives just don't want to know: they don't want to hear that the expensive, hard to buy, hard to deploy, hard to justify technology investment is not working properly, because they don't know how to fix it. I have had more than one manager tell me "don't bring me problems, bring me solutions" which sounds really manly, but is not very useful. How do I know what solutions to bring if I don't know what is wrong? Do I conceal all issues for which the solution is not obvious to me? In theory, once a problem is identified, the whole team can help solve it.
As always, I would love to know if this lack of audits only exists in companies whose focus is not high tech, or if it now has a cooler name or has been folded into something else, such as management consulting or business process engineering.
Wednesday, September 21, 2011
Iterative UI Design
I am a big fan of the traditional software engineering model, which I define as going through the following steps:
- Gather requirements
- Produce a functional spec
- Design a solution
- Produce a technical spec
- Implement
- Test
- Document
- Deploy
The problem I encounter is this: large amounts of work go into the pre-design, design and implementation stages; then users get a look at the prototype, or early version, or whatever, and they want changes. In the old days, we called this phenomenon "it is just what I asked for, but not what I want."
In this scenario, the development team gets defensive: they did what they were supposed to do; they don't have resources to largely re-do the project. In fact, they don't feel that they have resources to re-do much of anything: they are planning on only fixing bugs (and probably only major bugs at that). Often there are recriminations from development to the users: "you didn't tell us that" etc. Often the users are frustrated: no one asked them these questions.
I maintain that this is predictable and avoidable by the simple expedient of assuming that UIs are different from other deliverables and that they require iterations of a functional spec / implementation / user feedback cycle. Get user feedback early and often. Don't plan on a one-and-done implementation.
If you accept the fact that very few people can fully imagine what a given UI would be like to use in the real world, then you won't feel like a failure if you (or your team or your users) can't do it either.
Of course, simply drawing out the development process is inefficient and ineffective: I am not proposing a longer development schedule. Instead, I am proposing a development schedule with less design time and more user-feedback sessions.
Ideally, the UI development process is a conversation between the users and the developers, which means that the feedback / change / review cycle has to be short enough that the conversation stays alive. And someone has to moderate the changes so that the process moves forward, not around in circles. You know you are off track when you start getting feedback of the form "I think I liked it better a few iterations ago."
If you try the traditional engineering model, you are likely to find that the UI is not what the users want, but was exactly what the developers set out to create, which leads to blame for the fact that the UI either has to be redone or has to remain inadequate.
On the other hand, if you iterate properly, you are likely to find that your users are happier and your developers are less grumpy, without having to spend substantially more time on the process. Instead, you spend the time in different ways.
Wednesday, September 14, 2011
False Consensus
As an information systems designer and developer, I specialize in bridging the gaps between existing technologies. As a consequence, my projects often involve more than one group inside the client organization.
Sometimes when I am dealing with multiple groups, I find myself stymied in the requirements-gathering stage of a project. I keep hearing a nice, shallow, consistent story from all the parties and I am highly confident that near-perfect consensus has not actually been achieved.
I should state that I do not worship at the altar of consensus. I prefer consensus. I strive for consensus. But I do not insist on it, or pretend that it is always worth the effort. Sometimes people within an organization have different views or goals and sometimes one side needs to win and the other side needs to lose so that the organization as a whole can win.
Alas, apparent near-perfect agreement is, in my experience, most likely to be based on miscommunication and often deliberate miscommunication. Since so many organizations do worship consensus, it is often seen as unprofessional to disagree. To avoid seeming unprofessional, or antagonizing the boss, people bend their words to simulate agreement. I call this "false consensus" because all parties only seem to be in agreement.
Without public and acknowledged disagreement, there is little chance of conflict resolution. (I do not mean to imply that rudeness or aggression are called for; instead I mean to imply that resolving a conflict usually requires that the conflict first be acknowledged and explored.)
As long as we are in the realm of words, and not in the realm of action, this word-bending works or at least appears to work. But once action is called for, as in the implementation of a piece of software, the false consensus is exposed. Often it is not recognized and is expressed as a software misfeature or bug, which it is not.
For example, some years ago I worked on a project to create an HTML document for a customer. There were some real issues to be decided about the actual purpose of this document:
- was it supposed to help customers place orders?
- was it supposed to help customers understand the response to their order?
- was it supposed to help internal users process the orders?
When those questions produced no clear answers, I tried a different approach: which items from the product catalog were to be included? That got an easy answer: all of them; why would any items ever be left out? So I gave up and did as I was asked.
At the first review of the document, there was horror: what were the internal items doing in the document? The folks who dealt with orders wanted the non-ordering items removed. The folks who dealt with post-processing wanted the order-only items removed. While we all agreed that "everything" should be in the document, there was no agreement on what "everything" meant. When I pointed out that "everything" generally means "all of the members of a given set" there was eye-rolling and muttering about the difficulty of communicating with computer people.
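The disagreement is trivial to state in code, which is part of why its invisibility in conversation still bothers me. A hypothetical sketch, with catalog entries and flags I have invented for the example:

    # Invented catalog entries; the point is only that "everything"
    # names a different set for each audience.
    catalog = [
        {"sku": "A100", "orderable": True,  "internal": False},
        {"sku": "B200", "orderable": False, "internal": False},  # post-processing only
        {"sku": "X999", "orderable": False, "internal": True},   # internal bookkeeping
    ]

    everything_for_ordering        = [i for i in catalog if i["orderable"]]
    everything_for_post_processing = [i for i in catalog if not i["internal"]]
    everything_as_I_heard_it       = catalog  # all members of the set, to general horror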
Eventually, at the end of the project, when it was most expensive and stressful to do so, we hashed out answers to the questions that the false consensus had obscured.
I still do not have a good strategy for coping with this issue, because there are often powerful internal political and social forces driving the people working for the client to pretend to agree utterly on utterly everything. All I can do is try to estimate the impact of this practice on my projects and charge accordingly.
I would love to hear about ways to get around this and I would love to know if other IT professionals encounter the same issue; maybe I need to get out more.
Tuesday, September 13, 2011
Introduction
I want a soapbox from which to rant about technical matters I encounter in my day job, as a medical information systems creator.
Ideally, from my perspective, this soapbox would be free and easy to use and would not involve social media. After all, if my friends were interested in my ranting, I would not need this outlet in the first place.
Ideally, from your perspective, these rants would be interesting, useful, amusing or--at least theoretically--all three.
This being a soapbox, I would not be required to be formal, businesslike or even fair. You, the reader, would understand that I am not engaging in formal discourse, although this is public discourse, which is a bit strange. I am hoping that this blog on venerable old Blogspot will fit the bill.
So what to use for a title? Well, I am certainly inclined toward grumpiness, at least professionally; by tech standards I am rather aged (I remember when these were called "web logs" before that became just too much work to say); I work in the business world and most of my work is in information technology, so I am going with "Grumpy Old Business IT".
My intention is to generate a weekly post on Wednesdays. We will see whether, in a couple of years, I have actually made it past the first month.