I recently had to review the operation of a fax-based system I created to deliver clinical reports to doctors. Someone claimed that a report had gone astray and in response, I had to dig through the logs and the database rows to confirm that the report had been delivered.
This left us at a very fax-like impasse: I say the report arrived and they say that it did not.
This somewhat tiresome trip down memory lane made me reconsider the place of fax technology in the current environment.
Once upon a time I liked facsimile technology. It was the best way to reliably deliver nicely formatted reports to a specific location with a low chance of eavesdropping.
(Once the paper report came out of the fax machine, security was an open issue, but it was also someone else's problem.)
I still like the maturity of the technology and the fact that I have lots of mature code that does cool fax-related things.
What I do not like is the usual list of issues with a waning technology as well as some fax-specific issues.
The usual waning-technology issues are these:
- The infrastructure (POTS) is shrinking; in fact, since our office went VoIP, I can no longer debug faxing in-house.
- The hardware is harder to come by; I am having to hoard fax modems to ensure that I have spares.
- The system software is no longer common; it is not installed on servers by default and it is not easy to integrate serial lines into the clustering environment.
The fax-specific issues fall into one of two categories: inherent and acquired. By "inherent" I mean that these issues are a part of the faxing technology itself. By "acquired" I mean that these issues have arisen because our environments and expectations have changed, making faxes seem degraded by comparison with prevailing norms.
The inherent issues are unreliable delivery and the degradation of retransmission; a fax of a fax is often pretty hard to read. Unreliable delivery is the bigger problem: paper jams, ink runs out, fax machines get turned off and phone lines are sometimes busy. I mean "unreliable" in the protocol-jargon sense: it may work most of the time, but I cannot really tell whether it worked, at least not without calling and asking.
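To make the "unreliable" point concrete: since the protocol gives no end-to-end confirmation, the most a sending system can do is record what the modem reported for each attempt, so that when a report "goes astray" there is at least a record of one side of the story. The sketch below is a minimal illustration of that idea, not the code from my system; send_fax(), its return values, and the fax_log table are all hypothetical.

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical stand-in for the real transmission step; a production system
# would drive a fax modem or a package such as HylaFAX here. It can only
# report what the modem saw: "sent" means the receiving machine acknowledged
# the pages, not that the paper ever reached a person.
def send_fax(number: str, report_path: str) -> str:
    return "sent"  # could also be "busy", "no_answer" or "failed"

def deliver_report(db: sqlite3.Connection, number: str, report_path: str,
                   max_attempts: int = 3) -> bool:
    """Attempt delivery and log every attempt, so that when someone claims
    a report went astray there is at least a record of what this end saw."""
    db.execute(
        "CREATE TABLE IF NOT EXISTS fax_log ("
        " fax_number TEXT, report TEXT, attempt INTEGER,"
        " status TEXT, sent_at TEXT)"
    )
    for attempt in range(1, max_attempts + 1):
        status = send_fax(number, report_path)
        db.execute(
            "INSERT INTO fax_log (fax_number, report, attempt, status, sent_at)"
            " VALUES (?, ?, ?, ?, ?)",
            (number, report_path, attempt, status,
             datetime.now(timezone.utc).isoformat()),
        )
        db.commit()
        if status == "sent":
            return True  # protocol-level success only
    return False

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    print(deliver_report(conn, "555-0100", "report_1234.pdf"))
```

When the "it never arrived" call comes in, a query against that log is exactly the digging I described above; it shows what my end did, but it still cannot prove that paper came out of the machine on the other end.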
The ways in which our expectations have left faxes behind are these:
- The transfer speed is now rather low.
- The data is not integrated into anything else: the report lands on paper and stays there.
- The report arrives at a fixed physical location but more and more we move around when we work.
- The security is now rather lacking; back in the day, the point-to-point nature of POTS was pretty secure. Now, the lack of passwords and access logging is pretty lame.
My investigation ended with my system claiming to have delivered the report and the user claiming that the fax never arrived. Finally someone in the target office found the paper and all was once again right with the world.
All's well that ends well, but I must confess that I am looking forward to the day that doctors find something to replace faxing. Soon I hope.
Wednesday, December 21, 2011
Lack of Feedback = Madness
A mental health professional I know once defined insanity as the state in which the model of the world inside one's head is sufficiently out of alignment with the world outside one's head. I found this to be rather uninspiring at the time, but the older I get, the better this definition seems.
I note that there are at least two ways to end up in the "insane" category: either through some organic problem, i.e. a malfunctioning body, or through bad input. It is this second category that interests me today because I see a parallel to a common workplace situation.
The common workplace situation is as follows:
- A manager makes a strong statement, such as "everyone needs to be using System X by Some Date."
- The rank-and-file try to convert to System X and find issues; they bring the issues to their manager, who punishes them for their failure.
- Now that they know that "failure is not an option," the rank-and-file claim to be fully converted to System X by Some Date--perhaps even earlier.
- In fact, System X is imperfect (as is every system) and there are myriad hidden workarounds in place.
- Officially, the manager's decree is in full effect and all is well; actually, things are very different.
- The manager's model of the situation diverges ever farther from reality; in effect, the manager is going crazy.
- At some point, there is a crisis; my favorite is the crisis of cutting off funding to the consultants running workarounds or to the maintainers of systems other than System X. This crisis has measurable, undeniable consequences.
- The manager comes to the painful realization that all is not as he or she thought. He or she feels betrayed and blindsided. The members of his or her organization feel that his or her ignorance (read: insanity) is his or her own fault. Everybody loses.
Saturday, December 10, 2011
How Much Technology Is Enough?
I am generally a minimalist, at least when I am designing or implementing information systems. To me, minimalism means doing it all, but not to excess.
So when I go out to dinner, I will have a steak, a baked potato, a spinach salad and some red wine, but it will be a small steak and a reasonable amount of wine.
I realize that many people would quibble with that definition of minimalism, so instead in this post I will use the term "optimal" to mean "everything you need but nothing that you don't need."
While I feel that the finished product should be optimal, I do not feel that a project's resources should be only just enough. Instead, I believe in what a colleague once referred to as "cheap insurance." By this I mean that I believe in providing an excess of whatever critical resources can be cheaply and easily procured.
For example, I often buy a USB external disk (which we call a "can") or two just for the project to make sure that disk space and backup will not be a problem. USB cans are cheaper than a project delay or a project disaster. Similarly, I keep a few unused computer system units around because you never know when you will need a special-purpose machine or want to provide a contractor with a machine.
All this keeping of stuff makes me a bit of a techno pack rat, although the difference between techno pack rat and project savior is often timing or dumb luck. And shelving is cheaper than failure.
Since we often help customers get off of old platforms, I have another reason (or, according to some people, rationalization) for keeping dated technology around: the ridiculous pace at which the high tech world changes.
Even applied technology tends to be very much of its time: often the dated peripheral has to match the dated system unit and match the dated driver running under the dated operating system. All too often trying to use a shiny new peripheral on a mature system simply does not work; worse, sometimes it works to some extent, sucking up your time as you fiddle in a vain attempt to get full functionality.
Another reason that I hoard old tech is that I have a deep respect for working systems. I know from painful experience that every time you fiddle with a working system you risk ending up with a system that will never work again. Am I a nervous Nellie or a scarred veteran? Opinions vary, but I will stick to my cautious ways. I like taking chances just fine if I have a fallback position.
Not only do I fear that working systems are only working until you fiddle with them: I also fear that projects of more than trivial scope never go as planned. I know that contingency planning is good but not enough, so I want more: I want options. I want flexibility. So I need lots of extra stuff in order to be able to suddenly decide that the system would work better if there were two servers instead of one, etc.
I find that debugging distributed information systems sometimes requires creating a parallel universe. If the bug or issue or misconfiguration or mismatch is deep enough, you need a testbed in which to try changing fundamental aspects of the system. Sometimes I need a consult from an external expert, in which case I want to be able to deliver a working prototype to them while still having a working prototype in-house.
So I find that in order to deliver enough product, I need way more support than I expect. However, I notice that the same people who tell me I am going overboard on the technology resources for projects are often also the people whose output lacks that final polish, those extra features that distinguish adequate from good and good from great. Shelving is relatively cheap: I am going to keep pushing for great.
(Although even I have my limits: I have 8 bit NICs and CGA video cards I would be willing to part with at a great price.)
Wednesday, December 7, 2011
In Praise of "Pivot Technologies"
Our clients are often pushed to adopt new technologies to accommodate political or financial time lines. More and more, we find that our clients are pushed to move from one paradigm, system or methodology to the next one in a single, terrifying leap.
In fact, I wrote a post about that a little while ago which you can find here if you are so inclined. The executive summary in this context is simply that making technology transitions can be done faster, safer and better with evolution instead of revolution, with smaller, well-defined steps instead of jumping out of the airplane, pulling the rip cord and praying for a soft landing.
So if large leaps of faith make me queasy, what do I propose instead? I propose what I call "pivot technologies." To me, this term means technology which can operate in more than one mode. In practical terms, I mean stepping stones from one technology or methodology, which is being phased out, to a replacement technology or methodology.
For example, in our consulting practice we often provide on-the-fly translation from one set of codes to another. This allows users to enter transactions using the old codes into the new information system.
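The mechanics of such a pivot are simple. Here is a minimal sketch of what I mean by on-the-fly translation; the codes, field names and the handling of unmapped codes are invented for illustration and do not come from any client system.

```python
# Illustrative only: a tiny on-the-fly translation layer that lets users keep
# entering the old codes while the new system receives the new ones.
OLD_TO_NEW = {
    "CBC": "LAB-0001",  # hypothetical old code -> hypothetical new code
    "BMP": "LAB-0002",
    "TSH": "LAB-0003",
}

# Codes the pivot has seen but cannot translate yet; reviewing this set tells
# you how much of the old world is still in use.
unmapped_codes: set[str] = set()

def translate_transaction(txn: dict) -> dict:
    """Return a copy of the transaction with its code rewritten to the new
    scheme; unmapped codes pass through unchanged but are recorded."""
    old_code = txn["code"]
    new_code = OLD_TO_NEW.get(old_code)
    if new_code is None:
        unmapped_codes.add(old_code)
        return dict(txn)
    return {**txn, "code": new_code, "legacy_code": old_code}

if __name__ == "__main__":
    print(translate_transaction({"code": "CBC", "qty": 1}))
    print(translate_transaction({"code": "XYZ", "qty": 1}))
    print("still untranslated:", unmapped_codes)
```

Keeping track of the unmapped codes also gives you a concrete way to know when the pivot itself can be retired: once that set stays empty and traffic through the translation drops to zero, the transition is demonstrably done.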
Until recently, getting approval for pivot technology projects was relatively easy; technology shifts were common and everyone understood that pivot technologies were cheap insurance against painful transitions or even failed transitions.
Recently we have run into a new conventional wisdom. Now we hear that pivot technology projects are a bad idea for two reasons: they are a crutch and they never go away.
Executives say pivot technologies are a crutch because they enable users to avoid making the mandated transition. I am not clear about how easing a transition is the same as impeding it, but there you are.
Executives say that pivot technologies never go away; I assume that the point is that one ends up with an environment cluttered with no-longer-needed pivot technologies which are never retired and cleanly removed.
It is true that I have seen that vicious cycle: a pivot technology is rolled out as a crutch, then the transition to the next technology falters, or is even held up by people clinging to the pivot, and then you have the pivot but not the transition. But that scenario is not inevitable; indeed, it should not even be likely. After all, the whole point of deploying a pivot technology is to have greater control over the transition, not less control or no control at all. So make a plan and stick to it. And cover your bets with a pivot: you might be able to jump the stream in a single bound, but why take the chance?