I find my smart phone to be a terrific tool and a significant advance over what went before. I have been contemplating this recently because I see a similarity between trying to explain *why* I am happy with my smart phone to my father-in-law and trying to explain why systemic IT improvements make sense to some of my clients.
In both cases, the audience does not share my basic premises. In both cases, there is no "killer app" and no clear analogy from what they know to what they do not know. In both cases, I am asking for a leap of faith.
For my aged father-in-law, who does not particularly want people to be able to interrupt his day with their inane chatter, the mobile phone concept never made sense. Now, when I try to explain that telephony is the minority of what I want my smart phone to do, he chortles with delighted smugness: an expensive phone that isn't even mostly a phone! Imagine the kind of technocratic, free-spending fathead one would have to be to want that! He is becoming a little uncomfortable with the ever-growing percentage of his native city's population who have joined me in carrying one, but he is still happy to soldier on.
However, my point is not his out-of-stepness; rather, my point is my inability to explain to him what he is rejecting. He is rejecting it and then filling in the reasons later. I have failed utterly to convey that the added value is holistic and lies in the totality of the ecosystem. His eyes glaze over when I list the rather large number of functions the "phone" routinely performs for me: check bank balance? Get driving directions? Surf the web to check the hours of a museum? Share links with friends in text messages from which they can go to the same web page? Track my billable hours and expenses? Take photos of friends and family? Take photos of computer parts I need to purchase? Take notes on the train for later access on my desktop? Listen to music? Buy music? Shop for apps for work? Download apps to help me enjoy the museum in which I am standing? It is all too much and too bizarre for him.
So I feel stuck: what benefits he will allow me to present are not enough to explain the actual value of the device+apps+network, yet he will not sit through the (admittedly very boring and somewhat peculiar-to-me) actual list. So we have resolved it this way: he is certain that smart phones are expensive toys and that I own one because I like expensive toys, while I shake my head in wonder at what he is missing.
He is not alone in this situation, even within my own experience. I have the exact same feeling when I try to explain to IT managers why a given subsystem or middleware is worth building or improving: an endless list of small benefits just does not seem to move people who are not already intimately involved with the given business process. Like a giant jar of small change, which actually has significant value, proposals of myriad incremental improvements sit in the corner, gathering dust.
Wednesday, June 27, 2012
Wednesday, June 20, 2012
What You Want To Hear
When I entered the field of computer programming, I had some illusions about knowability, about absolute truth, about right and wrong. I thought that there were real answers to vague questions such as "which of these approaches is better?" "Who is the better coder?" "What should we do to solve this problem?"
Instead of fact-based, rational discussion, I find myself often mired in religious and political muck. Specifically, I am watching a client move a data repository from a lovingly handcrafted environment to a standard environment that blows up every time.
How did we get here? The data repository is several years old and has been a spectacular success. It was custom built for its purpose and its environment. No radical changes were indicated; so why are we watching them try to move to a dramatically different implementation?
I do not object to the idea of change; I have some sympathy for the desire to update this implementation:
- The custom environment is getting long in the tooth--almost 6 years old as of this writing--so I find it reasonable to consider replacing it.
- The custom environment runs Linux, which is not the Unix of choice in this particular shop, so I understand their desire to port to their standard, AIX.
- The custom environment is based on ReiserFS, which is a bit out of the mainstream, so I can see reevaluating this implementation.
So far as I can tell, this is how the technical design decisions were made:
- The current system is old, so it must be outdated.
- The proposed system is from IBM, so it must be reliable.
- The proposed system is expensive, so it must be capable.
Well, there is a problem: ReiserFS is great at what it does. AIX's standard offerings have not caught up. Neither has whatever their Network Attached Storage (NAS) is running. So they can't get to step 1: copy *most* of the data to the new home. We are working on our third attempt here.
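Their failed step 1 is essentially a bulk copy whose correctness nobody is verifying. As a minimal sketch (not anything the client actually ran; the helper names here are mine), this is one way to check whether "most of the data" actually survived a migration attempt, by comparing the source and destination trees file by file:

```python
import hashlib
import os

def tree_digest(root):
    """Map each file's path (relative to root) to a SHA-256 digest.

    Symlinks are recorded by their target rather than content, since a
    naive copy that follows links is one classic way a migration
    silently diverges from the source.
    """
    digests = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)
            if os.path.islink(full):
                digests[rel] = "link:" + os.readlink(full)
            else:
                h = hashlib.sha256()
                with open(full, "rb") as f:
                    # Read in 1 MiB chunks so huge files don't exhaust memory.
                    for chunk in iter(lambda: f.read(1 << 20), b""):
                        h.update(chunk)
                digests[rel] = h.hexdigest()
    return digests

def compare_trees(src, dst):
    """Return (missing, extra, changed) relative paths between two trees."""
    a, b = tree_digest(src), tree_digest(dst)
    missing = sorted(set(a) - set(b))
    extra = sorted(set(b) - set(a))
    changed = sorted(p for p in set(a) & set(b) if a[p] != b[p])
    return missing, extra, changed
```

Run after each copy attempt: a non-empty `missing` or `changed` list tells you, before cutover, that the new environment has silently dropped or altered files.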
Is it a surprise that the result-oriented process yielded better results than the blind assertion of capability? No; the surprise is that anyone thought that simply declaring what was in hand to be the best solution was a good idea.
When will they admit that this proposed implementation is unworkable? I am guessing that they will never admit that and nothing we say seems to make any difference: it just isn't what they want to hear.
Wednesday, June 13, 2012
Smart Planning & Dumb Luck
Managing software projects requires that one make estimates about the level of effort real people will require to solve abstract problems with concrete software.
There are many boundaries in there: abstraction, concreteness, dates, level of effort, human actors and ideas. Business, and much of academia, abhors honest ignorance. Business requires staffing levels, target dates, deliverables, milestones and budgets. The most common approach I see to reconciling the abstract nature of programming with the concrete nature of business is reduction-to-known: how is this project like other (preferably successful) similar projects?
In my experience, this approach is good when there is little novelty involved and is more-or-less a crap shoot when the project is highly novel. Many of our competitors handle novelty by assuming the worst and jacking up the price to cover that worst case scenario. We try to break down the project into novel and known and to do basic research so that we can turn the novel into the known as quickly and safely as possible.
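The known-versus-novel split lends itself to a back-of-the-envelope model. The sketch below is purely illustrative (the task ranges and the skewed distribution are my assumptions, not our actual estimating method): known tasks get narrow effort ranges, novel tasks get wide, right-skewed ones, and a Monte Carlo run shows how the novel work dominates the gap between the typical outcome and the bad-luck tail:

```python
import random

def simulate_project(known_tasks, novel_tasks, trials=10000, seed=1):
    """Monte Carlo sketch of total project effort, in days.

    known_tasks: list of (low, high) ranges; keep these narrow.
    novel_tasks: list of (low, high) ranges; wide spreads reflecting
    genuine uncertainty. Returns the 50th- and 90th-percentile totals.
    """
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        # Known work: roughly uniform within a tight band.
        total = sum(rng.uniform(lo, hi) for lo, hi in known_tasks)
        # Novel work: right-skewed draw (mode at the low end) --
        # sometimes you get lucky, sometimes very unlucky.
        total += sum(rng.triangular(lo, hi, lo) for lo, hi in novel_tasks)
        totals.append(total)
    totals.sort()
    return totals[len(totals) // 2], totals[int(len(totals) * 0.9)]
```

With two well-understood tasks and one novel one (say `simulate_project([(4, 6), (9, 11)], [(5, 40)])`), the distance between the 50th- and 90th-percentile totals comes almost entirely from the novel task: the "crap shoot" in numerical form.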
But rarely does anyone, including us, talk about luck. Dumb luck. Pure chance. The sad truth is that sometimes you get lucky, sometimes you don't, and sometimes you get unlucky.
(A related topic is inspiration: sometimes divine inspiration strikes, making the difficult simple and the giant task almost easy. But I prefer to think that I am divinely inspired sometimes because (spiritual version) God loves me more than He loves other programmers or (secular version) my superior neurology works this way, so I am reluctant to lump this in with luck. Although I understand that lots of other people do lump inspiration in with luck. I suspect that they are not often inspired, but that is another topic.)
I am aware of the "Luck is the residue of design" school of thought, and I mostly agree with it, but sadly sometimes I get lucky without regard to my planning or ability or anything else. Sometimes stuff just works out.
For example: sometimes I mention to a friend over a beer that I am stuck on a particular technical problem and he casually points me to just the right person to ask. Sometimes I find that I need to accomplish something and quickly and easily find just the right technical documentation on the Internet. Sometimes I find that an orphaned AC adapter fits a crucial piece of equipment. Sometimes the device driver I need only runs on an old kernel that I happen to still be running somewhere. Sometimes you get lucky: you crush the project, you beat the deadline, you come in under budget, the users are happy and the client is happy to write that final check.
Sometimes you don't get lucky: things proceed mostly as one would expect, with easy parts and hard parts and unexpected gains mostly offset by unexpected losses. These projects plod along and are delivered on time and get paid for and contribute the illusion that luck is not a concern.
Sometimes you get unlucky: critical personnel have family emergencies, supposedly supported devices don't work in *your* server, someone else's piece does not work as advertised or is late, your design is thwarted by the particular deployment environment, the client's server is plagued by gremlins, etc. These projects fray nerves, eat profit margins, sully reputations. We would all avoid them if we could.
(Beware, though, the development team that never has a bad day, or an unlucky project: it is quite possible that they pad their estimates or grossly underpromise. The only way I know to have a perfect track record is to be chronically unambitious.)
How a team or group or organization handles bad luck is very telling. The first hurdle is recognizing when you hit some bad luck and when you didn't. Very few seem able to distinguish between a failure of their team or process and bad luck.
I am not saying that blithely accepting "we were unlucky" is a good way to deal with failure. I am not defending failures of team members or of management or of process: such failures should be evaluated and corrected, either with retraining or retasking people or refining processes.
Neither do I subscribe to the position that good people never have bad luck. It would be nice if that were true, since it would greatly simplify the task of evaluating failure: there would always be blame.
I am saying that realistic appraisal is critical: neither of these simplistic approaches is a good idea.
Having determined that you got unlucky, what then?
Often the response is to avoid repeating this horrible experience by controlling variability. When practical, controlling variability in the software development process is nice, but it is not a good primary goal. This is because controlling variability works best with respect to codified processes expected to produce a consistent result. It is a good way to brew beer or make steel. It is not a good way to solve problems in creative or novel ways.
Similarly, if you turn to scheduling as your protection against bad luck, you may be disappointed: if you prize on-time over excellent, then you won't get excellent.
Sometimes you get unlucky. All you can do in that case, in my opinion, is pat your team on the back and let them know you feel their pain--but that you expect a return to excellence the next time.
Wednesday, June 6, 2012
No Substitute for User Input
Whenever I am going to deliver interactive software, I assume that there will be a significant amount of feedback, and so I reserve a big chunk of time and energy to deal with it.
I was horrified recently to discover that some programmers feel that I am remiss in gathering requirements, slapdash in my implementation and lacking in my QA. I consider myself to be careful, thorough and to have a high user satisfaction rating.
When I worked through the other programmers' complaints, I discovered that their ideal software development model differed greatly from mine. (And my user satisfaction rating was much higher than theirs, from the same users no less.)
I think that it is instructive to compare and contrast the two models, so that is what I shall do.
I summarize their approach as the classic carpentry mantra, "measure twice, cut once." I summarize my approach as "get feedback early and often."
Their model, which is fairly common:
- Start with a long and thorough requirements gathering phase. The game is lost or won here.
- Write up the requirements into a technical specification. Circulate that spec widely and solicit input.
- Code up the spec as best you can. Write detailed user documentation.
- Test the implementation. Run it through QA.
- Deliver the software to the waiting users.
- Handle the resulting bug reports ASAP.
- Consider the job done and done well. Begin a schedule of releases and updates.
My model:
- Have a longish conversation with the users, mostly centered around them telling me what their ideal solution looks like.
- Create a working prototype as quickly as possible with basic testing only and get feedback. Do a Quick Start Guide as user documentation.
- Process the feedback into work to be done on the prototype, accepting sweeping changes to the requirements or specs if those changes move you closer to the ideal solution.
- As the prototype iterates through revision, it becomes the first version.
- Over time, issues arise and so there is recoding through the same process.
A few beliefs underpin my approach:
- Software development is a conversation between you and the users; long pauses will kill that conversation.
- No requirements gathering will ever get you more than 80% of the requirements, so plan for real revision at the outset.
- Users will happily engage with a dynamic process that produces tangible results.
On the other hand, I don't have users who complain that they wait forever for software that doesn't actually do what they need or want it to do.
As is so often the case, neither side is failing at the other's model; each is succeeding at its own. I am not impartial, however: I am highly confident that it is better to be one of my users.