What I mean by "technology audit" is a formal, independent review of a piece of technology. These reviews usually consisted of feeding a predefined set of inputs into the technology, capturing the resulting outputs, and comparing the actual outputs against the expected outputs.
Sometimes, if running a test suite through the technology was impractical, we went through the logs and gathered up the actual inputs and the actual outputs for a day, or week, or month, and reviewed the outputs to see if they were within expectations. We often re-implemented business rules or other logic to help us confirm that the outputs were as expected.
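The process described above can be sketched in a few lines of code. This is a minimal, illustrative harness, not any real audit tool; the system under audit, the test cases, and the function names are all hypothetical stand-ins.

```python
# Minimal sketch of an input/output audit harness.
# Everything here is hypothetical: the "system under audit" is a stand-in,
# and the cases would normally come from a predefined suite or from logs.

def system_under_audit(x):
    # Stand-in for the technology being audited.
    return x * 2

# Predefined test cases: (input, expected output).
CASES = [(1, 2), (5, 10), (7, 14)]

def run_audit(func, cases):
    """Feed each predefined input through the system and collect any
    cases where the actual output differs from the expected output."""
    discrepancies = []
    for inp, expected in cases:
        actual = func(inp)
        if actual != expected:
            discrepancies.append((inp, expected, actual))
    return discrepancies

if __name__ == "__main__":
    mismatches = run_audit(system_under_audit, CASES)
    print(f"{len(mismatches)} discrepancies found")
```

The log-based variant works the same way, except that `cases` is harvested from a day's (or month's) worth of recorded inputs and outputs, and `expected` comes from an independent re-implementation of the business rules.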
Whatever the name and whatever the precise methodology, this concept seems to have become extinct.
It may be that I am wrong. It may be that this process still exists but has a new name, or is done by a specialized sub-industry. But I don't think so: I think that this concept is dead.
One factor, certainly, is the declining cost of much software: if it was cheap or free to acquire, perhaps people feel that it is not cost-effective to audit its operation. I cringe at this explanation because the price tag is not a very good proxy for value to the company and because the cost of malfunction is often very high--especially if one uses human beings to compensate for faulty technology.
Another factor, I think, is the popularity of the notion that self-checking is good enough: software checks itself, sys admins check their own work, etc. I cringe at this explanation because I am all too aware of the blind spots we all have for our own work, our own worldview and our own assumptions.
In the interests of full disclosure, I should note that I am a consultant who, in theory, is losing business because I am not being hired to do these audits. I would claim that I suffer more from working in auditless environments, where many key pieces of technology are not working correctly, but this may be an example of a blind spot.
Many of the business folks and IT folks I encounter are not even clear on what the point of the exercise would be; after all, if their core technology were grossly flawed, they would already know it and already have fixed it, no?
But the usual benefit of a technology audit is not documenting gross failure, obvious misconfiguration, or currently out-of-use pathways (although, alas! these all happen sometimes). Rather, the usual benefit of a technology audit is the revelation of assumptions, the quantification of known undesired situations, and the value of having people look afresh at something critical while all sharing the same context.
"Wait!" I hear you cry, "can't one do one's own audit of one's own technical context?" Well, yes, one can, but the benefit is often much less. Just as self-inspections have their place but are not accepted in place of outside inspections, so self-audits are useful but not really the same thing.
(I do often build self-audits into my designs, because I want to avoid the avoidable and to make explicit my assumptions about what "success" means in every context. But self-audits are not a complete solution to this problem. And my clients often scratch their heads at the self-audit, which seems to do nothing but catch mistakes, many of them my own mistakes. Which is precisely what they are supposed to do.)
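A self-audit of the kind described above can be as simple as checking explicit success criteria at the end of a processing step. Here is a hypothetical sketch; the record format, field names, and invariants are invented for illustration.

```python
# Hypothetical self-audit built into a design: after doing the work,
# check the explicit assumptions about what "success" means here.

def process_orders(orders):
    """Compute a line total for each order record."""
    processed = [
        {"id": o["id"], "total": round(o["qty"] * o["price"], 2)}
        for o in orders
    ]
    # Self-audit: make the definition of success explicit and checkable.
    assert len(processed) == len(orders), "records lost or duplicated"
    assert all(p["total"] >= 0 for p in processed), "negative line total"
    return processed
```

Such checks do nothing but catch mistakes, many of them the author's own; which, as noted, is precisely the point.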
When I am trying to deploy a piece of technology, the lack of auditing means that when someone else's piece of technology does not work as I expect, I am often without any way of knowing why. For instance, which of the following explanations best fits the situation?
- Option A: I misunderstand what is supposed to happen; this means that I will have to change once I figure out that I am the problem.
- Option B: I am in a gray area; one could argue that my expectations are reasonable or that the observed behavior is reasonable. I will likely have to change, but I have a chance to make a case to the other side.
- Option C: the undesirable behavior is clearly and definitively wrong or not-to-spec. In theory, I should be able to inform the other side and have them fix their stuff.
Spending time repeatedly getting the same piece of technology running is not only unrewarding professionally, but it is a terrible advertisement for our services: Buy now! Our stuff works for a while.
A dark fear I have is that audits are out of favor because executives just don't want to know: they don't want to hear that the expensive, hard-to-buy, hard-to-deploy, hard-to-justify technology investment is not working properly, because they don't know how to fix it. I have had more than one manager tell me, "Don't bring me problems, bring me solutions," which sounds really manly, but is not very useful. How do I know what solutions to bring if I don't know what is wrong? Do I conceal all issues for which the solution is not obvious to me? In theory, once a problem is identified, the whole team can help solve it.
As always, I would love to know if this lack of audits exists only in companies whose focus is not high tech, or if the practice now has a cooler name or has been folded into something else, such as management consulting or business process engineering.