Change Avoidance-Driven Design (or CADD, for short) is one of those things you don’t learn about in school. It has no place in Computer Science, little place in Computer Information Systems and none in the other STEM disciplines. I suppose one of those rare software engineering degrees might talk about it, but I have my doubts.
CADD is what happens when the desire to avoid changing certain pieces of a software ecosystem exerts a disproportionate influence on the approach taken to fix a bug or add a feature. Anyone who has worked in software for a couple of years will recognize the phenomenon almost immediately. In fact, CADD is often easier to spot by the absence of changes in key areas than by their presence.
Astronomy gives us a pretty good idea of how this looks in another discipline with the study of exoplanets1. All of our telescopes work by gathering electromagnetic emissions. Whether we are talking about visible light, X-rays or something else, some form of radiation is caught by our instruments and analyzed. A planet, however, typically gives off little electromagnetic radiation of its own and is far outshone by other celestial bodies.
One of the more common methods of detection is to look for signs of a star wobbling about its center of mass for reasons not otherwise explicable. It is a search for something unseen, almost invisible, exerting forces on other parts of the system. CADD is detected in much the same way.
Imagine you have several interacting systems that look something like the figure below.
Then let us imagine that we are going to add a feature by changing the systems indicated in the next diagram.
Isn’t it suspicious that several spokes on a wheel are changing but not the hub? That is the most general heuristic for detecting CADD at work.
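The heuristic above can even be sketched against version-control history. Here is a minimal, hypothetical example: the module names, the dependency map and the change sets are all invented for illustration, and in practice the change sets would be parsed from something like `git log --name-only`.

```python
# Hypothetical sketch: flag "suspiciously still" hubs in a change history.
# All module names and the dependency map below are invented for illustration.
from collections import Counter

# deps[m] = modules that m depends on (spoke -> hub edges)
deps = {
    "billing": ["core"],
    "reports": ["core"],
    "notifications": ["core"],
    "admin_ui": ["core"],
}

# Modules touched by each recent change set
change_sets = [
    {"billing", "reports"},
    {"notifications", "billing"},
    {"reports", "admin_ui"},
]

def suspicious_hubs(deps, change_sets, min_spoke_changes=3):
    """Return hubs whose dependents change often while the hub itself never does."""
    changed = Counter()
    for change_set in change_sets:
        changed.update(change_set)
    hubs = {hub for targets in deps.values() for hub in targets}
    result = []
    for hub in hubs:
        # Total churn across everything that depends on this hub.
        spoke_changes = sum(changed[m] for m, targets in deps.items() if hub in targets)
        if changed[hub] == 0 and spoke_changes >= min_spoke_changes:
            result.append(hub)
    return result

print(suspicious_hubs(deps, change_sets))  # the spokes churn, but "core" never moves
```

A flagged hub is not proof of CADD, of course; it is a wobble worth investigating, just as with the star.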
The other thing I have noticed about most instances of CADD is that they are almost always motivated by fear - fear being, of course, the most human thing about software design. The source of the fear varies. Sometimes it is fear of an unstable codebase with a propensity to break at the slightest provocation. Sometimes the fear is political, taking the form of distrust of a team or a process. Sometimes it is fear of a particular technology, either for its age or for its poor fit to the problem.
“I must not fear. Fear is the mind-killer. Fear is the little-death that brings total obliteration. I will face my fear. I will permit it to pass over me and through me. And when it has gone past I will turn the inner eye to see its path. Where the fear has gone there will be nothing. Only I will remain.”
– Frank Herbert, Dune
A paradox follows naturally from this state of affairs. Fear is seldom entirely without warrant. Acrophobia may not be irrational: a fall from a height can kill you. Ophidiophobia may not be irrational: people have died from snake bites. CADD works the same way. The codebase may be poor, the process may be painful, the team may be difficult, but in the long term little good comes from designing around a problem in perpetuity. There is always a reckoning.
So what do we do about it? Proceeding with caution is only rational, but we cannot make anything better if we all tip-toe around a problem without even acknowledging it. The solution to an unknown technology is learning; the solution to a poor codebase is refactoring; the solution to missed bugs is testing (automated testing being preferable). The common thread is that doing nothing solves nothing.
The hardest part is, as always, human rather than technical. Convincing ourselves, and then others, to tackle hard problems and take the long view is difficult precisely because it runs contrary to ease and relief in the short run. We are almost hardwired to maximize short-term gains, even at the expense of larger long-term ones, which is why every major methodology is, at bottom, about making short-term sacrifices for long-term gains. It is easier to write messy code - until we have to read it later. It is easier to skip automated testing - until we are hit by a regression bug. The list goes on.
Balance, as always, is key. We cannot fix everything at once, or perhaps everything at all, but we can apply the boy scout rule at the macroscopic level of systems as well as the microscopic level of code.
This applies to other aspects of astronomy as well, but I like thinking about exoplanets. ↩