Explainable software

The value of a software system today is given by its external functionality. Tomorrow, its value is given by how well you can adapt it. This depends on your ability to understand the system’s internals enough to guide its evolution.

The explainability of your systems must become an explicit focus as though your business depends on it. Because it does.

How explanations are created matters. They are useful only when they relate to reality.

The internals of a system are technical matters. Shouldn’t explaining them be the realm of technical people? Why should a manager care?

Two reasons. First, it is the largest cost: most development effort goes into understanding the existing system. Second, all decisions, both technical and business ones, must be based on accurate information.

Put it into perspective: your system is far larger than any human can read in a reasonable amount of time. A report about it that is built manually will be inaccurate at best, and most likely plain wrong.

Typical decisions today are based on manually gathered information wrapped in stories.

All decisions about your system must relate to the reality of that system. Everyone must care about how reliable and representative the information is.

That’s not only a technical issue. It’s a business one, too.

For software systems to remain valuable, they have to be adapted to changes in their environment. The challenge of that evolution lies in the system’s internal structure. As the dependency on software increases and the need to change it becomes ever more critical, it is no longer enough to treat software as a black box: the ability to reason and decide about its internal structure is critical, and software assessment becomes a strategic skill.

This is relevant both when working with in-house systems, and when working with external providers. The assessment skill offers an infrared-like ability to identify and react to problems before they escalate.

It’s like data science for software. Automating how information is gathered from the system reduces risks and frees energy that can be used for experimenting and acting.

Decisions should be based on information and narratives gathered directly from the system through custom tools.
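
To make this concrete, here is a minimal sketch of such a custom tool. It assumes a Python codebase under a hypothetical src/ directory and a hypothetical deprecated module name; the point is that it answers one specific question directly from the code, so the resulting report cannot drift from reality the way a manually assembled one does.

```python
# Minimal sketch of a custom tool that answers a question directly from the
# system instead of relying on a manually assembled report.
# Assumptions (hypothetical): a Python codebase under ./src and the question
# "which modules still depend on the deprecated 'legacy_billing' module?"
import ast
from pathlib import Path

DEPRECATED = "legacy_billing"  # hypothetical module name

def modules_depending_on(root: Path, target: str) -> list[Path]:
    """Return the source files that import the given module."""
    hits = []
    for path in root.rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
        except SyntaxError:
            continue  # skip files that do not parse; a sketch, not a product
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                if any(alias.name.split(".")[0] == target for alias in node.names):
                    hits.append(path)
                    break
            elif isinstance(node, ast.ImportFrom):
                if node.module and node.module.split(".")[0] == target:
                    hits.append(path)
                    break
    return hits

if __name__ == "__main__":
    offenders = modules_depending_on(Path("src"), DEPRECATED)
    print(f"{len(offenders)} modules still depend on {DEPRECATED}:")
    for path in offenders:
        print(f"  {path}")
```

The tool itself is small and disposable; its value is that rerunning it always reflects the system as it is today, not as someone remembers it.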

Examples of explanations

Views and examples provide a basic level of explainability. Once these are in place, we can weave them into larger narratives as well. As with any piece of data, there are many narratives to tell about code, each capturing a different facet. This document is a narrative. So is a document explaining an algorithm, like the one found in Explaining the squarified treemap algorithm. The difference between this page and the one explaining the algorithm is that the algorithm explanation is provided by the system itself; the little prose it contains acts mostly as glue.

Narratives are powerful tools. Only, when it comes to software, it is the system that has to carry them. The principle is simple: as soon as a human deems a narrative meaningful, it becomes the responsibility of the system to provide it.
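
As one possible illustration of a system-carried narrative, the following sketch renders the prose from a live query result, such as the hypothetical offender list produced by the earlier sketch, so the explanation regenerates on every run instead of going stale.

```python
# Minimal sketch of a narrative carried by the system rather than written by
# hand: the prose is rendered from a live query result, so it stays in sync
# with the code. The function and module names are hypothetical.
from pathlib import Path

def dependency_narrative(target: str, offenders: list[Path]) -> str:
    """Render a small, always-current explanation from query results."""
    lines = [
        f"As of this run, {len(offenders)} modules still import '{target}'.",
        "The migration is complete once this list is empty:",
    ]
    lines += [f"- {path}" for path in offenders] or ["- (none)"]
    return "\n".join(lines)

# Hypothetical usage, reusing the earlier query:
# print(dependency_narrative("legacy_billing",
#                            modules_depending_on(Path("src"), "legacy_billing")))
```

The human still decides which narrative is worth telling; the system takes over the job of keeping it true.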