They say you can’t control what you can’t measure. How else would you know if it’s getting better or worse, right? This is true, sort of, if you know what you’re doing.
You can measure the weather but you can’t control it. What you measure, and how, sets limits on how useful your measurements can be for making improvements. In this post I’ll explain what makes a metric useful, and show a way to maximize the useful impact of metrics in your organization.
How useful can you get?
First, a definition. The complexity of a metric is the (somewhat subjective) complexity of the system being measured. Lines of code produced per day is a simple metric. Corporate operating profit is a complex metric.
The actionability is the ease with which you can choose clear actions to improve the reading of a metric. Yes, “actionability” is a word. I found it on the interwebs.
Let’s draw this relationship into a chart:
When the metric is simple (lines of code), the actions to improve results for that metric are clear (learn to cut’n’paste, duh). When the metric is complex (operating profit), the actions to improve results are not so clear. As metrics get more complex, it quickly becomes harder to figure out what to do to improve.
In short, complex metrics suck if you want to use them to figure out what to do about the results.
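To make this concrete, here’s a minimal sketch of a simple metric in action, assuming a git repository; the repository path and date range are placeholders, not anything from a real project. Both the measurement and the way to game it are painfully obvious, which is exactly the point.

```python
import subprocess

def lines_added_today(repo_path="."):
    """Count lines added since midnight -- a deliberately simple metric."""
    out = subprocess.run(
        ["git", "log", "--since=midnight", "--numstat", "--pretty="],
        cwd=repo_path, capture_output=True, text=True, check=True,
    ).stdout
    added = 0
    for line in out.splitlines():
        parts = line.split("\t")
        if len(parts) == 3 and parts[0].isdigit():  # "-" marks binary files
            added += int(parts[0])
    return added

print(lines_added_today())  # clear action to improve this: cut'n'paste more
```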
There’s a similar relationship between complexity and how well a metric aligns with your high-level goals:
This chart shows us that simple metrics suck because they’re not connected to your high-level goals at all.
To be really useful, a metric should be both easy to act upon and aligned with the high level goals. Putting this in the form of an equation, we get:
usefulness = actionability × alignment
Multiplying the two charts above together gives this, with staggering mathematical precision:
Not very encouraging, huh? At best, metrics are barely above useless. But it’s really true. There is no single metric which would be directly connected to your bottom line and at the same time be really easy to convert into simple steps to improve the results. There is no silver bullet.
This chart gives us another useful piece of information. The most useful metrics are the ones in the middle of the complexity scale. The useful ones leave room for an inductive leap or two in both directions, so everyone gets to actually use their brain. That can’t be bad.
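If you want to play with the shape of that product yourself, here’s a minimal sketch. The two linear curves are made-up stand-ins for the hand-drawn charts above, not measured data; the only claim that matters is that their product peaks in the middle.

```python
# Toy model of usefulness = actionability * alignment on a 0..1 complexity scale.
# The linear shapes are assumptions standing in for the charts above.
complexities = [i / 100 for i in range(101)]

def actionability(c):
    return 1.0 - c  # simple metrics are easy to act on

def alignment(c):
    return c        # complex metrics track the high-level goals

usefulness = [actionability(c) * alignment(c) for c in complexities]
best = max(range(len(complexities)), key=lambda i: usefulness[i])
print(f"peak usefulness {usefulness[best]:.2f} at complexity {complexities[best]:.2f}")
# -> peak usefulness 0.25 at complexity 0.50: at best, barely above useless
```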
This may look like science, but in reality it’s only my opinion. Multiplying random functions together may make you twitch, and rightfully so. There’s no body of research that I’m drawing from here, just my personal experience.
Using metrics in your R&D
My recommendation is something called metric of the month. In metric of the month, you choose a problem area for the next month. Then you try to find the best metric connected to that problem. Run with the metric for one month, try to improve the reading as much as you can, and then throw the metric away. Start over with a new metric for the next month.
This way you can maximize the excitement of the fast initial progress that comes with a new metric. Our compilation warning chart is no longer useful to us, for example. It has lost its mojo. So it makes sense to change focus as soon as that happens, and pick the low-hanging fruit from a new area. It keeps things improving, and helps keep things interesting.
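For what it’s worth, the machinery behind something like our compilation warning chart can be a few lines of script. Here’s a minimal sketch, assuming gcc/clang-style “warning:” lines in a captured build log; the file name is a placeholder.

```python
import sys

def count_warnings(build_log):
    """Count gcc/clang-style warning lines in a captured build log."""
    with open(build_log) as f:
        return sum(1 for line in f if ": warning:" in line)

if __name__ == "__main__":
    # e.g. make 2>&1 | tee build.log && python count_warnings.py build.log
    print(count_warnings(sys.argv[1]))
```

Plot that number once a day, watch it drop for a month, then retire it and move on.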
Whatever you do, don’t tell developers to improve on a metric like the corporate operating profit, EBIT, the stock price, or anything like that. You could just as well tell them to control the weather.