Sometimes the best of plans go sideways. I was planning to cover a totally different session, but the speaker wasn't able to be there. Since AST is an organization that believes in adapting, and many of the people attending are frequent speakers, Paul Holland stepped in and offered to talk about Bad Metrics instead. The majority of people, myself included, were happy to listen.
A metric is a standard of measurement, and for most people, metrics are either helpful or the bane of their existence.
A key thought is that "when a measure becomes a target, it ceases to be a good measure" (often cited as Goodhart's Law). Targeting metrics invites gaming them or losing focus on what's important. An example Paul shared was a company with a target of each tester completing ten test cases per week. One of the testers had completed only two. On further questioning, it turned out that this tester was working on solving/fixing multiple bugs, and those took priority. Fortunately, the senior manager understood the needs and said: "No, that takes priority."
Some elements of Bad Metrics are:
- Measuring and/or comparing elements that are inconsistent in size or composition (apples vs. oranges)
- Creating competition between individuals and/or teams ("Always Be Closing")
- Being easy to "game" or circumvent from the desired intention (run more tests to change a percentage value)
- Containing misleading information or giving a false sense of completeness (100% code coverage, but how was it determined?)
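The "easy to game" point can be shown with a little arithmetic. As an illustrative sketch (the numbers are hypothetical, not from the talk), a team can raise its pass rate by adding trivial tests that can never fail, without fixing a single real failure:

```python
# Illustrative sketch (numbers are hypothetical, not from the talk):
# a "pass rate" metric can be gamed by adding trivial always-pass
# tests instead of fixing any real failures.

def pass_rate(passed: int, total: int) -> float:
    """Percentage of tests that passed."""
    return passed / total * 100

# 10 meaningful tests, 2 real failures.
before = pass_rate(8, 10)           # 80.0

# Add 10 trivial tests that cannot fail; the real failures remain.
after = pass_rate(8 + 10, 10 + 10)  # 90.0

print(f"before: {before:.1f}%, after: {after:.1f}%")
```

The metric improved; the product did not.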
Bad metrics can give a false sense of either security or concern, and decisions based on them can lead organizations astray.
Some bad metric examples are:
- Percentage of Automated test cases
- Pass Rate
- Number of tests
- Number of bugs
So what can we do? One option is to track expected effort rather than test cases; tracking the time something takes is at least relevant to income. I'm fond of the idea of defining tasks and listing them. Sticky notes are helpful, and perhaps make a personal Kanban board with sections for work to be done, work in progress, completed, and canceled. This is an example of an Information Radiator: it gives anyone who sees it a clearer sense of what is getting done and what isn't, and it is open to tweaking and refinement. Reports can help if they are the preferred method of distribution because people are remotely located, but if people share the same space, Information Radiators are likely more effective (always in sight, always on our minds).
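The listing-over-counting idea can be sketched in code. This is a minimal, hypothetical example (the section names and tasks are mine, not from the talk) of a personal Kanban board rendered as a plain-text information radiator:

```python
# Minimal sketch of a personal Kanban board as an information radiator.
# Tasks are listed by name rather than counted; the sections and task
# names below are hypothetical examples.

board = {
    "To Do":       ["Investigate login timeout", "Review test charter"],
    "In Progress": ["Pair on payment bug"],
    "Completed":   ["Exploratory session: search filters"],
    "Canceled":    ["Legacy report cleanup"],
}

def radiate(board: dict[str, list[str]]) -> str:
    """Render the board as text anyone walking by could glance at."""
    lines = []
    for section, tasks in board.items():
        lines.append(f"== {section} ==")
        lines.extend(f"  - {task}" for task in tasks)
    return "\n".join(lines)

print(radiate(board))
```

Nothing here is counted or totaled; the value is in seeing the actual items and where they stand.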
If you want to make just one tweak: stop counting, and list out items instead.