Stay away from Vanity metrics


Pinclipart image on metrics


Continuous improvement is a term we hear often when referring to Agile teams and Agile product development. It is in the DNA of the Agile movement and just as present in Lean, through relentless improvement, and in Kanban, with kaizen. A commitment to getting better and better.


Commonly we use metrics to know how well we are performing. We define a target and we check against that target. It is that simple. Or it should be. I believe problems arise because many times we do not understand what we are measuring, or we have no strong grounds for the assumptions behind what the target should be. We can sometimes drown in a multitude of metrics, believing they will reveal a "complete picture" of how fast we are shipping our product or how productive people are. On top of being a complete waste of time, they usually end up being cumbersome to collect, to render, to understand and to report. Big companies might even have a whole department responsible for trying to make sense of these numbers, and yet they still have lots of projects going over budget and late, unhappy customers and growing employee churn.


So... what seems to be the problem in this scenario? Vanity metrics, I would say.


But what is a vanity metric?


This term was coined in the world of the Lean Startup mindset, and I like to think that the name gives it away. A vanity metric makes your business look good (somehow) to the world, but it is not actually helping you. It does not let you understand your business performance in a way that informs future strategy. In other words, vanity metrics are, for the most part, useless.


Think YouTube videos. If a content producer has 100 thousand subscribers, she definitely looks powerful. But look again: how many views do her videos actually get? Looking even further, how many of those views actually translate into money, via people watching the ads or clicking to buy her product?


Vanity metrics are often the easy ones to collect, such as the total number of unique users of a product, but they can also be a bit more complicated to obtain, such as time spent on a page (if your product or service is online). Sometimes they are simply data that happens to be available, and because of that availability one might collect it first and try to give it a meaning afterwards.


Think about the users of a product. A lot of people subscribe to test new features and products, or try free or basic versions, never to come back or convert to premium. So, the number of customers or users per se might not be the number you are after. How many of those are actually paying customers might be a better one. How many of them are leaving positive reviews, so that others can build trust and become clients themselves?


Agile teams are not immune


The examples above were all related to products and services, which are at the heart of any business, Agile or not in its practices. But we should also measure success in team practices and even team performance. Anything we want to improve, we should be measuring; otherwise there is no baseline for comparison and we are left with guesswork.


Delving into Agile practices, we can surely find examples of measuring the wrong thing. My favorite is velocity, a measure of work done in Agile software development. The assumption is that by estimating the work to be done in story points (complexity), one can follow the evolution of the metric to understand the distance a team still has to travel to reach the sprint goal. If the team says they can complete 60 story points this sprint and they have completed 40 story points so far at mid-sprint, we are left with 20 story points. What does that say? Are we gonna make it?


I don't believe it says much. There are many articles that question the validity of this metric as a gauge of planning effectiveness or team performance. I personally find it misleading because:

- we cannot guarantee that the remaining 20 points will be completed by the end of the sprint. Complexity does not equal duration.

- the thing we are measuring with velocity is how fast or slow we are burning the points that represent estimates. In other words, we are measuring the accuracy of our guess, not the actual progress of work, even though they can correlate.

- if a team is pressed to show more work, they can inflate those numbers by estimating a much higher complexity for their work items. So, come next sprint, they will be delivering 80 story points even if the work delivered is smaller than in past sprints. But hey, we have the appearance of speed. We are faster!


And yet we are not delivering fast enough to hit some market milestones.
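Here is a minimal sketch of that inflation effect. The sprint names, point totals and item counts are entirely made up for illustration; the only thing it shows is how velocity can climb while the amount of work actually delivered does not:

```python
# Hypothetical sprint data: "points_done" is the sum of story-point estimates
# the team assigned to finished items, "items_delivered" is what actually
# shipped. All numbers are invented for this example.
sprints = [
    {"name": "Sprint 1", "points_done": 60, "items_delivered": 12},
    {"name": "Sprint 2", "points_done": 60, "items_delivered": 11},
    # Under pressure, estimates get inflated and velocity "improves"...
    {"name": "Sprint 3", "points_done": 80, "items_delivered": 10},
]

for s in sprints:
    print(f'{s["name"]}: velocity={s["points_done"]} points, '
          f'throughput={s["items_delivered"]} items')

# Velocity rises from 60 to 80 while the number of items actually delivered
# drifts down: the appearance of speed, not speed.
```

Because velocity is computed from the estimates themselves, re-estimating the same work at a higher complexity is enough to make the line go up.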


Another metric that can be misleading in the software development world is the number of bugs. Either the development team or a QA team captures non-conforming items and logs them as bugs, items to be fixed. That usually happens while developing the product. Teams can read that metric as the quality of their product improving when they list fewer bugs per sprint. Or they could just be filing fewer bugs through the official channel. Another team can decide they are improving when they fix more bugs per sprint instead of letting them wait for a long time in a prioritized backlog. Or are they splitting bugs into smaller issues that are solved separately, so that we actually cannot tell much about our quality?


We can measure anything, and measuring is an important step in assessing improvement and progress. But when metrics are used for micromanagement or for imposing any sort of pressure on people, whether positive or negative, people will game the system and either invent good-looking metrics or tweak the numbers of existing ones.


Metrics have to make sense first. We must start with a goal.


Look instead for Actionable metrics


Once again I find that the name helps us understand. Actionable metrics are those that help us take action. Is the product successful? How are we faring on quality? Are we improving as a team? Those are the aspirational questions that we ultimately want answered, but without a concrete relationship to our product, our teams and our methods of development, we are basically blind.


If we are interested in the quality of a product, we could start from the end result and work backwards. As a very simple example, suppose we define quality as something perceived by the clients. Therefore, defects are defined by mismatches between what clients expect and what is being delivered. One can then consider that escaped defects tell a more compelling story than the bugs found during the development cycle. Those bugs in development should definitely be fixed and should probably not even exist in the first place. But ultimately, what tells us about the quality of our product is how the clients are reacting. We would want fewer or even zero escaped defects. One could argue that, from the eyes of the customer, zero defects just means they are either not seeing them or do not care enough to file a defect ticket. The answer would be that this is not a problem. That zero tells us that, as per our definition, we achieved the minimum quality to satisfy the client. We can then decide to concentrate our efforts on delight (and hopefully we have metrics for that as well) or something else.
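To make that concrete, here is a minimal sketch with hypothetical counts (the numbers and the variable names are invented). It only illustrates the idea that the share of defects that escape to clients, not the raw count logged during development, is the signal we care about:

```python
# Hypothetical defect counts for one release cycle. The split between defects
# caught before release and defects reported by clients ("escaped") is the
# part that speaks to quality as the client perceives it.
found_in_development = 42   # caught by the team or QA before release
escaped_to_clients = 3      # reported by clients after release

total = found_in_development + escaped_to_clients
escaped_rate = escaped_to_clients / total if total else 0.0

print(f"Escaped defects: {escaped_to_clients} ({escaped_rate:.1%} of all defects)")
# Driving escaped defects toward zero is actionable: it points at release
# practices and client expectations, not at how diligently bugs were logged.
```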


This was just one possible example and outcome for quality.


If we want to go back to the example of our team and try to understand our productivity, we could look at some factual element of our delivery, instead of the guesswork of our estimates. We could look at our throughput instead of our velocity. Now, just as with quality, we need to define what the metric is and what the granularity is of the element we consider "done" or "delivered". Are we talking user stories done? Components delivered? Tickets closed in the system? Whatever the definition, we also need to understand what the objective of that metric is, and observe and manage it accordingly. That means that the definition behind the granularity of the work done, to be a valid throughput, needs to imply an impact on a business outcome, just like our quality definition did. It is not a solitary exercise of a team detached from the reality of their business. The team can break down work in any way they want. The measured items, however, should come from a breakdown that abstracts away the team and clearly demonstrates when business outcomes are achieved. That is where we set our eyes for throughput.
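As a minimal sketch, under the assumption that items are defined at an outcome-level granularity as described above, throughput can be as simple as counting finished items per sprint. The work items, sprint labels and counts below are all hypothetical:

```python
from collections import Counter

# Hypothetical completion log: each entry is a work item that reached the
# agreed "done" definition, tagged with the sprint in which it finished.
completed = [
    ("checkout: guest payment flow", "Sprint 1"),
    ("checkout: saved cards",        "Sprint 1"),
    ("search: typo tolerance",       "Sprint 2"),
    ("search: filters on price",     "Sprint 2"),
    ("search: result ranking",       "Sprint 2"),
]

# Throughput = number of finished items per sprint.
throughput = Counter(sprint for _, sprint in completed)
for sprint, count in sorted(throughput.items()):
    print(f"{sprint}: {count} items delivered")

# Because it counts finished items rather than estimate points, this number
# cannot be inflated by re-estimating the same work at a higher complexity.
```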


That is also just one possible metric for trying to understand a team's performance.