In the blog post ‘Getting the most from metrics’, I suggested that the acronym METRICS could be used to show what metrics could, or probably should, be in order to ensure that you are getting maximum value from them. I have previously taken a look at Meaningful Metrics and Evolving Metrics. In this post I consider ‘Trusted Metrics’.
For a metric to have value, it needs to be trusted. In order to be trusted, some, if not all, of the following questions may need to be answered:
- what metric(s) is/are being produced?
- who is producing the metric(s)?
- why is/are the metric(s) being produced?
- how is/are the metric(s) being produced?
- when is/are the metric(s) being produced?
- where will the metric(s) be shared?
What metric(s) is/are being produced?
This may seem like an odd question, as you would expect a team to know whether one or more metrics are being produced based upon the work that they are undertaking and delivering. However, I have seen organisations produce metrics outside of the team and without their knowledge.
There have been two similar reasons for the production of metrics outside of the team. The first is to allow for roadmap planning. An organisation may want to understand the average velocity at which a team/department is working in order to plan subsequent deliveries. I would argue that this data is already likely to be collated by the team in order for them to plan each Sprint, and therefore the additional production of a metric outside of the team is likely to raise suspicions and create a divide, rather than increase trust, between the team and the organisation.
The second reason that metrics may be produced outside of the team is to monitor the team. Again, this activity will have the opposite effect and create a divide between the team and the organisation.
Each team should be aware of the metrics that are being produced about their activity. This will help them to understand how their behaviour and the way that they inspect and adapt their processes could affect the metrics that are being calculated.
Who is producing the metric(s)?
The production of metrics will often fall to the Scrum Master. Within a Scrum Team, they are likely to be the member with the capacity and knowledge to collate and manipulate the data, conduct the initial analysis and share their findings with the team at a metrics review or Sprint Retrospective. The team will also be aware that this activity is ongoing, with the information often being used to assist the team in understanding what they can achieve within a Sprint or as a basis for making process improvements.
As mentioned above, if metrics are produced outside of the team, it is possible that trust could be lost in those producing the metric (or the organisation) if the reason for the metrics production is not shared and understood.
Why is/are the metric(s) being produced?
In order to be trusted, there needs to be a reason why the metrics are being produced. Just as user stories should have value for the customer, metrics should have value for the person/group that they are being produced for. It is therefore essential that before any metric is created and shared, the person/team producing the metric speaks to their customer to understand their requirements.
Once a customer’s requirements are understood, it will be possible to create an initial metric. Just as the user stories that reach a definition of done will be inspected and reviewed by a Product Owner, metrics should also be reviewed by the requester. This will allow for them to be iterated upon, and incrementally improved.
Producing a number of metrics and then giving them to an individual/team, saying ‘I think that these may be of use to you’, is likely to result in the metrics being ignored and rarely used.
How is/are the metric(s) being produced?
Many Agile delivery metrics are a by-product of the data captured from the use of electronic tools for recording the content and progress of user stories. The times and dates of status changes are logged within the tools. This data can be used by functionality within the tools to create automated charts, or it can be extracted and manipulated manually in order to create bespoke charts.
In order to ensure the accuracy of the metrics and for reliable conclusions to be drawn, it is essential that the tools are updated in real time (or as close to it as possible). Delays in updating the movement of user stories could lead to issues being identified in the wrong part of the process. This could then lead to changes being implemented that do not have the anticipated benefits.
Those viewing the metric should have an understanding of the source of the data and the method used to produce it. This allows for reliability considerations to be made when drawing conclusions and identifying possible solutions. It also allows for the team to understand how their behaviour can influence changes in the metrics. This understanding can help to prevent the gaming of metrics, or highlight where metrics are being gamed.
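As an illustration of how the status-change data mentioned above can become a metric, the sketch below computes per-story cycle time from exported status-change events. The event layout, story identifiers and status names are purely hypothetical assumptions for the example, not the export format of any particular tool:

```python
from datetime import datetime

# Hypothetical export of status-change events from a tracking tool:
# (story_id, new_status, timestamp). Values are illustrative only.
events = [
    ("STORY-1", "In Progress", datetime(2024, 3, 1, 9, 0)),
    ("STORY-1", "Done",        datetime(2024, 3, 4, 16, 30)),
    ("STORY-2", "In Progress", datetime(2024, 3, 2, 10, 0)),
    ("STORY-2", "Done",        datetime(2024, 3, 3, 12, 0)),
]

def cycle_times(events):
    """Cycle time in days per story: first 'In Progress' to first 'Done'."""
    started, finished = {}, {}
    for story, status, ts in events:
        if status == "In Progress":
            started.setdefault(story, ts)
        elif status == "Done":
            finished.setdefault(story, ts)
    return {
        story: (finished[story] - started[story]).total_seconds() / 86400
        for story in finished if story in started
    }

print(cycle_times(events))
```

This also makes concrete why real-time updates matter: if a story’s move to ‘Done’ is logged a day late, its computed cycle time is a day too long, and conclusions drawn from the chart will point at the wrong part of the process.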
When is/are the metric(s) being produced?
Different metrics will be produced at different intervals. For example, a burn-down chart may be updated each and every time a task/user story is moved to ‘done’. If this chart is produced through an electronic tool or automated query, it may be updated automatically each time that metric is selected/opened.
However, a metric such as average velocity requires a Sprint (or defined period) to be completed to allow the calculation to be undertaken (total story points divided by completed Sprints/periods). Other metrics such as phased cycle time need sufficient data items to allow conclusions to be drawn. This may mean that months of data are required in order to be able to draw inferences and make recommendations.
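The average velocity calculation described above fits in a few lines; the story-point figures here are purely illustrative:

```python
# Story points completed in each finished Sprint (illustrative numbers).
completed_points = [21, 18, 25, 20]

def average_velocity(points_per_sprint):
    """Average velocity = total completed story points / completed Sprints."""
    if not points_per_sprint:
        raise ValueError("At least one completed Sprint is required")
    return sum(points_per_sprint) / len(points_per_sprint)

print(average_velocity(completed_points))  # 21.0
```

The guard against an empty list reflects the point made above: the metric simply cannot be calculated until at least one Sprint/period has been completed.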
The different time scales for metric production can mean that this is an ongoing activity if production is manual. Anything that can be done to automate the production of metrics will assist the person that needs to produce/collate the metrics, as well as allowing for the most up to date information to be shared and reviewed to highlight patterns or trends.
Where will the metric(s) be shared?
As noted above, the vast majority of metrics should be used by the development team in order to understand local progress. Metrics can help to identify if a team are on track within a Sprint/Release, how internal/external factors affect the delivery and if process improvement experiments are working or need to be revisited.
The use of electronic tools will often mean the creation of metric dashboards. Such dashboards allow for the metrics to be placed in a single location, but mean that team members have to proactively look for the metrics. If metrics are to be shared via a dashboard, it is better practice to have them displayed on a TV screen in the team’s area, making them visible and more likely to be viewed and discussed.
Even if metrics are displayed digitally, there is still value in printing them on paper and putting them up on a wall for the team, and others, to see. Physical charts allow for the team to write on them if/when something occurs. For example, a flatline on a burn-down/up could be the result of environment issues, and the amount of downtime could be noted for future reference. Similarly, decisions that mean an increase/decrease in scope could be noted.
The use of metric reviews, either as a specific meeting or as part of a Sprint Retrospective, can be extremely beneficial to the inspect and adapt process for process improvements. An example of review metrics and making process improvements can be seen in my blog post ‘Together we are, The Three Amigos’.
Some metrics may find their way into weekly/monthly management reports. This is OK. There should be transparency from the team with regards to the progress that is being made, but there also needs to be a recognition from the managers within the organisation that the metrics are first and foremost for use by the team and should not be used out of context.
Understanding more about the way in which metrics are produced is likely to result in an increase in their value and use in making process improvements.
I challenge you to review your metrics and identify if you and your team ‘trust’ them.