Over the course of my career I have worked in both the public and private sectors, in Agile and non-Agile roles. One of the common themes across those roles has been data, and the analysis of that data. I considered this in more detail in a previous blog post looking back at my Agile journey.
I have found that, whatever my role, I have followed a similar process to turn raw data into actionable information and beyond:
- Gather data – I have gathered (raw) data from one or more sources to help identify and show the current situation
- Manipulate data – the raw data will often need to be manipulated to make it usable, e.g. identifying and filling gaps, totalling, averaging and pivoting the data (see the sketch after this list)
- Turn the data into information – the manipulated data can then be used to create tables and charts, turning the data into information
- Undertake analysis – once you have information, this can be analysed to identify any patterns or trends
- Draw conclusions – the identification of patterns, trends, peaks or troughs will help to identify where there may be best practice that can be shared, or problems that need to be addressed
- Make recommendations – based on the conclusions, recommendations can be made for the adoption of better/best practice, or for one or more changes to be implemented to try to improve the situation
- Implement change – once agreed, the change(s) can be implemented and monitored
- Gather data – following the implementation of a change, more data will need to be gathered, manipulated and analysed to identify the effect that the change had – were the expected results seen?
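To make the middle steps concrete, here is a minimal sketch in Python using pandas. The data, column names and figures are all invented for the example; in practice the raw data would come from a tool export such as a CSV:

```python
import pandas as pd

# Gather: raw data, hard-coded here so the example is self-contained.
# In practice this might be read from a CSV exported from a tracker.
raw = pd.DataFrame({
    "sprint":      ["S1", "S1", "S2", "S2", "S3", "S3"],
    "team":        ["A",  "B",  "A",  "B",  "A",  "B"],
    "points_done": [21.0, 18.0, None, 20.0, 25.0, 23.0],  # note the gap
})

# Manipulate: fill the gap, then pivot the data by team and total it.
raw["points_done"] = raw["points_done"].fillna(raw["points_done"].mean())
info = raw.pivot_table(index="sprint", columns="team",
                       values="points_done", aggfunc="sum")
info["total"] = info.sum(axis=1)

# Turn into information: a table ready for charting and analysis.
print(info)
```

From here the table (or a chart drawn from it) can be analysed for patterns and trends, conclusions drawn, and so on around the cycle.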
This is a cyclical process that allows for continual inspection and adaptation.
The above process can be used for the gathering, collation and publication of metrics.
There are lots of metrics that can be created and used to show the progress of a team (or lack of it) within a Sprint/Release, to help estimate a delivery/completion date, or to identify areas where improvements could be made; a toy forecast along these lines is sketched below.
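By way of illustration, here is a minimal velocity-based forecast sketch in Python. All of the figures are invented, and a real forecast would need far more care (scope change, variability in velocity and so on):

```python
import math

# Invented figures for the sketch: points completed in recent sprints,
# and the points still remaining in the release backlog.
recent_velocities = [21, 18, 25, 23]
points_remaining = 120

# Project sprints to completion from the average recent velocity.
avg_velocity = sum(recent_velocities) / len(recent_velocities)
sprints_left = math.ceil(points_remaining / avg_velocity)

print(f"Average velocity: {avg_velocity:.1f} points per sprint")
print(f"Estimated sprints to completion: {sprints_left}")
```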
However, I believe that there should be more to metrics than just numbers in a table or lines on a graph. I have used the acronym METRICS to show what metrics could, or more probably should, be in order to ensure that you are getting maximum value from them:

M – Meaningful
E – Evolving
T – Trusted
R – Repeatable
I – Imaginative
C – Concise
S – Starting Point
Meaningful – Any metric you produce should have meaning. A change in a metric that has no meaning can lead to misinterpretation and give a false indication of what has taken place, what is currently happening and/or where things are heading. Meaningful metrics have context, are relevant and are actionable.
Evolving – Metrics will evolve naturally over time (figures/percentages will go up and down), but it may also be necessary to evolve (change) the metrics you use over time. This could be undertaken by changing the variables being analysed and/or combining metrics. It may also be necessary to create new metrics in order to baseline and/or show the change seen by a new experiment/initiative.
Trusted – In order for a metric to have value, it needs to be trusted. Those viewing the metric may need to understand the source of the data and the method being used to produce it. They may also need to understand the reliability and quality of the data being used.
Repeatable – It is very rare that a ‘one-off’ metric is required. In the majority of cases metrics will be produced on a regular basis, allowing them to be compared over time and enabling the identification and discussion of patterns, trends and changes from the norm (a small sketch follows below).
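As a small illustration, assuming a cycle-time metric recorded once per sprint (the figures below are invented), a rolling mean can act as the ‘norm’ that each new reading is compared against:

```python
import pandas as pd

# Invented cycle times (days), one reading per sprint.
cycle_time = pd.Series([4.1, 3.8, 4.0, 4.2, 4.1, 4.0, 6.5],
                       index=pd.RangeIndex(1, 8, name="sprint"))

# The rolling mean of earlier sprints acts as the norm to compare against.
norm = cycle_time.rolling(window=3).mean().iloc[-2]  # mean of sprints 4-6
latest = cycle_time.iloc[-1]

# A 20% threshold is arbitrary, purely for illustration.
if latest > norm * 1.2:
    print(f"Latest cycle time of {latest} days is well above the norm of {norm:.1f}")
```

The point is not this particular calculation, but that a metric produced the same way each time makes such comparisons possible at all.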
Imaginative – It can be very easy to produce the same thing week after week, Sprint after Sprint, Release after Release. Getting into a rut may mean that little or no notice is taken of the metrics being produced. Sometimes it may be necessary to change the type of graph/table being produced and/or the location where it is shared/displayed.
Concise – No-one wants to read pages and pages of information in order to understand the current situation. Any metrics produced should be simple to understand and tell a story without the need for a detailed explanation. If necessary, include details of where more information can be found or a person to contact.
Starting Point – Any metric produced should be the starting point for further discussion. The discussions may result in the need to look more closely at the underlying data, to consider more than one metric and/or to reassess the period of time being covered. Conclusions drawn from the metrics may allow recommendations to be made and changes to be implemented.
Over the coming weeks I will be sharing blog posts on each of these areas, providing additional detail and background on how I think following the METRICS acronym will improve your metrics.