To deploy code to production we need to reach a level of confidence in its quality. In an organization that isn’t accustomed to writing automated tests, this is typically accomplished via manual testing. With this approach, the time required to achieve the desired level of confidence grows over time. It often grows larger than we are comfortable with, and we are faced with a hard decision: accept that it will take longer, or lower the requisite confidence level.
In an Agile/DevOps transformation, we are looking for the following outcomes (among others):
- Increase the speed of delivery of value
- Increase the confidence in quality
One step in this direction is to replace the manual testing with automated testing (unit, integration, e2e). In doing so, the tendency is to measure progress in terms of code coverage and test count. While these are reasonable measures to ensure the activity is happening, they are not necessarily correlated with the outcomes we are after. At some level, code coverage stops having a meaningful impact on our confidence in quality. And, in more extreme scenarios, code coverage actually provides a false sense of confidence.
Measuring outcomes, on the other hand, puts us in a better position to ensure the automated testing is delivering the value we expected. It pushes us toward optimizing our activity to achieve the desired outcomes. In the best-case scenario this results in fewer tests and lower coverage, but much higher confidence in quality.
For quality, we might measure change failure rate: the frequency at which a given release requires a follow-up release to address bugs or issues.
For speed of delivery, we might measure lead time and deploy frequency.
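To make these measures concrete, here is a minimal sketch of how they could be computed from deployment records. The `Deploy` record and its fields are hypothetical, as is the assumption that lead time runs from commit to production deploy and that a release flagged as a fix counts as a failure of the release before it; your delivery tooling will shape the real definitions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical deployment record; field names are illustrative,
# not drawn from any particular CI/CD tool.
@dataclass
class Deploy:
    committed_at: datetime   # when the change was committed
    deployed_at: datetime    # when it reached production
    is_fix: bool             # True if this release exists to fix a prior release

def change_failure_rate(deploys):
    """Fraction of releases that required a follow-up fix release."""
    fixes = sum(1 for d in deploys if d.is_fix)
    return fixes / len(deploys)

def avg_lead_time(deploys):
    """Mean time from commit to production deploy."""
    total = sum((d.deployed_at - d.committed_at for d in deploys), timedelta())
    return total / len(deploys)

def deploy_frequency(deploys, window_days):
    """Deploys per day over the observation window."""
    return len(deploys) / window_days

deploys = [
    Deploy(datetime(2023, 5, 1, 9), datetime(2023, 5, 1, 17), False),
    Deploy(datetime(2023, 5, 2, 9), datetime(2023, 5, 2, 13), True),
    Deploy(datetime(2023, 5, 3, 9), datetime(2023, 5, 3, 11), False),
    Deploy(datetime(2023, 5, 4, 9), datetime(2023, 5, 4, 15), False),
]
print(change_failure_rate(deploys))       # 0.25
print(avg_lead_time(deploys))             # 5:00:00
print(deploy_frequency(deploys, 4))       # 1.0
```

The point of tracking these numbers over time, rather than test counts, is that they move only when the outcomes we care about actually improve.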
Another benefit is that these measures shed light on the fact that automated testing alone isn’t enough to achieve the outcomes. Given that, we can adjust our priorities and approach to better align with the outcomes.