The cloud revolution and the subsequent ‘As-a-Service’ economy have been wildly successful largely because of an attitude of continuous progression. You don’t find enterprises stagnating once they reach a comfortable status quo; instead, you find them continuously exploring ways to automate processes, access meaningful data, and advance self-learning capabilities in a secure, trusted environment.
At Qruize, we’ve been experimenting with continuous delivery, and we’ll take a deeper dive into related topics in a later post. For now, we want to help our readers understand continuous delivery and which metrics make sense when you’re deploying faster and more often.
Continuous Delivery is a set of practices and principles aimed at building, testing and releasing software faster and more frequently. This lets us do three things: deploy more often, get feedback sooner, and fix problems faster than before.
Here’s a rule of thumb: you’re probably doing continuous delivery right if your software is deployable throughout its lifecycle. Now that we’ve got the basics out of the way, we can move on to the juicier details.
Everyone relies on data and metrics to measure success. Logically, the software development process can’t be improved unless the change you make is quantifiable, so it’s no wonder that strong development teams are metrics-driven. The trick, however, is in identifying what to measure: the metric you monitor to determine success or failure will also have a significant effect on team behaviour.
For instance, if fewer lines of code are seen as a positive metric, developers will write many short lines of code. If success is measured by the number of defects fixed, testers will log bugs that can be fixed with minimal effort, and so on.
The bottom line is that there is no point in removing bottlenecks that aren’t actually constraining the delivery process. This is why it is critical to rely on a global metric to determine whether the delivery process as a whole has a problem, and for software delivery, that metric is cycle time.
At its most basic, cycle time is the time it takes to move a unit of work from the beginning to the end of a process. Dave Farley and Jez Humble, the authors of ‘Continuous Delivery’, define it as “the time between deciding that a feature needs to be implemented and having that feature released to users”.
How long would it take your organization to deploy a change, and can it do so on a repeatable, reliable basis? Cycle time is hard to measure because it spans many parts of the software delivery process, from analysis through development to release.
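To make the measurement itself concrete, here is a minimal sketch in Python. The record fields and timestamps are hypothetical; in practice the “decided” timestamp would come from your issue tracker and the “released” timestamp from your deployment tooling.

```python
from datetime import datetime

# Hypothetical feature record: when the team committed to the feature
# (e.g. the ticket was accepted) and when it was actually released to users.
feature = {
    "decided_at": datetime(2021, 3, 1, 9, 0),
    "released_at": datetime(2021, 3, 18, 16, 30),
}

# Cycle time, per the Farley/Humble definition: decision to release.
cycle_time = feature["released_at"] - feature["decided_at"]
print(f"Cycle time: {cycle_time.days} days, {cycle_time.seconds // 3600} hours")
```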
There are ways around this difficulty: a proper implementation of the deployment pipeline helps you calculate the portion of cycle time from check-in to release. It also reveals the lead time from check-in to each stage of the deployment process, exposing bottlenecks.
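As an illustration, the sketch below computes the lead time from check-in to each pipeline stage, assuming every stage records a completion timestamp. The stage names and the data are invented for the example and are not tied to any particular CI/CD tool.

```python
from datetime import datetime

# Hypothetical timestamps for one change moving through the deployment pipeline.
# In practice these would be pulled from your CI/CD server's API or logs.
stage_completed_at = {
    "check-in":        datetime(2021, 3, 15, 10, 0),
    "commit build":    datetime(2021, 3, 15, 10, 12),
    "acceptance test": datetime(2021, 3, 15, 11, 45),
    "staging deploy":  datetime(2021, 3, 16, 9, 30),
    "production":      datetime(2021, 3, 18, 16, 30),
}

check_in = stage_completed_at["check-in"]
for stage, finished in stage_completed_at.items():
    if stage == "check-in":
        continue
    lead_time = finished - check_in
    print(f"{stage:<16} lead time from check-in: {lead_time}")
# The stage with a disproportionately large jump in lead time is your bottleneck.
```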
Sometimes the bottlenecks playing havoc with your cycle time are external. Subordinating all other processes to an external constraint may be the only viable option, so even while the CD process itself runs along smoothly, deployments can still be held up.
One way around this is to record not just the total cycle time but also the number of deployments into each environment. This offers an efficiency metric that pinpoints where the issues are and records how your work affected them. Other diagnostics that warn of potential problems include the number of defects, velocity (the rate of delivery of tested, ready-to-use code), the number of builds per day, the number of build failures per day, and build duration, among others.
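A rough sketch of such a diagnostics roll-up is below. The deployment and build records are invented for illustration; which counters you actually track (deployments per environment, build failures per day, and so on) will depend on the tooling you already have.

```python
from collections import Counter

# Hypothetical deployment log: (environment, succeeded) tuples.
deployments = [
    ("test", True), ("test", True), ("staging", True),
    ("test", False), ("staging", True), ("production", True),
]

# Hypothetical daily build log: (date, passed) tuples.
builds = [
    ("2021-03-15", True), ("2021-03-15", False), ("2021-03-15", True),
    ("2021-03-16", True), ("2021-03-16", True), ("2021-03-16", False),
]

# Deployments per environment: shows where changes pile up before production.
deploys_per_env = Counter(env for env, _ in deployments)
print("Deployments per environment:", dict(deploys_per_env))

# Builds and build failures per day: early warning that the pipeline is degrading.
builds_per_day = Counter(day for day, _ in builds)
failures_per_day = Counter(day for day, passed in builds if not passed)
for day in sorted(builds_per_day):
    print(f"{day}: {builds_per_day[day]} builds, {failures_per_day[day]} failures")
```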
All in all, the case for continuous delivery comes down to a combination of visibility, risk mitigation, and the responsiveness (cycle time) of the development team.