Acceptance Criteria for Release Management Maturity Model

I was asked a good question by a colleague a couple of days ago.

On p419 of the book that Jez and I wrote, we show and describe a “Configuration and Release Management Maturity Model”. My colleague asked: “What are sensible acceptance criteria associated with this model?”.

I am not sure that I can say anything too definitive, but I think that looking for acceptance criteria is a good way to think about it. So here are a few thoughts about some measurable attributes of a project that may help to steer it in the right direction:

The bullet points that follow the evaluation matrix in the book give some steer towards more measurable things.

In general, Jez and I recommend that you focus on the most painful problems first. So, depending on where the biggest hurdles are, you can use cycle-time, defect-count, velocity and down-time as potential sources of measurement. These measures should enable you to set sensible incremental targets for improvement. Over time you can ratchet up your expectations and so continue to move the organisation on.
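To make that ratcheting concrete, here is a minimal sketch, assuming you keep a simple history of whichever measure matters most to you; the numbers and the next_target function are invented for illustration, not taken from the book:

```python
from statistics import mean

def next_target(recent_values, improvement=0.10, lower_is_better=True):
    """Set the next incremental target as a modest improvement
    over the recent baseline (the mean of the last few measurements)."""
    baseline = mean(recent_values)
    factor = (1 - improvement) if lower_is_better else (1 + improvement)
    return baseline * factor

# Hypothetical cycle-times (hours) for the last few releases; aim for 10% better.
recent_cycle_times = [26.0, 22.5, 24.0, 21.0]
print(f"Next cycle-time target: {next_target(recent_cycle_times):.1f} hours")
```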

For the highest levels in the matrix I would expect values along the lines of:

Cycle-time: World-class is measured in minutes. For a big system as complex as ours, I would expect a cycle-time of an hour or two, but anything under a day is pretty good based on industry averages. LMAX is currently at about 2 – 3 hours I think.
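For clarity about what I am measuring, here is a minimal sketch of the arithmetic, assuming cycle-time runs from the commit of a change to the point at which the deployment pipeline declares it releasable (the timestamps are invented):

```python
from datetime import datetime

# Hypothetical timestamps: when the change was committed and when the
# deployment pipeline declared that change ready for release.
committed = datetime(2012, 5, 14, 9, 12)
releasable = datetime(2012, 5, 14, 11, 47)

cycle_time = releasable - committed
print(f"Cycle time: {cycle_time.total_seconds() / 3600:.1f} hours")
```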

Defect-count: I am not sure that I can set too many expectations here; so much of this is project-specific. In general, though, I think that trends rather than numbers are important in the early days of establishing the process.

A lot of people talk about zero-defect processes and I am a big believer in that for many projects. However, for a large team and/or a project that covers a large surface area of features, I think that this is often impractical. This is not because you can’t achieve the quality (you can), but because it is often complex to differentiate between a bug and a new feature. This means that it can be valid to maintain a backlog of “bugs” that are less important than your backlog of features.

A significant area of interest, and so useful data to collect, is where defects are found. Keeping metrics of defect counts by deployment-pipeline stage, and aiming to find the VAST majority before you get anywhere close to production, is key.
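As a sketch of the kind of bookkeeping I mean, assuming each defect record notes the pipeline stage in which it was found (the stage names and records below are invented for illustration):

```python
from collections import Counter

# Hypothetical defect records, each noting the pipeline stage where it was found.
defects = [
    {"id": 1, "stage_found": "commit"},
    {"id": 2, "stage_found": "acceptance"},
    {"id": 3, "stage_found": "acceptance"},
    {"id": 4, "stage_found": "staging"},
    {"id": 5, "stage_found": "production"},
]

by_stage = Counter(d["stage_found"] for d in defects)
pre_production = sum(n for stage, n in by_stage.items() if stage != "production")

print("Defects by stage:", dict(by_stage))
print(f"Found before production: {pre_production / len(defects):.0%}")
```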

Regression is an important metric here too, I think. If your automated testing is good enough, you should expect no regression problems.

Velocity: The value of a Continuous Delivery process is to get new features delivered and in use. Tracking the actual rate of delivery of complete, working code is important, and again it is about trends; there is no absolute measure that makes sense. However, velocity is the only real measure of continuous improvement, a cornerstone of any agile process.
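Since it is the trend rather than the absolute number that matters, something as simple as a rolling average per iteration is enough to show the direction of travel. A minimal sketch, with the iteration figures invented for illustration:

```python
def rolling_average(values, window=3):
    """Rolling average of velocity, smoothing iteration-to-iteration noise."""
    return [
        sum(values[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(values))
    ]

# Hypothetical story points completed per iteration.
velocity = [18, 17, 20, 22, 21, 25, 27]
print(rolling_average(velocity))  # the trend, not the raw numbers, is what matters
```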

For most projects I would expect to see a relatively short-term initial dip in velocity for existing teams adopting CD as a process, as the organisation adapts and people learn how to cope with the new processes and techniques. Then I would expect to see velocity begin to build. In part this is a measure of the team’s maturity with respect to agile process. High-performing teams tend to see a steady increase in velocity over a long period of time; eventually it will begin to plateau a bit.

Actually it is not a consistent growth rate. Even very good teams tend to achieve improvements in stretches. Subjectively (I don’t have any data), teams seem to make significant progress, then go through a phase of stability, drop off a little as they get a bit lax about the process, then make another significant move forward as they apply more effort to improve.

Teams newer to the agile approach seem to take longer to adopt the “just fix it” mentality that is essential to continuous improvement, and so their curve is more like an ‘S’, with a flat-ish start and a strong growth phase, often followed by longer flat periods and more significant drop-offs.

Down-time: This is a fairly blunt measure of quality, but it is also, to some extent, a measure of deployment efficiency. You can use it to encourage and direct effort towards shorter, more efficient deployments, so you need to include the time that the system is unavailable during release in your downtime calculations.
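A small sketch of the bookkeeping I have in mind, counting release windows as downtime alongside any outages (all of the intervals below are invented):

```python
from datetime import datetime, timedelta

# Hypothetical unavailability intervals over a month: outages AND release windows.
unavailable = [
    (datetime(2012, 5, 3, 2, 0), datetime(2012, 5, 3, 2, 6)),        # release window
    (datetime(2012, 5, 17, 14, 10), datetime(2012, 5, 17, 14, 35)),  # outage
]

period = timedelta(days=31)
downtime = sum((end - start for start, end in unavailable), timedelta())
availability = 1 - downtime / period

print(f"Downtime: {downtime}, availability: {availability:.4%}")
```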

For world-class performance, where it makes sense, I would expect essentially no downtime caused by the Continuous Delivery process. It is perfectly possible to release even complex applications without stopping them, but of course it is harder 😉

Your business model may mean that you don’t have to go that far, and that some downtime to release is perfectly acceptable. Nevertheless, your goal should be to minimize the time it takes: the release, start-up and deployment tests should be quick, a few minutes at most; let’s say 5 minutes as a goal. However, if you need to perform significant data migration as part of your release process, you may incur some additional, unavoidable time penalty for that.

My ideal is that a release should take a few minutes in total: enough time that I am happy to log on, select a release candidate, push the “Release Now” button, and sit and watch it succeed. Even where you have significant data migration to perform, it is worth doing some work to make this efficient; otherwise, when you eventually get to the no-downtime release ideal, you will have a more complex problem moving state between the old version and the new. The smaller the window in which change can happen in the old version while you are trying to release the new, the better.

Clearly some of this is subjective, but using relative improvements in the measurements of your project can take you a long way towards being able to steer for the more global improvements that you want to make.
