How long to retain build output?

Martin Fowler recently posted on the importance of reproducible builds. This is a vital principle for any continuous integration process. The ability to recreate any given version of your system is essential, but there are several routes to it if you follow a process of Continuous Delivery (CD).

Depending on the nature of your application, reproducibility will generally involve significantly more than source code alone. So when you step back in time to the precise change-set that constituted a particular release of your software, the source code, while significant, is just a fragment of what you need to consider.

Martin outlines some of the important benefits of being able to accurately, even precisely, reproduce any given release. When it comes to CD there is another: the ability to reproduce a build pushes you in the direction of deployment flexibility. By the time a given release candidate arrives in production it will have been deployed many times in other environments, and for CD to make sense these preceding deployments must be as close as possible to the deployment into production.

In order to achieve these benefits we must be able to recover more than just the build; we must be able to reconstitute the environment in which that version of the code ran. If I want to run a version of my application from a few months ago, I will almost certainly have changed the data schemas that underlie the storage that I am using. The configuration of my application, application server or messaging system may well have changed too.

In that time I have probably also upgraded my operating system, the version of my web server or the version of Java that we are running. If we genuinely need to recreate the system that we were running a few months ago, all of these attributes may be relevant.

Jez and I describe approaches and mechanisms to achieve this in our book. An essential attribute of a reproducible build is a single identifier for a release that ties together all of the things that represent it: the code, the configuration, 3rd-party dependency versions, even the underlying operating system.

There are many routes to this, but fundamentally they all depend on all of these pieces of the system being held in some form of versioned storage and related together by a single key. In Continuous Delivery it makes an enormous amount of sense to use a build number to relate all of these things together.
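
To make the idea concrete, here is a minimal sketch, in Python, of a release manifest keyed by the build number. The field names, versions and file name are invented for illustration; they are not a prescribed format.

```python
# Sketch: one build number keys a manifest recording every versioned piece
# of the release. All names and version strings below are illustrative.
import json

def write_release_manifest(build_number, manifest_path="release-manifest.json"):
    manifest = {
        "build_number": build_number,            # the single key for this release candidate
        "source_revision": "a1b2c3d",            # VCS revision the candidate was built from
        "configuration_version": "42",           # version of application/environment config
        "dependencies": {                        # 3rd-party dependency versions
            "java": "1.6.0_31",
            "web_server": "apache-2.2.22",
            "database_schema": "107",
        },
        "os_image": "centos-5.8-base",           # underlying operating system image
    }
    with open(manifest_path, "w") as f:
        json.dump(manifest, f, indent=2)
    return manifest
```

Given such a manifest, looking up the build number is enough to be definitive about every other version that made up the release.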

The important part of this, in the context of reproducible builds, is that binary vs source is less the issue than the scope of the reproduction that you need. If you are building an application that runs on an end-user's system, perhaps within a variety of versions of supporting operating environments, then just recreating the output of your commit build may be enough. However, if you are building a large-scale system, composed of many moving parts, then it is likely that the versions of third-party components will be important to its operation. In this instance you must be able to reproduce the whole works if you want to validate a bug, so rebuilding from source is not enough. You may need to be able to rebuild from source, but you will also need to recover the versions of the web server, Java, database, schema, configuration and so on.

Unless your system is simple enough to store everything in source-code control, you will need some alternative versioned storage. In our book we describe this as the artifact repository. Depending on the complexity of your system this may be a single store or a distributed collection of stores linked together by the relationships between the keys that represent each versioned artifact. Of course the release candidate's id sits at the root of these relationships, so that for any given release candidate we can be definitive about the version of any other dependency.

Whatever the mechanism, if you want genuinely reproducible builds it is vital that the relationships between the important components of your system are stored somewhere, and that somewhere should be alongside the source code. So your committed code should include some kind of map of ANY system components that your software depends upon. This map is then used by your automated deployment tools to completely reproduce the state of the operating environment for that particular build, perhaps by retrieving virtual machine images from some versioned storage, or perhaps by running scripts to rebuild those systems to the appropriate starting state.
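
As a rough sketch of what a deployment tool might do with such a committed map: the file name, keys, and the `fetch-image` and `rebuild.sh` commands below are hypothetical stand-ins, not any particular tool.

```python
# Sketch: consume a committed "map" of system components and reproduce the
# operating environment, either from versioned VM images or rebuild scripts.
import json
import subprocess

def provision_environment(map_path="system-map.json"):
    with open(map_path) as f:
        system_map = json.load(f)

    # Either pull a versioned virtual machine image from storage...
    if "vm_image" in system_map:
        subprocess.run(["fetch-image", system_map["vm_image"]],  # hypothetical tool
                       check=True)

    # ...or run scripts to rebuild each component to its recorded starting state.
    for component, version in system_map.get("components", {}).items():
        subprocess.run(["./rebuild.sh", component, version],     # hypothetical script
                       check=True)
```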

Because in CD we retain these, usually 3rd-party, binary dependencies, and must do so if we want to reproduce a given version of the system, in most cases we recreate versions from binaries of our own code as well as those dependencies, because it is quicker and more efficient. On my current project we have never, in more than 3 years, rebuilt a release candidate from source code. However, storing complete, deployable instances of the application can take a lot of storage, and while storage is cheap it isn't free.

So how long is it sensible to retain complete deployable instances of your system? In CD each instance is referred to as a “release candidate”, and each release candidate has a status associated with it indicating that candidate's progress through the deployment pipeline. The length of time that it makes sense to hold onto any given candidate depends on that status.

Candidates with a status of “committed” are only interesting for a relatively short period. At LMAX we purge committed release candidates that have not been acceptance tested, those that were skipped over because a newer candidate was available when the acceptance-test stage ran, and those that failed acceptance testing. In fact we dump any candidate that fails any stage in the deployment pipeline.

The decision of when to delete candidates that pass later stages is a bit more complex. We keep all release candidates that have made it into production. The combination of rules that I have described so far leaves us with candidates that were good enough to make it into production but weren't selected (we release at the end of each two-week iteration, so some good candidates may be skipped). We hold onto these good, but superseded, candidates for an arbitrary period of a month or two. This provides us with the ability to do things like binary-chop release candidates to see when a bug was introduced, or demo an old version of some function for comparison with a new one.
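
A simplified sketch of those retention rules is below. The status names and the two-month window are assumptions modelled on the description above, not the exact implementation we use.

```python
# Sketch: decide whether a release candidate can be purged, based on its
# pipeline status and age. Statuses and the retention window are illustrative.
from datetime import datetime, timedelta

RETENTION_FOR_SUPERSEDED = timedelta(days=60)    # "a month or two"

def should_purge(candidate, now=None):
    now = now or datetime.utcnow()
    status = candidate["status"]

    if status in ("failed", "skipped", "committed-only"):
        return True                               # dump anything that failed or was skipped
    if status == "released":
        return False                              # keep everything that reached production
    if status == "passed-acceptance":
        # good but superseded: keep for a while for binary-chopping and demos
        return now - candidate["created"] > RETENTION_FOR_SUPERSEDED
    return False
```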

We have implemented these policies as part of our artifact repository, so it largely looks after itself.
