A friend of mine has recently started work at a new company. She asked me if I’d answer a few questions from their dev team, so here is the second…
Q: “Currently at MarketInvoice we use short-lived feature branches that are merged to master post-code review. How would you recommend we shift towards trunk based development, and are there any other practices you would recommend to reduce/eliminate the bottleneck of code review?”
I perceive three barriers to the adoption of trunk-based development in the teams that I work with…
- The need for Code Review.
- A cultural assumption that you only commit (to master/trunk) when work is complete.
- A lack of confidence in automated tests.
I think that code-review is a very useful practice. We get good feedback on the quality of our work, we may get some new ideas that we hadn’t thought of, we are forced to justify our thinking to someone else, and, if we work in a regulated industry, we get to say that our code was checked by someone else.
All of these are good things, but we can get them all, and more, if we adopt pair-programming.
Code review is great, but it happens when we think that we have finished. That is a bit too late to find out that we could have done better. From a feedback perspective, it would be much more effective if we could find out that an idea, or approach, could be improved before, or immediately after, we have written the code rather than after we thought we had finished. Pair programming means that we get that feedback close to the point when it is most valuable.
Pair programming is a code review, and so satisfies the regulatory need for our changes to be checked by someone else, at least it has in every regulatory regime that I have seen. Pair programming is also much more than just a continual review. One way to look at it is that we get the code review as a side-benefit, for free.
This means that the review does not impose any delay on the development. The code is being reviewed as it is written and so the review is both more thorough and adds no additional time to the development process.
So, my first answer is… Pair Programming!
Don’t wait to commit
This is a mind-set thing, and makes perfect sense. It seems very logical to assume that the ideal time to commit our changes is when we think that they are ready for use – the feature that we are working on is complete.
I think it is a bit more complicated than that, though. I describe this in more detail in my post on “Continuous Integration and Feature Branching”.
If we want the benefits of Continuous Integration we need to commit more frequently than “when we think that we are finished”. The only definitive point at which we can evaluate our changes is when we evaluate them against the “production version” of our code, which is represented by trunk (or master). CI on a branch is not CI! It is not integration, at least not with the version of the code that will be deployed into production, and it is not continuous, because you only integrate with that version of the code when the feature is “finished”.
So to practice Continuous Integration, which is a pre-requisite for Continuous Delivery, we have to commit more frequently to the copy of code destined for production and so we must change our working practices.
This is a big shift for some people. It is probably one of the most profound shifts of mind-set for a developer in the adoption of Continuous Delivery. “What, you want me to commit changes before I am finished?” – Yes!
Continuous Delivery is defined by working in a way that keeps your software in a releasable state after every commit. That doesn’t mean that all of the code needs to be useful. It just means that it “works” and doesn’t break anything else.
In the language of Continuous Delivery we aim to “separate deployment from release”. We can deploy small, simple, safe changes into production and only “release” a feature when all of those small changes add up to something useful.
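A common mechanism for separating deployment from release is a feature flag. Here is a minimal sketch of the idea; the flag store and the names (`new_invoice_view`, `render_invoice`) are illustrative assumptions, not anything from MarketInvoice’s codebase:

```python
# Minimal feature-flag sketch: half-finished code can be deployed to
# production but stays dark until the flag is switched on.
# In real systems the flag store is usually config or a service,
# not an in-memory dict.

FLAGS = {"new_invoice_view": False}  # deployed, but not yet released

def is_enabled(flag: str) -> bool:
    return FLAGS.get(flag, False)

def render_invoice(invoice_id: str) -> str:
    if is_enabled("new_invoice_view"):
        return f"new-view:{invoice_id}"   # the in-progress feature
    return f"old-view:{invoice_id}"       # current, released behaviour

# "Releasing" the feature is now a configuration change,
# not a deployment:
# FLAGS["new_invoice_view"] = True
```

With this in place, every commit to trunk can go to production safely; the release decision becomes a separate, reversible switch.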
This leads us into the territory of a much more evolutionary approach to design. Instead of thinking about everything up front, even for a small feature, we will work in a fine-grained, iterative way that allows us to try ideas and discard them if necessary on the route towards something that works better.
This has lots of good side-effects. Not least it means that I will design my code to allow me to change my mind and get things wrong without wasting all of my work. That means that my code will have good separation of concerns, be modular and will use abstractions to hide the details of one part of my design from others. All of these are hallmarks of high-quality code. So by working more incrementally, I get higher quality designs.
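To make the abstraction point concrete, here is a small sketch, with made-up names, of code that depends on an interface rather than a particular implementation, so a first naive attempt can be thrown away without touching the rest of the design:

```python
# Sketch of hiding the details of one part of a design behind an
# abstraction. The domain and names here are hypothetical.

from abc import ABC, abstractmethod

class RiskScorer(ABC):
    """The abstraction: callers know only this interface."""
    @abstractmethod
    def score(self, amount: float) -> float:
        ...

class SimpleScorer(RiskScorer):
    """A first, naive attempt - cheap to discard and replace later."""
    def score(self, amount: float) -> float:
        return min(amount / 10_000.0, 1.0)

def approve(scorer: RiskScorer, amount: float) -> bool:
    # Depends only on the abstraction, not on any concrete scorer,
    # so changing our mind about scoring doesn't ripple outwards.
    return scorer.score(amount) < 0.8
```

Swapping `SimpleScorer` for something better later means writing a new class, not rewriting `approve` or its callers.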
“I can’t commit to trunk before I am finished because I may break something”. To me, that speaks of a lack of confidence in testing and/or a very traditional mind-set when it comes to testing strategy.
It kind of assumes that you can’t test your feature until it is finished. I think that that is old-school thinking. This is a problem that we know how to solve – “Test First!”.
This problem in part stems from the language that we have chosen to describe the use of automation to verify that our code works. We call these things “tests”, which tends to make us think of performing this verification as a final step before we release. I wonder if the adoption of a “test-first” approach would have been different if we had called these things “specifications” rather than tests. “Specify first” seems more obvious perhaps than “test first”.
If we see our automated evaluations as “specifications” that define the behaviour that we want of our systems, we must obviously do the thinking, and create the automated version of these specifications, before we start to meet them by building code.
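As a sketch of what an “executable specification” looks like in practice, here is a test written before the code it describes, using an invented example (`late_fee` and its rules are assumptions of mine, not from the article):

```python
# "Specify first": the TestCase below is written before late_fee
# exists, as a specification of the behaviour we want. The function
# is then written to meet it.

import unittest

def late_fee(days_overdue: int) -> float:
    # Written *after* the specification below, to make it pass.
    if days_overdue <= 0:
        return 0.0
    return 5.0 + 1.5 * days_overdue  # flat charge plus a daily rate

class LateFeeSpecification(unittest.TestCase):
    def test_no_fee_when_not_overdue(self):
        self.assertEqual(late_fee(0), 0.0)

    def test_flat_charge_plus_daily_rate_when_overdue(self):
        self.assertEqual(late_fee(10), 20.0)
```

Read aloud, the test names form a specification of the behaviour; the code exists only to satisfy it.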
By building software to meet executable specifications of its behaviour we eliminate whole classes of errors, but even more importantly, we drive the design of our systems towards higher-quality. I argue this in an earlier post on “Test Driven Development“. The properties of code that make it testable are the same properties that we value in “high quality code”.
I have worked on large-scale complex systems where we could release at any time without fear of breaking things because our automated testing caught the vast majority of defects. Employing tests as “executable specifications” which describe the desired behaviours of our systems has a dramatic impact on the quality of the code that we produce.
In a study of production defects the authors estimated that over 70% of production defects would be eliminated by a more disciplined use of automated testing.
Using a test-first approach drives quality into our designs, protects against the most common causes of production defects and allows us to move forwards with more confidence.