Continuous Integration and Feature Branching

Recently I spoke at the Pipeline Conference in London. I gave a talk on “Optimising Continuous Delivery” to a group of people who had self-selected as interested in Continuous Delivery. Most of them would already consider themselves CD practitioners; Pipeline is, after all, a conference dedicated to learning about CD!

Early in my talk I described some of the ground-rules for CD, those practices that I consider “table-stakes” for playing. One of my slides is intentionally slightly jokey. It describes my advice for branching strategies in the context of CD.


I knew that this would be contentious; it is always contentious. This is the piece of advice that I get the most push-back on. EVERY TIME!

Before I had got home from the conference my Twitter account ‘@davefarley77’ had gone a bit mad. Lots and lots of posts, for and against, questions and challenges, and while a week later it has slowed down a bit, the rumblings continue.

I wrote about feature branching some years ago. I was about to say that the debate has moved on, but in reality I don’t think that it has. The same questions and issues arise. So this blog post is meant to add a few more thoughts on the topic.

The push-back that I get when I express my view that any form of branching is counter, in principle, to the ideas of Continuous Integration is varied.

At one extreme I get “Heretic, burn him at the stake” kind of feedback, at the other “Yes, but it can’t possibly work without Feature Branching – you must work in small teams and/or on trivially simple projects”.

The first point is fair enough; I am, constitutionally, a heretic. I like to be skeptical about “received wisdom” and question it.

In this case though, my views on branching are from experience rather than mere academic skepticism. I have been a professional software developer for nearly four decades now. I have tried most things over the years. I have refined my understanding of what works and what doesn’t on a lot of projects, trying a lot of different tools, technologies, methodologies and techniques.

In response to the second class of “push-back” I do sometimes work in small teams, but also with some of the biggest companies in the world. For the past three decades I think that it is fair to categorise most of my development work as at the more complex end of the scale. Which is one of the reasons that I take some of these disciplines quite so seriously.

I am an adherent of agile principles and take them to their extreme with my flavour of Continuous Delivery when I am in a position to decide, or influence the decision.

I first practiced a version of Continuous Integration in 1991. We had a continual rolling build, a home-built version control system written in shell-script, and even a few simple “unit tests” on our C++ project. This pre-dated the popularity of CI by a considerable margin, but it worked really well!

What I learned on this project, and on many others, small and MASSIVE, is that what really matters is feedback! Fast and high-quality. The longer that you defer feedback, the greater the risk that something unexpected, and usually bad, will happen.

This is one of the ideas that inspired the evolution from Continuous Integration to Continuous Delivery. We wanted better feedback, greater insight, into the effect of our changes, whatever their nature.

So you can tell, I am a believer in, and advocate for, Continuous Integration. We create better code when we get fast feedback on our changes all of the time.

CI is a publication-based approach to development. It allows me to publish my ideas to the rest of my team and see the impact of them on others. It also allows the rest of my team to see, as it is evolving, the direction of my thinking. When teams practice CI what they get is the opportunity to “Fail Fast”. If something is a problem, they will spot it REALLY quickly, usually within a handful of minutes.

CI works best when publications/commits are frequent. We CI practitioners actively encourage commits multiple times per day. When I am working well, I am usually committing every 15 minutes or so. I practice TDD and so “Red-Green-Refactor-Commit” is my mantra.

This frequency doesn’t change with the complexity of the code or size of the team. It may change with how clearly I am thinking about the problem or with the maturity of the team and their level of commitment to CI.

What I mean by that, is that once bitten by the feedback bug, you will work VERY hard to feed your habit. If your build is too slow, work to speed it up. If your tests are too slow, write better tests. If your hardware is too slow on your build machines, buy bigger boxes! I have worked on teams on some very large codebases, with complex technologies that still managed to get the fast feedback that we needed to do high-quality work!

If you care enough, if you think this is important enough, you can get feedback fast enough, whatever your scale! It is not always easy, but it has always been possible in every case that I have seen so far – including some difficult, challenging tech and some VERY large builds and test suites!

“What has all of this got to do with branching?” I hear you ask. Well if CI is about exposing our changes as frequently as possible, so that we can get great feedback on our ideas, branching, any form of branching, is about isolating change. A branch is, by-design, intended to hide change in one part of the code from other developers. It is antithetical to CI, the clue is in the name “CONTINUOUS INTEGRATION”!

To some degree this isolation may not matter too much. If you branch, but your branch is VERY short-lived, you may be able to get the benefits of CI. There are a couple of problems with this though. First, this is not what most teams do. Most teams don’t merge their branch until the “feature” that they are working on is complete. This is called “Feature Branching”.

Feature Branching is very nice from the perspective of an individual developer, but sub-optimal from the perspective of a team. We would all like to be able to ignore what everyone else is doing and get on with our work. Unfortunately code isn’t like that. Even in very well factored code-bases with beautiful separation-of-concerns and wonderfully loosely-coupled components, some changes affect other parts of the system.

I am not naive enough to assert that Feature Branching can never work, you can make anything work if you try hard and are lucky. Even waterfall projects occasionally produced some software! My assertion is that feature branching is higher-risk and, at the limit, a less efficient approach.


The diagram above shows several paths from idea to working software in production. So if we want effective, high-quality feedback, where in this diagram should we evaluate our changes? Point 1 is clearly no good; the changes on the branches, 5, 6 and 7, are never evaluated.

We could evaluate the changes after every merge to trunk, 2, 3 and 4. This is what lots of Feature branching teams do. The problem now is twofold:

1) We get no feedback on the quality of our work until we think that we are finished – Too Late!
2) We have zero visibility of what is happening on other branches and so our work may not merge. – Too Risky!

Before the HP Laserjet Firmware team made their move to Continuous Delivery, their global development team spent 5 times as much effort on merging changes between branches as on developing new features!

(See this presentation from time 47:16, and also “A Practical Approach To Large Scale Agile Development”.)

At this point my branch-obsessed interlocutors say “Yes, but merging is nearly free with modern tools”.

It is true! Modern distributed Version Control Systems, like Git, have very good merge tools. They can only go so far though. Modern merge tools take the optimistic-locking strategy of deferring conflict detection until merge time, at which point, if they see a conflict, they request some help, your help. Most of the time merges are simple and automatic, but often enough, they are not.
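The heart of that optimistic strategy can be sketched in a few lines: a line-based three-way merge succeeds automatically whenever each line was changed by at most one side, and flags a conflict otherwise. A simplified illustration (it assumes, unrealistically, that no lines were added or removed; real merge tools handle insertions and deletions too):

```python
def three_way_merge(base, ours, theirs):
    """Naive line-based three-way merge.

    Returns (merged_lines, conflict_indices). Assumes all three
    versions have the same number of lines, for simplicity.
    """
    merged, conflicts = [], []
    for i, b in enumerate(base):
        o, t = ours[i], theirs[i]
        if o == t:            # both sides agree (or neither changed the line)
            merged.append(o)
        elif o == b:          # only their side changed this line
            merged.append(t)
        elif t == b:          # only our side changed this line
            merged.append(o)
        else:                 # both sides changed the same line: conflict
            conflicts.append(i)
            merged.append(o)  # keep "ours" as a placeholder, ask a human
    return merged, conflicts
```

When the two sides touch different lines the merge is silent and automatic; when they touch the same line, no tool can know which change is right, and a human has to intervene.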

As soon as you need to intervene in a merge there is a cost and until the point of merging you don’t know how big that cost will be. Ever got to merge some change that you have worked on for several days or a week, only to find that the differences are so great that you can’t merge? Lots of teams do find themselves in this position from time to time.

Back to our diagram. What feature branch teams sometimes do is run a dual CI system, they run CI on the branches AND after the merge to Trunk. This is certainly safer, but it is also slow.

As ever, the definitive point is the testing that happens at the point of merge to Trunk. It is only at this point that you can honestly say “Yes, my change works with everyone else’s.”. Before that, you are hoping that someone else hasn’t done something horrid on another branch that breaks your stuff when you merge.

This approach is safer because you are getting some feedback sooner, from the CI running on your feature branch, but this branch is telling lies. It is not the real story. This is not a change set that will ever make it into production, it isn’t integrated with other branches yet. So even if all your tests pass on this branch, some may fail when you merge. It is slow because you are now building and running everything at least twice for a given commit.

The real, authoritative feedback happens when you evaluate the set of changes, post merge, that will be deployed into production; until your branch is finished and merged onto Trunk, everything else is a guess.

CI advocates advise working on Trunk all the time. If you want to be pedantic, then yes, your local copy of the code is a form of branch, but what we mean by “all the time” is that we are going to make changes in tiny steps. Each change is itself atomic and leaves the code in a working state, meaning that the code continues to work and deliver value. We will usually commit many of these atomic changes every day. This often means that we are happy to deploy changes into production that are not yet complete, but don’t break anything!

CI, like every other engineering practice, comes with some compromises. It means that we are only allowed to commit changes that keep the system working. We NEVER intentionally commit a change that we know doesn’t work. If we do break something the build stops and rejects our change, that is the heart of CI.

This means that we have to grow our features over multiple commits, if we want regular, fast, authoritative feedback. This, in turn, changes the way that we go about designing our features. It feels more like we “grow” them through a sequence of commits rather than take them aside, design them and build them in isolation and then merge them.
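As a concrete illustration of “incomplete but not broken”: a half-built feature can sit on Trunk behind a simple toggle, so the new code path is committed and tested alongside everyone else’s work but not yet exercised in production. A minimal sketch (the flag name and the pricing rules here are invented for illustration):

```python
# Minimal feature-toggle sketch: incomplete work lives on Trunk,
# but is only exercised when the flag is switched on.
FLAGS = {"new_pricing": False}  # off in production until the feature is done

def price_legacy(amount):
    # Current, working behaviour: flat 20% markup.
    return round(amount * 1.20, 2)

def price_new(amount):
    # Partially built feature: committed and unit-tested, but dark
    # in production until the flag flips.
    return round(amount * 1.20 - 0.50, 2)

def price(amount):
    if FLAGS["new_pricing"]:
        return price_new(amount)
    return price_legacy(amount)
```

Each commit that grows `price_new` keeps the system releasable, because nothing calls it in production until the toggle is flipped; and when the feature is complete, the flag and the legacy path are deleted.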

This is a pretty big difference. I think that this is one of the reasons for the second category of push-backs that I tend to get from people who are more used to using branches.

Q: “Yes, but how do you make anything complex in 15 minutes?” 

A: You don’t, you break complex things into a series of small, simple changes.

Q: “But how can a team fix bugs in production?”

A: They feed the fixes in to the stream of commits, like any other change to the system.

Q: “Ah yes, but how do you do code reviews?”

A: Pair Programming is my preferred approach. You get better code reviews and much more.

Q: “Ah, but you can’t do this for software like XXX or using technology like YYY”

A: I have built systems-software, messaging systems, clustering systems, large-volume database-backed systems, whole enterprise systems, some of the highest-performing trading software in the world, as well as web-sites, games and pretty much any other type of software that you can think of, using this approach.

I have done it in Java, C#, C++, Python, Ruby, Javascript, shell-script, FPGA systems, Embedded software and COBOL. I have seen other teams using this approach on an even wider variety of technologies and products. I think it works!

CI is about speed and clarity of feedback. We want a definitive picture of the quality of our work, which means that we must evaluate, precisely, the code that will go into production. Anything else is guessing. We want our feedback fast and so we will optimise for that. We work to position the machinery that provides that feedback so that it can try our changes destined for production as soon as possible, that is, as close to the point that we made the changes as we can achieve.

Finding out that my code is stupid or broken within 2 minutes of typing it is very different to having to wait, even as short a time as an hour, for that insight. It changes the way that I work. I can proceed faster, with more confidence and, when I do mess up, I can step back with very little cost.

So we want definitive feedback fast. That means that anything that hides change gets in the way and slows us down. Any form of branching is antithetical to Continuous Integration.

If your branch lasts less than a day, my argument against it is weakened, but in that case I will pose the question “why bother with branches?”.

I work on Trunk, “master” in my Git repos. I commit to master locally and push immediately, when I am networked, to my central master repo where CI runs. That’s it!

I do compromise the way that I work to achieve this. I use branch by abstraction, dark-releasing and sometimes feature-flags. What I get in return is fast, definitive (at least to the quality of my testing) feedback.
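Branch by abstraction, for instance, swaps a version-control branch for a seam in the code: put an abstraction in front of the component you want to replace, grow the new implementation behind it on Trunk, commit by commit, and finally flip the wiring and delete the old one. A sketch of how that seam might look (the class and function names here are invented for illustration):

```python
from abc import ABC, abstractmethod

# Step 1: introduce an abstraction over the component being replaced.
class MessageStore(ABC):
    @abstractmethod
    def save(self, msg): ...
    @abstractmethod
    def count(self): ...

# Step 2: the existing implementation moves behind the abstraction;
# production behaviour is unchanged.
class InMemoryStore(MessageStore):
    def __init__(self):
        self._msgs = []
    def save(self, msg):
        self._msgs.append(msg)
    def count(self):
        return len(self._msgs)

# Step 3: the replacement grows on Trunk, commit by commit, unused by
# production callers until it is complete; then the wiring flips and
# the old class is deleted.
class JournalledStore(MessageStore):
    def __init__(self):
        self._journal = []
    def save(self, msg):
        self._journal.append(("saved", msg))
    def count(self):
        return len(self._journal)

def make_store(use_new=False):
    # The single wiring point that eventually flips to the new code.
    return JournalledStore() if use_new else InMemoryStore()
```

Every commit along the way leaves the build green and the system deployable; the “branch” exists only as an extra class behind a stable interface, visible to the whole team the whole time.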

Last year’s “State of DevOps Report” claimed that my style of development is a defining characteristic of “High Performing Teams”. If you are not merging your changes to Trunk at least daily, it predicts that your outcomes are more closely aligned with those of “Lower Performing Teams”.

There is a lot more to this. CI is not a naive approach; it is well thought out and very widely practiced in some of the most successful companies in the world. Trunk-based development is a core practice of CI and CD; it really is very difficult to achieve all of the benefits of CI or CD in the absence of Trunk-based development. You can read more about these ideas on the excellent Trunk-Based-Development site.


41 Responses to Continuous Integration and Feature Branching

  1. I’d love to hear some war stories that led you to a strong conclusion that pair-programming and no branches is easier to establish than a practice of having short-lived (as in, up to a few hours long) feature branches and pull requests backed by fast builds.

    Through personally being in touch with hundreds of companies using Semaphore CI, I don’t recall seeing a team that doesn’t do feature branches. Both approaches can work and you’ll fail in both if you don’t understand the challenges and pitfalls. In that sense it’s really valuable to document ways to fail, like you are doing. Not to mention how it depends on the architecture and tech stack… 🙂

    • davef says:

      I don’t think that I said “easier to establish” 😉

      The techniques that I recommend are hard to adapt to, but they are, IMO, better!

      So NOT “easier to establish” but “more effective”.

      I agree that the techniques that I am recommending are less common than a, to my mind, slightly broken mis-interpretation of CI – which is the commonest approach. The point is not “how popular” but rather “how effective”. The teams that I have seen doing this produce higher-quality code more quickly than the teams that I have seen practicing the common approach. Which is really why I wrote this piece in the first place. 🙂

    • Hi Marko! Seems like you are equating feature-branches with so called task/topic branches. They are different things. Task/topic branches are for a single small development change. Feature branches are by definition for things that are bigger than a story or task, and might even barely fit into a sprint (if at all).

      Features (and feature-branches) span multiple stories and tasks. If a feature is developed (and integrated) on a branch prior to merging to main-trunk, then unless the work on the feature branch is broken down into smaller stories/tasks and integrated to main-trunk just as frequently as if it hadn’t been done on a feature-branch first, there is delayed integration & feedback.

      If you are doing short-lived branches that live for a few hours (or even less than a day), then those are development tasks (or at most a small story) per branch, and is more accurately called a task-branch, or (occasionally) story-branch, or more generally a “topic” branch (where the “topic” is referring to a leaf-level story or task, and is not something that is bigger than a user-story).

  2. Thanks Dave for bringing this issue back to the foreground again (as it seems much needed). I think there are some bigger issues at play as to why too many folks still don’t seem to “get” what the issue is (and is not) with branching and merging, whether it is done within the codebase using version-control, or in the code-structure itself (with toggles).

    Unfortunately, this leads to a lot of misinformation (on both sides), or even stuff that makes sense to those who possess the proper mindset/perspective, but not to the rest.

    This is also a problem with a lot of the arguments that talk about the perils of long-lived branches (as well as “feature” branches). The issue is not the length/duration of the branch, it is the frequency of the feedback and synchronization (from merge-integration). You can say that until you’re blue in the face, but if you start off the argument with the wrong or controversial thing, you’ve all too often lost your audience’s focus on what you most need them to hear and understand.

    There are some fundamentally different ways (mindsets) of thinking about (and hence understanding) branching and branches. As someone who wrote extensively about this topic early on (the “Streamed Lines” branching patterns in ’98), I have more personal experience with those failures than most. We tried to focus on the context, and identify the different styles/mindsets and tradeoffs (rather than take a strongly opinionated approach for one over the other), all seemingly for naught.

    To make matters worse, the so-called “feature branch” was never even a pattern or recommendation back then (through the ’90s). It was more something that almost exclusively appeared in the context of very large codebases spanning multiple teams, and only for “feature teams” (the pre-agile flavor of feature-teams). Some notion of an integration codeline per-team made sense (whether it was a feature-branch, or some other kind of branch), and it was probably even necessary when you would otherwise have scores of developers trying to commit to the same repository during the same 60min time-period, back when “full/clean” build-times took hours rather than minutes, and commits weren’t really atomic.

    How that practice migrated to small teams and small codebases still seems to defy all reason, and yet that’s exactly what happened with the likes of GitFlow and its predecessors+successors (and it most definitely does no good to “blame” vendors/products like Rational/ClearCase, or make absolute/universal statements like “never” or “always” that neglect purpose & context)

    Instead, I think we need to get back to basics on the how & why of branching — what problem(s) was it designed to legitimately solve, *when* (and how) is the right way to do it, and what are the guiding principles (like the SOLID principles).

    I think we have better vocabulary and understanding now than we did in the 90s and 2000s (now that Agile & DevOps & Lean are accepted and pervasive, even when culture & mindset is still lagging behind).

    More on that later…

  3. [Follow up to my previous comment]

    Back to the purpose of branching and the problem(s) it was designed to solve. This one is pretty easy/basic, yet often forgotten, especially in these days of SaaS. SCM tools/systems were designed to address three “classic” SCM problems: shared data/content, concurrent editing/changes, multiple maintenance.

    The problem of having shared data/content is why we should use a version-control repository in the first place (instead of filesystems and copying/backup). Concurrent editing was mostly addressed by the notion of a “checkout” using an optimistic or pessimistic locking mechanism (e.g., copy-modify-merge, or lock-modify-unlock), at the “right” granularity (single file, folder, (sub)module, or whole-repo).

    The notion of branching was designed for solving the multiple maintenance problem, which is when you have multiple active versions (i.e. releases) of a codebase that “must” be supported, concurrently, during the same time. This was far more common *before* the popularity of SaaS, when “release” and “deploy” were not only far apart in time and (physical) space but also in terms of ownership (different companies and sites).

    If I had only one released version to support in the field, and the only other “version” was the one being actively developed, I might not even need branching at all (or at best only short-lived, until the next release-date). But if I had multiple different releases in the field that needed to be supported, even while working on the next latest/greatest one – then this is what branching was designed for: representing an independently active evolutionary path of development (as long as it was a *necessary* path – one that provided more value than the additional costs of supporting the previous versions).

    That’s pretty much it in a nutshell. That was the correct/intended purpose and need for branching. The problem is it became popular as a strategy for parallelizing development work (prior to its deployment+release) without *proper* understanding of those costs. This is the world that viewed development as projects over products, and branches were the version-control equivalent of a nail for the project-oriented hammer.

    The frequency and granularity of branching use quickly got out of hand (a “branch for every purpose” and a “purpose for every branch”, and nesting and increasingly more levels of integration-indirection), and employers and managers didn’t have the right mindset and thinking tools to understand why (tools like lean-flow, lean-startup, cost of multi-tasking, context-switching, #noprojects, relative-estimation and beyond-budgeting [#noestimates] and cost-of-delay).

    Something similar has been happening with feature-toggles too (too often mis-used by being too long-lived, too frequent/fine-grained, not subject to refactoring and “clean coding”, etc.). Feature-toggling is a different form of branching (code-branching) at a different “binding-time”.

    All too often, the question of branching vs toggling comes up when it probably shouldn’t. When the answer is presumed to be one or the other, but is probably neither. Just focus on integrating very frequently, in small (micro-sized) chunks/tasks. The pressure to branch or toggle may be coming from a different mindset with misconceived notions (about traceability vs transparency, isolation vs encapsulation, safety/stability vs liveness/speed, etc.) and should probably be solved a different way that requires neither a feature-branch nor a feature-toggle.

    Shakespeare’s Polonius (from Hamlet) famously says: “Neither a borrower nor a lender be; / For loan oft loses both itself and friend.” This is well suited to branching and toggling as forms of “debt” that can compromise the fast friendship we need to maintain with fast+frequent flow and feedback.

  4. johnny nospam says:

    why not do CI and feature branches? we just clone our build definition, and point the build at the feature branch.

    • davef says:

      I think I make it clear in the article that I think that Feature Branches slow you down. That is why I prefer not to work this way.

    • When doing CI *and* feature branches, the feature branch adds another level of indirection between the development change/task and the CI branch (main-trunk), which causes delayed integration & feedback unless you integrate from feature-branch to main-trunk on a per change/task basis (almost per commit).

      You have to ask yourself what the additional (intermediate) branch is doing for you. If it is adding “stability” (so that main-trunk has less rapid change-volatility) then that can be valid (in the case of large codebases with multiple teams), but it’s still delaying CI integration frequency.

      If the feature branch isn’t really adding stability, then why is the additional isolation necessary? If it’s just to use the feature branch as a “container” for all the commits that happened for the feature, then is a branch the right way to do that if you don’t need the additional isolation (and merging/integration effort) that the branch introduces?

  5. I totally agree with ‘fast feedback’. The definitive speed of any software development is determined by the quality and speed of the feedback (and by quality, the closer to the real production feedback, the better). That’s what really good developers are good at. Feedback.

    Therefore I agree that you should merge/commit often. I don’t care what we define a branch, as long as we commit often.

    I do though think that most have CI wrong on another level. Most implementations of CI are wrong. If the build breaks, other developers shouldn’t be hindered. Your code should automatically be stopped from being merged/committed/pushed to the repo, if it doesn’t work.

    I’ve seen much of what you describe, but also teams where branches were used.

    • Joseph Tate says:

      How do you manage CI with all development on master without hindering other users? Automatic backouts?

      • davef says:

        There are several books on this topic. The quick answer though is that you work in a series of many small changes and optimise the build and the tests to be VERY efficient. I have worked in teams that shared a repo between about 450 people.

        Google do a version of this with 31,000 people – it scales very well.

        There are a couple of common approaches, and many variants. The two common approaches are:

        Human CI.
        You use a mix of tech and team discipline to keep CI working. For example: don’t commit on a broken build, and commit a fix or revert your change within 10 minutes of breaking the build.

        Gated Commit.
        Instead of committing to a VCS, you commit to the build. The build compiles the code and runs all of the tests, and only if everything works does the build merge the changes into the VCS.
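        The gating logic itself is simple. A toy sketch of the idea (the function and stub names are invented; a real gated-commit system would run the actual compiler and test suite where these stubs stand):

```python
# Toy gated-commit sketch: a change is only merged into the shared
# history if the build and all tests pass against the merged result.
def gated_commit(history, change, build, tests):
    candidate = history + [change]
    if not build(candidate):
        return False, history        # build broke: change rejected
    if not all(test(candidate) for test in tests):
        return False, history        # a test failed: change rejected
    return True, candidate           # everything green: change merged

# Stand-ins for a real compile step and test suite.
def build_ok(code):
    return True

def always_passes(code):
    return True

def always_fails(code):
    return False

ok, history = gated_commit([], "change-1", build_ok, [always_passes])
# → merged: history is now ["change-1"]
ok2, history = gated_commit(history, "change-2", build_ok, [always_fails])
# → rejected: history still only contains "change-1"
```

        The key property is that the shared line of development can never contain a change that broke the build, so nobody downstream is ever blocked by someone else’s red commit.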

  6. Axel Roest says:

    Refreshing thoughts!
    My previous project had feature branches that lay dormant for a week, because of delayed code reviews. Merging afterwards was hell!

    • code-reviews tend to cause integration delays when the size of the change being reviewed is large and/or requires multiple other reviewers to formally “sign off” (or needs to wait for a specific person or role to do the sign-off), creating a resource utilization/scheduling issue that impedes the flow of change and of integration/feedback. If the changes are small enough, it doesn’t require as many people, and if the team is agile/collaborative enough, it doesn’t have to wait for a specific approver to become available.

  7. Ben Heiskell says:

    > Q: “Ah yes, but how do you do code reviews?”
    > A: Pair Programming is my preferred approach. You get better code reviews and much more.

    This answer dodges one of the most common reasons I see engineers using multi-day branches. I think it’s safe to say peer reviewing code is an industry norm these days. This in practice gets directly in the way of CI. There are compensating techniques, like merging to master prior to executing the build, but there are obvious downsides to that strategy. I’m a huge fan of CI, and get as close to it as I can, but I can’t think of any team I’ve worked on where it would have been pragmatic to pair-program every change. I don’t love it, but the best strategy I’ve developed so far is to ensure my teams put a high priority on timely review of changes and squash scope creep or bike shedding wherever it arises.

    Loved the post!

    • davef says:

      Thanks, I am pleased that you liked it!

      I think that whether you choose to see “Pair Programming” as a form of continual review or not is down to the team. I do. I think that PP is a better form of code-review. It gives you ALL of the value of a separate code-review except in more detail, continually, and has a whole load of side-benefits that in some cases are even more valuable than the review in the first place!

      I sometimes get the push-back, “yes but I work in a regulated industry and so must have the review” – Me too! I have worked in several regulated industries under many different regulatory authorities. I haven’t found one yet that quibbled over PP not being a review. It counts as a review from a regulator’s perspective, creates higher-quality code and does so without slowing down the feedback cycle – I am sold! 😉

    • These days, pair programming isn’t the only good option for “continual review”. Duvall & Glover covered this nicely in their authoritative CI book in 2007 (from the same series as Humble and Farley’s CD book, but 3-4 years earlier). They did this when they introduced automated static code analysis into the CI pipeline (and called it “Continuous Review”). This drastically and repeatably reduced the amount of manual time and effort needed to review code changes (just like TDD and BDD did for testing). When you have a tool like SONAR or Cobertura or Panopticon (or JetBrains) that is able to automate systematic code-quality issues like design-quality, clean coding standards, code-smells, and the other various aspects of technical debt, it lets you better focus your “manual” reviewing efforts on higher-level things that don’t take as much manual effort.

      You can defer/focus those manual reviewing efforts to so called pull-requests, and if you keep the changes small & frequent enough, even without (promiscuous) pair-programming, you can still do (promiscuous) pair-reviewing to leapfrog that code-review as quality-gate bottleneck.

      The problem comes when the code-review encourages or enforces a form of handoff-to-specialty-role (i.e., buildmaster, architect, tech-lead or feature-lead). That creates bottlenecks and diffuses the “conquer and divide” approach of being able to collaboratively “swarm the solution” (instead of using a handoff).

      • Just to clarify, when you say JetBrains, do you mean their Upsource product? Are you talking about tools to automatically reject code that fails some quality tests, or just to auto inspect it and provide the results to reviewers?

        • Hi Barney! When I say JetBrains I was referring to their earlier products (an IDE in particular). One of them was IntelliJ, and specific to Java. I remember another one that worked for C#. Another popular one for C# was “ReSharper”. I am referring to tools that do static code analysis (as opposed to run-time analysis or monitoring), that are able to analyze the code structure and dependencies and flag warnings & violations of conventions and rules in areas of not just coding-standards, but memory management, security, DSM, code-smells, and even suggest refactorings, and (more recently) report a detailed multi-component measure or index of maintainability and technical debt (e.g. SQALE).

          These tools automatically review/inspect the code based on selected and configured rules (and their plugins), and then raise a warning or error-level as a return value, which can then be used by a CI/Build system to flag success or failure, and report the results along with optional vs mandatory remedial actions.

  8. If pair programming everything isn’t an option, and code review is still wanted, I wonder whether it’s better to do a typical pull-request-to-master workflow (with, of course, less than circa seven business hours between branching and merging), or push directly to master and then have a peer review it later.

    I’ve only ever worked with the former. Post-merge review seems attractive, but then I’m not sure quite how that would be organized and what happens with the feedback. If it’s a suggestion for a maintainability improvement does it get dealt with, and if it’s feedback that says the change will break prod does that make a discontinuity of delivery? Or does the reviewer immediately revert the commit?

    This is particularly important if there are not sufficient automated tests, so a coding mistake has realistic potential to cause a production bug.

    • davef says:

      I have tried both of those. What I found was that as the rates at which commits are made increase, the temptation to “cheat” the system grows. What I have seen is developers making “pacts”. “You can sign my name to a commit, and I will sign yours”. Not good 🙁

      I have tried to build the review cycle into CI systems, using some of the well-known code-review tools. It is OK, but not perfect.

      To be clear, you can make either of these work. In my experience though Pairing is much the best solution to these problems. I would question why PP isn’t an option. If you have a practice that allows you to go faster and create better code with fewer bugs, why would you not choose that?

      In my clients I try to treat any push-back on pairing as demonstrating a need for education, rather than demoing that pairing is inappropriate. I don’t always win the argument, but I always try 😉

      • Thanks Dave. I will take your advice about questioning why PP isn’t an option. I can’t answer it here since I’m not asking about any one particular situation.

        I suppose one reason for PP not being an option, from the perspective of someone who isn’t a tech lead or CTO, is I might not be able to persuade someone else in the team or organization that we should do full time PP, even if I could persuade them that we should do CI.

  9. Paul Watson says:

    Hi Dave,

    Interesting read! I’ve tried both approaches and I think both can work. Of course you make no mention of temporary branches that CI tools can create so your feature branch can be tested in the trunk.

    I’d also like to tell you how influential your ideas around acceptance testing, DSLs and mocking out dependencies have been. So much so I’ve taken this idea and bought into it wholesale on the last few projects I’ve worked on. The feedback is as liberating as you say.

    Current setup uses docker to create a transient environment, complete with external message bus, wiremock for numerous external web APIs, DB container, etc etc. This all runs thousands of black box tests in a manner you describe in your videos. The key is that it all happens BEFORE the code is merged to master.

    This means our short lived feature branches can experience a rolls royce test suite and get early feedback before hitting trunk.

    I wonder how you think this fits into your idea around the friction of extra branching?

    I should say we pair programme (it’s the best way) AND do code reviews. We write better software together and we catch bugs and simplify designs in code reviews. I like the safety net of people looking at my code and I learn things when I see how they write theirs.

    I’m also an ardent believer in “rough-cut” code review. Am I/are we heading in the right direction/barking up the wrong tree.

    I personally have drunk the kool aid on this approach. It’s like a drug and I know colleagues who say the same. There was a sea change in the way developers worked. They stopped manually poking the app and started writing acceptance tests to check it was working.

    It’s my guess that when you talk about feedback you’re assuming unit/functional tests run in CI which deploy somewhere only when merged to trunk. This probably kicks off a pipeline which deploys to a test environment against which a suite of acceptance (system) tests run against the new version.

    If this guess is correct (I might be way off) I would like to know if you’ve tried the “Farley Approach” to testing with transient environments which run per branch? If you have: what do you think of this approach?

    If you haven’t: please do so and then do a talk about it 😀

    • davef says:

      Yes, both can work, but I think that there is a difference. I can choose to ride a motorbike, standing-up, facing backwards or I could sit down and be more cautious. I may get away with the first approach, but it is less-stable, less reliable and if things go badly, they could have big consequences. Just because I got away with it once or twice, or even twenty or a hundred times, doesn’t mean it is as safe and stable as sitting and working the controls with my hands.

      Yes feature branching can work, and works a lot more reliably than riding a motorbike standing-up facing backwards, but it is not as stable, or as safe, as practicing CI. The data says that in “High-performing teams” you are much more likely to find them practicing CI than you are to see them working on feature branches (see reference in blog-post for source).

      I am pleased that you have found the acceptance testing and DSL stuff useful. I find it goes down very well with the companies that I consult with. This approach works fine on Trunk too 😉 You do have to keep the performance of your feedback good, care about the efficiency of the tests themselves and scale out the hardware that they run on, but you can do all of this at a truly MASSIVE scale if you need to.

      Yes, each change to Trunk kicks off the pipeline. I refer to this as “Giving birth to a Release Candidate”. The pipeline’s job is then to “prove that a Release Candidate is NOT fit to make it into production”. We treat the tests, all of the tests (Unit, Acceptance, Performance, Analysis, Security, Data-Migration ALL TESTS) as a falsification mechanism and reject the Release Candidate if any test fails. The trick is to be able to run all of these tests multiple times per day. I generally recommend that people aim to get definitive feedback from all of these tests in under 1 hour. Sounds impossible? It is not, but it can be demanding!

      The problem with running any tests on a per-branch basis is “what are you evaluating?”. If you are running on a branch you are, by definition, testing a different set of code to what is on master – otherwise the branch has no point. This means that you are testing a set of code that won’t end up in production. At best, this is inefficient; at worst it is misleading you. If all the tests pass on the branch what does that mean? It doesn’t mean that it will work in production; you still don’t know until you have merged your changes with mine. My changes may break your code. So we have to run the tests again when we merge – slow and inefficient!

      Thanks for the feedback!

      • [sigh] Dave, your statement that feature branching can work but “is not as safe or responsible as CI” is again making a false assumption and a false dichotomy (there is no forced choice between feature branching and CI; neither precludes the other, nor does either have to hinder the other). It can be an *and*; it’s not a matter of luck or circumstance, but a matter of discipline.

        Your mantra of red-green-refactor-commit (where “commit” implicitly means integrate+build+test) is independent of branching. It is not delayed by the correct, disciplined use of branching, nor does it add more or superfluous steps.

  10. First up, thanks for the thought provoking post! It’s definitely got me thinking this morning.

    After a few re-reads, I’m still missing the connection between “CI is about speed and clarity of feedback” (which I wholeheartedly agree with) and “Any form of branching is antithetical to Continuous Integration”. It seems like that is based on the assumption that you are only running your CI on master? Why not just run CI against any commits pushed, whether it’s master or other branches or pull requests or what-not? Is there a technical reason that’s not viable, or is it philosophical? It’s certainly possible on all the major CI providers that I have worked with over the years.

    Beyond that, it seems like the major argument is that “merging into master is hard”. That just sounds like a good argument for keeping feature branches short lived and regularly rebasing them onto master. There are more and more tools popping up to force feature branches to be up-to-date with master before merging back in. Provided you can make guarantees about master being evergreen, it seems like it’s up to individuals to handle keeping up with master on their feature branches as makes sense for how they develop.
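
    For what it’s worth, the “up-to-date with master before merging” gate those tools apply can be sketched with plain git (the repository layout and branch names here are illustrative):

```shell
# Sketch of an "is the feature branch up to date with master?" gate,
# the kind of pre-merge check such tools automate. All names illustrative.
repo=$(mktemp -d)
cd "$repo"
git init -q -b master
git config user.email ci@example.com
git config user.name CI

echo a > app.txt && git add app.txt && git commit -q -m "initial"
git checkout -q -b feature
echo b >> app.txt && git commit -q -am "feature work"
git checkout -q master
echo c > other.txt && git add other.txt && git commit -q -m "master moves on"

# gate: master must be an ancestor of the branch tip before merging
if git merge-base --is-ancestor master feature; then
    echo "feature is up to date with master"
else
    echo "feature is stale: rebasing onto master"
    git rebase -q master feature   # leaves us on the rebased feature branch
    git checkout -q master
fi
git merge-base --is-ancestor master feature && echo "gate: PASS"
```

    The `merge-base --is-ancestor` test is the whole trick: if master has moved since the branch diverged, the gate demands a rebase before the merge is allowed.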

    The huge advantage to branches and pull-request based development is that you can clearly describe what is being implemented over a series of time and collect feedback as you go. You can do it asynchronously, with a distributed team on different time scales. You have code reviews captured and you can do meta-reviews on them to help folks learn how to give better feedback and receive feedback better. Pair programming is great, but it shouldn’t replace those things, it should complement them.

    My take is that different teams come in different shapes, and that CI practices come in different shapes accordingly. What works for one team size and composition doesn’t for another. I see the crux of CI as being continuous feedback, delivered rapidly and acting as a central source of truth for a team. I see branching as largely orthogonal to these concerns.

    • davef says:

      I am pleased that you enjoyed it, and delighted if it gets anyone thinking – that is always a good result whatever the outcome of the thoughts 😉

      For me the link between those two statements is “A branch is, by-design, intended to hide change in one part of the code from other developers.”. We want branches for the isolation that they provide, we want Continuous Integration to evaluate our changes alongside everyone else’s continuously (or at least a close approximation of “continuously”).

      You are right, the thrust is to recommend that we “integrate frequently”. The main point that I am trying to make is that there are better ways than feature branching to achieve that. Feature branching is intended to let you create a feature on a branch, obvious I know, but my point is that for CI, waiting until a feature is complete is too long to wait to see if it works with everything else. By changing the way that we work, by “embracing change” as Kent Beck said, and working in tiny, incremental steps towards the creation of a feature we eliminate the need to branch. It does require us to change the way that we work, but when we do, my subjective experience, and the data, says that we create better software faster!

      I don’t believe that this is a “teams come in different shapes” kind of thing. I think that there are some software engineering practices like TDD and CI, and I place both firmly in the engineering discipline camp, that have such a profound impact on quality and productivity that they are not really optional. Or at least, when a team chooses to do something else the choice that they are making is “we are happy to create lower quality software more slowly”.

      I recognise that this is a very hard-line statement, but it is what I believe to be true. I think that there are some disciplines that are so effective and so well-proven that they should be part of the definition of what “professional software development” looks like and for me CI (and TDD) are firmly in that camp.

      • The problem isnt that its a hard-line (and opinionated) statement. The problem is that it is wrong, because its premise is false (which in turn is predicated upon false assumptions).

        “Waiting to see if a feature is complete to see if it works with everything else is too long” is a correct statement. The assumption that creating the branch always implies such waiting is very much false, as is the assumption that using such a branch makes it take longer to integrate frequently/continuously. It doesn’t.

  11. Ingmar says:

    What about rebasing master to the feature branches regularly, maybe even after each merge into master? Then the feature branches would really only consist of their local changes.

    • davef says:

      Yes and if you really must use feature branches, this is what I would recommend. But I believe this to be very much second-best.

      First, there is a temptation to stay on the branch for longer and longer. Part of the definition of CI is that you are “committing changes to master at least once per day”. If you rebase through the day and then commit at the end, well OK, but that is less efficient than working on TRUNK.

      I want the lowest-possible friction to small frequent commits. That is, pretty-much, what CI demands of me. So I want to minimise the cost of committing changes. If I work on, even short-lived, feature branches I incur extra work, more steps in my process of committing changes to TRUNK.

      If you don’t practice CI, and commit to master less frequently than once-per-day, you are running a big risk that you will get tripped-up by a nasty merge later on. The longer you wait, the bigger that risk. Also, while your changes are isolated on your branch, you limit the degree to which I, and others, can refactor the code – without killing your changes.

      So I think that my argument for CI, in preference to feature branches, stands.
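
      To make the step-count argument concrete, here is a sketch (branch and file names are illustrative) of the rebase-then-merge flow; each step marked “(extra)” is one that committing directly to trunk would not need:

```shell
# The short-lived-branch flow described above as "second-best".
# Every step marked (extra) is absent when working directly on trunk.
repo=$(mktemp -d)
cd "$repo"
git init -q -b master
git config user.email dev@example.com
git config user.name Dev
echo base > app.txt && git add app.txt && git commit -q -m "initial"

git checkout -q -b feature-x          # (extra) create the branch
echo change >> app.txt
git commit -q -am "small step"

git rebase -q master feature-x        # (extra) pick up the day's trunk changes
git checkout -q master                # (extra) switch back to trunk
git merge -q --ff-only feature-x      # merge: the one step both flows share
git branch -q -d feature-x            # (extra) retire the branch
git rev-list --count HEAD             # two commits now on master
```

      The result on master is identical either way; the branch simply adds ceremony around the same pair of commits.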

    • If everyone’s working on a feature branch, then master isn’t changing. In that scenario you don’t achieve anything by rebasing your branch on to master. You haven’t tested whether the code in your branch works in combination with the code in someone else’s branch.

      • Barney is correct. Rebasing from master (or trunk) more frequently means the developer (or pair) gets frequent feedback from the team’s integration branch. But it is NOT bidirectional. It does not make merging to main-trunk more frequent.

        It does usually require less effort because the amount of change since the last merge-in (sync-up) should be less, and therefore less effort to merge out/up to main-trunk.

  12. Okay – time for the other shoe to drop. While most of the initial post is great stuff, there are still a couple of fundamental statements or sentiments that are just plain wrong, the first and most fundamental being the following:

    [FALSE] “Any form of branching is antithetical to Continuous Integration.” [/FALSE]

    I know you and Jez have been saying this for years (and before that Martin), and Paul Hammant picked it up as well (and others since then). It was wrong then, it’s wrong now, and it’s always going to be wrong. That you continue to perceive all the arguments against as pushback is more indicative of the deeper problem.

    The deeper problem is your current way of perceiving and thinking about branching. This is a limiting mindset (as opposed to a growth mindset) and is not only holding you back from a better and more useful understanding, but is doing the same for others too.

    This mindset tends to think of branching from a “physical” perspective (of the code). But that’s not the actual dimension/domain where branching operates. Even a simplistic separation of branching “domain” into so called quadrants (like Agile Testing quadrants, or technical debt quadrants) can help expose the limitations of this mindset, along just two axes:

    1. New/change Development vs Integration — Use of a branch for developing (in-progress changes) VERSUS integration (synchronization of changes from others)

    2. Early (big up-front) vs. Late (Just-in-Time) — Branching early (project-based), for projects/releases, subprojects/features, and (sub)tasks VERSUS branching late (lean-flow), at the last responsible moment, when true parallel/concurrent maintenance (separate from the main-trunk) becomes unavoidable (often post-release, but sometimes shortly before, even if only briefly).

    Your statement about branches being “antithetical” to CI is only true for ONE of these four quadrants (between 25% to at best 35% of the entire domain).

    The problem is that most of the time you encountered (feature) branches, it was the dysfunctional usage, usually referring to *early* branching of a feature *integration* branch (often for a separate feature-team). This has limited your ability to perceive other forms and uses in any other way, and in the rare cases when you do (short-lived, <=1 day) you perceive those as exceptions or outliers. Rarely (if ever) were you called in to look at the branching usage of a project with more than one codeline that was doing it effectively/right, and not causing unnecessary integration delays. So you see the problem-space as very much black-and-white.

    The problem is NOT the branch. It is not the branch that limits or discourages people from working and committing in smaller and more frequent (micro-sized) chunks. Whether I do (new) development changes multiple times a day is not made any harder or less likely by integrating them from a workspace on its own stream/branch versus the main-trunk. (In fact, making many small micro-branches [i.e. one per change-task] is more overhead, to create and retire each micro-branch, than to integrate them all from the same stream in short+fast+frequent increments.)

    It is not branching that is antithetical to CI; it is failing to integrate frequently that is antithetical to CI. The fear or risk of merging functional subsets of a feature (in working+tested chunks) has nothing to do with whether a branch is created or not. It has *everything* to do with concern over delivering partial/incomplete features (even if what is complete so far is fully functional). That has nothing to do with branches or branching. It has everything to do with the business decision and funding model regarding fixed vs. flexible scope. This is what motivates branching/toggling at install-time or run-time (in the code) instead of earlier binding-time (in the VCS).

    Stop blaming the branch (or the vendor, e.g. IBM/Rational). The problem is the mindset around the incremental integration of features being all-or-nothing when it shouldn’t have to be. Whether you solve it with an additional VCS branch, or an additional code toggle, you are still adding substantial complexity to the integration-structure and/or code-structure that makes it less “clean” (and less well “factored”) and introduces more technical debt.

    I would go so far as to say that 80% of the time, the right answer to the question of branch-or-toggle is *neither*. Just integrate very frequently and using TDD/BDD and refactoring (and if you use toggles, don’t exempt them from refactoring).

    So the correct advice here is NOT “don’t branch X 3”. The correct advice is the same as Dijkstra’s laws of premature optimization:

    1. Don’t do it!
    2. Don’t do it (yet!)

    This is true for BOTH branching *and* toggling (and even parallelization of algorithms). Instead, defer the irreversible decision to the last responsible moment (when the cost of delay crosses the critical threshold). Sometimes that means making the decision reversible somehow (by adding another level of indirection, *possibly* at a later binding-time). But until then, keep it small, incremental, simple, sequential, and integrated as frequently as possible.

    The other thing to remember about frequent feedback, is that it applies at ALL levels of scale. Which is why we have multiple feedback loops at multiple levels. Sometimes adding a branch (or a toggle) IF DONE THE RIGHT WAY, does not delay feedback at the current-level but adds an additional (outer-nested) feedback-loop at the next-level-up (where you might not have had one in place already). This is where you might add a branch while still integrating just as frequently (rather than instead of).

    • davef says:

      Hmm, not sure how to respond to that. I start out disagreeing with what you say, I still think that I am right that CI and FB are antithetical, one is about exposing change, one is about hiding it. I end up agreeing with you, or you and Dijkstra, “Don’t do it! Don’t do it yet!”.

      I talk about toggling in another post, and come to the same conclusions that you mention here, it is a practice of last, or at least late, resort.

      There is good data that FB doesn’t work as well as CI, but you are right that what that data says is “failing to integrate frequently” is the real problem. But software development is complex and is a human discipline (or sometimes a human ill-discipline ;-)). As such there are complex effects, there are human behaviours that need to be taken into account. If we “integrate frequently” from branches, there is an overhead, we have more steps to take. They may be small simple steps, but there are more. As humans, we tend to be lazy. If there is more work, we will avoid it one way or another. What I see, and read, and hear reported back from friends and colleagues, is that what most teams end up doing is delaying the merge. So the (small) extra cost of branching applies a small pressure to delay the merge so works against your “frequent integration”.

      CI is a strategy designed to maximise that integration. Again, any impediments to the process of committing, and evaluating, change will tend to make us integrate less frequently. So, over the years I have become obsessed with the efficiency of this part of the process. I want the minimum number of steps to commit, I want my builds to be as fast as I can achieve, my tests to be fast, the feedback from the CI to be efficient (I prefer Team Displays to email for example). I want to optimise EVERYTHING to reduce the cost of a commit and so reduce the barrier to frequent, small changes. For me that is what CI is really all about!

      If your branches live too long, you are not getting feedback regularly enough (“failing to integrate frequently”). If you are integrating frequently enough, the extra steps associated with a branch seem superfluous to me and so are waste.

      • The statement that “a branch is, by design, intended to hide change from other developers” is FALSE. Isolating work is not the same thing as hiding it. Branching is not the same as information-hiding, it is a form of isolating/separating concerns, not hiding them. (Just like “information hiding” is a form of encapsulation, but not all encapsulation is about information hiding.)

        Separation of concerns and hiding concerns are different things. We usually want to expose the separation (not hide it). We aren’t minimizing change (nor its frequency); we are instead striving to minimize the (economic) *impact* of change (and change-frequency), using forms of isolation (encapsulation) to achieve separation of concerns. A branch encapsulates one or more changes by isolating them in a separate version-container. It does not hide them, nor does it prevent them from being merged frequently or easily.

        Also, the statement “If we ‘integrate frequently’ from branches, there is an overhead, we have more steps to take” is FALSE, and is predicated on a false assumption.

        There are no more steps to take to integrate from your workspace to main-trunk, if you are on a separate development branch or not.

        What you are hearing reported from teams, colleagues, etc. is being filtered through your incorrect assumptions/thinking, and is about a different kind of branch (one for integrating, not development). Delaying the merge in this case is not due to extra steps in the merge-commit (there are none). It is due to having two levels of integration (first to the feature-branch, then to the main-trunk). That is a legitimate delay, and also a legitimate additional merge-commit.

        Be careful about optimizing commits/commit-steps as if they were synonymous with optimizing merging/integration-steps. They are not the same thing. Not every commit is a merge/integration. (And not every commit represents a working/correct state, much less an integrated one).

        You keep shoving too many assumptions into the same statement. Commit does not necessarily mean merge. Isolating does not imply hiding.

  13. Alper says:

    Github’s code review functionality is pretty terrible anyway. Somebody who builds a nice way for a team to retroactively code review work and maintain review coverage could win pretty big in this space.

  14. Thanks Dave! Glad you’re starting to absorb at least part of it. Let’s address those (before I reply to your toggling post – which shares the same thinking-fallacies).

    You write: “There is good data that FB doesn’t work as well as CI …”

    I’ve seen the data you refer to, from Paul H., on your CD blog (with Jez), and even later material from Nicole Forsgren. The problem is the data is no good – it shares the same wrong assumption (which is the assumption that the lifetime of the branch is closely correlated to the frequency of integration to main-trunk). This assumption is wrong, and the data used to infer it is in fact incorrect.

    A. The data on this subject that fed into the last two years’ State of DevOps reports makes the same wrong assumption. It doesn’t include any data that looks at the actual frequency of commits-to-trunk from the corresponding development activity (regardless of whether the activity was done on a separate branch).

    It assumes that the duration of the branch is the same as the time between when the work started and when it was merged/integrated. This is a false assumption. If work is integrated more than once between those two times, the data that was collected isn’t seeing that. It ALSO isn’t seeing when work is integrated infrequently WITHOUT having a corresponding branch.

    It only thinks work is being committed frequently (or not) based on the branch duration, and not on the number of integration/commits between the activity start+end (even if it is at the granularity of a “story” rather than a feature, and especially if the story itself is broken up into smaller dev-tasks that are each integrated).

    Pretty much all the data you’re referring to suffers from this problem. Which is exactly why your next statement (“if your branches live too long you are not getting feedback regularly enough”) is utterly false/incorrect.

    The problem is neither the branch NOR the lifetime of the branch (at least not when it is a *development*-branch). The problem is failure to break-up the work into smaller chunks that get integrated more frequently. This is *regardless* of whether the development work is done on a separate branch or not!

    The branch doesn’t make people and teams worse at doing that. If anything, the branch gives a better feeling of a safety-net, making it safer to commit to the repository more frequently (which is only actually true for a centralized VCS, since a DVCS like Git already gives the developer private versioning in the developer’s repo).

    Branch length/duration simply DOES NOT automatically correlate to merge-commit frequency. All of the data that you think suggests otherwise implicitly assumes this but never actually demonstrates/confirms it. In order to do that, the data would have to be able to look at the frequency of commits-to-trunk on behalf of the feature, regardless of whether that work happened on one (or more) branches, as well as whether or not it was distributed across multiple stories (and their (sub)tasks) on behalf of the feature.

    Your next problem is thinking that minimizing the number of steps to commit is somehow equivalent to minimizing the time+effort to integrate to main-trunk. You need to look at the execution time+delays associated with committing *private* versions (whether or not they are on a branch, like in a developer’s Git repo), as well as the time+steps to sync and/or rebase your workspace before you commit to the central repo (on any branch, much less trunk).

    The key things you are missing (and making implicit assumptions about) are *timing* and *usage*. Not all feature-branches are created equal, and you (and Jez, and Martin, etc.) have been assuming otherwise. There is a difference between a branch created for working/development (e.g., a new task, change, story) versus a branch created for the purpose of integrating/stabilizing shared work. The latter is a “codeline”, a form of “integration branch.”

    The original complaint with the feature branches Martin and others had seen was the case of a feature *integration* branch. A single branch was being used to integrate/stabilize changes from multiple developers toward a single feature (regardless of whether they did their work directly on the feature-branch, or worked on a separate short-lived story/task-branch and then integrated to the feature branch).

    This is VERY different from a development/working branch being used for a task, story, or feature. In the case of the story or feature, work still needs to be broken-down (and committed/merged) in smaller micro-sized chunks (and TDD/BDD are among the best known ways of doing this).

    Creating an additional *integration* branch is what adds another level of integration-indirection between the development-change and the main-trunk. This is NOT AT ALL the case with a development/working branch. And the duration of the activity for implementing the entire story (or small feature) *will* correspond to the duration of a corresponding working branch (IF one is even created) but it does NOT determine if it is done in an incremental manner with frequent commits (nor does it make it more difficult, nor any less safe).

    Lastly, you are also assuming too much is being “isolated” with a branch. The branch isolates work, but not necessarily people (or feedback). The way you end-up using the branch *might* do that, if you are not intentional or deliberate about it. That is not a matter of luck, but of intention and discipline (just like TDD, refactoring, and “simple code”).

    • davef says:

      I think you keep saying the same thing! It is not that I don’t understand, it is that I don’t agree. 😉

      As I have said several times, and you have ignored, there are ways of working with branches where CI is possible, but when you do that you incur the overheads of branching for no real value. If you are merging a development branch to master multiple times per day, why bother with the development branch, work on master locally and push regularly from there. It *is* a form of local branch. I think that the thing that we can agree on is that frequent integration of your work with everyone else’s is the key enabling step to CI. After that the mechanisms matter a bit less, but from my perspective I recommend, for previously mentioned reasons, that you work to eliminate all possible waste from the process to make it as low-impact, and simple, as possible.
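
      That “work on master locally and push regularly” loop can be sketched like this (the central repo and all names are illustrative):

```shell
# Working on master locally: the local clone is the only "branch".
# Central repo location and names are illustrative.
work=$(mktemp -d)
git init -q --bare -b master "$work/central.git"
git clone -q "$work/central.git" "$work/dev" 2>/dev/null
cd "$work/dev"
git config user.email dev@example.com
git config user.name Dev
git symbolic-ref HEAD refs/heads/master   # be explicit about the branch name

echo step1 > app.txt
git add app.txt && git commit -q -m "tiny step 1"
git push -q -u origin master    # publish immediately; nothing to create or retire

echo step2 >> app.txt
git commit -q -am "tiny step 2"
git pull -q --rebase            # fold in everyone else's latest work
git push -q                     # and integrate again
```

      The whole integration cycle is commit, pull --rebase, push; there is no branch to create, track, or delete, which is the “minimum number of steps” point above.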

      • Hi Dave! Yes – I keep saying the same thing. I’ve been hoping you have a greater curiosity to learn and understand why your reasoning is wrong (based on false assumptions), than in maintaining a position (or at least learning and understanding why the assumptions being applied are false).

        You write: “If you are merging a development branch to master multiple times per day, why bother with the development branch, work on master locally and push regularly from there. ”

        Why indeed? For that matter, why bother with the overhead of having a local branch? Of course, when it is strictly local, when no trace of my in-progress effort is visible in the central repo, then it really and truly is hidden from the rest of the team!

        You also wrote (correctly): “CI is a publication based approach to development. It allows me to publish my ideas to the rest of my team and see the impact of them on others. It also allows the rest of my team to see, as it is evolving, the direction of my thinking.”

        There is much talk of feedback, and of the rest of the team seeing (and being aware) of the work that is taking place. And “publishing” your changes to the central repository very visibly provides “intention-revealing” transparency to the rest of the team!

        Here’s the thing: I get all of this when I publish my in-progress work to the central repo (using intention-revealing names/comment for each commit, tag or branch). This is true *regardless* of whether my last commit was to a separate branch or main-trunk.

        There is certainly additional (and valuable) feedback when I *impose* my changes on the rest of the team by overwriting the latest commit on the shared integration-branch. But we don’t want to do that when we’re not yet at the appropriate point in our red-green-refactor cycle (that would violate Beck’s rules of “simple code”).

        Do you truly want to “hide” your work from the rest of the team before it’s finished? Or do you just want to *isolate* your work (but still visibly) before you have confidence/trust that it won’t have a harmful impact?

        You already said we don’t want to hide work/change from the team. We want to isolate and contain any non-trivial harmful impact, while still making the work visible.

        But if your work is *only* visible in your local workspace and not visible in the central repo, then it is effectively hidden!!!

        The answer to why others want to “bother with a development branch” (even if they merge to master multiple times a day) is because it makes the work visible (in the central repo) and clearly reveals the intent (e.g., by naming it after the corresponding task, story, or even feature).

        This is the entire justification for the overhead of *creating* the development branch. Because it transparently *exposes* the work-in-progress (instead of hiding it in the local workspace).

        In fact, neglecting to do so *delays* the feedback of making that work visible, and delays others on the team noticing it when they view any recent changes to the repo structure or history.

        Creating the development branch does NOT hide the work, NOR delay feedback to the team. In fact it does the opposite! It visibly exposes the work-in-progress, clearly expresses its intent, while still isolating the *impact* of the change.

        The purpose of a branch is the opposite of hiding work or delaying feedback. It is to isolate the work to minimize the impact of in-progress change while making it visible in the central repository.

        From that point onward, there is no “extra work/steps” to commit the changes to the central-repo nor to merge them to main-trunk (any decent git-client, or vcs-client, takes care of that, not to mention any decent CI tool).

        There is no delayed feedback, no hidden work, caused by the branch itself, nor by creating it. All those opinions, experiences, reports to the contrary are based on FUD stemming from misinformation, misunderstanding, and false-assumptions.

        There is still the problem of adopting the necessary discipline to overcome ill-informed mindsets and fear/concern (both reasonable and unreasonable).

        But you have to fix those wrong assumptions, those gross over-simplifications, before you can understand the real issues and problems.

        – branches make work/intent *visible* (they don’t hide it)
        – branches *isolate work* (not feedback)
        – commits are feedback/publication, even when they aren’t merge-commits
        – branch length/lifetime does *not* correlate to reduced integration frequency
        – development (work-in-progress) branches are *very* different from *integration* branches
        – feature *integration* branches most definitely *do* delay integration (feature/story/task *development* branches do *not*)
        – delayed integration is usually due to fear of commitment (with or without a branch)

  15. Brice says:

    This article doesn’t reflect my personal experience with branches.

    Firstly, you present feature branching and CI as mutually exclusive. They’re not. You’re creating a false dichotomy.

    As other commenters have mentioned, the CI tool should be configured to run a merged version of a branch with master whenever a new commit is made to that branch. You get a new build with every commit, and the result reflects the state of the branch *merged with the trunk*, which invalidates both your major issues, namely “too late” and “too risky”.
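    For illustration, the merge-then-test job described here might look like the following sketch. A throwaway repository stands in for the real project; the branch name and the build/test commands are placeholders.

```shell
# Sketch of a per-commit CI job that tests the feature branch *merged with
# master*, rather than the branch in isolation. Throwaway repo; the branch
# name and the build/test commands are placeholders.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/repo"
cd "$tmp/repo"
git config user.email "ci@example.com"
git config user.name "CI"
git checkout -q -b master
echo "base" > app.txt
git add app.txt && git commit -q -m "base"

git checkout -q -b feature/login          # hypothetical feature branch
echo "feature work" > login.txt
git add login.txt && git commit -q -m "add login"

git checkout -q master                    # meanwhile, trunk moves on
echo "trunk change" > trunk.txt
git add trunk.txt && git commit -q -m "trunk change"

# What the CI job does on each branch commit: build the would-be merge.
git checkout -q -b ci-merge-check master
git merge -q --no-edit feature/login
# ./build.sh && ./test.sh                 # placeholder: test the merged state
```

    A green build then reflects the branch as it would be after merging with the trunk, which is the point being made about “too late” and “too risky”.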

    I’ve worked in environments where we deployed multiple new versions per day to production and used feature branches (in fact, working on trunk was impossible as it was a protected branch). This invalidates the claim you make that feature branching invariably slows development down.

    You might argue that we were carrying out extra work not needed and that we would have been going even faster by developing on trunk. I would disagree but that’s a longer discussion.

    Thirdly, branching carries semantic information that your process loses. Branches tell a story of the work in a way that a linear history does not. My biggest personal issue with trunk development is that this information is lost. Even on projects where I am the sole developer, I will branch and do a forced merge to show that the branch work is related and the commits should be considered together as a unit of work (even if the entire process has three commits and takes 20 minutes). This information is lost in a linear history, making it harder to understand changes made in the past.
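    A minimal sketch of that pattern: the “forced merge” is `git merge --no-ff`, which records a merge commit even when a fast-forward would be possible. The repository and branch names here are illustrative only.

```shell
# Sketch of preserving a branch's "story" via a no-fast-forward merge.
# Throwaway repo; the branch name is illustrative.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/repo"
cd "$tmp/repo"
git config user.email "dev@example.com"
git config user.name "Dev"
git checkout -q -b master
git commit -q --allow-empty -m "initial"

git checkout -q -b story/checkout-flow
git commit -q --allow-empty -m "checkout-flow: step 1"
git commit -q --allow-empty -m "checkout-flow: step 2"

git checkout -q master
git merge -q --no-ff --no-edit story/checkout-flow   # force a merge commit

git log --merges --oneline   # the merge commit groups the story's commits
```

    With a plain fast-forward merge the two commits would be indistinguishable from work done directly on master; `--no-ff` preserves the grouping described above.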

    Fundamentally, I think you mistake practices and principles.

    The practice of branching is neutral, and basically irrelevant to the underlying principle here, which is to keep WORK IN PROGRESS to a minimum, and behind which I stand for both informal/personal and formal reasons.

    Teams can have a shitload of WIP while developing on trunk by having quarterly or biannual release cycles (Been there, done that).

    Teams can have almost no WIP by using branching (Also been there, done that).

    You’re cargo-culting trunk development without understanding the underlying issue. (Or at least you seem to in your post. Having gone through “Continuous Delivery” I know your reasoning is in fact much more sophisticated on these issues.) The take-aways from the State of DevOps report are “Good leadership matters”, “Reduce WIP”, “Automate”, “Couple loosely”. NOT “Don’t branch”.

    The irony is I’m inclined to side with you, but I think the post lacks subtlety, is misleading and wrongly reasoned so I’m having to defend the opposite viewpoint!

    • davef says:

      I think that we will have to agree to differ. If you ask the people who coined the term “Continuous Integration” and established the practice in our industry, they agree with me. Appeals to authority aren’t sufficient though. I too have worked on projects with all of these approaches. One of the common factors that I have seen in the highest-performing teams is that they work on Trunk! The data from the industry backs my claim (see the link in the blog post).

      Yes you can do any of these approaches badly. That is not what I am describing. At the limit, working on Trunk is, in my experience, the least risky and most efficient. Data seems to back that claim.

      I don’t think that I “mistake practices and principles”. If you have drunk the Lean Kool-Aid, then limiting WIP *and* reducing waste are both important to an effective, efficient process. There are two things here, not one!

      We can “have almost no WIP while using branches”, but we can’t also, AT THE LIMIT, reduce waste while branching – branching needs more steps! If the branch is unnecessary, then those steps are waste. Even if the steps are simple and cheap, there are more of them. I am making the point that there are ways of working that make branches unnecessary.

      You said “the CI tool should be configured to run a merged version of a branch with master whenever a new commit is made to that branch”. Assuming that you mean run CI on the code in the branch, then, unless you are reckless, you are also running the build again when you merge to master? If so, that is waste, you are running the build and the tests twice instead of once, and so delaying feedback.

      • Hi again!

        “If you ask the people that coined the term “Continuous Integration” and established the practice in our industry, they agree with me.”

        I had those conversations with all those people back then (Martin even wrote the foreword for our book on SCM patterns), and with the Poppendiecks (who agreed the issue was un-synchronized work), and with Duvall and Glover. (I was also a reviewer for your book.)

        All of that was before Martin popularized “feature branch” with the wrong definition. Feature *integration* branches are usually a bad idea (there are some limited cases, when the biggest obstacle is concurrent-commit-contention during the integration+build time-period [when it can’t be more easily resolved by other means/HW]).

        But then the term grew into gross misuse, and wrongly came to include even *development* branches (for small, sprint-sized features) and then all branches (even when still integrating frequently). [Pretty ironic, considering the exact same thing happened to the term “Refactoring”, and Martin was the one who most famously published “Refactoring Malapropism”.]

        Martin even went on record in Adam Dymitruk’s (in)famous blog entry that he had previously been unaware of this other (*development*) usage of feature-branches. And he significantly updated his bliki entry at the time to clarify, as well as the perceived reasoning/intent for why those using the long-lived unintegrated feature-branch didn’t want to merge (it wasn’t because of the branch; it was the mindset of all-or-nothing merging, and the fear of incremental/partial implementation on the main-trunk).

        Even your and Jez’s early articles on the CD blog in those days were referring to the wrong (oversimplified and incorrect) definition, but still to the correct (ab)use-case.

        Then you and others (including Paul H.) started overgeneralizing the problem (and the causes), due primarily to a fundamental misperception of branches as operating at the physical level of a codebase (which is untrue) and as project-oriented constructs (which is often true).

        Some better commercial tools popularized the notion of a branch as a “stream” (rather than as a project or subproject), which is in fact the proper mindset for thinking and using branches, and is exactly aligned with lean thinking (and principles of product development flow) regarding flow and WIP.

        And throughout this blog-entry (in the post, and in the comments) you are not allowing yourself to break out of your existing mindset. You keep saying “you disagree” and then keep referring to the same false assumptions and false dichotomies, rather than allowing yourself to realize where the actual boundaries and limits are.

        And it’s all too easy to continue when you keep seeing branches ONLY in that limiting way, and dismiss the other as merely *possible* but lucky/risky/wasteful. In fact the opposite is true, and you are missing the majority of the purpose and utility of a branch (especially from a stream-oriented perspective instead of a project-oriented one).

        The harm is that your recommended alternative (feature-toggling) is also growing in misuse/abuse/overuse, to the point where, done improperly, it is causing a lot more overhead/waste/harm than the thing you are trying to steer people away from in the first place.

        Granted, showing them the right way to do it, in the right context, for the right reasons – is extremely helpful. But it goes both ways! You also have to acknowledge and show the right ways, context, and reasons for branching (as well as toggling), rather than further repeating/spreading/teaching wrong information.

        You need to talk about timing (specifically binding-time), and about delaying execution (rather than integration), and fight the real problem to get the right mindset.
