Is Continuous Delivery Riskier?

I read an article in CIO magazine today, “Three Views on Continuous Delivery”. Of course the idea of such an article is to present differing viewpoints, and I have no need, or even desire, for the world to see everything from my perspective.

However, the contrary view, expressed by Jonathan Mitchell, seemed to miss one of the most important reasons that we do Continuous Delivery. He assumes that CD means increased risk. Sorry Jonathan, but I think that this is a poorly informed misunderstanding of what CD is really about. From my perspective, certainly in the industries where I have worked of late, the reverse is true: CD is very focussed on reducing risk.

Here is my response to the article:

If Continuous Delivery “strikes fear into the heart of CIOs” they are missing some important points.

Continuous Delivery is the process that allows organisations to “maintain the reliability of the complex, kaleidoscope of interrelated software in their estate”, it doesn’t increase risk, it reduces it significantly. There is quite a lot of data to support this argument.

Amazon adopted a Continuous Deployment strategy a few years ago; they currently release into production once every 11.6 seconds. Since adopting this approach they have seen a 75% reduction in outages triggered by deployment and a 90% reduction in outage minutes. (Continuous Deployment is subtly different to Continuous Delivery in that releases are automatically pushed into production when all tests pass. In Continuous Delivery, release is a human decision.)
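
To make that distinction concrete, here is a minimal sketch in Python – purely hypothetical code, not the API of any real pipeline tool – of the two kinds of release gate:

```python
# A hypothetical release gate, for illustration only - not any real tool's API.
def release_gate(tests_pass: bool, mode: str, human_approved: bool = False) -> str:
    if not tests_pass:
        # In both approaches a failing build never reaches production.
        return "rejected: failing tests"
    if mode == "continuous_deployment":
        # Continuous Deployment: every green build is pushed automatically.
        return "deployed to production automatically"
    # Continuous Delivery: every green build is *releasable*, but a human
    # makes the final release decision.
    return "deployed to production" if human_approved else "awaiting release decision"

print(release_gate(True, "continuous_deployment"))  # deployed to production automatically
print(release_gate(True, "continuous_delivery"))    # awaiting release decision
```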

I think that it is understandable that this idea makes Jonathan Mitchell nervous, it does seem counter-intuitive to release more frequently. However, what really happens when you do this is that it shines a light on all the places where your process and technology are weak, and helps you to strengthen them.

Continuous Delivery is a high-discipline approach. It requires more rigour not less. Continuous Delivery requires significant cultural changes within the development teams that adopt it and beyond. It is not a simple process to adopt, but the rewards are enormous. Continuous Delivery changes the economics of software development.

Continuous Delivery is the process that is chosen by some of the most successful companies in the world. This is not an accident. In my opinion Continuous Delivery finally delivers on the promise of software development. What businesses want from software is to quickly and efficiently put new ideas before their users. Continuous Delivery is built on that premise.


New Job, New Business

I have very recently given up my job at KCG (formerly Getco) Ltd. KCG have been a very good employer and I thank them for the opportunities that they gave me.

The reason that I left though is, I hope, understandable. I have finally, after the long-time nagging of my friends and relatives, decided to try independence. This is a very exciting time for me and I thank my friends and relatives for giving me the push that I needed ;-)

I have set up my own consulting company, called “Continuous Delivery Ltd.” – what else? I intend to offer advice to companies travelling down, or embarking on, the complex road to Continuous Delivery. I also plan to work on some software that I think is missing from the Continuous Delivery tool-set – I have to feed my coding habit somehow (stay tuned).

I hope that this will, if anything, give me a bit more time for writing blog entries here. I have a LOT to say that I haven’t got around to yet.

If you are curious, you can read more about my new venture at my company website http://www.continuous-delivery.co.uk/

Naturally, if you feel that my services can be of any help, please get in touch.

Cargo-Cult DevOps

My next blog post in the XebiaLabs “CD Master Series” is now available.

DevOps is a very successful meme in our industry. Most organisations these days seem to be saying that they aspire to it, though they don’t necessarily know what it is.

I confess that I have a slight problem with DevOps. Don’t get me wrong, DevOps and Continuous Delivery share some fundamental values. I see the people that promote DevOps as allies in the cause of making software development better. We are on the same team.

At the simplest level, I don’t like the name ‘DevOps’. It implies that fixing that one problem – the traditional barrier between Dev and Ops – is enough to achieve software nirvana. Fixing the relationship between Dev and Ops is not a silver bullet…

Go to the XebiaLabs website to read more…

Posted in Continuous Delivery, DevOps, External Blog Post

The Reactive Manifesto

Over the past couple of months I have been helping some friends to update the Reactive Manifesto.

There are several reasons why I agreed to help. First, I was asked to by my old friend Martin Thompson. The most important reason, though, is that I think this is an important idea.

The Reactive Manifesto starts from a simple thought. 21st Century problems are not well-served by 20th Century assumptions of software architecture. The game is moving on!

There are lots of reasons for this: the problems that we are asked to tackle are growing in scale, and sometimes in complexity too; the demands of our users are changing; the hardware environment has changed, and continues to change; and the rate of change in our best businesses is increasing.

Talk to any of my friends and they will, no doubt, tell you that I am a bore on the topic of software design – as well as several other subjects ;-)

I think that we, as an industry, don’t spend enough time thinking about the design of our solutions. Too often we start our projects by saying “I have my language installed, my web-server, I have Spring, Hibernate, Ruby-on-Rails, <insert your favourite framework here> and my database ready to go – now, what is the problem?”. We have become lazy and look for cookie-cutter solutions. We then proceed to write code in straight lines – poor abstraction, little modelling, rotten separation of concerns. Where is the fun in any of that?

I get genuine pleasure from creating solutions to problems, but I don’t get pleasure from just any old solution. Code that only does the job is not enough for me. I want to do the job with as few instructions, and as little duplication, as possible. I want the systems that I write to be efficient, readable, testable, flexible, easy to maintain, high-quality – dare I say elegant!

I have been lucky enough to work on a few systems that looked like this. Do you know what? When we achieve those things we are also more efficient and more cost-effective as developers. The software that we create is more efficient too: it runs faster, does more with fewer instructions and is more flexible. This is not over-engineering, this is professionalism.

Interestingly, there are sometimes similarities in the coarse-grained architecture of such systems, at least in the larger ones that I have worked on. They are loosely coupled, based on services that implement specific bounded contexts within the problem domain and that communicate with each other only via asynchronous messaging. These systems almost never look like the standard, out-of-the-box three-layer architecture built on top of a relational database, although pieces of them may use some of the standard technologies, including an RDBMS.
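
As a flavour of that shape, here is a minimal sketch using only Python’s standard library; the two services and the ‘OrderPlaced’ event are invented for illustration. The point is that the services share no state and interact only through messages:

```python
# A minimal sketch of two services in separate bounded contexts that
# communicate only via asynchronous messages. Everything here is invented
# for illustration.
import asyncio


async def order_service(outbox: asyncio.Queue) -> None:
    # Publishes an event; it neither knows nor cares who consumes it.
    await outbox.put({"event": "OrderPlaced", "order_id": 42})


async def billing_service(inbox: asyncio.Queue) -> None:
    # Reacts to events arriving in its inbox; no shared state, no direct calls.
    message = await inbox.get()
    if message["event"] == "OrderPlaced":
        print(f"billing order {message['order_id']}")


async def main() -> None:
    channel: asyncio.Queue = asyncio.Queue()
    await asyncio.gather(order_service(channel), billing_service(channel))


asyncio.run(main())
```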

The hardware environment in which our software executes is changing. The difference in the cost per byte between RAM and disk is shrinking. The capacity of RAM is increasing dramatically. Distributed programming is the norm now, and the relative performance of some of our hardware infrastructure has changed (e.g. the network is now faster than disk). Large-scale non-volatile RAM is on the horizon. All this means that the assumptions that underpinned the ‘standard approach’ have changed. The old assumptions match neither the hardware environment nor the problems that we are solving.

The Reactive Manifesto is about discarding some of those assumptions: about modelling the problems in our problem domain more effectively, and writing code that is easier to test, more efficient to run, easier to distribute and dramatically more flexible in use.

Take a look at the Reactive Manifesto. If you think we are right, please sign it – more than 8,000 people have done so already. If you think we are wrong, tell us where.

Most importantly of all, please don’t assume that the same old way of doing things is the best approach to every problem.

Posted in Reactive Systems, Software Architecture, Software Design

Strategies for effective Acceptance Testing – Part II

The second part of my blog post on effective Acceptance Testing is now available on the XebiaLabs website…

In my last blog post I described the characteristics of good Acceptance Tests and how I tend to use a Domain Specific Language (DSL) based approach to define a language in which to specify my Acceptance Test cases. This time I’d like to describe each of these desirable characteristics in a bit more detail and to reinforce why DSLs help.

To refresh your memory, here are the desirable characteristics:

  • Relevance - A good Acceptance test should assert behaviour from the perspective of some user of the System Under Test (SUT).
  • Reliability / Repeatability - The test should give consistent, repeatable results.
  • Isolation - The test should work in isolation and not depend, or be affected by, the results of other tests or other systems.
  • Ease of Development - We want to create lots of tests, so they should be as easy as possible to write.
  • Ease of Maintenance - When we change code that breaks tests, we want to home in on the problem and fix it quickly.
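
To give a flavour of the DSL-based approach in plain Python, here is a minimal sketch; the driver functions and the ordering domain are invented for illustration, and in real use the drivers would hide how the System Under Test is reached:

```python
# A minimal sketch of the DSL idea: the test case reads in domain terms,
# while (invented) driver functions hide the mechanics of reaching the SUT.

def given_a_registered_user(name):
    return {"user": name}

def when_they_place_an_order(context, item):
    context["order"] = {"item": item, "status": "accepted"}
    return context

def then_the_order_is_accepted(context):
    assert context["order"]["status"] == "accepted"

# The test case itself reads as a specification of behaviour:
context = given_a_registered_user("Alice")
context = when_they_place_an_order(context, "book")
then_the_order_is_accepted(context)
```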

Go to the XebiaLabs site to read more…


Sorry to any real readers…

A few weeks ago I switched on the feature in WordPress that allows users to sign up for notifications when I write a new post.

If you are a real person who signed up, I am very sorry but I am going to turn that feature off again.

Since turning it on I get constantly spammed with bogus sign-ups from clearly made-up email accounts. I am not sure what the intent of the attack is, or what these people hope to gain from it, but it is chewing up storage and cycles to no advantage.

If you want notifications you can follow me on Twitter at @davefarley77 and I will tweet when there is a new post.

Very sorry!

Posted in Site Admin

Strategies for Effective Acceptance Testing

My second guest blog post for XebiaLabs is the first of two parts. It is on the topic of “Strategies for Effective Acceptance Testing”

“Automated testing is at the heart of any good Continuous Delivery process and I see automated Acceptance Testing as being one of the foundations of any effective testing strategy.

In my book ‘Continuous Delivery’ we defined Acceptance Testing as asserting that the code ‘did what the business wanted it to do’. The distinction that we made is between that and unit-test-based TDD, which is really focused on asserting that the code does what the developer thinks it should. Both of these perspectives are important, and complement one another, but for the rest of this post I want to talk about Acceptance Testing.

Good Acceptance Tests are hard to get right, but there are a few tricks that make it easier…”

Read the rest at the XebiaLabs site…

Posted in Acceptance Testing, Continuous Delivery, External Blog Post, Uncategorized

What does ‘Good’ look like?

The nice folks at XebiaLabs have asked me to do a few guest blog posts on their site. My first post is called “What does ‘Good’ look like?”

“I think that we have a problem in the software development industry. A significant proportion, if not the majority, of practitioners have never seen, let alone worked on, an efficient project. This is a scary thought! If people don’t know what good looks like, how can we expect them to do well?”

(Read the rest at the XebiaLabs site…)

Posted in Agile Development, Continuous Delivery, External Blog Post

The first casualty of a software emergency…

A colleague and I were talking today about what happens when things go badly. I said that I thought that, in an ideal world, we should be working to make our deployment pipelines so efficient that, even in the event of an emergency, we should be able to make changes and have them fully validated before we release them. After all, the last time that you want to be increasing risk is when things are already bad! Jason made me laugh with “So rather like ‘The first casualty of war is truth’, ‘The first casualty of a Software-Emergency is validation’”.

This is spot on. What commonly happens is that when things go badly teams throw their protections out of the window and start changing things in production without testing them first – scary!

The ideal answer to this is to concentrate, before the emergency, on shortening your cycle-time. Your cycle-time is the time it takes you to release the smallest possible change to your code, following your normal release validations and procedures. Your cycle-time should be short enough that, when an emergency hits, you can still release your fixes having validated them fully.

Short cycle-times are a general good. They allow you to work in small batches, ensuring that each change is small enough that, when it does go wrong, it is simple to see where the problem lies and how to fix it. They provide rapid feedback for your ideas, allowing you to quickly assess their value. They stop you from creating snowflake servers and from deploying untested changes into production. They allow you to get valuable software into the hands of your users quickly and efficiently, and they allow you to have a single, validated route to production for all your changes.

Cycle-time is a wonderful tool to drive good behaviour.

People often ask me how to start with something as complex as the introduction of Continuous Delivery to an organisation. The answer is pretty simple: look to your cycle-time.

I think that Continuous Delivery is built on several important foundations: Version Control, Continuous Integration, Automation, Feedback, Collaboration and Cycle-time.

If you don’t use version control, shame on you; download a free VCS now and start immediately. If you are not yet doing Continuous Integration, come on, catch up – this has been a well-known, well-publicised, effective practice for more than a decade now. Cycle-time is different to these others though. The others are mechanisms; cycle-time is a metric by which you can measure your performance. As such it is a great tool to help you identify where to start.

A great tool for understanding your cycle-time is value-stream analysis (aka value-stream mapping).

Draw a diagram enumerating the steps, laid out on a timeline, that changes destined for production progress through, from idea to working, validated software in the hands of a user. You can do this at various levels of resolution, depending on how much effort you want to spend identifying and measuring the steps. You should try to capture any pauses between steps as well as the steps themselves.
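
Even a crude value-stream map, captured as data, can be revealing. Here is a minimal sketch in Python; the steps and numbers are invented for illustration:

```python
# A value-stream map captured as data: each entry is (step, minutes of
# active work, minutes spent waiting before the next step). The steps and
# numbers here are invented for illustration.
value_stream = [
    ("analysis",        240, 2880),  # the idea queues for two days
    ("development",     960,  480),
    ("manual testing", 1440, 2880),
    ("deployment",      120,    0),
]

active = sum(work for _, work, _ in value_stream)
waiting = sum(wait for _, _, wait in value_stream)

print(f"cycle-time: {(active + waiting) / 60:.0f} hours")
print(f"time spent waiting: {100 * waiting / (active + waiting):.0f}%")
```

In this invented example roughly 70% of the elapsed time is spent waiting between steps – exactly the kind of waste a value-stream map makes visible.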

Here are some examples of value-stream maps for some real projects that I have seen.

[Figure: value-stream map of a traditional release process]

[Figure: value-stream map of a Continuous Delivery release process]

However good, or bad, you are at software delivery this should highlight places that you could improve.

This is an excellent tool and I recommend that you try it for your development process; it may surprise you.

I like tools and processes that encourage ‘good behaviour’ – processes that make doing the ‘right thing’ easy. Cycle-time is one of those tools. If you work only to optimise cycle-time, it will have a beneficial effect on your whole process, even if that is all that you do. It will make you eliminate waste. It will make you collaborate more, because you will need to minimise hand-overs. It will make you reduce the amount of inventory/work-in-process and so make the process more efficient. It will make you automate more, because human beings are too slow to sustain fast cycle-times.

It is not a silver bullet. Software development is complex. But cycle-time is a really great metric, and working to optimise it is a great strategy towards a better process.

Posted in Continuous Delivery

The basics of TDD

The objectives of Test Driven Development and unit testing are generally misunderstood. The problem is the word ‘test’: it is much less about testing and much more about the specification of requirements, about showing your working – as in maths – and about the impact it has on design. TDD is much more important than mere testing. Robert C. Martin has a good analogy; he likens TDD to double-entry bookkeeping:

Software is a remarkably sensitive discipline. If you reach into a base of code and you change one bit you can crash the software. Go into the memory and twiddle one bit at random and very likely you will elicit some form of crash. Very, very few systems are that sensitive. You could go out to one of these bridges over here, start taking bolts out and they probably wouldn’t fall. I could pull out a gun and start shooting randomly and I probably wouldn’t kill too many people. I might wound a few but — you know — you get a bullet in the leg or a lung and you’d probably survive. People are resilient — they can survive the loss of a leg and so forth. Bridges are resilient — they survive the loss of components. But software isn’t resilient at all: one bit changes and — BANG! — it crashes. Very few disciplines are that sensitive.

But there is one other [discipline] that is, and that’s Accounting. The right mistake at exactly the right time on the right spreadsheet — that one-digit error can crash the company and send the offenders off to jail. How do accountants deal with that sensitivity? Well, they have disciplines. And one of the primary disciplines is dual-entry bookkeeping. Everything is said twice. Every transaction is entered two times — once on the credit side and once on the debit side. Those two transactions follow separate mathematical pathways until they end up at this wonderful subtraction on the balance sheet that has to yield to zero.

This is what test-driven development is: dual-entry bookkeeping. Everything is said twice — once on the test side and once on the production code side and everything runs in an execution that yields either a green bar or a red bar just like the zero on the balance sheet. It seems like that’s a good practice for us: to [acknowledge and] manage these sensitivities of our discipline…

-Robert C. Martin

The sensitivity of software is a good point to reflect upon; there is little in human experience that is so complex and yet so fragile. No matter how good you are as a developer, if you omit the tests – if you don’t show your working – your software will be worse than it could have been.

The double-entry bookkeeping analogy only holds up, though, if you do test-first development. If you write your test after the code, it is generally not sufficiently independent to provide a valid “separate path” check.

Test-first is the idea that you write the test before you write the code that is being tested. This seems like a bizarre idea to many people at first, but actually makes perfect sense.

If you write the test first and run it, you get to see it fail, so you are testing the test.

If you write the test first then you are expressing what you want of your software from the outside in. It leads you to design for behaviour and so you have less of a tendency to get lost in irrelevant technicalities.

This is a much more effective design approach than testing after you have written the code, and as a by-product it leads inevitably to software that is easy to test – you would have to be pretty dumb to write a test, before writing any code, for an idea that can’t be tested!
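
To make the rhythm concrete, here is a minimal sketch using Python’s unittest; the leap-year example is invented, not something from the discussion above:

```python
# A minimal sketch of the test-first rhythm; the leap-year example is
# invented for illustration.
import unittest


def is_leap_year(year: int) -> bool:
    # Step 2: written only after the test below was seen to fail.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)


class LeapYearTest(unittest.TestCase):
    # Step 1: write this first, run it, and watch it fail - that failure
    # is what "testing the test" means in practice.
    def test_leap_year_rules(self):
        self.assertTrue(is_leap_year(2024))
        self.assertFalse(is_leap_year(2023))
        self.assertTrue(is_leap_year(2000))   # centuries divisible by 400
        self.assertFalse(is_leap_year(1900))  # other centuries are not


if __name__ == "__main__":
    unittest.main()
```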

Finally, there is a virtuous circle here. Software is easy to test when it is modular, when dependencies are externalised and when there is a clear separation of concerns.
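
As a tiny illustration of externalising a dependency, here is a sketch in Python; the OrderProcessor and its clock are invented for this example:

```python
# A minimal sketch of an externalised dependency; OrderProcessor and its
# clock are invented for illustration.
from datetime import datetime
from typing import Callable


class OrderProcessor:
    def __init__(self, clock: Callable[[], datetime]):
        self._clock = clock  # injected, not hard-wired to the system time

    def stamp(self, order: dict) -> dict:
        return {**order, "processed_at": self._clock()}


# In production you would pass datetime.now; in a test you control time:
def fixed_clock() -> datetime:
    return datetime(2014, 1, 1, 12, 0)

assert OrderProcessor(fixed_clock).stamp({"id": 7})["processed_at"].year == 2014
```

Because the clock arrives from outside, the test controls time completely – no sleeping, no flakiness.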

Now, the software industry is famous for change, but if there is any idea that has remained constant for, literally, decades it is that quality software is modular, has well-defined dependencies and a clear separation of concerns – sound familiar? This has been how computer science has defined quality since before I started, and that was a very long time ago!

Using TDD as a practice makes you produce higher-quality software, not because it is well tested (though that is a nice by-product) but because it improves the quality of your designs. Want more detail? Try these:

  • http://c2.com/cgi/wiki?TestDrivenDevelopment
  • http://www.agiledata.org/essays/tdd.html
  • http://butunclebob.com/ArticleS.UncleBob.TheThreeRulesOfTdd
  • http://unitmm.sourceforge.net/fibonacci_example.shtml
  • http://clean-cpp.org/test-driven-development/
  • http://agile2007.agilealliance.org/downloads/presentations/TDD-Cpp-Agile2007-HandsOnTddInCpp.ppt_801.pdf
  • http://www.growing-object-oriented-software.com/

Posted in Agile Development, TDD