Pair Programming – The Most Extreme XP Practice?

I was an early adopter of XP. I read Kent’s book when it was first released in 1999 and, though I was skeptical of some of the ideas, others resonated very strongly with me.

I had been using something like Continuous Integration, though we didn’t call it that, on projects since the early 1990s. I was immediately sold when I saw JUnit for the first time: it was a much better approach to unit testing than any I had come across before. I was a bit skeptical about “Test First” – I didn’t really get that straight away (my bad!).

The one idea that seemed crazy to me at first was Pair Programming. I was not alone!

I had mostly worked in good, collaborative teams up to that point. We would often spend a lot of time working together on problems, but each of us had our own bits of work to do.

With hindsight I would characterise the practices of the best teams that I was working with then as “Pair Programming – Light”.

Then I joined ThoughtWorks, and some of the teams that I worked with were pretty committed to pair programming. Initially I was open-minded, though a bit skeptical. I was very experienced at “Pairing – Light”, but assumed that the only real value was at the times when I, or my pair, was stuck.

So I worked at pairing and gave it an honest try. To my astonishment, within a matter of days I became a believer.

Pair programming worked, not just on the odd occasion when I was stuck, but all the time. It made the output of my pair and me better than either of us would have accomplished alone, even when there was no point at which we were stuck!

I have recommended, and used, pair programming on every software development project that I have worked on since. Most organisations that I have seen try it wouldn’t willingly go back, but not all.

So what is the problem? Well, as usual, the problems are mostly cultural.

If you have grown up writing code on your own, it is uncomfortable sitting in close proximity to someone else and intellectually exposing yourself. Your first attempt at pair-programming takes a bit of courage.

If you are one of those archetypal developers who is barely social and has the communication skills of an accountant on Mogadon1, this can, no doubt, feel quite stressful. Fortunately, my experience of software developers is that, despite the best efforts of pop culture, there are fewer of these strange caricatures than we might expect.

If you are not really very good at software development, but talk a good game, pair-programming doesn’t leave you anywhere to hide.

If you are a music lover and prioritise listening to your music collection above creating better software, pairing is not for you.

There is one real concern that people unfamiliar with pair-programming raise: that it is wasteful.

It seems self-evident that two people working independently, in parallel, will do twice as much stuff as two people working together on one thing. That is obvious, right?

Well interestingly, while it may seem obvious, it is also wrong.

This may be right for simple, repetitive tasks, but that isn’t what software development is about. Software development is an intensely creative process; it is almost uniquely so. We are limited by very little in our ability to create the little virtual universes that our programs inhabit. So working in ways that maximise our creativity is an inherent part of high-quality software development. Most people that I know are much more creative when bouncing ideas off other people (pairing).

There have been several studies that show that pair-programming is more effective than you may expect.

In the ‘Further Reading’ section at the bottom of this post, I have added links to a couple of controlled experiments. In both cases, and in several others that I have read, the conclusions are roughly the same: a pair of programmers working together completes a task nearly, but not quite, twice as fast as a single programmer working alone.

“Ah ha!”, I hear you say, “nearly twice as fast isn’t good enough”. You would be right, were it not for the fact that the pair completes the task in only a little more than half the time of a single programmer, and does so with significantly higher quality and significantly fewer bugs.
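For the sake of argument, here is the back-of-the-envelope arithmetic. The numbers are purely illustrative; the roughly 15% overhead in total effort is of the order reported in the studies linked under ‘Further Reading’:

    # Illustrative arithmetic only - not data from any specific study.
    solo_elapsed_hours = 40.0              # one programmer working alone
    pair_effort_overhead = 1.15            # a pair spends roughly 15% more total programmer-hours

    pair_total_hours = solo_elapsed_hours * pair_effort_overhead
    pair_elapsed_hours = pair_total_hours / 2                           # two people working at once

    print(f"Solo elapsed time: {solo_elapsed_hours:.1f} hours")
    print(f"Pair elapsed time: {pair_elapsed_hours:.1f} hours")         # ~23 hours
    print(f"Speed-up: {solo_elapsed_hours / pair_elapsed_hours:.2f}x")  # ~1.74x: nearly, but not quite, 2x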

If output were the only interesting measure, I think that the case for pair-programming would already be made, but there is more…

Output isn’t the only interesting measure. I have worked on, and led, teams that adopted pair-programming and teams that didn’t. The teams that adopted pairing were remarkably more effective, as teams, than those that didn’t.

I know of no better way to improve the quality of a team. To grow and spread an effective development culture.

I know of no better way, when combined with high levels of automated testing, to improve the quality of the software that we create.

I know of no better way to introduce a new member of the team and get them up to speed, to coach a junior developer and help them gain expertise, or to introduce a developer to a new idea or a new area of the code-base.

Finally, the most fun that I have ever had as a software developer has been when working as part of a pair.

Successes are better when shared. Failures are less painful when you have someone to commiserate with.

Most important of all, the shared joy of discovery, that moment of insight when complexity falls away from the problem before you, is hard to beat.

If you have never tried pair programming, try it.

Give it a couple of weeks before assuming that you know enough to say it doesn’t work for you.

If your manager asks why you are wasting time, make an excuse. Tell them that you are stuck, or just “needed a bit of help with that new configuration”.

In the long run they will thank you, and if not, find someone more sympathetic to what software development is really about.

Pair programming works, and adds significant value to the organisations that practice it. Give it a try.

Dave Farley, 2016

1 A medical drug used as a heavy sedative.

Further Reading:

‘The Case for Collaborative Programming’, Nosek, 1998
http://bit.ly/1J6LrAP

‘Strengthening the Case for Pair Programming’, Williams, Kessler, Cunningham & Jeffries, 2000
http://bit.ly/1RUazPO

‘Pair Programming is about Business Continuity’ Dave Hounslow
http://bit.ly/1Pk0JFW


The Next Big Thing?

A few years ago I was asked to take part in a panel session at a conference. One of the questions from the audience was what we thought the “next big thing” might be. Most of the panel talked about software; after all, it was a software conference and we were all software people. I recall people talking about Functional Programming and the addition of Lambdas to Java, amongst other things.

This was not long after HP had announced that they had cracked the memristor, and my answer was “massive-scale, non-volatile RAM”.

If you are a programmer, as I am, then maybe that doesn’t sound as sexy as Functional Programming or Lambdas in Java, but let me make my case…

The relatively poor performance of memory has been a fundamental constraint on how we design systems pretty much from the beginning of the digital age.

A foundational component of our computer systems, ever since the secret computers at Bletchley Park that helped us to win the Second World War, has been DRAM. The ‘D’ in DRAM stands for Dynamic, which means that this kind of memory is leaky: it forgets unless it is dynamically refreshed.

The computers at Bletchley Park had a big bank of capacitors that represented the working memory of the system, and this was refreshed from paper tape. That has been pretty much the pattern of computing ever since: a relatively small working store of DRAM, backed by a bigger, cheaper store of more durable, non-volatile memory of some kind.

In addition to this division between volatile DRAM and non-volatile backing storage, there has also always been a big performance gap.

Processors are fast with small storage; DRAM is slower but stores more; Flash is VERY slow but stores lots; disk is slower still, but is really vast!
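To put some very rough, order-of-magnitude numbers on that gap (illustrative figures only, not measurements of any particular device):

    # Rough order-of-magnitude access latencies - illustrative, not measured.
    approx_latency_ns = {
        "Processor cache": 1,             # around a nanosecond
        "DRAM":            100,           # roughly 100 nanoseconds
        "NAND flash":      100_000,       # roughly 100 microseconds
        "Spinning disk":   10_000_000,    # roughly 10 milliseconds
    }

    dram = approx_latency_ns["DRAM"]
    for tier, ns in approx_latency_ns.items():
        print(f"{tier:15} ~{ns:>12,} ns  ({ns / dram:>10,.2f}x DRAM)")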

Now imagine that our wonderful colleagues in the hardware game came up with something that started to blur those divisions. What if we had vast memory that was fast and, crucially, non-volatile?

Pause for a moment and think about what that might mean for the way in which you would design your software. I think that this would be revolutionary. What if you could store all of your data in memory, and not bother with storing it on disk or SSD or SAN? Would the ideas of “Committing” or “Saving” still make sense? Well, maybe they would, but they would certainly be more abstract. In lots of problem domains I think that the idea of “Saving” would just vanish.
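As a thought experiment, here is a minimal sketch (entirely hypothetical, no real persistence technology assumed) of what a repository might shrink to if working memory simply never forgot. Mutating the in-memory model is the persistence; there is nothing left to ‘save’:

    # Hypothetical sketch: if RAM were vast and non-volatile, 'persistence'
    # would simply mean 'the object graph is still there after power-off'.

    class Order:
        def __init__(self, order_id):
            self.order_id = order_id
            self.lines = []

        def add_line(self, sku, quantity):
            self.lines.append((sku, quantity))   # the mutation *is* the durable change


    class InMemoryOrderRepository:
        """No save(), no flush(), no commit() - the map lives in non-volatile RAM."""

        def __init__(self):
            self._orders = {}

        def add(self, order):
            self._orders[order.order_id] = order

        def get(self, order_id):
            return self._orders[order_id]


    repo = InMemoryOrderRepository()
    repo.add(Order("A-123"))
    repo.get("A-123").add_line("WIDGET-9", 2)    # nothing to commit; the change just persists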

Modern DRAM requires that current is supplied to keep the capacitors, representing the bits in our programs and data, charged. So when you turn off your computer at night, it forgets everything. Modern consumer operating systems do clever things like implement complicated “sleep” modes, so that when you turn off, the in-memory state of the DRAM is written to disk or SSD. If we had our magic, massive, non-volatile storage, then we could just turn off the power and the state of our memory would remain intact. Operating systems could be simplified, at least in this respect, and implement a real “instant-on”.

What would our software systems look like if they were designed to run on a computer with this kind of memory? Maybe we would all end up creating those very desirable “software simulations of the problem domain” that we talk about in Domain-Driven Design? Maybe it would be simpler to avoid the leaky abstractions so common in the mismatch between what we want of our business logic and the realities of storing something in an RDBMS or column store? Or maybe we would all just partition off a section of our massive-scale non-volatile RAM, pretend it was a disk, and keep on building miserable 3-tier-architecture-based systems and running them wholly in memory?

I think that this is intriguing. I think that it could change the way that we think about software design for the better.

Why am I talking about this hypothetical future? Well, Intel and Micron have just announced 3D XPoint memory. This is nearly all that I have just described: it is 10 times denser than conventional memory (DRAM) and 1,000 times faster than NAND (Flash), and it has 1,000 times the endurance of NAND, which wears out.

This isn’t yet the DRAM replacement that I am talking about, because although this memory will be a lot denser than DRAM and a lot faster than NAND, it is still a lot slower than DRAM. But the gap is closing. If the marketing specs are to be believed, the new 3D XPoint memory is about 10 times slower than DRAM and has about half the endurance. In hardware performance terms, that is really not far off.

I think that massive scale non-volatile RAM of sufficient performance to replace DRAM is coming. It may well be a few years away yet, but when it arrives I think it will cause a revolution in software design. We will have a lot more flexibility about how we design things. We will have to decide explicitly about stuff that, over recent years, we have taken for granted and we will have a whole new set of lessons to learn.

Thought provoking, huh?


Is Continuous Delivery Riskier?

I read an article in CIO magazine today, “Three Views on Continuous Delivery”. Of course, the idea of such an article is to present differing viewpoints, and I have no need, or even desire, for the world to see everything from my perspective.

However, the contrary view, expressed by Jonathan Mitchell, seemed to miss one of the most important reasons that we do Continuous Delivery. He assumes that CD means increased risk. Sorry Jonathan, but I think that this is a poorly informed misunderstanding of what CD is really about. From my perspective, certainly in the industries where I have worked of late, the reverse is true: CD is very focussed on reducing risk.

Here is my response to the article:

If Continuous Delivery “strikes fear into the heart of CIOs”, they are missing some important points.

Continuous Delivery is the process that allows organisations to “maintain the reliability of the complex, kaleidoscope of interrelated software in their estate”; it doesn’t increase risk, it reduces it significantly. There is quite a lot of data to support this argument.

Amazon adopted a Continuous Deployment strategy a few years ago; they currently release into production once every 11.6 seconds. Since adopting this approach they have seen a 75% reduction in outages triggered by deployment and a 90% reduction in outage minutes. (Continuous Deployment is subtly different to Continuous Delivery in that releases are automatically pushed into production when all tests pass. In Continuous Delivery, release is a human decision.)

I think that it is understandable that this idea makes Jonathan Mitchell nervous; it does seem counter-intuitive to release more frequently. However, what really happens when you do this is that it shines a light on all the places where your process and technology are weak, and helps you to strengthen them.

Continuous Delivery is a high-discipline approach. It requires more rigour not less. Continuous Delivery requires significant cultural changes within the development teams that adopt it and beyond. It is not a simple process to adopt, but the rewards are enormous. Continuous Delivery changes the economics of software development.

Continuous Delivery is the process that is chosen by some of the most successful companies in the world. This is not an accident. In my opinion Continuous Delivery finally delivers on the promise of software development. What businesses want from software is to quickly and efficiently put new ideas before their users. Continuous Delivery is built on that premise.
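As an aside, the distinction between Continuous Delivery and Continuous Deployment mentioned above can be made concrete with a tiny pseudo-pipeline sketch (illustrative code only, not the API of any real CI/CD tool). The pipelines are identical up to the point of release; the only difference is whether that final step is automatic or a human decision:

    # Illustrative pseudo-pipeline - not any real CI/CD tool's API.

    def run_pipeline(change, automated_stages, release, human_approves=None):
        for stage in automated_stages:
            if not stage(change):
                return "rejected"                  # any failing stage stops the line
        if human_approves is None:                 # Continuous Deployment:
            return release(change)                 #   every passing change goes straight to production
        if human_approves(change):                 # Continuous Delivery:
            return release(change)                 #   every change is kept releasable,
        return "awaiting release decision"         #   but releasing is a business decision


    build_and_test = [lambda change: True, lambda change: True]   # stand-ins for build, unit and acceptance tests
    deploy = lambda change: "released to production"

    print(run_pipeline("change-42", build_and_test, deploy))                                  # Continuous Deployment
    print(run_pipeline("change-42", build_and_test, deploy, human_approves=lambda c: False))  # Continuous Delivery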


New Job, New Business

I have very recently given up my job at KCG (formerly Getco) Ltd. KCG have been a very good employer and I thank them for the opportunities that they gave me.

The reason that I left is, I hope, understandable. I have finally, after long nagging from my friends and relatives, decided to try independence. This is a very exciting time for me, and I thank my friends and relatives for giving me the push that I needed ;-)

I have set up my own consulting company, called ‘Continuous Delivery Ltd.’ – what else? I intend to offer advice to companies travelling down, or embarking on, the complex road to Continuous Delivery. I also plan to work on some software that I think is missing from the Continuous Delivery tool-set – I have to feed my coding habit somehow (stay tuned).

I hope that this will, if anything, give me a bit more time for writing blog-entries here. I have a LOT to say that I haven’t got around to yet.

If you are curious, you can read more about my new venture at my company website http://www.continuous-delivery.co.uk/

Naturally, if you feel that my services can be of any help, please get in touch.

 


Cargo-Cult DevOps

My next blog post in the XebiaLabs “CD Master Series” is now available.

DevOps is a very successful meme in our industry. Most organisations these days seem to be saying that they aspire to it, though they don’t necessarily know what it is.

I confess that I have a slight problem with DevOps. Don’t get me wrong: DevOps and Continuous Delivery share some fundamental values, and I see the people who promote DevOps as allies in the cause of making software development better. We are on the same team.

At the simplest level, I don’t like the name ‘DevOps’. It implies that fixing that one problem, the traditional barrier between Dev and Ops, is enough to achieve software nirvana. Fixing the relationship between Dev and Ops is not a silver bullet…

Go to the XebiaLabs website to read more…


The Reactive Manifesto

Over the past couple of months I have been helping some friends to update the Reactive Manifesto.

There are several reasons why I agreed to help. First, I was asked to by my old friend Martin Thompson. The most important reason, though, is that I think this is an important idea.

The Reactive Manifesto starts from a simple thought. 21st Century problems are not well-served by 20th Century assumptions of software architecture. The game is moving on!

There are lots of reasons for this: the problems that we are asked to tackle are growing in scale, and sometimes in complexity too; the demands of our users are changing; the hardware environment has changed, and continues to change; and the rate of change in our best businesses is increasing.

Talk to any of my friends and they will, no doubt, tell you that I am a bore on the topic of software design – as well as several other subjects ;-)

I think that we, as an industry, don’t spend enough time thinking about the design of our solutions. Too often we start our projects by saying “I have my language installed, my web-server, I have Spring, Hibernate, Ruby-on-Rails, <insert your favourite framework here> and my database ready to go – now, what is the problem?”. We have become lazy and look for cookie-cutter solutions. We then proceed to write code in straight lines – poor abstraction, little modelling, rotten separation of concerns. Where is the fun in any of that?

I get genuine pleasure from creating solutions to problems, but I don’t get pleasure from just any old solution. Code that only does the job is not enough for me. I want to do the job with as few instructions, and as little duplication, as possible. I want the systems that I write to be efficient, readable, testable, flexible, easy to maintain, high-quality, dare I say elegant!

I have been lucky enough to work on a few systems that looked like this. Do you know what? When we achieve those things we are also more efficient and more cost-effective as developers. The software that we create is more efficient too: it runs faster, does more with fewer instructions and is more flexible. This is not over-engineering; this is professionalism.

Interestingly, there are sometimes similarities in the coarse-grained architecture of such systems, at least in the larger ones that I have worked on. They are loosely coupled, based on services that implement specific bounded contexts within the problem domain and that communicate with each other only via asynchronous messaging. These systems almost never look like the standard, out-of-the-box three-layer architecture built on top of a relational database, although pieces of them may use some of the standard technologies, including an RDBMS.
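A minimal sketch of that shape (illustrative names only, no particular framework assumed): two services, each owning its own bounded context and its own state, talking to each other only through asynchronous messages:

    # Illustrative sketch of services communicating only via asynchronous messages.
    import asyncio


    async def order_service(outbound: asyncio.Queue):
        # Owns the 'ordering' bounded context; publishes events rather than calling other services.
        await outbound.put({"event": "OrderPlaced", "order_id": "A-123"})


    async def billing_service(inbound: asyncio.Queue):
        # Owns the 'billing' bounded context; reacts to events at its own pace.
        message = await inbound.get()
        print(f"billing: raising an invoice for order {message['order_id']}")


    async def main():
        channel = asyncio.Queue()          # stand-in for a real message broker
        await asyncio.gather(order_service(channel), billing_service(channel))


    asyncio.run(main())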

The hardware environment in which our software executes is changing. The difference in cost per byte between RAM and disk is shrinking. The capacity of RAM is increasing dramatically. Distributed programming is now the norm, and the relative performance of some of our hardware infrastructure has changed (e.g. the network is now faster than disk). Large-scale non-volatile RAM is on the horizon. All this means that the assumptions that underpinned the ‘standard approach’ have changed. The old assumptions match neither the hardware environment nor the problems that we are solving.

The Reactive Manifesto is about discarding some of those assumptions: about modelling the problems in our problem domain more effectively, and about writing code that is easier to test, more efficient to run, easier to distribute and dramatically more flexible in use.

Take a look at the Reactive Manifesto. If you think we are right, please sign it; more than 8,000 people have already done so. If you think we are wrong, tell us where.

Most importantly of all, please don’t assume that the same old way of doing things is the best approach to every problem.


Strategies for effective Acceptance Testing – Part II

The second part of my blog post on effective Acceptance Testing is now available on the XebiaLabs website…

In my last blog post I described the characteristics of good Acceptance Tests and how I tend to use a Domain Specific Language based approach to define a language in which to specify my Acceptance Test cases. This time I’d like to describe each of these desirable characteristics in a bit more detail and to reinforce why DSLs help.

To refresh your memory, here are the desirable characteristics:

  • Relevance - A good Acceptance test should assert behaviour from the perspective of some user of the System Under Test (SUT).
  • Reliability / Repeatability - The test should give consistent, repeatable results.
  • Isolation - The test should work in isolation and not depend, or be affected by, the results of other tests or other systems.
  • Ease of Development - We want to create lots of tests, so they should be as easy as possible to write.
  • Ease of Maintenance - When we change code that breaks tests, we want to home in on the problem and fix it quickly.
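As a reminder of what that DSL-based approach can look like in practice, here is a tiny, purely illustrative sketch (hypothetical test infrastructure, not the DSL described in the post). The test reads in the language of the problem domain and hides how the System Under Test is actually driven:

    # Hypothetical DSL-style acceptance test - illustrative only.

    class TradingUser:
        """A test 'actor' that speaks the language of the problem domain."""

        def __init__(self, name):
            self.name = name
            self._orders = {}

        def places_an_order(self, side, quantity, instrument, price):
            order_id = f"{self.name}-{len(self._orders) + 1}"
            # In a real test this would drive the SUT through one of its public channels (UI, API, ...).
            self._orders[order_id] = {"side": side, "quantity": quantity,
                                      "instrument": instrument, "price": price,
                                      "status": "placed"}
            return order_id

        def should_see_order(self, order_id, status):
            actual = self._orders[order_id]["status"]
            assert actual == status, f"expected '{status}', got '{actual}'"


    def test_a_user_can_place_an_order():
        dave = TradingUser("dave")
        order = dave.places_an_order("buy", 100, "MSFT", price="31.20")
        dave.should_see_order(order, status="placed")


    test_a_user_can_place_an_order()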

Go to the XebiaLabs site to read more…


Sorry to any real readers…

A few weeks ago I switched on the feature in WordPress that allows users to sign up for notifications when I write a new post.

If you are a real person who signed up, I am very sorry but I am going to turn that feature off again.

Since turning it on I have been constantly spammed with bogus sign-ups from clearly made-up email accounts. I am not sure what the intended attack is, or what these people hope to gain from it, but it is chewing up storage and cycles to no advantage.

If you want notifications you can follow me on Twitter at @davefarley77 and I will tweet when there is a new post.

Very sorry!


Strategies for Effective Acceptance Testing

My second guest blog post for XebiaLabs is the first of two parts, on the topic of “Strategies for Effective Acceptance Testing”.

“Automated testing is at the heart of any good Continuous Delivery process and I see automated Acceptance Testing as being one of the foundations of any effective testing strategy.

In my book ‘Continuous Delivery’ we defined Acceptance Testing as asserting that the code ‘did what the business wanted it to do’. The distinction that we made is between that and unit-test-based TDD, which is really focused on asserting that the code does what the developer thinks it should. Both of these perspectives are important, and complement one another, but for the rest of this post I want to talk about Acceptance Testing.

Good Acceptance Tests are hard to get right, but there are a few tricks that make it easier…”
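To illustrate the distinction drawn above with a deliberately trivial, hypothetical example (not code from the book or the post): the unit test asserts the developer’s intent for a small piece of code, while the acceptance test expresses, in domain language, what the business asked for. In a real suite the acceptance test would exercise the whole system rather than a single function:

    # Hypothetical example of the two perspectives - illustrative only.

    def apply_discount(total, percentage):
        return round(total * (1 - percentage / 100), 2)


    def test_apply_discount_rounds_to_two_decimal_places():
        # Unit test / TDD: does the code do what the *developer* intends?
        assert apply_discount(100.00, 12.5) == 87.50


    def test_loyal_customer_gets_ten_percent_off_their_order():
        # Acceptance test: does the system do what the *business* wanted?
        order_total = 200.00
        loyal_customer_discount = 10
        assert apply_discount(order_total, loyal_customer_discount) == 180.00


    test_apply_discount_rounds_to_two_decimal_places()
    test_loyal_customer_gets_ten_percent_off_their_order()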

Read the rest at the XebiaLabs site…


What does ‘Good’ look like?

The nice folks at XebiaLabs have asked me to do a few guest blog posts on their site. My first post is called “What does ‘Good’ look like?”

“I think that we have a problem in the software development industry. A significant proportion, if not the majority, of practitioners have never seen, let alone worked on, an efficient project. This is a scary thought! If people don’t know what good looks like, how can we expect them to do well?”

(Read the rest at the XebiaLabs site…)

 
