Mob Rule?

I was at a conference last year where I saw Woody Zuill talking about “Mob Programming”. You can see that talk here.

A very simple description of Mob programming, for those of you who don’t have time to watch Woody’s presentation, is “All the team working together on the same stuff”. Instead of dividing work up and allocating it to individuals or pairs, the whole team sits together and advises a “Driver” who has the only keyboard.



Interested but Skeptical

I thought it was an interesting, thought-provoking, even challenging idea. I confess that my first reaction was skepticism. To be honest, as a grumpy old man in my 50s with a scientific-rationalist world-view, my reaction to most things is skepticism. I think of it as a healthy first response 😉

I thought about my skeptical responses, “That can’t possibly be as efficient as the team dividing the work”, “Surely some people will just sit back and not really contribute”, “It is going to be dominated by a few forceful individuals”, “How could you design anything complex?”. Once I started voicing them, they seemed familiar. These are precisely the kind of responses that I get when I talk to teams about adopting Pair Programming.

Now, I am a passionate advocate for Pair Programming. I believe that it makes teams stronger, more efficient, more cohesive and allows them to produce significantly higher-quality work. So I was in a quandary.

On the one hand, “this can’t possibly be efficient, can it?”; on the other, “working collaboratively produces better results”. I am scientific rationalist enough to retain an open mind. I assumed that that would be it. I assumed that, given the nature of my work as a consultant, I would never experience Mob Programming personally and so would only ever see it from the perspective of a distant, outside observer. I was wrong.

Invited to join the Mob

I have some friends working in a start-up company, who have recently experimented with Mob Programming and have been practicing it for a few months now. I know them through my good friend, Dave Hounslow (@thinkfoo), who used to work there. The team very kindly invited both of us to spend the day with them and “join the mob”.

Before trying Mob Programming, the team was already fairly advanced in their use of agile development and Continuous Delivery. Dave had helped them to establish a strong culture of automated testing and an effective deployment pipeline. They were used to working collaboratively and doing pair programming, TDD and automated acceptance testing.

The team is fairly small: five developers. They saw a presentation on Mob Programming, decided to try it as an experiment for a single iteration, and have never gone back to their previous mode of work, pair programming.

Introductions, Process and People

Dave and I arrived and, after coffee and introductions, we attended the team stand-up meeting. The team are thinking of changing the stand-up because it no longer plays a role in establishing shared understanding; they do that all day, every day, working together in the Mob. However, one team member works remotely on customer service, and so the stand-up is an opportunity to catch up with her.

We then spent a bit of time in front of a whiteboard while the team described their system to us. Dave knew it a bit from when he used to work there; it was completely new to me. They described their architecture and then the story that we would be working on, and then we began.

The team have invested a bit of time in infrastructure to support their Mob Programming. They have a couple of nice big monitors, so that everyone can see what is going on as the code evolves, and a timer that reminds them when to swap who is driving.

The approach works by giving everyone a turn at the keyboard, including newbies like Dave and me. The timer is set for 15 minutes: each person gets 15 minutes at the keyboard, and then everyone shifts where they are sitting, with a new person taking over. It is not totally rigid; if you are at the keyboard and in the middle of typing a word, you can finish, but the team try to respect the time-slots and avoid one person hogging the keyboard.

We sat in a loose semi-circle with the person at the keyboard placed at the centre. We all then offered advice on what to do and where to go next. When the time was up everyone shuffled round one place, musical chairs without the music. The next person in sequence would take the keyboard and we would continue the conversation and design of the code. I think that the simple act of getting up and shifting seats helped keep the engagement going throughout the day. Tiny as it was, the act of moving sharpened the focus, just a little bit.

Sounds chaotic, doesn’t it? However, it wasn’t. There were occasions when the discussion went on long enough that there was no typing during the 15-minute period, but on the whole that wasn’t the case. Generally we made steady progress towards the aims of the story.

There was a lot of talking, but it wasn’t dominated by one or two people; everyone had their say. There was, inevitably, some variance. The level of experience of the team varied widely, both in terms of general software development experience and in terms of experience of this project, so at different points different people had more or less to contribute. I was a complete newbie to the system, so I asked more questions than the others and could mostly contribute on general issues of design and coding rather than specifics of the existing system. Others knew the system very well and so would contribute more when specific questions arose. This is one of the benefits!

Optimise for Thinking, Not Typing

The story we were working on was one of those stories that was exploring some new ideas in the system. It was gently challenging existing architectural assumptions. At least that was the way that I perceived it. This was a good thing. This wasn’t a trivial change, we had new territory to explore. We certainly spent more time talking than typing, but then I have never admired “lines of code” as a useful metric for software. I much prefer to optimise for thinking rather than typing. So while the amount of code that we produced during the day was relatively small, I think that it was higher quality than any of us would have produced alone.

The conversations were wide ranging, and often I felt that the presence of two outsiders, Dave and me, tilted the conversation more in the direction of reviewing some past decision than may otherwise have been the case. However, I also felt as though we both added some value to the discussions and the final output of the day, working code.

Assumptions Fulfilled?

So what about the assumptions that I started with?

“It can’t possibly be as efficient…”. Well, the team have tracked their velocity and have seen a significant increase in the number of stories that they produce. I know that velocity is really only a pseudo-measure of progress; who is to say that this week’s stories are the same size as last week’s? Nevertheless, the team subjectively feel that they are now moving significantly more quickly and producing higher-quality output, and the data that they have seems to back this up.

“People will sit back and not contribute”. There were some people that spoke less than others, but everyone was engaged throughout the day and everyone contributed to some degree.

“It will be dominated by forceful individuals”. There were a few “forceful personalities” present, Dave and me included. Nevertheless, the team seemed to me to listen carefully and consider ideas from everyone. It is certainly true that some of us talked more and were a bit more directive than others in the group, but I felt as though the outcome was a group-driven thing.

“How could you design anything complex?”. This is an old chestnut that people new to TDD and Pair Programming raise all the time. I am completely relaxed about the idea of incremental design. In fact I believe that it is the only way that we can solve really complex problems with high quality. I was pleased that the story that was chosen for the day had a bit of substance to it. It wasn’t a deeply challenging problem, but did stress some architectural assumptions. I am confident that we, the whole group, moved the thinking about the architecture of the system forwards during the course of our work that day.

Justified Skepticism?

So what do I think about Mob Programming now that I have tried it? Well I am still on the fence. I finished that day having had my beliefs challenged. This is certainly not a crazy, inappropriate waste of time by any stretch of the imagination.

This was a small team that was considerably disrupted by having two newbies, Dave and me, drop in for the day. I think that for any team this small, this would have been disruptive. I seriously doubt that, for most teams, the new people would have made much of a contribution. Dave and I are both experienced software developers, both used to working as consultants and adding value quickly. Nevertheless, I think that we added more value, more quickly, than would usually be the case. We made suggestions that the team tried immediately. We learned about the system more quickly, and to a level of detail, that surprised me. Again, in any other process, I think that we would have spent more time either doing introductory stuff or applying a narrower focus and working in a smaller area of the code – on our first day.

I was extremely impressed that this small team could accommodate the disruption of, nearly, doubling in size for a day and not only do useful work, but actually do so in a way that moved on their thinking.

At the human level this is a nice way to work. You have the conversations that are important. You share the context of every decision with the whole team all the time. You laugh and joke and share the successes as well as the disappointments.

These days I work as an independent consultant, advising people on how to create better software faster. I am intrigued at the prospect of using Mob Programming as a mechanism to introduce small teams to new ideas. I think it could be a wonderful instructional/learning tool. If anyone fancies carrying out that experiment you should give me a call 😉

So where are my reservations? Why do I feel like I am still “on the fence” rather than a true believer? Well, there was a bit of coaching by one of the participants to organise the others. I honestly don’t know if the process would have worked better or worse without him directing it quite so strongly, but I can imagine it breaking down if you had the wrong mix of people. That is a weak criticism; any process or approach can be disrupted by the wrong person or group of people. My point is that if you have a bad pair, you can move on and pair with someone else the next day. If you have a bad Mob, you have to be fairly strong-minded to face the problem and fix it. It will force you to have some difficult conversations. Perhaps no bad thing if you do, but I know of many teams that would shy away from such a social challenge.

There must be some natural limit. The idea of a Mob of 200 people working on the same thing is ludicrous, so where is the boundary? I am a believer in the effectiveness of small teams, so I wouldn’t have a team of 200 people in the first place. However, I wonder how well this would work with larger, though still small, teams. There were 6 of us in this Mob and it worked well. Would it have continued to do so with 8, 10 or 12? If I am honest I think that a team of 12 is too big anyway, but it is a valid question: what are the boundaries?

There are times when I want some time to think. If I am pairing and I hit such a point it is simple to say to my pair – “let’s take a break and come back to this in 30 minutes”. It is harder to make that call if everyone is working in a Mob.

On the whole I had a fascinating day. I would like to extend my thanks to the folks at Navetas for inviting me to join their Mob and experience this approach first hand. It was a lot of fun and I learned a lot.

Posted in Agile Development, Development Process, Mob Programming

Test *Driven* Development

Before Test Driven Development (TDD) the only thing that applied a pressure for high-quality in software development was the knowledge, experience and commitment of an individual software developer.

After TDD there was something else.

High quality in software is widely agreed to include the following properties.

High quality software:

    • Is modular.
    • Is loosely-coupled.
    • Has high cohesion.
    • Has a good separation of concerns.
    • Exhibits information hiding.

Test Driven Development is only partially about testing; of much greater importance is its impact on design.

Test Driven Development is development (design) driven by tests. In Test Driven Development we write the test before writing code to make the test pass. This is distinct from Unit Testing. TDD is much more than “Good unit testing”.

Writing the test first is important: it means that we always end up with “testable” code.
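As a minimal sketch of that rhythm (the example, `leap_year`, is my own invention, not from any particular project), the test is written first, and only then the code that makes it pass:

```python
import unittest

# Step 1: write the test first. It fails until leap_year exists and
# behaves correctly -- the failing test drives the design.
class LeapYearTest(unittest.TestCase):
    def test_years_divisible_by_four_are_leap(self):
        self.assertTrue(leap_year(2024))

    def test_centuries_are_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_every_fourth_century_is_leap(self):
        self.assertTrue(leap_year(2000))

# Step 2: write the simplest code that makes the tests pass.
def leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
```

Running the tests before step 2 exists fails immediately; that failure, arriving first, is the pressure that pushes the design towards small, decoupled, testable units.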

What makes code testable?

    • It is modular.
    • Loosely-coupled.
    • Has high cohesion.
    • Has a good separation of concerns.
    • Exhibits information hiding.

Precisely the same properties as those of high quality code. So with the introduction of TDD we now have something else, beyond the knowledge, experience and commitment of a programmer to push us in the direction of high-quality. Cool!

Like to know more?…

Posted in Agile Development, Software Design, TDD

Redgate Webinar Q&A

I recently took part in a Webinar for Database tools vendor Redgate. At the end of the Webinar we ran out of time for some of the questions that people had submitted, so this blog post provides my answers to those questions.

If you would like to see the Webinar you can find it here.


Q: “How can we overcome the “we’ve always done it that way” group-think mentality?”

Dave: For me the question is, “Is your process working now as well as you want it to?” If not, I think you should try something else.

I believe that we have found a better way to deliver valuable, high-quality, software to the organisations that employ us. The trouble is that it is a very different way of working. Mostly people are very wary of change, particularly in software development, where we have promised a lot before and not delivered.  

The only way I know to move a “group-think” position is gradually. You need to make a positive difference and win trust. It is about looking at real problems and solving them, often one at a time.

I believe that we, the software industry, are in a better place than we were, because we finally have the experience to know what works and what does not. The trick now is to migrate to the approaches that work. This takes learning, because the new approaches are very different from the old and challenge old assumptions. It is helpful to get some guidance: hire people that have some experience of this new way of working, read the literature, and carry out small, controlled experiments in areas of your process and business that will make a difference.

I often recommend to my clients that they perform a “Value-stream analysis” to figure out where they are efficient at software delivery and where they are not. This is often an enlightening exercise, allowing them to easily spot points that can be improved. Sometimes this is technology; more often it is about getting the right people to communicate effectively.

Once you have solved this problem, you will have improved the situation and gained a little “capital”, in the form of trust, that will allow you to challenge other sacred cows. This is a slow process, but for a pre-existing organization it is the only way that I know.

Q: “What advice would you have for gaining management buy-in for continuous delivery?”

Dave: Continuous Delivery is well-aligned with management ambitions. We optimise to deliver new ideas, in the form of working software, to our users as quickly and efficiently as possible. The data from companies that have adopted CD is compelling: it improves their efficiency and their bottom line. Many of the most effective software companies in the world employ CD.

The problem is not really the ideals, it is the practice: what it takes to get there. CD organizations look different to others. They tend to have many small teams instead of fewer large ones. Each team has a very high degree of autonomy; many don’t really have “management” in the traditional sense. So this can be very challenging to more traditional organizations.

The good news is that the way to adopt CD is by incremental steps. Each of these steps is valuable in its own right, and so each can be seen as a positive step. If you don’t use version control – start. If you don’t work iteratively, and regularly reflect on the outcomes of your work so that you can correct and improve – start that. If you don’t employ test automation, or deployment automation, or effective configuration management – start those things too. Each of these steps will bring a different benefit; over time they reinforce one another, so you get more than the sum of the parts.

There are several CD maturity models that can offer guidance on what to try next. There is one in the back of my book, and here is another that I have used:

Q: “We are very early in the stages of DB CD process changes, what are the most important issues to tackle early?”

Dave: That is quite a tough question to answer without the consultant’s lament, “It depends” 😉

I think that the fundamental idea that underpins CD is to take an experimental approach to everything: technology, process, organization, the lot. Try new things in small, controlled steps so that if things go wrong you can learn from them rather than regret them.

At a more technical level, I think that version-controlling pretty much everything, automated testing and continuous integration are cornerstones. If you are starting from scratch, it is much easier to start well with automated testing and continuous integration than to add them later. It is not impossible to add them later, it is just more difficult.

So be very strict with yourselves at first and try working so that you don’t make ANY change to a production system without some form of test. This will feel very hard at first if you are new to this, but it really is possible.

Q: “Are there any best practices you’d especially recommend we bear in mind?”

Dave: There is a lot to CD. I tend to take a very broad view of its scope and so it encompasses much of software development. At that level the best practices are grounded in Lean and Agile principles. Small, self-directed teams, working to very high quality standards, employing high levels of automation for tests and deployment are foundational. 

At the technical level there are lots, at all different levels of granularity. I guess the key idea from my book is the “Deployment Pipeline”: the idea of automating the route to production. A good mental model for this is to imagine that every change that is destined for production gives birth to a release candidate. The job of the deployment pipeline is to prove that a release candidate is NOT FIT to make it into production. If a test fails, we throw the release candidate away.
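That mental model can be sketched in a few lines. The stage names and flags here are invented for illustration; a real pipeline would run actual builds and test suites at each stage:

```python
def commit_stage(candidate):
    # Fast feedback: compilation and unit tests.
    return candidate["unit_tests_pass"]

def acceptance_stage(candidate):
    # Slower, whole-system automated acceptance tests.
    return candidate["acceptance_tests_pass"]

def evaluate(candidate, stages):
    """Each stage tries to prove the candidate is NOT fit for
    production; the first failure throws it away."""
    for stage in stages:
        if not stage(candidate):
            return "rejected by " + stage.__name__
    return "releasable"

pipeline = [commit_stage, acceptance_stage]

good = {"unit_tests_pass": True, "acceptance_tests_pass": True}
bad = {"unit_tests_pass": True, "acceptance_tests_pass": False}
```

The point of the shape is that a candidate is only releasable by surviving every attempt to reject it, not by anyone declaring it done.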

Q: “What are some common practical issues that people encounter during the implementation of CD?”

Dave: I have covered some of this in the preceding answers. Most of the problems are people problems. It is hard to break old habits. At the technical end, the most common problems that I have seen have been very slow, inefficient builds, poor, or non-existent, automated deployment systems and poor, or non-existent, automated tests. 

Q: “What would be the fastest way to actually perform CD?”

Dave: The simplest way is to start from scratch, with a blank sheet. It is easier to start a new project or new company this way than to migrate an existing one.

I think it helps to get help from people that have done this before. Hire people with these skills and learn from them. 

Q: “We deal with HIPAA-regulated data and I am personally unsure of letting this data out. How does CD typically get implemented in highly regulated environments? Are there particular challenges?”

Dave: The only challenge that I perceive is that regulators are often unfamiliar with these ideas, and so their assumptions about what good regulatory conformance looks like are coloured by what looks, to me, like an outdated model of development practice.

My experience of working in heavily regulated industries, mostly finance in different countries, is that the regulators quickly appreciate this stuff and they *love* it.  

CD gives almost ideal traceability. Because of our very rigorous approach to version control and the high levels of automation that we employ, we get FULL traceability of every change, almost as a side-effect. In the organizations where I have worked in the finance industry, we have been used as benchmarks for what good regulatory compliance looks like.

So the challenge is educating your regulators; once they get it, they will love it.

Q: “How should a data warehouse deal with a source database which is in a CD pipeline?”

Dave: As usual, it depends. The simplest approach is to treat it like any other part of the system and write tests to assert that changes work. Run these tests as part of your deployment pipeline.  

If that is not possible, you need to take a more distributed, microservice-style approach. Try to minimize the coupling between the Data Warehouse and the upstream data sources. Provide well-defined, general interfaces to import data and make sure that these are well tested.

Q: “How do you recommend we use CD to synchronize, deploy and verify complex projects with databases, Agent Jobs, SSIS packages and SSRS reports?”

Dave: I would automate the integration of new packages as part of my deployment pipeline. I would also look to create automated tests that verify each change, and run these as part of my pipeline.

Q: “How would you deal with multiple versions of a database (e.g. development, internal final test, and a version for the customer), and do you have any advice for the automatic build and deploy of a database?”

Dave: I recommend the use of the ideas in “Refactoring Databases” by Scott Ambler and Pramod Sadalage.

Q: “Do you have any tips for enabling rapid DB ‘resets’ during build/test? E.g. How to reset DB to known state before each test?”

Dave: A lot depends on the nature of the tests. For some kinds of test, low-level unit-like tests, it can be good to use transactional scope for the duration of the test. At the start of the test, open a transaction; do what you need for the test, including any assertions; at the end of the test, abort the transaction.

For higher-level tests I like to use functional isolation, where you use the natural functional semantics of your application to isolate one test from another. If you are testing Amazon, every test starts by creating a user account and a book; if you are testing eBay, every test starts by creating a user account and an auction…

You can see me describing this in more detail in this presentation – I am speaking more generally about testing strategies and not specifically about the DB, but I think that the approach is still valid.

Q: “I’m concerned about big table rebuilds not being spotted until upgrade night. Also obscure feature support like FILESTREAM. Do you have any tips for avoiding these kinds of last-minute surprises or dealing with a wide mix of systems?”

Dave: I tend to try to treat all changes the same. I don’t like surprises either, so I try to find a way to evaluate every change before it is released into production. So I would try to find a way to automate a test that would highlight my concerns, and I would run this test in an environment that was sufficiently close to my production environment to catch most failures that I would see there.

Q: “Do you have any advice for achieving zero down time upgrades and non breaking on-line database changes?”

Dave: I have seen two strategies work. They are not really exclusive of one another. 

1) The microservice approach: keep databases scoped to single applications and create software that is tolerant of a service being unavailable for a while. I have done some work on an architectural style called “Reactive Systems” which promotes such an approach.

2) Work in a way that every change to your database is additive. Never delete anything; only add new things, including schema changes and transactional data. So ban the use of UPDATE and DELETE 😉

Q: “Managing db downtime and replication during CD”

Dave: See my comments on the preceding question.

Q: “How do you craft data repair scripts that flow through various development environments?”

Dave: I generally encode any changes to my database as a delta. Deployment starts from a baseline database image, and from then on changes are added as deltas. Each copy of my database includes a table which records which delta version it is at. My automated deployment scripts will interrogate the DB to see which version it is at, look at the deltas to see which is the newest, and apply all of the deltas between those two numbers. This approach is described in more detail in Pramod and Scott’s book.

I think of the delta table as describing a “patch-level” for my DB. So two DBs at the same “patch-level” will be structurally identical, though they may contain different transactional data.
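A minimal sketch of that delta scheme. The deltas here are invented; in a real project they would be SQL scripts held in version control, applied by the deployment pipeline:

```python
import sqlite3

# Numbered deltas, applied strictly in order.
DELTAS = {
    1: "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)",
    2: "ALTER TABLE customers ADD COLUMN email TEXT",
    3: "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)",
}

def patch_level(db):
    # The patch_level table records every delta already applied.
    db.execute("CREATE TABLE IF NOT EXISTS patch_level (version INTEGER)")
    (level,) = db.execute("SELECT MAX(version) FROM patch_level").fetchone()
    return level or 0

def migrate(db):
    """Interrogate the DB for its patch-level, then apply every delta
    between that and the newest, recording each one as it goes."""
    level = patch_level(db)
    for version in sorted(DELTAS):
        if version > level:
            db.execute(DELTAS[version])
            db.execute("INSERT INTO patch_level VALUES (?)", (version,))
    db.commit()
    return patch_level(db)

db = sqlite3.connect(":memory:")
migrate(db)  # a fresh database: applies deltas 1..3
migrate(db)  # already at the newest patch-level: applies nothing
```

Because `migrate` only applies deltas above the recorded patch-level, the same script is safe to run against development, test and production databases, whatever level each is currently at.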

Q: “What are some of the community-supported open source C.D. applications that would work well for an enterprise org that currently doesn’t have C.D.?”

Dave: If you are going to take CD seriously, you are going to want to create a pipeline and so coordinate different levels of testing for a given release candidate. So build-management systems are a good starting point; Jenkins, TeamCity and Go from ThoughtWorks are effective tools in this area.

I think that the tools for automated testing of DBs are still relatively immature, most places that I have seen use the testing frameworks from application programming languages and grow their own tools and techniques from there.  

Redgate have tools for versioning DBs. I haven’t used them myself, but they have a good reputation. My own experience is that up to now I have used conventional version control systems, Subversion or Git, and stored scripts and code for my DB there.

Q: “Which tools make CD (in general, and for the database) easier?”

Dave: See above. 

Posted in Uncategorized

Pair Programming – The most Extreme XP Practice?

I was an early adopter of XP. I read Kent’s book when it was first released in 1999 and, though skeptical of some of the ideas, found that others resonated very strongly with me.

I had been using something like Continuous Integration, though we didn’t call it that, in projects from the early 1990s. I was immediately sold when I saw JUnit for the first time: it was a much better approach to unit testing than I had come across before. I was a bit skeptical about “Test First” – I didn’t really get that straight away (my bad!).

The one idea that seemed crazy to me at first was Pair Programming. I was not alone!

I had mostly worked in good, collaborative teams to that point. We would often spend a lot of time working together on problems, but each of us had our own bits of work to do.

With hindsight I would characterise the practices of the best teams that I was working with then as “Pair Programming – Light”.

Then I joined ThoughtWorks, and some of the teams that I worked with were pretty committed to pair programming. Initially I was open-minded, though a bit skeptical. I was very experienced at “Pairing – Light”, but assumed that the only real value was at the times when I, or my pair, was stuck.

So I worked at pairing and gave it an honest try. To my astonishment in a matter of days I became a believer.

Pair programming worked, and not just on the odd occasion when I was stuck, but all the time. It made the output of me and my pair better than either of us would have accomplished alone. Even when there was no point at which we were stuck!

Over the years I have recommended, and used, pair programming in every software development project that I have worked on since. Most organisations that I have seen try it wouldn’t willingly go back, but not all.

So what is the problem? Well as usual the problems are mostly cultural.

If you have grown up writing code on your own it is uncomfortable sitting in close proximity to someone else and intellectually exposing yourself. Your first attempt at pair-programming takes a bit of courage.

If you are one of those archetypal developers, who is barely social and has the communication skills of an accountant on Mogadon1 this can, no-doubt, feel quite stressful. Fortunately, my experience of software developers is that, despite the best efforts of pop-culture, there are fewer of these strange caricatures than we might expect.

If you are a not really very good at software development, but talk a good game, pair-programming doesn’t leave you anywhere to hide.

If you are a music lover and prioritise listening to your music collection above creating better software, pairing is not for you.

There is one real concern that people ignorant of pair-programming have. That is that it is wasteful.

It seems self-evident that two people working independently, in-parallel, will do twice as much stuff as two people working together on one thing. That is obvious, right?

Well interestingly, while it may seem obvious, it is also wrong.

This may be right for simple repetitive tasks, but that isn’t what software development is about. Software development is an intensively creative process. It is almost uniquely so. We are limited by very little in our ability to create the little virtual universes that our programs inhabit. So working in ways that maximise our creativity is an inherent part of high-quality software development. Most people that I know are much more creative when bouncing ideas off other people (pairing).

There have been several studies that show that pair-programming is more effective than you may expect.

In the ‘Further Reading’ section at the bottom of this post, I have added a couple of links to some controlled experiments. In both cases, and in several others that I have read, the conclusions are roughly the same: two programmers working together complete a task nearly twice as fast, but not quite twice as fast, as one programmer working alone.

“Ah Ha!”, I hear you say, “Nearly twice as fast isn’t good enough” and you would be right if it wasn’t for the fact that in nearly half the time it takes one programmer to complete a task, two programmers will complete that task and do so with significantly higher quality and with significantly fewer bugs.

If output was the only interesting measure, I think that the case for pair-programming is already made, but there is more…

Output isn’t the only interesting measure. I have worked on, and led, teams that adopted pair-programming and teams that didn’t. The teams that adopted pairing were remarkably more effective, as teams, than those that didn’t.

I know of no better way to improve the quality of a team. To grow and spread an effective development culture.

I know of no better way, when combined with high-levels of automated testing, to improve the quality of the software that we create.

I know of no better way to introduce a new member of the team and get them up to speed, or coach a junior developer and help them gain in expertise or introduce a developer to a new idea or a new area of the code-base.

Finally, the most fun that I have ever had as a software developer has been when working as part of a pair.

Successes are better when shared. Failures are less painful when you have someone to commiserate with.

Most important of all, the shared joy of discovery, when you have that moment of insight that makes complexity fall away from the problem before you, is hard to beat.

If you have never tried pair programming, try it.

Give it a couple of weeks before assuming that you know enough to say it doesn’t work for you.

If your manager asks why you are wasting time, make an excuse. Tell them that you are stuck, or just “needed a bit of help with that new configuration”.

In the long run they will thank you; if not, find someone more sympathetic to what software development is really about.

Pair programming works, and adds significant value to the organisations that practice it. Give it a try.

     Dave Farley 2016

1 A medical drug used as a heavy sedative.

Further Reading:

‘The Case for Collaborative Programming’ Nosek 1998

‘Strengthening The Case for Pair Programming’ Williams, Kessler, Cunningham & Jeffries

‘Pair Programming is about Business Continuity’ Dave Hounslow

Posted in Uncategorized | 6 Comments

The Next Big Thing?

A few years ago I was asked to take part in a panel session at a conference. One of the questions asked by the audience was what we thought the “next big thing might be”. Most of the panel talked about software, after all it was a software conference and we were all software people. I recall people talking about Functional Programming and the addition of Lambdas to Java amongst other things.

At the time this was not long after HP had announced that they had cracked the Memristor, and my answer was “Massive scale, non-volatile RAM”.

If you are a programmer, as I am, then maybe that doesn’t sound as sexy as Functional Programming or Lambdas in Java, but let me make my case…

The relatively poor performance of memory has been a fundamental constraint on how we design systems pretty much from the beginning of the digital age.

A foundational component of our computer systems, ever since the secret computers at Bletchley Park that helped to win the Second World War, has been dynamic memory. The ‘D’ in DRAM stands for Dynamic. What that means is that this kind of memory is leaky. It forgets unless it is dynamically refreshed.

The computers at Bletchley Park had a big bank of capacitors that represented the working memory of the system, and this was refreshed from paper-tape. That has been pretty much the pattern of computing ever since. We have had a relatively small working store of DRAM, backed by a bigger, cheaper store of more durable non-volatile memory of some kind.
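That refresh cycle is easy to picture with a toy simulation (entirely my own illustration, with a made-up leak probability; real DRAM works at the level of charge in rows of capacitors, not Python lists):

```python
import random

# Toy model (my own illustration): each DRAM cell is a leaky capacitor.
# Left alone, a charged cell may decay to 0; a refresh cycle reads and
# rewrites every cell before the charge is lost.

def tick(cells, refreshed, leak_probability=0.3):
    """Advance one time step. A refreshed row keeps its bits; an
    un-refreshed one may leak any 1 down to 0."""
    if refreshed:
        return list(cells)
    return [bit if random.random() > leak_probability else 0 for bit in cells]

random.seed(1)
memory = [1] * 8
for _ in range(10):
    memory = tick(memory, refreshed=False)  # stop refreshing for 10 ticks
print(memory)  # most of the ones have leaked away
```

Skip the refresh for a handful of ticks and the contents simply evaporate, which is why DRAM forgets the instant the power goes.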

In addition to this division between volatile DRAM and non-volatile backing storage, there has also always been a big performance gap.

Processors are fast with small storage, DRAM is slow but stores more, Flash is VERY slow but stores lots, Disk is even slower, but is really vast!

Now imagine that our wonderful colleagues in the hardware game came up with something that started to blur those divisions. What if we had vast memory that was fast and, crucially, non-volatile?

Pause for a moment, and think about what that might mean for the way in which you would design your software. I think that this would be revolutionary. What if you could store all of your data in memory, and not bother with storing it on disk or SSD or SAN. Would the ideas of “Committing” or “Saving” still make sense? Well, maybe they would, but they would certainly be more abstract. In lots of problem domains I think that the idea of “Saving” would just vanish.

Modern DRAM requires that current is supplied to keep the capacitors, representing the bits in our programs and data, charged. So when you turn off your computer at night it forgets everything. Modern consumer operating systems do clever things like implement complicated “sleep” modes so that when you turn off, the in-memory state of the DRAM is written to disk or SSD. If we had our magic, massive, non-volatile storage, then we could just turn off the power and the state of our memory would remain intact. Operating Systems could be simplified, at least in this respect, and implement a real “instant-on”.
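Here is a rough way to get a feel for that, as a sketch rather than anything definitive. It fakes non-volatile memory with a memory-mapped file (the file name and helper functions are my own invention): the program’s state outlives the process without any explicit “save” in its logic.

```python
import mmap
import os
import struct

PATH = "counter.bin"  # a hypothetical file standing in for persistent RAM
SIZE = 8              # one 64-bit unsigned integer

def open_persistent_counter(path=PATH):
    """Map a small file into memory; its bytes ARE our program state."""
    if not os.path.exists(path):
        with open(path, "wb") as f:
            f.write(b"\x00" * SIZE)
    f = open(path, "r+b")
    return f, mmap.mmap(f.fileno(), SIZE)

def increment(mem):
    # Read the current value straight out of the mapped bytes,
    # bump it, and write it back in place. No "save" step.
    (value,) = struct.unpack("<Q", mem[:SIZE])
    mem[:SIZE] = struct.pack("<Q", value + 1)
    return value + 1

f, mem = open_persistent_counter()
print(increment(mem))  # carries on counting from the previous run
mem.flush()  # with true non-volatile RAM even this step would vanish
mem.close()
f.close()
```

Run it several times: the counter continues from where the last run stopped. With genuinely non-volatile RAM the flush, the file, and the distinction between “in memory” and “stored” would all disappear.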

What would our software systems look like if they were designed to run on a computer with this kind of memory? Maybe we would all end up creating those very desirable “software simulations of the problem domain” that we talk about in Domain Driven Design? Maybe it would be simpler to avoid the leaky abstractions so common with mismatches between what we want of our business logic and the realities of storing something in a RDBMS or column store? Or maybe we would all just partition off a section of our massive-scale non-volatile RAM and pretend it was a disc and keep on building miserable 3-tier architecture based systems and running them wholly in-memory?

I think that this is intriguing. I think that it could change the way that we think about software design for the better.

Why am I talking about this hypothetical future? Well, Intel and Micron have just announced 3D XPoint memory. This is nearly all that I have just described. It is 10 times denser than conventional memory (DRAM) and 1000x faster than NAND (Flash). It also has 1000x the endurance of NAND, which wears out.

This isn’t yet the DRAM replacement that I am talking about. Although this memory will be a lot denser than DRAM and a lot faster than NAND, it is still a lot slower than DRAM, but the gap is closing. If the marketing specs are to be believed, the new 3D XPoint memory is about 10 times slower than DRAM and has about half the endurance. In hardware performance terms, that is really not far off.

I think that massive scale non-volatile RAM of sufficient performance to replace DRAM is coming. It may well be a few years away yet, but when it arrives I think it will cause a revolution in software design. We will have a lot more flexibility about how we design things. We will have to decide explicitly about stuff that, over recent years, we have taken for granted and we will have a whole new set of lessons to learn.

Thought provoking, huh?

Posted in Hardware, Software Architecture, Software Design | 8 Comments

Is Continuous Delivery Riskier?

I read an article in CIO magazine today, “Three Views on Continuous Delivery”. Of course the idea of such an article is to present differing viewpoints, and I have no need, or even desire, for the world to see everything from my perspective.

However, the contrary view, expressed by Jonathan Mitchell, seemed to miss one of the most important reasons that we do Continuous Delivery. He assumes that CD means increased risk. Sorry Jonathan, but I think that this is a poorly informed misunderstanding of what CD is really about. From my perspective, certainly in the industries where I have worked of late, the reverse is true. CD is very focussed on reducing risk.

Here is my response to the article:

If Continuous Delivery “strikes fear into the heart of CIOs” they are missing some important points.

Continuous Delivery is the process that allows organisations to “maintain the reliability of the complex, kaleidoscope of interrelated software in their estate”, it doesn’t increase risk, it reduces it significantly. There is quite a lot of data to support this argument.

Amazon adopted a Continuous Deployment strategy a few years ago; they currently release into production once every 11.6 seconds. Since adopting this approach they have seen a 75% reduction in outages triggered by deployment and a 90% reduction in outage minutes. (Continuous Deployment is subtly different to Continuous Delivery in that releases are automatically pushed into production when all tests pass. In Continuous Delivery, release is a human decision.)
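The distinction in that last parenthesis can be sketched in a few lines of code (the function names are my own invention, not from any real CI tool): the two pipelines are identical right up to the release decision.

```python
# The gate is the only difference: Continuous Deployment releases on
# green automatically; Continuous Delivery keeps every build releasable
# and leaves the release decision to a human.

def run_pipeline(build, tests):
    """Run every automated check against the build."""
    return all(test(build) for test in tests)

def continuous_deployment(build, tests, deploy):
    if run_pipeline(build, tests):
        deploy(build)          # straight to production, no human in the loop
        return "deployed"
    return "rejected"

def continuous_delivery(build, tests, deploy, human_approves):
    if not run_pipeline(build, tests):
        return "rejected"
    if human_approves(build):  # release remains a business decision
        deploy(build)
        return "deployed"
    return "releasable"        # proven deployable, deliberately held back
```

Either way, nothing that fails its tests gets anywhere near production; the discipline is the same, only the trigger differs.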

I think that it is understandable that this idea makes Jonathan Mitchell nervous, it does seem counter-intuitive to release more frequently. However, what really happens when you do this is that it shines a light on all the places where your process and technology are weak, and helps you to strengthen them.

Continuous Delivery is a high-discipline approach. It requires more rigour not less. Continuous Delivery requires significant cultural changes within the development teams that adopt it and beyond. It is not a simple process to adopt, but the rewards are enormous. Continuous Delivery changes the economics of software development.

Continuous Delivery is the process that is chosen by some of the most successful companies in the world. This is not an accident. In my opinion Continuous Delivery finally delivers on the promise of software development. What businesses want from software is to quickly and efficiently put new ideas before their users. Continuous Delivery is built on that premise.

Posted in Uncategorized | 2 Comments

New Job, New Business

I have very recently given up my job at KCG (formerly Getco) Ltd. KCG have been a very good employer and I thank them for the opportunities that they gave me.

The reason that I left, though, is, I hope, understandable. I have finally, after the long-time nagging of my friends and relatives, decided to try independence. This is a very exciting time for me and I thank my friends and relatives for giving me the push that I needed 😉

I have set up my own consulting company called ‘Continuous Delivery Ltd.’ – What else? I intend to offer advice to companies travelling down, or embarking on, the complex road to Continuous Delivery. I also plan to work on some software that I think is missing from the Continuous Delivery tool-set – I have to feed my coding-habit somehow (stay tuned).

I hope that this will, if anything, give me a bit more time for writing blog-entries here. I have a LOT to say that I haven’t got around to yet.

If you are curious, you can read more about my new venture at my company website.

Naturally, if you feel that my services can be of any help, please get in touch.


Posted in Uncategorized | Leave a comment

Cargo-Cult DevOps

My next blog post in the XebiaLabs “CD Master Series” is now available.

DevOps is a very successful meme in our industry. Most organisations these days seem to be saying that they aspire to it, though they don’t necessarily know what it is.

I confess that I have a slight problem with DevOps. Don’t get me wrong, DevOps and Continuous Delivery share some fundamental values. I see the people that promote DevOps as allies in the cause of making software development better. We are on the same team.

At its most simple level, I don’t like the name ‘DevOps’. It implies that fixing that one problem, the traditional barrier between Dev and Ops, is enough to achieve software nirvana. Fixing the relationship between Dev and Ops is not a silver bullet…

Go to the XebiaLabs website to read more…

Posted in Continuous Delivery, DevOps, External Blog Post | Leave a comment

The Reactive Manifesto

Over the past couple of months I have been helping out some friends to update the Reactive Manifesto.

There are several reasons why I agreed to help. First, I was asked to by my old friend Martin Thompson. The most important reason, though, is that I think this is an important idea.

The Reactive Manifesto starts from a simple thought. 21st Century problems are not well-served by 20th Century assumptions of software architecture. The game is moving on!

There are lots of reasons for this: the problems that we are asked to tackle are growing in scale, and sometimes in complexity too; the demands of our users are changing; the hardware environment has changed, and continues to change; the rate of change in our best businesses is increasing.

Talk to any of my friends and they will, no doubt, tell you that I am a bore on the topic of software design – as well as several other subjects 😉

I think that we, as an industry, don’t spend enough time thinking about the design of our solutions. Too often we start our projects by saying “I have my language installed, my web-server, I have Spring, Hibernate, Ruby-on-Rails, <insert your favourite framework here> and my database ready to go – now, what is the problem?”. We have become lazy and look for cookie-cutter solutions. We then proceed to write code in straight lines – poor abstraction, little modelling, rotten separation of concerns. Where is the fun in any of that?

I get genuine pleasure from creating solutions to problems, but I don’t get pleasure from just any old solution. Code that only does the job is not enough for me. I want to do the job with as few instructions as possible, as little duplication. I want the systems that I write to be efficient, readable, testable, flexible, easy to maintain, high-quality, dare I say elegant!

I have been lucky enough to work on a few systems that looked like this. Do you know what? When we achieve those things we are also more efficient and more cost-effective as developers. The software that we create is more efficient too; it runs faster, does more with fewer instructions and is more flexible. This is not over-engineering, this is professionalism.

Interestingly, there are sometimes similarities in the coarse-grained architecture of such systems, at least in the larger ones that I have worked on. They are loosely coupled, based on services that implement specific bounded-contexts within the problem domain, and communicate with each other only via asynchronous messaging. These systems almost never look like the standard, out-of-the-box three-layer architecture built on top of a relational database, although pieces of them may use some of the standard technologies, including an RDBMS.
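As a toy sketch of that shape (the service names and message fields are my own invention, not from any system I have worked on): two services, each owning its own bounded context, sharing no state and communicating only through asynchronous messages.

```python
import asyncio

# A toy sketch: two services, each owning its own bounded context,
# communicating only via asynchronous messages on in-process queues.
# In a real system the queues would be a message broker or event bus.

async def order_service(inbox, pricing_inbox):
    order = await inbox.get()
    # The order service knows nothing of pricing internals; it just
    # emits a message into the pricing context and carries on.
    await pricing_inbox.put({"order_id": order["id"], "qty": order["qty"]})

async def pricing_service(inbox, results):
    msg = await inbox.get()
    # Pricing applies its own rules (here, a made-up flat unit price).
    results.append({"order_id": msg["order_id"], "price": msg["qty"] * 10})

async def main():
    orders, pricing = asyncio.Queue(), asyncio.Queue()
    results = []
    await orders.put({"id": 1, "qty": 3})
    await asyncio.gather(
        order_service(orders, pricing),
        pricing_service(pricing, results),
    )
    return results

print(asyncio.run(main()))  # [{'order_id': 1, 'price': 30}]
```

Neither service blocks waiting on the other’s internals, and either could be replaced, scaled or relocated without the other noticing, which is the whole point of the style.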

The hardware environment in which our software executes is changing. The difference in the cost per byte between RAM and disk is reducing. The capacity of RAM is increasing dramatically. Distributed programming is the norm now, and the relative performance of some of our hardware infrastructure has changed (e.g. the network is now faster than disk). Large-scale non-volatile RAM is on the horizon. All this means that the assumptions that underpinned the ‘standard approach’ have changed. The old assumptions match neither the hardware environment nor the problems that we are solving.

The Reactive Manifesto is about discarding some of those assumptions. About more effectively modelling the problems in our problem domain, writing code that is easier to test, more efficient to run, easier to distribute and that is dramatically more flexible in use.

Take a look at the Reactive Manifesto. If you think we are right, please sign it; more than 8,000 other people have already done so. If you think we are wrong, tell us where.

Most importantly of all, please don’t assume that the same old way of doing things is the best approach to every problem.

Posted in Reactive Systems, Software Architecture, Software Design | Leave a comment

Strategies for effective Acceptance Testing – Part II

The second part of my blog post on effective Acceptance Testing is now available on the XebiaLabs website…

In my last blog post I described the characteristics of good Acceptance tests and how I tend to use a Domain Specific Language based approach to defining a language in which to specify my Acceptance Test cases. This time I’d like to describe each of these desirable characteristics in a bit more detail and to reinforce why DSLs help.

To refresh your memory, here are the desirable characteristics:

  • Relevance – A good Acceptance test should assert behaviour from the perspective of some user of the System Under Test (SUT).
  • Reliability / Repeatability – The test should give consistent, repeatable results.
  • Isolation – The test should work in isolation and not depend on, or be affected by, the results of other tests or other systems.
  • Ease of Development – We want to create lots of tests, so they should be as easy as possible to write.
  • Ease of Maintenance – When we change code that breaks tests, we want to home in on the problem and fix it quickly.
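As a rough illustration of how a DSL keeps those characteristics in view (the class and method names here are hypothetical, not from my actual test framework):

```python
class TradingDSL:
    """Drives the System Under Test through user-level vocabulary,
    keeping the 'how' of driving the system out of the test case."""

    def __init__(self, system):
        self.system = system

    def place_order(self, instrument, qty):
        # In a real DSL this would talk to the SUT over its public
        # interface (UI, API, messages); here it is a direct call.
        self.system.submit({"instrument": instrument, "qty": qty})

    def confirm_order_placed(self, instrument):
        assert any(o["instrument"] == instrument for o in self.system.orders), \
            f"no order found for {instrument}"

class FakeSystem:
    """Stand-in SUT so the sketch runs in isolation."""
    def __init__(self):
        self.orders = []
    def submit(self, order):
        self.orders.append(order)

def test_user_can_place_an_order():
    dsl = TradingDSL(FakeSystem())   # isolation: its own SUT instance
    dsl.place_order("MSFT", 100)     # relevance: expressed as user intent
    dsl.confirm_order_placed("MSFT")

test_user_can_place_an_order()
```

Because the test speaks only the user’s language, it stays relevant and easy to write, and when the SUT’s plumbing changes only the DSL implementation needs maintaining, not hundreds of test cases.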

Go to the XebiaLabs site to read more…

Posted in Uncategorized | 1 Comment