Welcome to My YouTube Channel

I have recently decided to launch a YouTube Channel to complement this Blog.

My aim for the channel is to provide some insight into the techniques and practice of Continuous Delivery, to explain some of my ideas on Software Engineering, and to explore how we can start to work more like engineers and gain advantage from doing so.

I also plan, from time to time, to indulge my general interest in software development, muse on what we can learn from science and how to apply that to software development, and finally to be a bit opinionated.

Take a look, and if there are any topics that you would particularly like me to cover, please let me know.

So far I have published the following episodes:


Q&A from GOTO Copenhagen Session on Reactive Systems

I recently spoke at GOTO Copenhagen on the topic of Reactive Systems.

I will post a link to the video of my talk here when it is published.

I didn’t have time to answer all of the questions, so here are my answers to the questions asked…

Q: A typical question that arises when thinking about eventual consistency is user responsiveness. Imagine a user updating some property of something in the UI; that user often wants to see the result of that update immediately. It’s a bad user experience to show them out-of-date information and a message saying “please refresh in some time to see the result of your action”. What is your view on that? How can we hide eventual consistency from the user, in the cases where we don’t want them to notice that the backend is eventually consistent?

Q: How do you deal with asynchrony in the UI where users want immediate feedback of their action?

A: Eventual consistency may, and maybe should, make you think about some different ways of showing the user what is going on, but there is also an assumption built into your question: that this is going to be slow.

Imagine what is going on in a sync call across a process boundary – a request-response call to a remote server, perhaps…

On the client:

  1. Our client code needs to encode the call somehow.
  2. The request needs to be sent across the wire.
  3. Our thread needs to be blocked while we wait for a response.

On the server:

  4. The server end will be triggered when the message arrives.
  5. We will need to translate the message into something useful.
  6. We call the server code with the message.
  7. The server code formulates a response.
  8. The server code will need to translate the response into something to send.
  9. And send it.

Back on the client:

  10. Our client code will need to receive the message.
  11. Translate it into something useful.
  12. Wake up the blocked client thread.
  13. Call the client code with the response.
  14. Process the response.

Now think about how that would be different for an async communication. We would remove the steps to block the thread on the client and reactivate it. Instead we could imagine that thread being continually busy, in the simplest case, looping around looking for new messages.
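To make that concrete, here is a minimal sketch of that simplest case in Java. Everything here (the queue type, the String messages, the handle method) is illustrative, not a description of any particular system:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// A single-threaded event loop: one thread pulls messages from an inbox
// and dispatches them, instead of one blocked thread per request.
public class EventLoop {
    private final BlockingQueue<String> inbox = new ArrayBlockingQueue<>(1024);

    // Called by producers; non-blocking, returns false when the inbox is full.
    public boolean offer(String message) {
        return inbox.offer(message);
    }

    public void run() throws InterruptedException {
        while (!Thread.currentThread().isInterrupted()) {
            String message = inbox.take(); // the only wait is for new work
            handle(message);
        }
    }

    private void handle(String message) {
        // Domain logic goes here. No locks are needed: this is the only
        // thread that ever touches the service's state.
        System.out.println("processed: " + message);
    }
}
```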

So on the path of any single communication there is less work to do, not more.

All of the highest-performance systems in the world that I am aware of are built on top of async messaging for this reason: telecoms, trading, real-time control systems.

So for the vast majority of interactions, a user of an async system will get better, rather than worse, responsiveness. In the tiny number of interactions where something is going wrong and responses are slowed, the user is seeing the truth: their invocation hasn’t finished yet, but they are not blocked from making progress elsewhere.

This does lead to a slightly different take on UI design, but it is, at least, only different rather than worse, and maybe a more accurate and more robust representation of the truth.

Q: Event-Driven or Message-Driven in 2020?
Q: What is the difference between events and messages?
Q: Should we be storing messages or events? Or both?

A: When we wrote the Reactive Manifesto https://www.reactivemanifesto.org/ we debated this a lot. “Event or Message”. We came down on the side of Message, because an “Event” gives the impression that there is not necessarily any consumer of the event, whereas “Message” carries a more obvious implication that something, somewhere, cares and is listening.

I think that this may be thought of as a bit like counting angels on the head of a pin, and personally I am fairly relaxed about the differences, but when you are trying to communicate ideas broadly it is sometimes useful to be a bit pedantic about the language that you choose.

Q: How do reactive frameworks relate to reactive systems?

A: I think that there is a relationship, but they are not the same. Reactive frameworks are largely focussed on stream processing at a programmatic level; Reactive Systems are more of an architectural stance.

I did say in my presentation that these ideas are kind of fractal though, so the async, event/message-based nature is common to both of these levels of granularity.

There are some details of the Reactive frameworks that I have seen that I dislike, as a matter of personal taste, as a programmer (Futures, for example). I see little advantage in trying to make async look like sync.

Taking a more architectural viewpoint and simply, at the level of a service, processing async messages as input and sending async messages as output results in simpler code. It may result in a little more typing, but the code will be simpler and so easier to follow.

The real advantage that I perceive in Reactive Systems is the separation of essential and accidental complexity. The code that I spend my day-to-day work on is inside the services. It is focused solely on the domain logic of my problem. Everything else is outside in the infrastructure. Reactive Programming probably offers the same effect if you think about it, but most of the code that I have seen doesn’t achieve that.

Q: Any good places to store events?

A: Ideally that is a problem for your infrastructure. Aeron, for example, has “Clustering” support which allows you to preserve, and distribute, the event-log. When configured this way, it will record, and play back, the stream of events for you.

But once you have the stream, you can do almost anything you like with it.

Q: When would a message-driven system be inappropriate, or just overkill?

A: I think that my answer to this splits into two.

On the one hand, this style of development is still reasonably niche. It has a long and extremely well-established history, but it is still most widely used on fairly unusual problems: trading, telecoms, real-time systems and so on. I believe that it is MUCH more widely applicable than that, but because of that history the tooling is fairly niche too. Akka is probably the most fully-fledged offering. It is certainly Reactive; personally I think that there are some aspects of the Actor model in Akka that seem more complex than is really required, but it is a great place to start, with lots of examples and successful industrial and commercial applications.

On the other hand, as I said, there is something fairly fundamental at the level of Computer Science here. Async message passing between isolated nodes is a bit like the quantum physics of computing: it is the fundamental reality on which everything else is built. This is how processors work internally. It is how Erlang works, it is how Transputers worked in the 1980s, and it is how most of the seriously high-performance trading systems in the world work at some level.

Performance isn’t the only criterion though. I value this approach primarily for the separation of accidental and essential complexity. Distributed and Concurrent systems are extremely complex things – Always! This approach allows me to build complex systems more simply than any other approach that I know.

So I think that it should be MUCH more broadly applicable, but the current level of tooling and support means that I probably would choose to use it only when I know that the system will need to scale up to run on more than one computer or needs to be VERY robust. For systems simpler than that, I may compromise on a more traditional approach 🙂

Q: Instead of back-pressure couldn’t you automatically start up an extra component B?

A: Yes, you can, but you need to signal that need somehow, and that is what “back-pressure” is for. It allows us to build systems that are better able to “sense” their need to scale elastically on demand.

Q: Why is an unbounded queue a bad pattern? How about Apache Kafka?

A: An unbounded queue is ALWAYS unstable. If you overload it, what happens next? To build resilient systems you must decide what you will do when demand exceeds supply (the queue is full).

There are only three options:

  1. Discard excess messages.
  2. Increase resources to cope with demand.
  3. Signal that you can’t cope and slow-down the input at the sender.

Options 2 & 3 require the idea of “Back-Pressure” to get the signal out to something else that can either launch more servers (elastic scaling) or slow the input.

At the limit, given that resources are always finite (even in the cloud) you probably want to consider both 2 & 3 for resilient systems.
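As a sketch of those three options in Java, with an illustrative queue size and a hypothetical back-pressure hook (the names here are mine, not from any particular product):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// A bounded inbox that makes the three options explicit at the point
// where demand exceeds supply (the queue is full).
public class BoundedInbox {
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(10_000);

    // Option 1: discard excess messages; false means "discarded".
    public boolean tryDeliverOrDiscard(String message) {
        return queue.offer(message);
    }

    // Options 2 & 3: signal back-pressure so something can react...
    public void deliver(String message) throws InterruptedException {
        if (!queue.offer(message)) {
            signalBackPressure(); // ...e.g. ask an orchestrator to scale (option 2)
            queue.put(message);   // ...and slow the sender in the meantime (option 3)
        }
    }

    private void signalBackPressure() {
        // Hypothetical hook: emit a metric or event that something watches.
        System.err.println("back-pressure: inbox full");
    }
}
```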

Kafka allows you to configure what to do when the back-pressure increases.

Q: If you need to join two datasets, coming from two different streams, first stream – fast real-time, second – slowly changing, without persisting data on your storage, how would you recommend to do it? Any recommended patterns?

A: In the kind of stateful, single-threaded reactive system that I was describing, this is a fairly simple problem. Imagine a stateful piece of code that represents your domain logic. Let’s imagine a book-store: I could have a service to process orders for books. I have lots of users, and so the stream of orders is fast and, effectively, constant.

I may not choose to design it like this in the real-world, but for the sake of simplicity, let’s imagine that we check the price of the book as part of processing an order.

I am going to process orders and changes to the price of books on the same thread. This means that I can process different kinds of messages via my async input queue. Price-change events arrive interspersed with orders, and as I begin to process one, nothing else is, or can be, going on. Remember, this is all on a single thread, so the “ChangeBookPrice” message is in complete, un-contended control of the state of the Service.

So I have no technical problems, my only problems are related to the problem-domain. These are the sorts of problems that we want to be concentrating on!

So what should we do when we change the price of a book?

We could change the price and reject orders not placed at that price. We could change the price, but allow orders placed before we changed the price to be processed at the old price… and so on.
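A minimal sketch of that single-threaded dispatch, with one illustrative policy (orders are simply charged at whatever price is in force when they are processed):

```java
import java.util.HashMap;
import java.util.Map;

// The book-store service: one thread, one piece of state, two kinds of
// message interleaved on the same input queue.
public class BookStoreService {
    private final Map<String, Long> priceInCentsByBook = new HashMap<>();

    // Both handlers are called only from the single processing thread,
    // so the state above is never contended and needs no locks.
    public void onChangeBookPrice(String bookId, long newPriceInCents) {
        priceInCentsByBook.put(bookId, newPriceInCents);
    }

    public void onOrder(String bookId, int quantity) {
        Long price = priceInCentsByBook.get(bookId);
        if (price == null) {
            reject(bookId, "unknown book");
            return;
        }
        accept(bookId, quantity * price); // charged at the price in force now
    }

    private void accept(String bookId, long totalInCents) { /* emit an OrderAccepted message */ }
    private void reject(String bookId, String reason)     { /* emit an OrderRejected message */ }
}
```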

I think that the simplicity of the safe, single-threaded, stateful, programming model combined with the separation of technical and domain concerns that it entails gives us greater focus on the problem at hand.

Q: Let’s say you have scalable components and a large history of events. How to deal with the history to recreate the state of that new component which just scaled up. Use snapshots to store an intermediate state of a component?

A: Yes, this is one of the complexities of this architectural approach. You get some wonderful properties, but it is complex at the point when messages change.
The first thing to say is that in these kinds of architectures, the message store is the truth!

The first scenario, that you talk about, is what happens if you want to re-implement your service. Well, as long as the message protocol is consistent – go ahead, everything will still work. Since a message is the only way that you can transform the state of your service, as long as you can consistently replay the messages in order, your state, however it is represented internally, will be deterministic.

The problem comes when you want to change the messages. You have then got an asymmetry between what you have recorded and what you would like to play-back. When we built our exchange we coped with this in two different ways.

When we shut the system down we would take a “snapshot” of the state of a service. When the service was re-started it would be restarted by initializing it with the newest snapshot, and then by replaying any outstanding, post-snapshot, messages.
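A sketch of that restart sequence, with illustrative MessageStore and Snapshot types standing in for the real infrastructure:

```java
import java.util.List;
import java.util.function.Function;

// Restore a service by initializing it from the newest snapshot and then
// replaying, in order, only the messages recorded after that snapshot.
public final class ServiceRecovery {

    interface StatefulService {
        void onMessage(String message);
    }

    interface MessageStore {
        Snapshot newestSnapshot();
        List<String> messagesAfter(long sequence); // post-snapshot messages, in order
    }

    record Snapshot(long lastSequence, byte[] state) {}

    static <S extends StatefulService> S restore(MessageStore store,
                                                 Function<byte[], S> fromSnapshot) {
        Snapshot snapshot = store.newestSnapshot();
        S service = fromSnapshot.apply(snapshot.state());
        for (String message : store.messagesAfter(snapshot.lastSequence())) {
            service.onMessage(message); // deterministic: same order, same state
        }
        return service;
    }
}
```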

We then built some tools that allowed us to apply (and test) transformations on the snapshot. This was a bit complicated, but worked for us.

The other solution was to support multiple message versions at runtime and dynamically apply translations into the new form required by the service.

One more, common, pattern that we didn’t use much in our exchange was to support multiple versions of the same message, through different adaptors.

Q: How can random outcomes be reproducible? E.g. implementing a game with dice: rolling a die will have a result, but what if only the command is saved?

A: Fairly simply, you externalize the dice! Have a service outside of the game that generates the random event. Send that as a message. The game is now deterministic in terms of the sequence of messages.
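A sketch of the idea: the randomness happens once, outside the game, and only its outcome travels as a message, so replaying the stream reproduces the game exactly (the message format is illustrative):

```java
import java.util.Random;

// The dice live outside the game: this service rolls once and publishes
// the outcome as an ordinary message on the stream.
public class DiceService {
    private final Random random = new Random();

    public String rollDice(String gameId) {
        int value = random.nextInt(6) + 1; // 1..6
        return "DiceRolled gameId=" + gameId + " value=" + value;
    }
}
```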

Q: What about eventual consistency of data? How do you resolve conflicts?

A: I think that broadly there are two strategies. The first: align your services with Bounded Contexts in the problem domain. You choose these, where you can, so that you don’t care about consistency between different services.

For example, if I am buying books from Amazon, the stuff that is in my shopping cart right now is unrelated to the history of my orders. Even once I have ordered the stuff in my cart, I don’t really care if it takes a second or two for the order history to catch up. So “eventual consistency” between my “Shopping Cart Service” and my “Order History Service” doesn’t matter at all.

The second: where I need two distinct, distributed services to be in step, I can take the performance overhead of achieving consensus. There are well-established distributed consensus protocols that will achieve this; RAFT is probably the best known at the moment. So you can apply RAFT to ensure that your services are in step where they need to be.

If this sounds slower, it is, but it is no slower than any other approach: this is ALWAYS what you must do to achieve consistency. These are the same kinds of techniques that happen below the covers of more conventional, distributed synchronous approaches – e.g. distributed transactions in an RDBMS.

Q: How do you ensure ordering across multiple instances of the same component? So scaling up, without risking two instances reserving the same, but last, book in the inventory?

A: This is back to the idea of eventual consistency. There are two strategies. Either live with the eventual consistency:

Allow separate instances of your service to place an order for a book at the same time, but have a single “Inventory” service actually fulfill the order.

or:

Use some distributed consensus protocol to coordinate the state of each place where books can be ordered.

Q: Isn’t Reactive just Actors in different pyjamas?

A: The stuff I was describing could be considered to be a simple Actor style. It misses some of the things that are usually involved in other Actor-based systems (e.g. Akka).

The fundamental characteristics though are the same. We have stateful bubbles of logic (actors) communicating exclusively via async messages.

Q: In the system you built – did you use a message bus?

A: Yes, we built our own infrastructure layered on top of a high-performance messaging system from 29 West.

Q: Should messages be sent to Kafka or similar?

A: You can certainly implement Reactive Systems on top of Kafka.

Q: Why not just accept message 4 if 3 is missing? Is order important?

A: Dropping messages is not a very sensible thing to do at the level of your infrastructure, though it may make sense within a particular problem domain.

The problem is that if my infrastructure just ignores the loss of message 3, then the state of my service processing the messages is indeterminate. Imagine two services listening to the same stream of messages. One gets message 3, the other doesn’t. If we don’t make the message order dependable, our system is not deterministic.

If your problem domain allows you to ignore messages, perhaps because they arrive too late and are no longer of interest (true in some trading systems, for example), then you should deal with that in the code that addresses the problem domain, the implementation of the service, rather than in the infrastructure.

So the safest approach is to build reliable messaging into the infrastructure and deal with the special cases as part of the business problem.
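A sketch of what “dependable order” means in the infrastructure, assuming each message carries a sequence number (the recovery mechanism is illustrative; it might be a replay request to the publisher, or a read from the message log):

```java
// Detects gaps instead of silently skipping them: message 4 is never
// processed before message 3 has been recovered.
public class OrderedReceiver {
    private long expectedSequence = 1;

    public void onMessage(long sequence, String payload) {
        if (sequence < expectedSequence) {
            return; // duplicate of something already processed
        }
        if (sequence > expectedSequence) {
            requestReplayFrom(expectedSequence); // recover the gap first
            return;
        }
        process(payload);
        expectedSequence++;
    }

    private void requestReplayFrom(long sequence) { /* ask the source to resend */ }
    private void process(String payload)          { /* hand off to the service code */ }
}
```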

Q: Persisting events sounds like a big overhead compared to a traditional synchronous call.

A: Yes, and doing it efficiently is important. However, if you have a system, of any kind, that requires state to be retrievable following a power outage, you have to store it somewhere. The mechanism that I described, where the message stream is persisted as a stream of events, is almost precisely the same as you would implement in the guts of a database system. All modern RDBMSs are based on the idea of processing a “transaction log”. This is the same thing, except that where, and when, we process the log is changed.

When building our exchange we did a lot of research into this aspect of our system’s performance. The trouble with something like a DB is that it is optimized for a general case. If you look carefully at the performance of the storage devices that we use to persist things, none of them, even SSDs, are optimized for predictable performance under random access. They work most efficiently if you can organize your access to them sequentially. We took advantage of that in the implementation of our message-log persistence, so that we could stream to pre-allocated files and so get predictable, consistent latency. Modern disks and SSDs are very good at high-bandwidth streaming, so we could outperform an RDBMS by several orders of magnitude.
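A sketch of the sequential-journalling idea using standard Java NIO. The length-prefixed framing and the sync-per-message policy are illustrative simplifications; a real journal would pre-allocate its files and batch its writes:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Append-only message journal: every write goes to the tail of the file,
// so the storage device only ever streams sequentially.
public class MessageJournal implements AutoCloseable {
    private final FileChannel channel;

    public MessageJournal(Path file) throws IOException {
        this.channel = FileChannel.open(file,
                StandardOpenOption.CREATE,
                StandardOpenOption.WRITE,
                StandardOpenOption.APPEND);
    }

    public void append(byte[] message) throws IOException {
        ByteBuffer buffer = ByteBuffer.allocate(4 + message.length);
        buffer.putInt(message.length); // length-prefix so replay can re-frame
        buffer.put(message);
        buffer.flip();
        while (buffer.hasRemaining()) {
            channel.write(buffer);
        }
        channel.force(false); // persist before acknowledging the message
    }

    @Override
    public void close() throws IOException {
        channel.close();
    }
}
```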

There is tech on the horizon that, I think, may be disruptive and so strengthen the case even more for the kind of Reactive Systems that I described: massive-scale, non-volatile RAM.

Q: Was it LMAX you were working for?

A: Yes, LMAX was the company where we built the exchange.

You can read a bit more about our exchange and its architecture here: https://martinfowler.com/articles/lmax.html

Q: “Service as a state machine” implies that the services should be stateful? Isn’t that added complexity, thinking about changing the flows etc.?

A: You have to have state somewhere, otherwise your code can’t do anything much.

Not all of your code needs to be stateful though. For the parts of your system that form the “system of record”, in this approach, those parts are implemented as “Stateful Services”.

If you want high performance you can do this using the in-memory state as the system of record, using the techniques that I described – that was how our exchange worked. For other, slower, systems your service could be stateful and backed by some more conventional data store if that makes sense.

Q: How would a single thread bookstore service handle an order coming in while it is still processing the previous order? Or alternatively, two simultaneous orders?

A: It would queue the incoming second order and process it once the BookStore had finished processing the first. However, because of the MASSIVE INEFFICIENCY of data sharing in concurrent systems, avoiding the locking is something like three orders of magnitude faster than tackling this as a problem in concurrency.

Q: How do you effectively handle transactions (and rollback in case of failure) in an event-based system? And how do you know that a transaction has not finished?

A: In these kinds of systems the simplest solution is that a message defines the scope of a transaction. If you need broader consistency, use a distributed consensus protocol like RAFT.

Q: How do you deal with the communication with mobile and web frontends (and the UX of it)? Websockets and other solutions always feel more complicated for many use cases.

A: My preferred approach for all UI is to deal with it as a full bi-directional async comms problem. So then you have to use something like Websockets to get full “push” to the remote UI.

Q: Can your code use different cores in the CPU? Or will the next instance of execution use the same core? Do you utilise all the cores?

A: Yes, the system is multi-core, but is “shared-nothing” between cores. We can achieve this through good separation of concerns. For example, one core may be dedicated to taking bytes off the network and putting them in the ring-buffer, another may be focussed on journaling the messages to disk, another on processing the business logic of the service, and so on.

You can read more about the LMAX Disruptor, which coordinated those activities, here:

https://lmax-exchange.github.io/disruptor/

…and see an interview with me and my friend Martin Thompson on the topic here:

https://www.infoq.com/interviews/thompson_farley_disruptor_mechanical_sympathy/

Q: How would you make mission critical software asynchronous?

A: Most really mission-critical software is already asynchronous! Look at Telecoms!

Q: Do we have to use Reactive Frameworks (like RxJava) in a Reactive System?

A: No, see my earlier answer.

Q: Seriously, why not JS on server-side?

A: It was a cheap-shot, but Javascript is an enormously inefficient use of a computer. One argument against it is from the perspective of climate-change.

Data Centres, globally, put more CO2 into the atmosphere than commercial aviation. Something between 7 and 10% of global CO2 production. The kind of systems that I am describing are something like four or more orders of magnitude faster than most conventional Javascript systems.

If I am over-exaggerating, and we could improve the performance by only a single order of magnitude, we could reduce global CO2 emissions by 9%!!!

We tend not to think about software in these terms, but perhaps we should!

I cannot think of any sphere of human activity that tolerates similar levels of inefficiency as software.

Q: How to measure the impact of Eventual Consistency on asynchronous Event-Driven Systems?

A: The term “eventual” is confusing; we are talking about computer speeds here. “Eventual”, under most circumstances, means faster than any human can detect. So in most cases the eventuality of the system doesn’t matter at the human scale. Where the system slows for some reason, you need to be able to cope with the fact that the data is not in step, but that is simply a reflection of the truth of ANY distributed system. So the trade-off is ALWAYS between slower communications with consistency or faster communications with eventual consistency. The overhead for consistency is considerable, but it is ALWAYS considerable, even in sync systems.


Continuous Compliance

I work as an independent consultant advising organizations on how to improve their software development practice. This is a very broad theme. It touches on every aspect of software development from design, coding and testing, to organizational structure, leadership and culture.

My work is structured, perhaps unsurprisingly, around Continuous Delivery (CD). I believe that CD is important for a variety of reasons. It is an approach grounded in the ideas of “Lean Thinking”, it is based on the application of the Scientific Method to software development. It is driven through a rapid, high-quality, iterative, feedback-guided approach to everything that we do, giving us deeper insight into our work, our products and our customers.

All of this is powerful in its impact, but there is another dimension that matters a lot in certain industry sectors.

The majority of my clients work in regulated industries: Finance, Health-care, Gambling, Telecoms, Transport of different kinds, and others.

My own background, as a developer and technical leader, was, in the later part of my career, in Finance – Exchanges and Trading Systems. Also highly regulated.

Nevertheless, when describing the Continuous Delivery approach to people, I am regularly asked, “Yes, that all sounds very good, but it can’t possibly work in a regulated environment, can it?”.

I have come to the opposite conclusion. I believe that CD is an essential component of ANY regulated approach. That is, I believe that it is not possible to implement a genuinely compliant, regulated system in the absence of CD!

Now, that is a strong statement, so what am I talking about?

What are the goals of Regulatory Compliance?

All of the regulatory regimes that I have seen are, in essence, focussed on two things:

1) Trying to encourage a professional, high-quality, safe approach to making change.

2) Providing an audit-trail to allow for some degree of oversight and problem-finding after a failure.

There is a third thing, but it is really secondary compared to these two: we need to be able to demonstrate our safety, quality and professionalism, and our ability to work in a traceable (audited) way, to regulators and auditors.

How does CD Help?

I believe that the highest-quality approach that we know of for creating software of any kind is a disciplined approach to CD. The evidence is on my side: https://amzn.to/2P2aHjv

So if our regulators require a professional, high-quality, safe approach to making change, the evidence says that they should be demanding CD (and structuring their regulations to encourage it!).

One of the core ideas in CD is the concept of the “Deployment Pipeline”, a channel through which all change destined for production flows. A Deployment Pipeline is used to validate, and reject, changes that don’t look good enough. It is a platform, an organizing concept, and a falsification mechanism for production-destined change. It is also the perfect vehicle for compliance.

All production change flows through the Deployment Pipeline. This means that, almost as a side-effect, we have access to all of the information associated with any change. That means that we can create a definitive, authoritative audit-trail.

(See links at end for more info on Deployment Pipelines & CD in general)

Figure 1 shows a diagram of an example Deployment Pipeline. Remember, there is no other route to production for any change.

Figure 1 – Example Continuous Delivery Deployment Pipeline

If we tie together our requirement-management systems with our Version Control System (VCS), through something as simple as a commit message tagged with the story, or bug, ID that this commit is associated with, then we have complete traceability. We can tell the story of any change from end-to-end.

We can answer any of the following questions (and many more):

  • “Who captured the need for this change?”
  • “Who wrote the tests?”
  • “Who committed changes associated with this piece of work?”
  • “Which tests were run?”
  • “This change was rejected; what failure caused it to be rejected?”
  • “Who was involved with any manual testing?”
  • “Who approved the release into production?”
  • “Which version of the OS, Database, programming language, etc was deployed and used?”
  • “Which version of the deployment script/tooling was used?”

All of this information is available as a side-effect of building a Deployment Pipeline. In fact it is quite hard to imagine a Pipeline that doesn’t give you access to this information. I sometimes describe one of the important properties of Deployment Pipelines as “providing a keyed search-space for all of the information associated with any production change”. This is Gold for people working in compliance, regulation and audit.

The Deployment Pipeline, when properly implemented, is in the perfect place to act as a platform for regulatory functions and data-collection. If we can mine this Gold, if we can identify the needs of the people working in these areas, we can implement behaviors, in the Pipeline, to support and enhance their efforts.

Here are a few examples from my own, real-world, experience…

  • Generate an automatic audit-trail for all production change.
  • Implement access-control to the Pipeline so that we can audit who did what.
  • Implement “compliance-gates” to automate rules
    e.g. “We require sign-off for release”:
    Solution: Use access-control credentials (and people’s roles) to automate “sign-offs” (see the sketch after this list)
  • Reject any commit that fails any test.
    (Most regulators *love* this idea when you explain it to them!)
  • In an emergency we may need to break a rule
    Solution: Allow manual override of rules
    e.g. “Reject any commit that fails a test”, but audit the decision and who made it.
    (Regulators love that too. They recognize that bad things sometimes happen, but want to see the decision-making)
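As a sketch of those last two examples, here is what a sign-off gate might look like inside a Pipeline tool. The Role set and the audit mechanism are assumptions for illustration; the point is that approvals are derived from authenticated identities, and every decision, including overrides, lands in the audit-trail:

```java
import java.util.EnumSet;
import java.util.Set;

// A "compliance gate": release requires sign-off from each role, and any
// manual override of a rule is permitted but audited.
public class ReleaseGate {
    enum Role { DEVELOPER, OPERATIONS, TESTER }

    private final Set<Role> approvals = EnumSet.noneOf(Role.class);

    public void approve(String userId, Role role) {
        approvals.add(role);
        audit(userId + " signed off as " + role);
    }

    public boolean mayRelease() {
        return approvals.containsAll(EnumSet.allOf(Role.class));
    }

    public void overrideRule(String userId, String rule, String reason) {
        audit("OVERRIDE of '" + rule + "' by " + userId + ": " + reason);
    }

    private void audit(String entry) { /* append to the immutable audit-trail */ }
}
```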

What Does It Take?

Assuming that you have a working Deployment Pipeline (creating one is outside the scope of this article; see links below), the first practical step to implementing “Continuous Compliance” is the one I have already mentioned: connect your Pipeline, via commit messages, to your requirements system!

Use the IDs from Jira or Trello (or whatever) and tag every commit with a Bug or Story ID.

That should give you a key-based system that joins all of the information that you collect together and makes it searchable (and so amenable to automation, reporting, and tool-building).
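The check itself can be tiny. Here is a sketch that extracts a Jira-style key from a commit message; the “ABC-123” format is an assumption, so adapt the pattern to whatever your tracker uses:

```java
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Pipeline check: every commit message must carry a story or bug ID.
// The extracted ID is the key that joins the commit to the requirements
// system; a commit with no ID should be rejected by the Pipeline.
public class CommitMessagePolicy {
    private static final Pattern STORY_ID =
            Pattern.compile("\\b([A-Z][A-Z0-9]+-\\d+)\\b");

    public static Optional<String> extractStoryId(String commitMessage) {
        Matcher matcher = STORY_ID.matcher(commitMessage);
        return matcher.find() ? Optional.of(matcher.group(1)) : Optional.empty();
    }
}
```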

The next step is to add access-control to Pipeline tools so that you can track human decision-making.

Continuous Delivery is defined as “working so that your software is always in a releasable state”. This does not eliminate the need for human decision-making. Where applicable and appropriate, capture the outcome of human decisions via the Pipeline tools.

The “Lean” part of CD means that we are trying to reduce the work associated with the process to a minimum. We want to eliminate “waste” wherever we find it, and so we need to be smart about the things that we do and maximize their value.

For example, regulation often says that we need to document changes to our production systems. I agree! However, I don’t want to waste my, and my teams’, time writing documents that will only ever be read by regulators. Instead I would like to find the things that I must do anyway to create useful software, and do them in a way that allows me to use them for other purposes, like informing regulators. One way to think about this is that we are trying to achieve regulatory compliance as a side-effect of how we work.

In order to design and develop software I must have some idea of what I am trying to achieve (a requirement of some kind), I must work to fulfil that need (write code of some kind) and I must check that the need is fulfilled (a test of some kind).

What if I could do only these things, but do them in a way that allowed me to use the information that I generate for more than only these things?

If my requirements are defined in a way that documents changes to the behavior of the system and why they are useful (sounds a bit like “User Stories”, doesn’t it?), and if I adopt some simple conventions in the way that I capture and organize this information, to aid automation, then I have descriptions of changes that will contribute to, and make sense as, release notes. So I will be able to automate some of the documentation associated with a release in a regulated environment.
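For example, once every commit is keyed to a story (as above), release notes can be assembled mechanically. This is a sketch; the RequirementsSystem lookup stands in for whatever Jira/Trello API you actually use:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Auto-generated release notes: one line per story that contributed a
// commit to this release, titled from the requirements system.
public class ReleaseNotes {

    interface RequirementsSystem {
        String titleOf(String storyId); // hypothetical tracker lookup
    }

    public static String forRelease(List<String> storyIdsInRelease,
                                    RequirementsSystem requirements) {
        Map<String, String> notes = new LinkedHashMap<>();
        for (String id : storyIdsInRelease) {
            notes.putIfAbsent(id, requirements.titleOf(id)); // de-duplicate
        }
        StringBuilder out = new StringBuilder("Changes in this release:\n");
        notes.forEach((id, title) ->
                out.append("  ").append(id).append(": ").append(title).append('\n'));
        return out.toString();
    }
}
```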

If I drive these requirements from examples of desirable behaviors of my system, they define these desirable behaviors in a way that allows me to automate the examples and use them as specifications for the behavior of the system – Executable Specifications. These automated specifications (aka “Acceptance Tests”) can be used to structure the development activities. At the start of each new piece of work we begin by creating our “Executable Specifications”, then we practice TDD, in fine-grained form, to incrementally evolve a solution to meet these specifications.
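To make “Executable Specification” concrete, here is a minimal sketch in JUnit 5. The tiny Bookshop class stands in for real domain code; the value is that the test names a desirable behavior in domain language, and CI re-checks it forever after:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class BookshopSpecification {

    // Stand-in domain code; in real life this lives in the production source tree.
    static class Bookshop {
        private long priceInCents;
        Bookshop(long priceInCents) { this.priceInCents = priceInCents; }
        void changePrice(long newPriceInCents) { priceInCents = newPriceInCents; }
        long quote(int copies) { return copies * priceInCents; }
    }

    @Test
    void quotesOrdersAtTheCurrentPrice() {
        Bookshop shop = new Bookshop(4_500);
        shop.changePrice(5_000);
        assertEquals(10_000, shop.quote(2));
    }
}
```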

These activities, combined, give us an extremely high-quality approach to developing solutions. If we record them they also provide us the “whys”, “whens” and “whos” that allow us to tell the story of the work done.

We can “grow” the system via many small, low-risk, audited, commits. Each change is traceable, audited and of very high quality. Each change is small and simple, verified by Continuous Integration, and so safer.

We can make a separate decision about when to release each tiny change into production, and we will have an automated audit-trail of all of the actions and decisions that contributed to that release.

This approach is demonstrably, measurably, higher-quality and safer than any other that we know of.

All changes, whatever their nature, are treated in the same way. There are no special cases for bug-fixes or emergency fixes. No special “back-doors” to production. All production change flows through the same process and mechanism and so is traceable and verified to the level of testing that we decide to apply in our Pipeline.

How else could we minimize work?

Regulated industries often require various gate-keeping steps, sign-offs for example. Unfortunately the evidence is against these as a successful approach to improving quality and safety. In fact, the more complex approaches to gatekeeping, like “Change Approval Boards” are negatively correlated with software quality! The more complex the compliance mechanisms around change, the lower the quality of the software. (See Page 49 of the 2019 “State of DevOps Report”).

Nevertheless, most regulatory frameworks were designed before this kind of analysis was available. Most regulatory frameworks were built on an assumption of a Waterfall style, gated process. So if we want to achieve “Continuous Compliance” in a real-world environment, we must cope with regulation that is not quite the right shape for this very different paradigm. That is OK, because this new paradigm is much more flexible than the old.

Over time I hope, and expect, regulation to adapt, to catch-up to these more effective ways of working. It is, after all, a better way to fulfil the underlying intent of any regulatory mechanism for software.

I believe that there have been some small moves, at the level of interpretation of regulation, in some industries. Over time I expect that the regulations themselves will change to assume, or encourage, CD, rather than only allow interpretations that permit it.

I have had success with regulators, and people working in compliance organizations, in several different industries by engaging with them and demonstrating that what I am trying to achieve is in their interest. By bringing them on board with the change, and helping to solve the real problems that people in these roles regularly face, you can not only get approval to interpret the regulations in ways more amenable to CD, but also gain allies who will work to help you.

Here are a few techniques that I have used and advised my clients to adopt:

Example: When you are being audited, assign developers to help the auditors. Their job is to help, to give the auditors all of the information that they need, but also to observe what is going on and to treat the audit as a chance to learn what the auditors really need. This is a requirements-gathering opportunity! Then take what you have learned and implement new checks in your Pipeline to stop errors sneaking through. Improve the audit-trail so that a future auditor can more easily see what happened. Create new reports on your Pipeline search-space to tell the story in a way that meets the needs of the auditor.

Example: If your regulatory framework requires a code review, how do you do that best and keep up the pace of work that makes CI (and CD) work best? In my experience, Pair-Programming, coupled with regular pair rotation, gives all of the benefits of code review, and more, and is acceptable to regulators as a demonstration that the code has been reviewed and that there is some independent oversight/verification of change.

Example: Your regulatory framework requires sign-off from a developer, operations person and tester before release. Use the access-control tools in your Pipeline to enforce this policy, and audit it.

Example: Regulation requires a separation of roles. Devs can’t access production, Ops can’t access Dev environments. Fine, I prefer to take it a step further. “No-one can access production!”. All production access is through automation, e.g. Infrastructure as Code, automated deployment, automated monitoring etc.

These are a few techniques that I have seen applied, and applied myself, in regulated environments. My experience, across the board, has been that regulators prefer these approaches, once they come to understand them, because they provide a better quality experience all around.

What Is Not Covered by CD?

Some regulatory regimes require significant documentation describing the architecture and design of the system as well as describing any “significant change” to its design.

I believe that these are another hang-over from Waterfall thinking. I think that the intent is that by asking for such documentation regulators are attempting to encourage people to think more carefully about change and to approach it with more caution.

I believe that a sophisticated approach to test-automation is a better approach. Nevertheless, current regulation usually requires documentation of the system architecture and significant changes to it.

I tend to approach this part of the problem in more conventional ways. Write the architecture documents as you always have, except try to ensure that the detail is not too precise. What you need to achieve is an accurate, but slightly vague, description of your system. For example, describe the principal services, and perhaps how they communicate; the main information flows and stores. But don’t go into the detail of implementation, code or schemas. Leave room for the system to evolve over time while still meeting the architectural description.

Try to agree with your regulators what “significant change” entails: what are they nervous of? They probably won’t tell you, or at least they won’t be very definite; it is not their job. However, what you are looking for is how to ensure that the massive flood of changes that you want to apply (in a CD context) doesn’t count as significant.

Even these tiny, frequent changes will be audited, documented (by tests), reviewed (by pair-programming) and have things like (autogenerated) release-notes associated with them, but they won’t count as “significant” in the sense of requiring new documentation (beyond the automated tests and requirements).

Again, I hope, and expect, that regulation will change over time to allow for these more effective forms of documentation to be used instead of prose that does a poor job of describing some kind of design intent.

I am not against documentation that is useful in helping people to understand systems. I like to create and maintain a high-level description of the system architecture that aids people in navigating their way around the system. I am just not sure how this helps the goals of regulation, and I don’t want to be forced to document, in prose, every change to my production system. That is the role of automated tests, which do a better job, because they are a more accurate description of the behavior of the code (they must be, because they pass), and which are necessary for other reasons beyond regulatory compliance, so I am going to create them anyway.

Conclusion

I have worked in regulated industries before and after I learned how to practice Continuous Delivery. All of my non-CD experience, including what I have observed in client organizations over several decades in consultancy roles, leads me to the belief that in the absence of CD, regulatory compliance is honoured more in the breach than the observance. That is to say, most regulated organizations usually have a long list of “compliance breaches” that they, one day, hope to catch up on.

The usual organizational responses that I have observed are either to try to slow the pace of change to gain control (this is counter-productive, because slow, heavy-weight processes are negatively correlated with quality), or to skate close to the edge, keep working, and do the bare minimum to keep regulators happy. Neither of these is a desirable, or a high-quality, outcome!

I have seen the CD approach remove compliance as an obstacle!

I have seen organizations move from taking weeks, sometimes months, to ensure that releases into production were “compliant” with regulation (and never making it), to being able to generate genuinely compliant release candidates multiple times per day, along with all of the documentation and approvals.

In fact, when working at LMAX on creating one of the highest-performance financial exchanges in the world, it was more difficult for us to release a change that wasn’t compliant than one that was. Our Deployment Pipeline enforced compliance, and so the only way we could avoid that was to break, or subvert, the Pipeline.

So when I say “I believe that CD is an essential component of ANY regulated approach. That is, I believe that it is not possible to implement a genuinely compliant, regulated system in the absence of CD!” I really do mean it.

More Info

Continuous Delivery (Book):
https://www.amazon.co.uk/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912/

Rationale for CD (Talk):
https://www.youtube.com/watch?v=U_DuDgkoSRA

Optimizing Deployment Pipelines (Talk):
https://www.youtube.com/watch?v=gDgAVqkFYWs

Accelerate – The Science of Lean Software and DevOps (Book): https://amzn.to/2P2aHjv

Adopting CD at Siemens Healthcare (Article): https://www.infoq.com/articles/continuous-delivery-teamplay/


Autonomy != Anarchy

I am a long-standing believer in the principles of Agile development. I have been working this way for several decades, before it was referred to as “Agile”. I am friends with several signatories to the original “Agile Manifesto” and with them I share a degree of disappointment about how those important ideas are often misinterpreted and ignored.

I have been fortunate to have been close to the birth of several ideas that have been widely adopted. My observation is that it is always the case that so much is lost in translation as the idea gains wider “acceptance”.

One of the ideas that I think is widely misunderstood is the idea of “Autonomy”.

These days I work as an independent consultant. I work with lots of teams in lots of different organizations and I perceive a common anti-pattern in those organizations that claim to have embraced the principles of an Agile approach.

This is best expressed as “We can’t tell our people to do that because they are autonomous”.

This is tricky, because it is kind of true that you want autonomous teams and also kind of a disaster if every individual on every team has complete freedom of choice.

I think it is important to treat everyone with respect and to recognize the fact, and encourage the culture, that good ideas can come from anywhere. It is also an important characteristic of successful teams that they are organized in a way that means that they don’t need to ask for “permission” from people outside the team to change the design, the architecture, the tools or the way that they work.

However, there are some ideas that are wrong. There are more wrong ideas than right ideas, and reinventing the whole of computer science and software development practice from scratch, for every individual on every team, is patently a ridiculous idea.

How can we, as an industry, make progress if everyone has a veto on every idea?

For me the answer is one of scope. What is the correct scope for Autonomy? Is it the individual, the team, the organization, or should everyone just do what I tell them to?

Much as the last answer would stroke my ego, I think that the real answer is that the correct boundary for Autonomy is the team.

There is some flex in what that means, but the team is the scope. If the team agrees that they are going to practice pair programming, and one person doesn’t want to, they can’t veto the decision or decide not to take part. It is a team decision.

If the team decides to change the tools that they use for their build, one person can’t decide to decline and continue with the old. Even if they really dislike the new direction.

Autonomy should be an act of collective responsibility.


Science and Software Development

I have been talking about Continuous Delivery being, informally, an application of the scientific method to software development for several years now.

I have spoken about CD being a candidate for the beginnings of a genuine engineering discipline for software development.

My interest in this is related to my interest, as an amateur, in science in general and physics in particular. I am an avid reader of popular science, but I am not very academically qualified in these subjects.

Nevertheless I think that there is something important, significant, here.

My interests have led me to read more deeply into some of these topics, and I am learning more.

The Beginning of Infinity

Two things have come together recently and made me want to write this piece, which has been brewing in the back of my mind for some time.

The first is that I was given a gift, a book that is probably the most mind-expanding book that I have ever read.

“The Beginning of Infinity” by David Deutsch is a profoundly deep work on the philosophy of science (and rationality). People are starting to talk of this book, this thinking, as the successor to the work of Karl Popper, whose ideas, in the 1930s, revolutionised the way that science has been viewed and practiced ever since. Popper was the person who described, amongst other things, the importance of being able to falsify theories.

The classic example from Popper is – we can never prove that all swans are white, but as soon as we see a single black swan we can disprove, falsify, the white swan assertion. These days a scientific theory is not really valid unless it is capable of being falsified.

There are too many ideas in Deutsch’s “The Beginning of Infinity” for me to summarise them all here; go and read the book – you can thank me for the recommendation later 😉 One of the key points, though, is that science proceeds by trying to establish what Professor Deutsch calls “Good Explanations”. A “good explanation” is an explanation that is hard to vary without changing its meaning, and one that is falsifiable.

“There is only one way of thinking that is capable of making progress, or of surviving in the long run, and that is the way of seeking good explanations through creativity and criticism.”

“Its (science’s) quest for good explanations corrects the errors, allows for the biases and misleading perspectives, and fills in the gaps.”

“So we seek explanations that remain robust when we test them against those flickers and shadows, and against each other, and against criteria of logic and reasonableness and everything else we can think of. And when we can change them no more, we have understood some objective truth. And, as if that were not enough, what we understand we then control. It is like magic, only real. We are like gods!”

David Deutsch,
    “The Beginning of Infinity: Explanations That Transform the World”

Software Development, Science & Engineering

I think that this philosophy of science stuff has profound impacts on how we should approach software development and even how we view what software development is.

The second thing that made me start writing about this was a passing comment that I made on Twitter. I repeated a viewpoint that I have long held, that automated testing in software is best thought of, and used, as a falsification mechanism. Amongst several others, Bill Caputo replied and included some links to his thoughts on this, which very closely aligned with mine and described some of these ideas better than I had.

Then in the twitter conversation that followed Bill posted this http://logosity.net/model.html

This is very close to the way in which I have started to think about software development in general and more specifically, the more scientifically rational approach to the engineering of software that I try to apply and promote.

For me these two ideas collide.

Software Development is an Act of Creativity

David Deutsch’s “Good Explanations” are deeper and more difficult than they sound. In striving for a “Good Explanation” we are required to gather information that allows us to “create knowledge”.

I describe software development as an inherently creative process. We don’t often consider it as such; much of software development is, incorrectly, treated as an exercise in production rather than creativity, and suffers as a consequence. This misconception has dogged our industry and how we undertake the intensely creative task that is software development.

We are trying to create knowledge, in the form of a computer program, that captures our best understanding of the problem that we are trying to address. This is entirely a process of exploration and discovery. The encoding of the knowledge, in the form of something executable, is merely a transcription exercise. So the thinking, the design, the discovery of “good explanations” that fit our understanding is at the heart of all good software development.

Of course “merely a transcription exercise” underplays the complexity of that part of the process, but my point is that the technicalities of coding, the languages, the tools, the syntax of the instructions themselves have the same relationship to software development that maths does to physics. These things are tools that allow us to grow and extend our understanding. They are not the thing itself. Maths, and coding, are great fun. I completely understand, and recognise in myself, their appeal, but for me at least, that fun is enormously amplified when I can apply them to something practical. Ideally something that helps me deepen my understanding. Something that helps me to get to “better explanations”.

This is kind of obvious if we think in terms of computer science, but kind of missed in much of the discussion and practice that I observe in the software development community.

Software Development is Always a Process of Discovery

If we think back to our computer science studies we know that we only need a Turing machine, any Turing machine, to solve any classically computable problem. So the choice of tools, language, architecture, design are all only choices. These tools are not unimportant, but neither are they fundamental to solving any given problem.

I can write code to solve any computable problem in any language or paradigm. The only difference is how efficient I am in transcribing my ideas. Functional Programming, OO Programming, Ruby on Rails, C++, Java, Assembler can all only render the same ideas.

Of course it is a bit more complex than that. Certain programming approaches may help me to think, more easily, of some kinds of solution, others may hinder me. However, I believe that there is something deeper here that matters profoundly to the creation of good software.

It is the act of discovery and of learning, understanding the problem in more depth, that characterises our work and is the real value of what we do.

I believe that we should optimise our development approach, tools and processes to maximise our ability to foster that learning and process of discovery. We do this by creating a series of better and better explanations of the problem that we are attempting to solve, and the techniques (code) that we are employing to solve it.

Creating “Good Explanations”

Our “good explanations” take specific forms. They are the documentation and tests that describe a coherent picture of what our systems should do. They are the code that captures our best current theory of how our systems should do those things. They are the ideas in our heads, the descriptions and stories that we tell each other, that allow us to understand, diagnose problems, and extend and maintain our systems. These are our good explanations, and one of the profound advantages that we have over most disciplines is that we can make many of these “explanations” self-validating for consistency by automating them.

I have been a long-term adherent of Test Driven Development (TDD). I don’t take this stuff lightly and over the years of practicing it have refined my take on it. It is an old statement, not original to me, that TDD is not really about testing. I was peripherally involved in the birth of a thing called Behaviour Driven Development (BDD). The idea was to try and re-focus people’s thinking on what is really important in TDD. BDD was born as a means of teaching TDD in a way that led to the higher-value ideas of Behavioural focus and the use of “Executable Specifications” to drive the development of our software. It is a very effective approach and I teach it, and commend it, to the teams and organisations that I work with.

I now think that there is something more profound going on here though, and for me David Deutsch’s “Good Explanations” hold the key. When we develop some software, any software for any purpose, we are, nearly always, embarking on a process of discovery.

We need to discover a lot of stuff. We need to learn more about the problem that our software is intended to address. We need to learn about what works for the consumers of our software, and what doesn’t. We need to discover what designs work well and give us the behaviours that we desire. We need to discover if our solutions are fast-enough, robust-enough, scalable-enough and secure-enough. We start out knowing little about all this, and begin learning from there. At any given moment, in the life of a software system, all of this stuff only adds up to “our best current theory”. We can never be certain of any of it.

Optimising for Learning

For the vast majority of human history we were really quite bad at learning. Then a few hundred years ago, we discovered how to do it. We call the trick that we learned then “Science”.

Science is humanity’s best, most effective approach to learning – Deutsch would say “gaining new knowledge”. Fundamental to this approach, according to Deutsch, is the formation of these “good explanations” and their defining characteristic that “they are hard to vary” without invalidating them.

In trying, at multiple levels, to capture a “good explanation” of what is going on, we are trying to describe the logic and algorithms that capture the behaviours that we are interested in. We are trying to describe the data structures of the information that we deal with and process. We are trying, in some manner, to describe the need that our software is intended to address for our users, or the market niche that our cool new idea is hoped to exploit.

All of these “descriptions” are “explanations” of our understanding. To transform these “explanations” into “good explanations” they need to be more rigorous. They need to include everything that we know and, as far as we are able, we must check that each “explanation” fits all of the facts.

“Good Explanation” – Example

A good example of this, taken from Professor Deutsch’s book, is the idea of seasons. Some people believe that winter is caused by the Earth having an elliptical orbit, and so being further from the Sun for part of the year. This is a hard-to-vary explanation: changing it to say “the seasons are caused by the Earth having a circular orbit” doesn’t work, because that completely changes its meaning.

So this seems like a reasonable idea, and, even better, it is easily falsifiable. If it were true, if seasons were caused by the distance of the Earth from the Sun, then it should be winter at the same time of the year all over the planet, because the planet is in the same place in its orbit whether I am in London or Sydney. This isn’t the case, so the theory fails. It is a bad explanation because it doesn’t fit ALL of the facts.

Let’s try again. Observations show that, for any given location on the Earth, the Sun will rise and set at different points on the horizon at different times of the year. The ancients, before global travel, knew this. A good explanation for this is that the axis of the Earth’s rotation is tilted with respect to the plane of its orbit around the Sun, and keeps (very nearly) the same orientation as the Earth orbits. That means that when our part of the planet is tilted toward the Sun we get more energy from it, because it is more directly overhead (we call this Summer), and when tilted away we get less energy (we call this Winter).

So if I were an ancient Greek, and knew about axial tilt as an explanation of the seasons, I could make a prediction: when it is Summer here, it will be Winter on the opposite side of the planet. This explanatory power is profound. It allowed the ancient Greeks to predict the seasons in places that their descendants wouldn’t travel to for thousands of years!

Engineering – Applied Science

So what has all this philosophy of science stuff got to do with software? Well this science stuff is humanity’s best problem solving technique. It is the difference between essentially static, agrarian civilisations that lasted for tens of thousands of years with virtually no change and our modern, high-tech civilisation that doubles its knowledge every 13 months. The application of science to solving practical problems is how we solve the most difficult problems in the world. It is also what we call “Engineering”.

I believe that we should apply this kind of thinking, engineering thinking, to software development. What that takes is a significantly more disciplined approach to software development.

The rewards, though, are significant. It means that we can create high-quality software more efficiently, and more quickly, than we have before. It means that our software will better meet the needs of our users, and it means that the organisations in which we work can be more successful, while we are less stressed by trying to solve insoluble problems like “when will I be ready to release the new feature and get the product owner off my back?”.

So, step 1 is to approach software development as an exercise in learning and discovery.

If our best way to learn is Science, and software development is all about learning, then we should apply the lessons of Science to our approach to software development.

An Engineering Discipline for Software

Following Deutsch’s model we should be trying to create “good explanations” that are “hard to vary” and then we should evaluate our explanations with each other, and with reality to confirm that they are consistent. What does this mean in practice?

We could try to write down some explanations of what we would like our software to achieve. We are not going to understand the totality of what we want our software to achieve at the outset; that is something that we will learn as we progress and understand the problem, and hopefully the demand, in more depth. So we are looking for a way in which we can capture our current intent and expectations in a form that we can later extend. How wonderful would it be if we could somehow capture these explanations of our current understanding in a form that would allow us to confirm both that they are consistent with one another and that they are met, as we proceed to elaborate and extend our theories.

To me this is pretty much the definition of TDD. It allows us to record an incrementally evolving collection of hard-to-vary explanations that capture our current understanding. If we are smart, we capture them in a way that allows us, with the help of Continuous Integration, to immediately see if our theories, our “good explanations”, in the form of our code, meet our expectations – do the tests pass?
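As a minimal sketch of what I mean (all of the names here are hypothetical): the test below records our current “explanation” of a discount rule, and a Continuous Integration run can re-check that explanation against the code on every commit.

    # A hypothetical "executable explanation" of a discount rule.
    import unittest

    def discounted_price(price, is_loyal_customer):
        # Our current best theory of the pricing behaviour.
        return price * 0.9 if is_loyal_customer else price

    class DiscountExplanation(unittest.TestCase):
        def test_loyal_customers_get_ten_percent_off(self):
            self.assertAlmostEqual(discounted_price(100.0, True), 90.0)

        def test_other_customers_pay_full_price(self):
            self.assertAlmostEqual(discounted_price(100.0, False), 100.0)

    if __name__ == "__main__":
        unittest.main()  # CI re-runs this on every commit.

If a later change contradicts this recorded understanding, a test fails: the explanation is “hard to vary” without the whole body of tests telling us so.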

This approach allows us to construct and re-use an automated system of checking that our “good explanations” are consistent with one another, that the body of our knowledge (of the system) as a whole is self-consistent. This, in turn, means that, as our understanding deepens, we can make small changes to our ideas and quickly and efficiently confirm that everything still makes sense. This approach allows us to stay informed about the state of our system-wide understanding, even as the scope of our system extends beyond our ability to intuitively understand it in its entirety. It means that we can extend and deepen our knowledge in a particular focused area (a new feature of the system).

I believe that the TDD approach, refined and elaborated upon by Continuous Delivery, represents a genuine “Engineering Discipline” for software development. I don’t mean this in a loose sense. I don’t mean that this is “analogous to Engineering”. I mean that it allows us to use a more scientifically rational approach to validating our ideas, measuring their effect and maintaining an ever increasing, consistent, collection of “good explanations” of our system and its behaviour.

Posted in Culture, Effective Practices, Engineering Discipline, Software Engineering | Tagged | 1 Comment

Hygiene Factors for Software Development

I got into a small debate about software development with someone recently via the comments section to a previous blog-post.

During the course of the debate I thought of an analogy to make part of my argument, but I think that it has broader applicability, which triggered this post.

I have been talking to a lot of people lately about “Software Engineering” and debating with people that I know, and some that I don’t, about what it takes to establish a profession, and an engineering discipline.

I perceive a reasonably broad consensus, amongst people that we may consider thought-leaders in our industry, some of whom I am happy to call friends, about what “good” software development looks like. I also perceive a level of dismay in that group about much common practice.

So what are these disciplines and where is the consensus?

I perceive a broad agreement that waterfall-style thinking, although still very common in practice, is a busted idea. The data is in: it just doesn’t produce great software!

Software development is a learning process, from beginning to end. So we must work to establish effective, high-quality, fast feedback loops in order to maximise our opportunities to learn. That means working iteratively, as well as lots of other things.

We are not good at predicting the future and so we must be experimental, we must be sceptical of our ideas and find ways to evaluate them quickly and effectively. We need to be more data-driven, measuring rather than guessing.

Automated testing provides a substrate that helps us to achieve many of these goals. Taking a test-driven approach to development enhances the degree to which we can carry out these fast, cheap experiments in the design, and implementation, of our code.

If I am to be intellectually honest in my convictions, then all that I have just said about the development of code is also true about the creation and evolution of our approach to development. We should be data-driven, empirical, experimental in our approach to improving development process.

On the “data-driven” front we are making some progress. The excellent work done by my friends at DORA has raised the bar on measurement of process and practice in our industry. Their new book Accelerate explains the science behind their measurements. The results of these measurements are that, for the first time, we have data that says things like “Your company makes more money if you do x”, where ‘x’ is doing some of the things above.

The DORA folk have a model that predicts success (or failure) of your development approach. All of this is based on a peer-reviewed approach to data collection and analysis.

We can interpret these perceptions in several ways. Perhaps I am wrong and merely echoing the contents of my own filter-bubble (probably to some extent!). Most of the “thought leaders” that I am thinking of are old-hands, a polite euphemism meaning that my social group is getting-on a bit. Maybe these are the rants of old men and women (though most are men, which is another problem for our industry sadly).

A more positive interpretation, and one that I am going to assume for the rest of this post, is that this represents something more. Perhaps we are beginning to perceive the need to grow-up, a little, as an industry?

My own, primary, interest in this is around the engineering disciplines that I think that we should try to establish as a norm for software developers who consider themselves professionals. I would like us to have a more precise definition of what “Software Engineering” means. It would need to rule some things out, as well as define some things that we should always do.

Others are interested more in the “Profession” side of things. I have recently seen a rise in people discussing ideas like “ethics” in software development. Bob Martin has a couple of interesting talks on this, and closely related, topics. He makes good points about the explosive growth of our industry and the consequent dilution of expertise. He estimates that the average level of experience, amongst software developers, is just 5 years. As a result we, as an industry, are very bad at learning from the mistakes of the past.

I have been careful in my choice of words here. Currently we are not a “Profession” we are a “Trade”. The difference between these two is that a “profession” demands qualifications as a barrier to entry, and has rules to reject people that don’t conform to its agreed, established norms. By these defining characteristics we don’t qualify as a profession.

You can’t practice law or medicine without the appropriate qualifications. In our industry, if you can pass the interview, you can take part. If I can convince an interviewer that I am competent, over a small number of hours during the course of an interview, I could go and write software that controls an aeroplane, a medical scanner or a nuclear power plant. An individual company may have rules that demand a specific degree, or other qualification, but our “trade” does not.

If you are a surgeon and you decide that washing your hands between operations is a waste of your valuable time, once people notice the increased death-rate at your hands you will be “struck-off” and not allowed to practice surgery ever again, anywhere.

There can be no profession without professional discipline.

In 1847 Ignaz Semmelweis made an important discovery:

“The introduction of anaesthetics encouraged more surgery, which inadvertently caused more, dangerous, post-operative infections in patients. The concept of infection was unknown until relatively modern times. The first progress in combating infection was made in 1847 by the Hungarian doctor Ignaz Semmelweis who noticed that medical students fresh from the dissecting room were causing excess maternal death compared to midwives. Semmelweis, despite ridicule and opposition, introduced compulsory hand-washing for everyone entering the maternal wards and was rewarded with a plunge in maternal and foetal deaths, however the Royal Society dismissed his advice.” (Wikipedia https://en.wikipedia.org/wiki/History_of_surgery)

This resonates with me. I advocate for some specific practices around software development. These practices work together, in sometimes subtle ways. I believe that the combination of these practices provide a framework, a structure, a disciplined approach to software development that has the hallmarks of a genuine “engineering discipline”.

I believe that, like “washing your hands” as a surgeon, some of these disciplines are so important that they should become norms for our industry. I don’t doubt that you can write software without fast feedback, without automated tests, without an experimental approach, without collaborative teams, with big up-front designs and with a 12-month plan. A positive outcome, though, is much less certain. Just because some surgeons had patients that survived, despite their lack of hygiene, doesn’t mean that hygiene isn’t a better approach.

These days, nobody can consider themselves a surgeon if they ignore the disciplines of their profession. I believe that one day, one way or another, we will, of necessity, adopt a similar approach.

If we are to establish ourselves as a profession, rather than as a trade, we will need to do something like this. Software is important in the world. It is the revolutionary force behind our civilisation at the moment. I foresee three futures for our industry.

1. We do nothing. At some point, something REALLY bad happens. Some software kills LOTS of people, or maybe destabilises our political, economic or social institutions. Regulators will regulate and effectively close us down, because they will get it wrong. (It has taken us decades to understand what works and what doesn’t, and we are supposed to be the experts!)

2. We start trying to define what it means to be a “Software Professional” in the true sense of the words. Something bad happens, but the regulators work with us to beef-up our profession, because they can see that we have been trying to apply some “duty of care”.

3. The AI Singularity happens and our Silicon overlords take the task of writing software out of our hands.
Ignoring 3 for now…

Scenarios 1 and 2 are both problematic.

I fear that we will continue with 1. The short-term economic imperative will continue to drive us, for a while, until the population at large realise just how important software has become. At which point there will be repercussions as they react to the lack of a sufficient duty-of-care in many instances. The VW emissions scandal is an early warning of this kind of societal reaction, I think.

Scenario 2 is problematic for different reasons. I think that it is the more sensible strategy, but it demands that we change our industry and allow it to progress from trade to profession. Daunting! At which point, if we succeeded, I would be expelled for not having any relevant qualifications. This is a big challenge, and not just for me personally ;-). Our industry is still growing explosively, educational establishments are not really delivering people with the skills ready to be “professional” in the sense that I mean. Many universities (maybe even most) still teach waterfall development practices for goodness sake!

My own experience of hiring and training young people into our industry suggests that there is relatively little advantage in hiring Computer Science graduates over most other graduates. We pretty much had to start from scratch with their brain-washing, errrr “on-the-job training”, in both cases. It is easy, even common, to graduate from a CS course and not be able to program, let alone program well. Physics, and other hard-science, graduates have a better understanding of experimental discipline and scientific rigour. The main problem with physicists (and most CS graduates) is getting them to realise that “yes, programming is actually quite difficult to do well” and the techniques that work for a few lines of private code don’t scale well.

There is still much debate to be had. Despite the fairly broad consensus that I perceive on what it means to apply “engineering thinking” to software, I still regularly get people arguing against the practices that I recommend. If I am honest, most of these arguments are ones that I have heard many times. Often they are based on dogma rather than measurement or evidence. If we are to be more scientific, and apply more engineering discipline to our work, we cannot base our decisions on mere anecdote. That is not how science and engineering work!

I am not arrogant enough to assume that I have all of the answers. However, I confess that I am hubristic enough to believe that the people expressing “ridicule and opposition” on the basis of dogma or anecdote alone don’t have a strong case. Mentally I dismiss those arguments as analogous to the surgeons who wouldn’t “wash their hands”.

If you want to change my mind, change it with data, change it with evidence.

I think that we are in the same state as surgeons in the 1850s. Today there is no reputable surgeon in the world who does not wash their hands before surgery. This discipline wasn’t always obvious, though. I believe that we have identified a number of practices that are, for software development, the equivalent of “washing your hands” for surgeons. I spend a lot of my time describing these, despite occasional “ridicule and opposition” 😉

In both cases, existing practitioners, who don’t “wash their hands”, claim that this is unnecessary and a waste of time. I think that the data, and, I hope one day, history, is on my side.

Posted in Agile Development, Culture, Effective Practices, Engineering Discipline, Software Engineering | 1 Comment

Perceived Barriers to Trunk Based Development

A friend of mine has recently started work at a new company. She asked me if I’d answer a few questions from their dev team, so here is the second…

Q: “Currently at MarketInvoice we use short-lived feature branches that are merged to master post-code review. How would you recommend we shift towards trunk based development, and are there any other practices you would recommend to reduce/eliminate the bottleneck of code review?”

I perceive three barriers to the adoption of trunk-based-development in the teams that I work with…

  • The need for Code Review.
  • A cultural assumption that you only commit (to master/trunk) when work is complete.
  • A lack of confidence in automated tests.

Code Reviews

I think that code-review is a very useful practice. We get good feedback on the quality of our work, we may get some new ideas that we hadn’t thought of, we are forced to justify our thinking to someone else, and, if we work in a regulated industry, we get to say that our code was checked by someone else.

All of these are good things, but we can get them all, and more, if we adopt pair-programming.

Code review is great, but it happens when we think that we have finished – which is a bit too late to find out that we could have done better. From a feedback perspective, it is much more effective to find out that an idea, or approach, could be improved before, or immediately after, we write the code. Pair programming means that we get that feedback close to the point when it is most valuable.

Pair programming is a code-review, and so satisfies the regulatory need for our changes to be checked by someone else – at least it has in every regulatory regime that I have seen. Pair programming is also much more than just a continual review. One way to look at it is that we get the code-review as a side-benefit, for free.

This also means that the review imposes no delay on the development. The code is reviewed as it is written, and so the review is both more thorough and adds no additional step to the process.

So, my first answer is… Pair Programming!

Don’t wait to commit

This is a mind-set thing, and makes perfect sense. It seems very logical to assume that the ideal time to commit our changes is when we think that they are ready for use – the feature that we are working on is complete.

I think it is a bit more complicated than that, though. I describe this in more detail in my post on “Continuous Integration and Feature Branching”.

If we want the benefits of Continuous Integration we need to commit more frequently than “when we think that we are finished”. The only definitive point at which we can evaluate our changes is when we evaluate them against the “production version” of our code, which is represented by trunk (or master). CI on a branch is not CI! It is neither integration (at least not with the version of the code that will be deployed into production), nor is it continuous (because you only integrate when the feature is “finished”).

So to practice Continuous Integration, which is a pre-requisite for Continuous Delivery, we have to commit more frequently to the copy of code destined for production and so we must change our working practices.

This is a big shift for some people. It is probably one of the most profound shifts of mind-set for a developer in the adoption of Continuous Delivery. “What, you want me to commit changes before I am finished?” – Yes!

Continuous Delivery is defined by working in such a way that your software is in a releasable state after every commit. That doesn’t mean that all of the code needs to be useful. It just means that it “works” and doesn’t break anything else.

In the language of Continuous Delivery we aim to “separate deployment from release”. We can deploy small, simple, safe changes into production and only “release” a feature when all of those small changes add up to something useful.
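As a minimal sketch of how that can look in code (all names hypothetical), a simple feature flag lets us deploy a new code path “dark” and release it later by changing configuration:

    # Hypothetical feature-flag sketch: "deployed" is not "released".
    FEATURE_FLAGS = {
        "new_checkout_flow": False,  # on trunk, in production, but dark
    }

    def is_enabled(flag):
        return FEATURE_FLAGS.get(flag, False)

    def current_checkout(basket):
        return sum(basket)

    def new_checkout(basket):
        return sum(basket)  # still evolving; safe while the flag is off

    def checkout(basket):
        # Small, safe changes to new_checkout can be committed and deployed
        # continuously; users only see them when the flag is switched on.
        if is_enabled("new_checkout_flow"):
            return new_checkout(basket)
        return current_checkout(basket)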

This leads us into the territory of a much more evolutionary approach to design. Instead of thinking about everything up front, even for a small feature, we will work in a fine-grained, iterative way that allows us to try ideas and discard them if necessary on the route towards something that works better.

This has lots of good side-effects. Not least it means that I will design my code to allow me to change my mind and get things wrong without wasting all of my work. That means that my code will have good separation of concerns, be modular and will use abstractions to hide the details of one part of my design from others. All of these are hallmarks of high-quality code. So by working more incrementally, I get higher quality designs.

Automated Testing

“I can’t commit to trunk before I am finished because I may break something”. To me, that speaks of a lack of confidence in testing and/or a very traditional mind-set when it comes to testing strategy.

It kind of assumes that you can’t test your feature until it is finished. I think that that is old-school thinking. This is a problem that we know how to solve – “Test First!”.

This problem stems, in part, from the language that we have chosen to describe the use of automation to verify that our code works. We call these things “tests”, which tends to make us think of performing this verification as a final step before we release. I wonder if the adoption of a “test-first” approach would have been different if we had called these things “specifications” rather than tests. “Specify first” seems more obvious, perhaps, than “test first”.

If we see our automated evaluations as “specifications” that define the behaviour that we want of our systems, we must obviously do the thinking, and create the automated version of these specifications, before we start to meet them by building code.
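As a small, hypothetical illustration of the difference that the framing makes: the “specification” below was written first, and the Account class was then written to meet it.

    import unittest

    class InsufficientFunds(Exception):
        pass

    class Account:
        def __init__(self, balance):
            self.balance = balance

        def withdraw(self, amount):
            if amount > self.balance:
                raise InsufficientFunds()
            self.balance -= amount

    # Written before Account existed: a specification of desired behaviour,
    # not an after-the-fact check.
    class AccountSpecification(unittest.TestCase):
        def test_should_refuse_withdrawals_that_exceed_the_balance(self):
            with self.assertRaises(InsufficientFunds):
                Account(balance=50).withdraw(100)

    if __name__ == "__main__":
        unittest.main()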

By building software to meet executable specifications of its behaviour we eliminate whole classes of errors, but, even more importantly, we drive the design of our systems towards higher quality. I argue this in an earlier post on “Test Driven Development”. The properties of code that make it testable are the same properties that we value in “high quality code”.

I have worked on large-scale complex systems where we could release at any time without fear of breaking things because our automated testing caught the vast majority of defects. Employing tests as “executable specifications” which describe the desired behaviours of our systems has a dramatic impact on the quality of the code that we produce.

In a study of production defects, the authors estimated that over 70% of them would be eliminated by a more disciplined use of automated testing.

Using a test-first approach drives quality into our designs, protects against the most common causes of production defects and allows us to move forward with more confidence.

Posted in Agile Development, Continuous Delivery, Culture, Effective Practices, Feature Branching, Pair Programming, TDD | 5 Comments

Pair Programming for Introverts

A friend of mine has recently started work at a new company. She asked me if I’d answer a few questions from their dev team, so here is the first in a short series of their questions and my answers…

Q: “Pair programming has been shown to increase quality and reduce overall development time. Nevertheless, some need heads down focused time on a problem. How do you balance this?”

My preference is to strongly encourage teams to adopt the norm that most work will be done working in pairs, but not to make it a rule. I think it right to leave room for people to decide for themselves when it doesn’t make sense.

However, you are right: ALL of the data that I have seen from studies of pair programming says that it produces higher-quality output, and so, in the long run, is significantly more efficient at delivering new code. More than that, I know of no better way to encourage collaboration, learning and continual improvement in a team than pair programming.

(Links to some of that research at the end of my blog post “Pair Programming – The Most Extreme XP Practice”)

So it is strongly in a team’s interest to adopt and encourage pair programming as the norm. It is not good enough to reject it because some people don’t like it. That would be like mountain rescue teams rejecting the use of ropes because it is annoying to carry them up the hill. Some things have value even if they take some work.

For me, this means that it is worth some effort, maybe even significant effort, for a team to adopt, learn and make pair programming a fundamental part of their development culture.

My experience has been that most people, before they have experienced it, are nervous of pairing.

In part I think that this is a cultural thing, we “program” people to imagine software development as a lonely introspective act. I don’t think that good software development is really like that. It is, at its heart, a process of learning.

We learn best when we can try-out new ideas and quickly discard the bad ones. One way to test ideas is to bounce them off another person. So pair programming provides us with a mechanism to quickly and cheaply exercise ideas and weed out some of the bad ones.

There are also some individuals who will always find pair programming stressful.

If I am honest, I believe that these individuals are of more limited value to the team. They may have value, but it can’t be as much as that of someone of similar skill who learns faster and teaches more.

Introverted people are more sensitive to stimulation than others, and so need more quiet time to reduce the cognitive clutter. I am one of these people. I need, periodically, to be on my own to organise my thoughts. This doesn’t mean that people like this can’t take part in pair programming, it does mean that you have to give them some space, some of the time.

So, my idea of “optimal” is to do most, nearly all, development work in pairs but allow humans to be human. If someone needs time to form their thoughts, or learn some tricky concept alone, or just needs some quiet time to recharge for a bit, give them that time.

There is another important aspect to this. There is some skill to pair programming. It takes time to learn some of the social aspects. For example, one very common behaviour that I see in newbies is that, when I am typing, my pair tells me, letter by letter, when I make a typo or what the instruction should be. They are trying to be helpful, but they are not.

Watch your own typing for a bit. If you are anything like me, your typing will progress forwards and backwards as you make little mistakes and then correct them, all of the time. When this happens you know, as you type, that you made a mistake. Most errors you correct immediately. Someone telling you at this point actually slows you down. It interrupts the flow of your thinking – and it is irritating.

So when you are pairing, and you are not typing, give people a chance to spot, and correct, their own mistakes. Only mention a typo when the typist has moved on and clearly missed it. Only mention the correct use of a language construct or API call if the typist is clearly stuck. Otherwise KEEP QUIET!

The classic description of the roles in pair programming is “Driver” (the person who is typing) and “Navigator” (the person who is not). This is a bit crude, but close. If you aren’t typing, your focus should be on the direction of the design rather than on the typing.

The other important aspect of pair programming as a learning activity is to regularly rotate the pairs. Change pairs often, don’t allow pairs to become stale. My preference is to change pairs every day.

This sounds extreme to some people. It means that nearly everyone works on nearly everything that the team produces over the period of a week or two. It means that you get to see different people’s styles of working (and pairing) and learn from them. It means that you get to work with the person on the team that you find trickiest to pair with and with the person that you enjoy working with the most, on a regular basis.

Pairing means that you are working in very close proximity to other people. Think of your pair as a team, you have shared goals and will succeed, or fail, together. Be considerate, be collaborative, be kind!

If you get this kind of stuff right, then the barriers to pair programming begin to reduce. Even the introverts on your team will not only take part, but will benefit from it.

Pair programming takes time to adjust to. This is not something that you can try for a day or two. It takes a while for a team to get really good at it, so allow yourselves the time, don’t give up too soon.

Posted in Agile Development, Culture, Effective Practices, Pair Programming | 1 Comment

CI and the Change Log

I get into debates about the relative merits of “Continuous Integration (and Delivery)” vs those of “Feature Branching” on a fairly regular basis.

A common push-back against CI, from the feature-branchers, is “you can’t maintain a clean change-log”.

I guess this depends on how important you think the change-log is and what it is for.

Is the change-log as important as, or more important than, working software? Of course not!

I know that statement is a bit extreme, but it is *kind-of* a relevant question. CI is a practice that comes with some trade-offs, but it is the best way that we have yet discovered of maintaining our software in a working state.

Analysis from the “2017 State of DevOps Report” found the following:

“High performers have the shortest integration times and branch lifetimes, with branch life and integration typically lasting hours.

Low performers have the longest integration times and branch lifetimes, with branch life and integration typically lasting days.

These differences are statistically significant.”

The VCS change log tells a story, but what is the story and what is it for?

If I connect my “story/requirements management system” (JIRA etc.) to my VCS via a tag in the commit message, I can trace every commit to a story, so I have traceability. The next question, then, is: what are the use-cases for a change-log?

I can think of two broad groups of usage for a change-log:

1) Some kind of audit-trail of changes, maybe useful for a regulator or compliance person to see the history of changes.

2) An index of changes that a developer can use to navigate the history.

If I adopt CI, and make fine-grained, regular commits, each of them commented and linked to a story (or bug), then I can tell the story of the story. I have my audit trail. It will be very detailed. It may even wander around a bit – “Make the button blue” and later “Make the button green” – but that was the true story of the development. This is a good, accurate representation of the life of the change.

I know that each commit was related to the story, so from the perspective of an auditor I have a definitive, albeit granular, statement.

From the perspective of a developer wanting to know what change did what, I have a more detailed picture too, because of this more granular reporting. I can build up, in fine detail, the story of the evolution of the ideas. I have not lost anything; I have more information, not less. The picture may be a bit messier, but that only reflects the reality of the evolution of the design.
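As a sketch of that kind of navigation (assuming, hypothetically, commit messages that embed JIRA-style story keys such as “PROJ-123”):

    # Hypothetical sketch: rebuild "the story of the story" from a
    # fine-grained change-log, given story keys in commit messages.
    import re
    import subprocess
    from collections import defaultdict

    def commits_by_story():
        log = subprocess.run(
            ["git", "log", "--pretty=format:%h %s"],
            capture_output=True, text=True, check=True,
        ).stdout
        stories = defaultdict(list)
        for line in log.splitlines():
            for key in re.findall(r"\b[A-Z]+-\d+\b", line):
                stories[key].append(line)
        return stories

    # commits_by_story()["PROJ-123"] lists every commit for that story,
    # however fine-grained the commits were.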

I confess that I don’t really understand the desire for a “clean change log”. What does that mean? It seems to me to imply an assumption that once I have finished a “Story” I am done.

What is the difference between me playing “Story 1”, which “makes the button green”, and later “Story 5”, which “makes the button blue”, and me changing my mind in the midst of “Story 3” and making the same change?

I think that this desire for a “clean change-log” may be based on an illusion of software development as an ever increasing collection of desirable features rather than as an exploration of a problem-space. I think that development is much messier than that. It is much more the latter than the former.

If we are not learning-as-we-go that some of our ideas are wrong, we are not doing a very good job of software development. In my world, however granular or not, the idea of a “clean change-log” is an illusion.

I don’t believe that software development is like that. However I work, I am going to be returning to the code over and over again, refining and updating it as requirements are added and as my understanding evolves. So even if I have a log entry per commit, I still need to read them all to know the state of the system at any given point; the only difference is one of granularity.

I am increasingly starting to view the collection of a fine-grained picture of the changes in our development process as an asset, not as a liability. Instead of thinking of the change-log as a linear record, think of it as part of the “historical search-space” of information, linked by keys (like the id of your story and the id of your release candidates), that you can navigate to build any picture you like of what happened. To my mind that is a more powerful tool, not a less powerful one.

Posted in Agile Development, Continuous Delivery, Continuous Integration, Effective Practices, Feature Branching | 5 Comments

Three Distinct Mind-sets in TDD

I have blogged about TDD before. I think that it is one of the most important tools in improving the design of our software, as well as increasing the quality of the systems that we create. TDD provides valuable, fine-grained feedback as we evolve the solutions to the problems that our code is meant to address.

Oh yes, and as a side-benefit, you get some nice efficient, loosely coupled, tests that you can use to find regression problems in future. 😉

I sometimes teach people how to practice TDD more effectively, and one subtlety that people often miss is the difference in focus for each of the TDD steps.

True TDD is very simple: it is “RED, GREEN, REFACTOR”.

  • We write a test, run it and see it fail (RED).
  • We write the minimum code to make it pass, run it and see it pass (GREEN).
  • We refactor the code, and the test, to make them as clean, expressive, elegant and simple as we can imagine (REFACTOR).

These steps are important not just as a teaching aid, but also because they represent three distinct phases in the design of our code. We should be thinking differently during each of these steps…

RED

We should be wholly focussed on expressing the behavioural need that we would like our code to address. At this point we should be concentrating only on the public interface to our code. That is what we are designing at this point, nothing else.

If you are thinking about how you will implement this method or class, you are thinking of the wrong things. Instead, think only about how to write a nice clear test that captures just what you would like your code to do.

This is a great opportunity to design the public interface to your code. Focusing on making the test simple to write means that if ideas are easy to express in our test, they will also be easy to express when someone, even you in the future, uses your code. What you are really doing, at the point when you strive for a simple, clear test, is designing a clean, simple to use, easy to understand API.

Treat this as a distinct, separate step from designing the internal workings of the code. Concentrate only on describing the desired behaviour in the test as clearly as you can.
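For example (a hypothetical sketch), a first test for a class that does not yet exist; its only purpose is to design the public interface:

    # RED: ShoppingBasket does not exist yet, so this test fails.
    # We are designing the public interface, not the implementation.
    import unittest

    class ShoppingBasketTest(unittest.TestCase):
        def test_a_new_basket_has_a_total_of_zero(self):
            basket = ShoppingBasket()  # not written yet - we see RED
            self.assertEqual(basket.total(), 0)

    if __name__ == "__main__":
        unittest.main()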

GREEN

Experienced TDD practitioners, like me, will tell you to do the simplest thing that makes the test pass, even if that simple thing is trivial, or even naive. The reason that we advise this is that your code is currently broken: the test is failing. You are at an unstable point in the development.

If you start to try and do more complex things at this point, like make your design elegant or performant or more general, you can easily get lost and get stuck in a broken state for a while.

If the “simplest thing” is to return a hard-coded value, hard-code it!

This does a couple of things. It forces you to work in tiny steps, a good thing, and it also prompts you to write more tests that allow you to expand the logic of your code, another good thing.
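Continuing the hypothetical basket example from the RED step, the “simplest thing” really can be this naive:

    # GREEN: the minimum code to make the failing test pass.
    class ShoppingBasket:
        def total(self):
            return 0  # hard-coded; the next test will force real logic

A second test, say for a basket containing one priced item, immediately forces us to replace the hard-coded value with real behaviour.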

Your tests should grow to form a “behavioural specification” for your code. Adopting the discipline of only writing production code when you have a failing test helps you to better elaborate and evolve that specification.

Don’t worry, we won’t forget to tidy-up the dumb, overly simplistic things that we do at this point.

Over-complicating the solution is one of the commonest mistakes that I see TDD beginners make. They try to capture too much in one step. They prefer fewer, more complex tests to many small, simple tests that prod and probe at the behaviour of their system. The small steps, in thinking and in code, help a lot. Don’t be afraid of many small, simple tests.

REFACTOR

Always refactor on a passing build. Wait until you are in the “GREEN” state before you begin. This keeps you honest and stops you wandering off into the weeds and getting lost! Make small simple steps and then re-run the tests to confirm that everything still works.

Refactoring is not just an afterthought, it is not just about aligning the indents and optimising the imports. This is an opportunity to think a bit more strategically about your design.

It is important that we treat it as a separate step. I often see things that I want to change, either when writing a test (RED) or when writing code to make the test pass (GREEN). On my good days, I remember that this is not the time. I make a note and come back to it once the test is passing. On my bad days I often end up making mistakes, trying to do things in steps that are too big and complicated, rather than small and simple, and so I end up having to revert, or at least think a lot harder than I need to.

If you use a distributed VCS like Git, I recommend that after each refactoring step, once you have checked that the tests all pass, you commit the change. The code is working, and the committed version gives you a chance to step back to a stable state if you wander off into more complex changes by mistake.

In general, I tend to commit locally after each individual refactoring step, and push to origin/master after finishing refactoring, but before moving on to the next test.

Another beginner mistake that I frequently observe is to skip the refactor step altogether. This is a big mistake! The refactor step is the time to think a little more strategically. Pause and think about the direction in which your code is evolving, and try to shape the code to match this direction. Look for the cues that tell you that your code is doing too much or is too tightly coupled to surrounding code.

One of my driving principles in design is “separation of concerns”: if your code is doing “something AND something else”, it is wrong. If your code is doing a business-level calculation and is also responsible for storing the results – wrong! These are separate and distinct concerns. Tease out new classes, new abstractions, that allow you to deal with each concern independently. This naturally leads you down the path towards more modular, more composable designs. Use the refactoring step to look for the little cues in your code that indicate these problems.
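As a small, hypothetical sketch of that separation: the calculation and the storage live behind different abstractions, so each can change, and be tested, independently.

    # A hypothetical refactoring outcome: calculation and storage are
    # separate concerns, joined only by a narrow abstraction.
    class InMemoryResultStore:
        def __init__(self):
            self.saved = []

        def save(self, value):
            self.saved.append(value)

    class SalesReport:
        def __init__(self, store):
            self.store = store  # any object with a save() method will do

        def generate(self, figures):
            total = sum(figures)    # the business calculation: one concern
            self.store.save(total)  # persistence, delegated: another concern
            return total

    report = SalesReport(InMemoryResultStore())
    assert report.generate([1, 2, 3]) == 6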

If the set-up of your tests is too complex, your code probably has poor separation of concerns and may be too tightly-coupled to other things. If you need to include too many other classes to test your code, perhaps your code is not very cohesive.

Practice pausing for refactoring every single time you have a passing test. Always look and reflect: “could I do this better?”, even if sometimes the answer is “no, it is fine”.

The three phases of TDD are distinct and your mental focus should also be distinct to maximise the benefit of each phase.

Posted in Continuous Integration, Effective Practices, Software Design, TDD | 10 Comments