I gave a presentation on my recommended approach to Acceptance Testing today, here at GOTO Copenhagen.
You can see an earlier version of this talk, from another conference here. GOTO will be publishing their version soon 😉
I ran out of time for questions, but here are my answers to questions submitted via the GOTO app…
Question: Those slides could use a designer’s touch, though 🙂
Answer: Fair enough, I am not a designer 😉
Question: How do we obtain repeatable tests when we can’t avoid each test action updating the system state? How do we cope with non-repeatable tests?
Answer: I have yet to find a case where the “Functional Isolation” techniques that I described don’t suffice. Use the existing structures in the system to isolate test cases from one another.
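As a minimal sketch of functional isolation, assuming a hypothetical test DSL: every logical name a test uses is mapped to a unique real name, so concurrent tests in one shared deployment never touch each other’s data. (The `Dsl` class and its methods are illustrative, not a real library.)

```python
import uuid

class Dsl:
    """Hypothetical test DSL: isolates tests by mapping every logical
    name to a unique real name in the shared deployed system."""
    def __init__(self):
        self._aliases = {}

    def real_name(self, logical_name):
        # First use of a logical name mints a unique alias; later uses
        # within the same test resolve to the same alias.
        if logical_name not in self._aliases:
            self._aliases[logical_name] = f"{logical_name}-{uuid.uuid4().hex[:8]}"
        return self._aliases[logical_name]

# Two tests both say "my-account", but each Dsl instance resolves it
# to a distinct account, so the tests cannot interfere.
test1, test2 = Dsl(), Dsl()
assert test1.real_name("my-account") != test2.real_name("my-account")
assert test1.real_name("my-account") == test1.real_name("my-account")
```

Because the isolation rides on structures the system already has (accounts, users, orders…), every test can run concurrently against one deployed instance.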
Question: Great talk! How do you suggest keeping (concurrent) test cases isolated when faking the system time?
Answer: Thanks 😉 This is one of those cases where sharing one deployed version of the system doesn’t work. For “Time-Travel” tests, each test needs its own instance of the System Under Test. So for each time-travel test you have to incur the cost of deploying and starting the system – these tests aren’t cheap!
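One common way to make time-travel tests possible at all is to have the system depend on an injectable clock rather than the real system time. A sketch, with illustrative names (`TestClock`, `InterestCalculator` are assumptions, not part of any real system):

```python
from datetime import datetime, timedelta

class TestClock:
    """Hypothetical injectable clock: the system asks it for 'now',
    and a time-travel test can wind it forward."""
    def __init__(self, start):
        self._now = start

    def now(self):
        return self._now

    def travel(self, **kwargs):
        # e.g. clock.travel(days=30) advances the fake time.
        self._now += timedelta(**kwargs)

class InterestCalculator:
    """Illustrative SUT component: depends only on the clock
    abstraction, never on the real wall clock."""
    def __init__(self, clock):
        self._clock = clock

    def days_since(self, start):
        return (self._clock.now() - start).days

clock = TestClock(datetime(2024, 1, 1))
calc = InterestCalculator(clock)
start = clock.now()
clock.travel(days=30)          # the test "time travels" one month
assert calc.days_since(start) == 30
```

Since the clock is per-instance state, two tests travelling to different dates each need their own instance of the system – which is exactly why these tests carry a deployment cost.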
Question: What are the biggest challenges of implementing executable specifications in a team?
Answer: I think that the tech is relatively simple; the hard parts are changing the way that people think about tests and testing. Which parts of this are *most* difficult depends on the team. Some teams find it very hard to move responsibility for the tests to developers. Others find it difficult to translate often over-complex, too-large requirements into sensible user-stories that make it easy to map from story to executable specification.
Question: If you spend such effort on building a nice DSL for the tests… why doesn’t the actual system just have such a nice API?
Answer: Good question. I think that good design pays, wherever you apply it. But however good your API design, I advise that you keep a layer of “insulation” between your test cases and the API. If your API is a wonderful exercise in clarity and brevity, the mapping to domain language will be simple, but you still need a separate place that allows you to manage changes. Executable Specifications/Acceptance Tests are a special case: when writing them you are expressing ideas at a different level of abstraction from what is needed through a programmatic interface. So you want enough “wiggle-room” to allow you to cope with those variances.
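A sketch of that insulation layer, under assumed names (`PaymentsApi`, `PaymentsDriver` are hypothetical): the test case speaks domain language to a driver, and only the driver knows the shape of the API, however clean that API is.

```python
class PaymentsApi:
    """Stand-in for the (possibly very clean) production API."""
    def submit(self, payer, payee, amount_cents):
        return {"status": "ACCEPTED", "amount": amount_cents}

class PaymentsDriver:
    """Insulation layer: the only place that knows the API's shape.
    If the API changes, only this class changes, not the test cases."""
    def __init__(self, api):
        self._api = api

    def pay(self, payer, payee, amount):
        # Translate domain-language amounts ("10.00") into API terms.
        result = self._api.submit(payer, payee, int(float(amount) * 100))
        return result["status"]

# The test case reads in domain language, one level above the API.
driver = PaymentsDriver(PaymentsApi())
assert driver.pay("alice", "bob", "10.00") == "ACCEPTED"
```

When the API later renames `submit` or switches to different units, dozens of test cases stay untouched; only the driver absorbs the change.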
Question: If I am a developer of System B, which is downstream from System A, I should write tests for the output of System A, to check if it still respects the interface. But, how do I know what are the inputs to the System A to make it output what I am expecting in my test?
Answer: That is a problem, but it is a smaller problem than doing ALL of your testing via system “A”, which is what I am advising against. Let’s invert the question: if you are a developer of system “B”, how much do you care about upstream system “A”? Write tests to exercise the system to the degree that you care about it. (Good design would suggest that you should care about it to the minimal degree.) System “A” talks to my system, system “B”, so I can either confirm how it talks to my system with these tests, or I can cross my fingers and hope that I don’t get a call at 3am when system “A” decides to do something else 😉
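One way to express “care to the minimal degree” is a contract check owned by system “B” that verifies only the fields B actually relies on in A’s output, ignoring everything else. A sketch with an assumed message shape (the field names are illustrative):

```python
def check_contract(message):
    """Hypothetical contract check owned by System B: verify only the
    fields B relies on from upstream System A's messages."""
    required = {"order_id": str, "amount_cents": int, "currency": str}
    for field, expected_type in required.items():
        if field not in message:
            return f"missing field: {field}"
        if not isinstance(message[field], expected_type):
            return f"wrong type for {field}"
    return "ok"   # extra fields A adds are deliberately ignored

good = {"order_id": "A-17", "amount_cents": 1250, "currency": "DKK", "extra": 1}
bad = {"order_id": "A-17", "currency": "DKK"}
assert check_contract(good) == "ok"
assert check_contract(bad) == "missing field: amount_cents"
```

Run against a captured or sampled output of system “A”, this tells B’s team the interface still holds, without needing to know which inputs drove A to produce it.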
Question: Test infrastructure is also code, how do you test the test infrastructure itself?
Answer: There is clearly a law of diminishing returns at play here. You can’t apply TDD for every test case. I aim to make my DSL clear, simple, and high-level enough that it doesn’t need testing (at the level of individual test-cases). Sometimes, though, I will use TDD to develop widely-used, more complex bits of my test infrastructure. I see automated testing in general, and TDD in particular, as a really important tool in a developer’s kit-bag. It is like having power tools. Sometimes I may need to do something that is too simple for the power tools (using a regular screwdriver to change the batteries in my smoke alarm). Other times I will use the power tools because they will be faster and more reliable when I am doing something more complex (assembling a kitchen cabinet) 😉
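For example, a small argument-parsing helper reused across many DSL statements is exactly the kind of “widely-used, more complex bit” worth test-driving on its own. A sketch (the helper and its `name: value` convention are assumptions for illustration):

```python
def parse_args(defaults, args):
    """Hypothetical DSL helper: merge 'name: value' arguments over a
    dict of defaults. Reused by many DSL statements, so worth TDD."""
    result = dict(defaults)
    for arg in args:
        name, _, value = arg.partition(":")
        result[name.strip()] = value.strip()
    return result

# The kind of small checks used to test-drive the helper itself:
assert parse_args({"amount": "0"}, ["amount: 100"]) == {"amount": "100"}
assert parse_args({"amount": "0"}, []) == {"amount": "0"}
```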
Question: Losing the “checks and balances” aspect does not seem like a good idea. If it’s the developer owning the acceptance tests, won’t they just test what they think is valid?
Question: Assuming that it’s cost-intensive, how much acceptance testing is enough?
Answer: You do spend a lot of time, and money on infrastructure, to adopt my recommended approach to testing. However, ALL of the data from the industry says that it pays for itself. This is a way of going faster with higher quality. If it wasn’t, I wouldn’t recommend it! What happens is that you trade off the effort of building and maintaining automated tests against the effort of fixing bugs from production. Organisations that practice Continuous Delivery normally report at least an order of magnitude reduction in bugs in production. Imagine what you could do if you had 1 in 10 of the bugs that you currently have. Imagine if your team could spend 44% more time on new work?
Thanks to everyone for all the questions, enjoy the rest of the conference!