For almost a year now, those who follow this blog have heard me talk about *THE BOOK*: when it will be ready, when it will be available, and who worked on it. This book is special in that it is an anthology. Each essay can be read by itself, or in the context of the rest of the book. As a contributor, I think it’s a great title and a timely one. The point is, I’m already excited about the book, about its premise, and about the way it all came together. But outside of all that… what does the book say?
Over the next few weeks, I hope to answer that, and to do so I’m going back to the BOOK CLUB format I used last year for “How We Test Software at Microsoft“. Note, I’m not going to do a full in-depth synopsis of each chapter (hey, that’s what the book is for 😉 ), but I will give my thoughts as they relate to each chapter and area. Each chapter will get its own space and entry.
We are now into Section 3, which is sub-titled “How Do We Do It?”. As you might guess, the book’s focus shifts yet again. We have defined the problem. We’ve discussed what we can do about it. Now let’s get into the nuts and bolts of things we can do, here and now. This entry covers Chapter 13.
Chapter 13: Exploiting the Testing Bottleneck by Markus Gaertner
Markus starts out this chapter by stating that in many organizations testing is seen as the bottleneck of software development projects. That perception is often wrong. Requirements, architecture and code also play a part, and all of their details fly under the radar until we reach the testing-before-delivery phase. That’s when lots of inefficiencies come to the fore, and we have to deal with them all at once. To reduce the cost of testing, we need to examine the whole project and optimize every step where we can. This chapter uses a typical Agile project and describes how it could be optimized.
Agile projects use iterations to define the “heartbeat” of the project. After each iteration, the team delivers a “shippable” product. The team plans each iteration individually; business priorities may change, so iterations allow us to adapt to customer needs. The team creates acceptance tests while developing user stories, and testers get involved right from the start. By providing estimates during iteration planning, helping the customer identify acceptance tests, and defining the risks in the software, testers reduce the overall testing effort up front.
The product owner maintains a product backlog of prioritized user stories. The team discusses which stories they think they can finish during the iteration. Any time a new requirement is identified, a new story card is created. To help set priority, the product owner and a programmer determine how much effort the feature may take. That programmer then checks the estimate with a tester to make sure they both understand the time commitments. The point, again, is that testers are involved in planning and estimating right from the start.
Testers help the customer define basic acceptance tests for user stories. These tests will likely consist of simple happy paths and some corner cases relevant to the story in question. Testers help the customer and the programmer think about critical conditions which the team may not have initially considered. Problems are discussed immediately, rather than waiting until the testing phase of the project. Trade-offs are considered as well: how thoroughly a case is tested may depend on how critical the functionality is and how much effort is worth applying.
As stories are chosen for implementation, testers contribute their view on the testability of features. By identifying potential issues early on, testing costs can be reduced before any implementation is done. Programmers become aware of testing challenges, and testers can learn about the potential pitfalls of a seemingly easy-to-test story. When a team gets together early in the project, they can build a shared mental model, which helps reduce misunderstandings.
Collaboration is key. Pair programming, daily stand-up meetings and pair testing with another tester, a developer, or a customer are all parts of this collaboration. When the whole team sits together, testers get a more thorough understanding of the problems. Testers contribute greatly just by overhearing the team’s talk. The daily stand-up is more than just saying what you have done the day before and will do today. By sharing progress and obstacles, we can build trust among team members. Testers do not hide problems in their progress. They discuss them openly. Because of this, testers are no longer left alone with the problems they encounter. Instead the whole team contributes to help solve the problems.
Testing, of course, occurs during each iteration. Setting dedicated time aside to help the team learn new things (coding or testing related) helps the team prepare for possible future issues. Test Driven Development helps the testing process by putting testing at the core of the development activities. Testers use Acceptance Test Driven Development to help develop tests that integrate with the features being developed. Exploratory Testing methods are used to probe feature behavior and follow paths that might not have been initially considered.
Getting to Done means that the feature has been Implemented, Tested, and Explored:

- Implemented means Red – Green – Refactor (the Kent Beck model for Test Driven Development)
- Tested means Discuss – Develop – Deliver
- Explored means Discover – Decide – Act
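To make the Implemented step concrete, here is a minimal sketch of one Red – Green – Refactor pass. The function and the story behind it are invented for illustration; the book does not prescribe this example:

```python
# Step 1 (Red): write a failing microtest first. With no apply_discount()
# defined yet, running this test fails -- that is the "Red" state.
def test_apply_discount():
    assert apply_discount(10000, 20) == 8000  # 20% off $100.00, in cents

# Step 2 (Green): write just enough code to make the test pass.
def apply_discount(price_cents, percent):
    return price_cents * (100 - percent) // 100

# Step 3 (Refactor): clean the code up while the test stays green,
# e.g. add a guard for invalid input without changing passing behavior.
def apply_discount(price_cents, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price_cents * (100 - percent) // 100

test_apply_discount()  # still passes after the refactor
```

The rhythm matters more than the example: each new behavior starts life as a failing test, and refactoring happens only under the safety net of tests that already pass.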
The iteration is finally wrapped up in a customer demonstration to get feedback about the just-developed features, and a reflection workshop helps the team improve how they work. During the iteration demo, the new features are presented to the stakeholders in the working software, so the development team receives direct feedback from the customer about progress.
Note, this just scratches the surface of the details provided in this chapter, but we can already see that there are many opportunities where the “testing bottleneck” can be avoided by making testing part of the project much earlier. Testing should not happen only at the end of a project, where a lot of scrambling is needed to examine the issues discovered. There is plenty of opportunity for up-front testing, from both developers and testers, and this up-front testing can do a lot to prevent a pile-up later on.
Acceptance Test Driven Development allows us to examine the requirements for the current iteration. By concentrating on business-facing tests and the expectations behind them, we can focus on meaningful tests. Outdated or needless tests can be eliminated.
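As a sketch of what such a business-facing test might look like: the user story, the $50 threshold, and the function names below are all invented for illustration, and teams often express these tests in tools like FitNesse or Cucumber rather than in plain code:

```python
# Hypothetical story: "Orders of $50 or more ship for free."
def shipping_cost(order_total):
    """Code under test: flat $5 shipping, free at or above $50."""
    return 0 if order_total >= 50 else 5

def test_free_shipping_happy_path():
    # Given an order totaling $60, shipping should be free.
    assert shipping_cost(60) == 0

def test_shipping_boundary_cases():
    # Corner cases a tester might raise with the customer: the exact
    # threshold ships free, one dollar under it does not.
    assert shipping_cost(50) == 0
    assert shipping_cost(49) == 5

test_free_shipping_happy_path()
test_shipping_boundary_cases()
```

Because the test encodes the customer’s expectation directly, it stays meaningful for as long as the business rule does, and it can be retired the day the rule changes.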
Automated System Tests often flow from the acceptance tests defined and delivered during a particular iteration. The team then has a large number of relevant tests that are automated and can be run at the press of a button. Over time the team creates reliable tests which can be run continuously.
Acceptance tests help spawn other tests. A tester working on a story can come up with additional tests that were not previously considered. Those tests can then be automated as well, covering more functionality automatically and freeing the tester to explore additional avenues.
Test Driven Development’s primary mission is to drive the design of the code. The Red-Green-Refactor cycle allows for the development of robust and flexible code. This avoids the big redesign that often happens in traditional software projects when an issue is discovered late because testing had not been done previously.
Automated microtests are a by-product of TDD. Since every new line of code is tested even before it is written, lots of microtests are created as the code gets written. This leads to unit tests which the developers run before submitting their code, providing nearly instant feedback: when the tests pass, the developers check in their code. If the build environment differs from the programmer’s environment, or there is an incompatibility, Continuous Integration builds will notify the team about the problem.
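A microtest of this kind is small and fast enough to run on every change. Here is an illustrative one using Python’s unittest module (the function under test is invented for the example):

```python
import unittest

def normalize_username(name):
    """Code under test: trim whitespace and lowercase a username."""
    return name.strip().lower()

class NormalizeUsernameTest(unittest.TestCase):
    # A microtest pins down one small behavior; a developer runs the
    # whole suite locally before check-in, and CI reruns it on every build.
    def test_strips_whitespace_and_lowercases(self):
        self.assertEqual(normalize_username("  Alice "), "alice")

    def test_already_normalized_name_is_unchanged(self):
        self.assertEqual(normalize_username("bob"), "bob")

if __name__ == "__main__":
    unittest.main(exit=False)  # run the suite without exiting the interpreter
```

Hundreds of tests like this accumulate naturally over a project, and because each one is tiny, a failure points almost directly at the offending change.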
Everyone on an Agile team is a tester, not just the dedicated testers. The customer helps define meaningful tests right from the start. Developers use TDD to help make sure the code does what it’s supposed to do, and CI builds help determine whether a change is incompatible with what has been checked in previously. Testers make sure that all the working parts behave as expected, and use an Exploratory approach to determine how the application behaves under a variety of circumstances. Automated tests help make sure that steps are not forgotten.
The key to exploiting the testing bottleneck is not to make the testers work faster or harder, or to get more testers; it’s to understand that testing can, should, and must happen at all stages of the project. Agile methodologies are designed with this very idea in mind. By having the test process start at the very beginning of a project iteration, testing can be done at all levels of the project, from code creation to final system integration and everything in between. Testing is front-loaded, not back-ended, and thus the bottleneck, if not completely eradicated, can be greatly reduced.