One of the sessions I attended at this year’s CAST conference was Paul Holland’s presentation on how he runs his test team at Alcatel-Lucent. His team does DSL physical layer testing. Overall, the two themes I took from Paul’s talk were to 1) only do what is really required and 2) only do what makes sense.
As an example, he talked about having to produce a 320-page document that was supposed to include every step of every test that the team had. There were problems with the document, the main one being that it didn’t match what the team actually did. When he asked why they needed the document, the reply was that they needed it in case a customer ever asked for it. It turned out that a long time ago a customer did ask for the document; they didn’t have it, so they created it, and they had been creating it ever since. In the intervening years no customer had asked for the document, but they kept creating it “just in case.” Another example: there was a policy of having a pass rate of 95%. As long as they reported a number at least that high, no one asked any questions about the testing itself. Since Paul thought that being satisfied with a particular number didn’t make sense, he made sure that every time his team reported a pass rate it was less than 95%, so they would have to report details on the testing that was done and what it found.
How can he get away with it? One reason is that his job isn’t particularly sexy: no one else really wants it. Not having to do things just to keep your job allows you to focus on what really matters. Another reason is that when he pushes back, he offers alternatives that make sense. When he first started pushing back on things, he thought he would be labeled, and it turns out that he was. He was labeled a “test expert.”
Currently, Paul’s team produces two documents for every round of testing: a planning document that contains the test plan, strategy, and test cases, and a report at the end on the testing that was done and the results. Both documents are short and consider their audience. The planning document, for example, is written for a member of his team, someone with domain knowledge of DSL physical layer testing.
The planning document is short (the one he showed in the presentation was, I think, 3 pages). It contained the testing charters, an estimate of the number of sessions each testing item would take, and testing ideas for each. The idea is that testing of a particular program is not exactly repeatable but “repeatable enough.”
Paul manages his team with stickies on a whiteboard. He isn’t following any of the well-known “stickies on the whiteboard” methodologies like Scrum or Kanban; it’s an approach that he developed himself. Some aspects of this management approach:
- Three areas of the whiteboard: To Be Done, Working On, and Done
- Stickies in the To Be Done section are prioritized by their vertical position (and re-prioritized about twice a week) – position of stickies in other areas has no significance
- No daily scrums – the testers “report” their status by updating the stickies
- Every sticky on the board represents 1/2 day of work
- Interrupts are OK – they are added to the white board in a special color (light yellow)
- At some point, a section for stickies that weren’t going to be done was added
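The list above describes a physical process, but it maps cleanly onto a simple data structure. Here is a rough Python sketch, purely my own illustration; Paul’s system is stickies on a real whiteboard, and none of these class or method names come from his talk:

```python
from dataclasses import dataclass
from typing import List, Optional

# A hypothetical model of the whiteboard. Every sticky is 1/2 day of
# work; interrupts get a special color; only the To Be Done section
# is ordered (vertical position = priority).

@dataclass
class Sticky:
    charter: str               # what the half-day of work covers
    color: str = "white"       # feature area; interrupts become "light yellow"
    interrupt: bool = False

class Whiteboard:
    def __init__(self) -> None:
        # Order matters only in to_be_done: index 0 is highest priority.
        self.to_be_done: List[Sticky] = []
        self.working_on: List[Sticky] = []   # position carries no meaning here
        self.done: List[Sticky] = []
        self.wont_do: List[Sticky] = []      # the section added later for work that won't be done

    def add(self, sticky: Sticky, priority: Optional[int] = None) -> None:
        """Add work; interrupts are marked with the special color."""
        if sticky.interrupt:
            sticky.color = "light yellow"
        if priority is None:
            self.to_be_done.append(sticky)
        else:
            self.to_be_done.insert(priority, sticky)  # vertical position = priority

    def start(self, sticky: Sticky) -> None:
        self.to_be_done.remove(sticky)
        self.working_on.append(sticky)

    def finish(self, sticky: Sticky) -> None:
        self.working_on.remove(sticky)
        self.done.append(sticky)

    def days_remaining(self) -> float:
        # Every sticky represents half a day of work.
        return 0.5 * (len(self.to_be_done) + len(self.working_on))
```

In this sketch, re-prioritizing is just reordering `to_be_done` (something only Paul does, per the Q&A below), and `days_remaining` is the raw input to the “when will you be done?” conversation.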
When people ask “When will you be done with testing?” the answer starts with the whiteboard and what has been done so far. Then: “When would you like it to be done?”
- “If you want us to be done in 2 hours, here’s what you get…”
- “If you want us to be done in 2 days, here’s what you get…”
- “If you want us to be done in 2 weeks, here’s what you get…”
- “If you want us to be done in 2 months, here’s what you get…”
Once the testing is done, the report is also brief.
- Results of each charter: # of sessions done, # of sessions not done, # of sessions originally planned
- A list of issues found or “no issues.”
Other items from the Q&A:
- Vertical movement of stickies in the To Be Done section always done by Paul
- Unlike Kanban, no WIP limit built into the system
- Work items in different feature areas are indicated by different color stickies
- Original batch of stickies come from the plan, reviewed by the team
- Stickies for a component are kept and reused (test library)
- Tracking information is not used for evaluating people – metrics to generate thinking, not for evaluation
- Session reports were not reviewed by Paul – he pushed it to the team
- Test escapes? 2 in 4 years, one his fault, one in an area they made a conscious decision not to test
I have to admit, I was pretty excited about this approach, and I’m thinking about what of it, if any, I could use in my situation at work. This was a really great session in terms of ideas per square inch.