
Day 2 of Test Coach Camp: Live Blog (TESTHEAD)

On July 15, 2012, in Syndicated, by Association for Software Testing

Another day, another opportunity to learn from peers, friends and luminaries. Yesterday was a great deal of fun, and today we have a number of other topics begging to be covered and heard, which will make for another very full day.

We started at 9:00 AM and X-voted on topics, as well as adding additional topics that the participants wanted to cover. With that, we gathered the ideas together and made today’s schedule.

An interesting note: since yesterday involved a lot of sit-down talking, this time around there's quite a bit of active content. Wade Wachs is demonstrating juggling as a model for coaching.

Ben Kelly proposed a talk that was titled “It Would Be Easier to Coach Testers if I Could Hit Them With a Stick”. Sounds cheeky, but when we all realized who was proposing the talk, we understood that he meant to use Kendo as a model for coaching. Once we understood that, it quickly became one of the most anticipated talks (I know I’ll be there 🙂 ).

The participants also voted again and decided that they did want to hear about how to re-model Weekend Testing, so I'm hoping to get a chance to record that for a podcast, as well as to discuss a Scouting-related topic: using EDGE to help coach and manage teams.

Session 1 for me is being facilitated by Philip McNeely, and it's titled "How to Coach Testers Back from the Brink". Philip was gracious enough to let me invade with my microphone and laptop to record this for a podcast. We have been having a terrific discussion on how to help testers who are dealing with burnout, disillusionment or other issues affecting their performance. Helping them get more involved, engaged and excited to be dealing with testing again has also been a focus for this session.

For a lot of us, the biggest burnout factor is the sheer repetitious nature of much of what we do. Having to be the bearer of bad news also often weighs on us, and we can get down on ourselves because we are just not often a welcome messenger. Topics such as bug triage, crisis situations, death marches and distractions born of disengagement are other areas we need to deal with. Putting together a sustainable pace and having a realistic way of dealing with the stresses are things we need to be aware of as we work to help people get re-energized and engaged.

The second session was mine, and I had a chance to refine and expand on the talk I gave last year at CAST, which covered the stages of team development and using the EDGE model to mentor testers.

For those not familiar, EDGE is an acronym used in Scouting for a number of different disciplines: learning skills, teaching, leading, etc. EDGE stands for Explain, Demonstrate, Guide and Enable (or Empower, as suggested by Wade, and I think that works fine as well).

I also explained how teams go through the stages of team development (Forming, Storming, Norming, Performing) and how the EDGE principles fit with all of that. In the beginning, a leader is practically a dictator, and as the team grows, learns and develops skills and aptitude, that leader moves from being the dictator, to being a teacher, to being an aide, and then ultimately getting out of the way.


The session I most looked forward to attending was Ben Kelly's Kendo demonstration. Having participated in martial arts as a kid and young adult (Bok Fu Do and Aikido, respectively), I was curious to see how he was going to tie this into test coaching.

The idea he wanted us to understand was that there is a considerable amount of physical detail and muscle memory that needs to be applied. In addition, the rigor and the time it takes to develop the fundamental skills demand so much attention that those not willing to put in the time self-select out of the process. Testers do very much the same thing. The challenge we face is to see how we can help them get through the rigor and repetition without becoming brain-dead in the process. We have an opportunity to make the rigor mean something, and much like in Kendo, regularly getting out and sparring with that rigor makes a lot of the difference between an engaged Kendo-ka and a disinterested one (as well as an engaged tester vs. a disinterested one).

Following lunch, I had a chance to sit in on a conversation with Ken Pier, Cem Kaner, Claire Moss, Philip McNeely, Doug Hoffman and Matt Barcomb for a session called “A vs. E! Huh?!” This one interested me because it’s part of my presentation that I’m giving at CAST tomorrow. What’s the debate?

The debate seems to be Exploratory Testing vs. Automation, as though it's an either/or situation. Many testers are led to believe that they can be manual testers who are primarily exploratory, or automated testers who are programmers, and that there is a division between the two. Cem made an interesting point: all testing is automated, and no testing is automated, really. The dichotomy as presented is flawed and doesn't really exist.

The goal of exploratory testing is to focus on learning new things. In many ways, that's not something that can be extensively automated, because how do we ask new questions we haven't even considered before without diving in and actually exploring? In many tests we learn about emerging principles of the design, and in that process we may find several bugs. However, just because we found them while learning doesn't mean that automating those same tests will necessarily give us any additional benefit. What automation does give us is the ability to run through those tests again to make sure the system is behaving the way we have learned (and thus now expect). In other words, as long as we are learning and getting new information, what we are doing is exploratory; the fact that the process is computer-assisted is a bonus.

The final session that I personally facilitated/presented covered my questions and ideas about what to do with, and how we could improve or modify, Weekend Testing. I got some great feedback regarding a number of areas, including the actual Weekend Testing site, the way that we present information, how we announce sessions, and how to determine what our core competency and mission is (which in my mind is to be an effective coaching and mentoring ground for testers, both for newer testers to be mentored and for experienced testers to provide mentoring). I recorded this session, and believe me, I'll be going through this one with a fine-tooth comb (not sure if this one will become a podcast, but I hope to act on as many of the suggestions as I can).

Right now, we are doing a retro on the day, covering what worked and what we could improve for next time. My suggested improvement? Anyone who didn't attend this year, make a point to come the next time we schedule Test Coach Camp. I had a great time, and I definitely want to participate again.