A note first: this is my own experience of the workshop. I’m sure other experience reports will pop up soon, and they may have different, but perfectly valid, personal experiences to share.
Jess Lancaster, Jon Hagar, Doug Hoffman, Jeremiah Carey-Dressler, Nick Stefanski, Pete Walen, Rob Sabourin, David Hoppe, Chris George, Alessandra Moreria, Justin Rohrman, Matt Heusser (facilitator), Simon Peter Schrijver (facilitator), Erik Davis (facilitator)
Day one began with presentations from:
Jon Hagar on current industry resources for skill lists and education, such as ISO standards, IEEE standards, SWEBOK, and ISTQB.
Matt Heusser on the goal of the workshop, defining what a skill is, and a discussion of how, and whether, we should model the skill list in any particular way. The working definition we came up with for a skill is any activity which can be isolated, demonstrated, evaluated, developed, and observed.
After the presentations, we went around the room and did introductions, with a brief statement of why we were there, what we planned to contribute, and what we hoped to take away from the event.
After this we began creating and categorizing the skill list. Individuals wrote single skills on index cards over the course of 45 minutes or so. I’m not sure how many cards we ended up creating, but I would guess it was over a hundred. Some were very similar, and some overlapped to a degree. We categorized the cards by theme (examples: social, tech, test design), and this categorized list became version .000001 of our skills inventory. Every skill noted was something someone in the room felt relates directly to the activity of software testing.
After this we formed groups and began to get the categorized list into a wiki. This initial version consisted of a working definition of each skill and a few resources for where someone could go to learn about it. At the end of the day, each group presented the work it had done. We were mostly unhappy with what we had at that point.
Day two began with a brief recap of the previous day and some talk about new tactics we could take. We “mobbed” one skill as a group and came up with a very good example that would serve as the basis for the remaining work. This new style of skill list was significantly more time consuming to create, but, in my opinion, has far more value. We continued working in groups in this style for the remainder of the day, with another recap at the end. This work was mentally exhausting.
Day three was a half day which ended at noon. We spent the morning closing the workshop, which consisted of talking about the remaining work (who was going to do it and how it would get done) and closing remarks.
Some personal notes
Being in a room full of smart people, all actively working side by side to improve the craft of software testing, was an amazing experience for me. I have never participated in a facilitated LAWST-style workshop before, and originally this was intended to be in that format. Groups formed and gelled very quickly, so there was little to no need for facilitation. I heard comments from folks who have attended and facilitated many LAWST workshops that WHOSE was unlike any other workshop they had been to.
The CDT community has a reputation for being contentious and having a certain amount of infighting. I witnessed absolutely none of this. Groups had cordial, open discussions, with disagreements but without any negativity or personal attacks. I think that is an important thing to note.
Day two was long and exhausting; I hit a wall around 2 p.m. and struggled to produce good work after that despite a constant flow of coffee. This kind of work is far more difficult than I imagined prior to the workshop. A monumental effort was put in over the three days, and I’m proud of what was created. It will take some time to get the work into a more complete, presentable state, but I’m looking forward to that day. Feedback and contribution from the testing community will make this living document even more valuable.