Lately I’ve been getting my feet wet with Session Based Testing. While it’s still fresh and new I wanted to take the time to write out my first impressions. Take a look and I’d love to hear other people’s experiences and feedback!
There’s this pressure when writing test plans and test cases to try to pre-plan everything. Once you’ve written everything up and moved on to actual execution, things always come up. Then as the plans change or evolve you carry the baggage of keeping that original documentation current and accurate. It turns testing into a waterfall-style project inside an agile sprint.
It’s not that there is no planning with Session Based Testing. We create charters for testing that we can accomplish in a couple hours. The planning and execution is incremental. The debriefs take some getting used to, and we are still homing in on the frequency and format for them. So far it is really tightening up the feedback cycle for testers.
It’s really been making me wonder: is this what it feels like when a team successfully shifts from waterfall to agile?
So literally, I take a look at a user story and start breaking down the testing effort into charters: chunks that will fit into our session timebox. I don’t stress over coming up with every topic at once. I just need enough to get me started, since once I start testing and learn more, I can add or remove charters as needed.
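To make that concrete, here’s a minimal sketch of what a charter backlog amounts to. All the names and the 90-minute timebox are hypothetical examples of my own, not part of any formal Session Based Testing specification — the point is just that a charter is a short mission statement sized to a session, and the list is expected to grow and shrink as you learn:

```python
# A minimal sketch of a charter backlog for one user story.
# The missions and the 90-minute timebox are made-up examples,
# not a prescribed Session Based Testing format.

SESSION_TIMEBOX_MINUTES = 90

# Just enough charters to get started; more get added (or dropped)
# as testing reveals what actually matters.
charters = [
    "Explore the password-reset flow with expired tokens",
    "Check that reset emails render correctly in major mail clients",
]

def add_charter(backlog, mission):
    """Record a new testing mission discovered mid-session."""
    backlog.append(mission)

# Partway through a session I notice something worth its own charter.
add_charter(charters, "Probe rate limiting on repeated reset requests")
print(len(charters))  # → 3
```

The lightweight structure is the whole point: nothing here needs to be kept in sync with a big up-front plan.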
Then it’s just a matter of focusing attention on the charter at hand. Conceptually it’s very similar to the Pomodoro Technique, except now I have my reporter hat on and I am documenting my actions and thoughts as I go in notepad.
I observe and report freely as long as it fits into the charter for the session. I really like not having to focus on prescriptive steps, or boxing results into pass or fail. I find that becomes a rote exercise which leads to less productive testing.
Then at some point after the session, or at the end of the day, I debrief my manager and we talk about what I found. It’s not a review of my plan or a formal status report; it’s a discussion. We can share ideas or insights and talk about strategy going forward, but in the end I think we walk away from it mutually informed and aware of the state of the project.
It’s a Better Artifact
The argument I’ve always seen used for test cases is that they can be handed off to people. For training, or when things get busy, just hand over the scripts and people can start testing. It sounds good and all, but there’s a lot of subtlety that’s lost between the author of the scripts and the reader of the scripts.
The artifact that comes out of Session Based Testing is session notes. The tester describes what they have done, their actions, reactions, etc. It’s more a narrative than a set of reproduction steps. It takes practice, and frankly I’m sure my notes on these first few user stories aren’t that great. Even so, I feel someone coming on to our team and reading these beginner notes will better understand what was done, and be better equipped to contribute meaningfully to the team, than if I had written up test cases.
It’s easier to consume, since it’s more narrative than procedural. There’s no intermediary translation step between the tester and the test case.
Another thing we found is that it fits nicely into our existing ALM tooling, since we can include our session notes directly in our task work items. In the end it makes testing very transparent.
A Work In Progress
I glossed over a lot of the nitty-gritty details, and this is by no means an instruction manual for getting started with Session Based Testing. Honestly, we’re taking it out for a test drive, kicking the tires, and probably playing a little fast and loose with the process to get a sense of how it might feel to use every day.
So while I struggle with parts of it, my biggest takeaway is that the process feels very natural. It feels like it enables me to do better testing, and that’s what is important.