Ideas. Ideas. Ideas. Ahhh, ideas.
Ideas are the fabric from which other ideas are made, the surface on which other ideas are sketched and the giants on whose shoulders other ideas stand. Ideas spawn ideas but can also weaken, damage and even destroy them. Ideas are plentiful and powerful and sometimes pathological. But we can’t advance without them.
The Cambridge Exploratory Workshop on Testing is a place for ideas. CEWT:
- Cambridge: the local tester community; participants have been to recent meetups.
- Exploratory: beyond the topic there’s no agenda; bring on the ideas.
- Workshop: not lectures but discussion; not leaders but peers; not handouts but arms open.
- Testing: I reckon you’ll know about testing…
The second CEWT ran on 28th February 2016 with the theme When Testing Went Wrong. There were ten ten-minute presentations, each followed by 20 minutes of discussion. A short blog post can’t do justice to the range of material we covered, so here’s a handful of the threads that appealed to me, with lines from my notes aggregated across talks:
When things have gone wrong (such as – yikes! – your application deleting the C drive of your customers’ customers!) there are many ways in which you can react. The way you choose says a lot about you. The way your company reacts says an awful lot about them. Will there be a witch hunt? Will there be knee-jerk policy change? Will there be blinkered focus on the risks that were, not the risks that are now? Why do we tend to overcompensate? Why is so much of our risk management focussed on the outcome rather than the potential for outcomes? Two bugs are known in production, each with equal likelihood of causing some problem of equal magnitude. The one which is seen by a customer will almost always get disproportionate attention.
Labels are powerful; peer opinion is powerful; preconceptions are powerful. These things are fences to keep ideas isolated and can be hard to break down once established. Things can go wrong when we don’t look past the barriers. We can fail to see useful connections. We can fail to find solutions that would be useful in our contexts. Trying to broker agreement between two sides divided by a conceptual barrier is challenging. Standing on the “wrong” side of a conceptual barrier can be challenging.
Things that go wrong are rarely wrong in isolation. (And by the same token, although not the topic of this workshop, the things that go right are rarely right for one reason alone.) Predicting progress is hard: in order to have control we need feedback; feedback is dependent on learning; learning is non-linear. Testers are part of a larger system of software development and – as part of a quest for value to users – may do work not obviously designated as “testing”. This might make sense in the larger system but it’s important to be aware of the potential impact elsewhere in that system.
This kind of event energises me, which is why I’ll make sure we do it again. CEWT 3 here we come.