I’m testing. I can see a potential problem and I have an investigative approach in mind. (Actually, I generally challenge myself to have more than one.) Before I proceed, I’d like to get some confidence that the direction I’m about to take is plausible. Like this:
I have seen the system under test fail. I look in the logs at about the time of the failure. I see an error message that looks interesting. I could – I could – regard that error message as significant and pursue a line of investigation that assumes it is implicated in the failure I observed.
Or – or – I could take a second to grep the logs to see whether the error message is, say, occurring frequently and just happens to have occurred coincident with the problem I’m chasing on this occasion.
And that’s what I’ll do, I think.
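That quick check can be as simple as a grep or two. A minimal sketch — the log file, its layout, and the error text here are all hypothetical, stand-ins for whatever your system actually writes:

```shell
# Illustrative only: fabricate a tiny sample log so the commands below run.
# In real life, LOG would point at the system under test's actual log file.
LOG=app.log
cat > "$LOG" <<'EOF'
2024-05-01 09:00:01 INFO  startup complete
2024-05-01 09:15:22 ERROR connection reset by peer
2024-05-01 11:40:03 ERROR connection reset by peer
2024-05-01 14:02:57 ERROR connection reset by peer
EOF

# First question: how often does the interesting error occur at all?
grep -c "connection reset by peer" "$LOG"

# Second question: when? If the timestamps are spread across the whole
# day rather than clustered around my failure, the message is probably
# background noise, not the culprit.
grep "connection reset by peer" "$LOG" | cut -d' ' -f1-2
```

A count of one, right at the failure time, keeps the assumption alive; a count in the hundreds, spread evenly through the day, kills it in seconds — which is exactly the cheap, prompt refutation I'm after.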
James Lyndsay’s excellent paper, “A Positive View of Negative Testing”, describes one of the aims of negative testing as the “prompt exposure of significant faults”. That’s what I’m after here. If my assumption is clearly wrong, I want to find out quickly and cheaply.
Checking myself and checking my ideas have saved me much time and grief over the years. Which is not to say I always remember to do it. But I feel great when I do, yeah.
Image: Black Grape (Wikipedia)