By David Greenlees
In a recent testing coaching session with Anne-Marie Charrett we touched on the above three subjects. Obviously an hour and a half is nowhere near long enough to go into any great level of detail, so Anne-Marie left me with a challenge to write about oracles in software testing. I will be doing that, but will also be including the preceding actions of observation and inference. Why? They are part of a logical flow which I have begun to knowingly use in my testing. I highlight knowingly because I think I have always done this, but have not applied critical thinking to it at the time.
During the coaching session we played an online game which involved many observations, inferences, and oracles. At the time, this simply seemed to be a game which I was playing until Anne-Marie reminded me to slow down my thinking and focus on each action separately. Where I initially fell over was jumping directly to my inference without thinking about my observation. To cut a long story short, once I was critically thinking about each of these actions, I was able to win the game! I may have won it eventually anyway, but it would have taken much longer.
A definition for each from the World Wide Web:
- Observation – Detailed examination of phenomena prior to analysis, diagnosis, or interpretation.
- Inference – The process of arriving at some conclusion that, though it is not logically derivable from the assumed premises, possesses some degree of probability relative to the premises.
- Oracle (in software testing) – Heuristic (useful, fast, inexpensive, and fallible) principles or mechanisms by which we recognize problems.
How I define them:
- Observation – What you see.
- Inference – What your observation tells you.
- Oracle (in software testing) – How you know if your inference is correct.
After the coaching session I decided I would try this approach out again. Now, what was a simple application that I had not used before… Google Calendar!
Fig. 1 – Entry screen.
So what was my initial observation? The calendar view defaults to ‘Week’ view. Then I thought about it a bit more: is that truly an observation or an inference? The observation is that the calendar was presented in ‘Week’ view, and from that I had already inferred that it was a default. Maybe the correct observation is that my calendar had defaulted to ‘Week’ view. I cannot be 100% certain that this would be the case for all users. I could make the assumption, but we all know how dangerous assumptions can be in testing.
So some more observations (the numbers for each will remain consistent throughout the article):
- The time zone was displayed;
- There was a red line on the calendar indicating the time;
- The Google search bar was displayed at the top of the page;
- The Google logo was displayed at the top left of the page; &
- A series of buttons were displayed above the calendar view on the right hand side.
Obviously there are many more; however, I’ll move on for the purpose of this article.
So I then decided to work backwards and think about what inferences I would have made if I had looked at the application the way I used to test:
- The time zone was displayed as my current time zone;
- The red line on the calendar matched the time zone and the correct time;
- I can search the web from the Google search bar;
- I can go to google.com via the Google logo; &
- I can change the calendar view via the buttons above it.
Now it was time to do some testing and see if my inferences would have been correct had I done it the old non-critical thinking way:
- Correct;
- Correct;
- Incorrect;
- Incorrect; &
- Correct.
Hey, 3 out of 5 isn’t so bad, right? In this case maybe not. However, what if I was observing some sort of mission critical product where lives were at stake? Then two incorrect inferences could be very bad. This is assuming that I had not gone on to test the product, which I would have of course!
So why were the 2 incorrect?
- This particular search bar was for the calendar application only. When you search for something, it searches only your calendar for results; &
- The Google logo was not actually a link; it was simply an image of the logo.
I think they were fairly strong inferences to make! Why though? Why was I so confident in making those inferences? That’s right Anne-Marie, I haven’t forgotten about you… oracles!
One reference which I like to use when looking at oracles is the HICCUPPS(F) mnemonic from James Bach (I believe the (F) came from Michael Bolton):
- History: The present version of the system is consistent with past versions of itself.
- Image: The system is consistent with an image that the organization wants to project.
- Comparable Products: The system is consistent with comparable systems.
- Claims: The system is consistent with what important people say it’s supposed to be.
- Users’ Expectations: The system is consistent with what users want.
- Product: Each element of the system is consistent with comparable elements in the same system.
- Purpose: The system is consistent with its purposes, both explicit and implicit.
- Statutes: The system is consistent with applicable laws.
That’s the HICCUPPS part. What’s with the (F)? “F” stands for “Familiar problems”:
- Familiarity: The system is not consistent with the pattern of any familiar problem.
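The mnemonic lends itself to a simple checklist that you can walk through for each inference. As a rough illustration only (the question wording below is my paraphrase, and the `review` helper is purely hypothetical, not anything from the RST materials):

```python
# HICCUPPS(F) consistency oracles, per James Bach and Michael Bolton.
# The question wording is a paraphrase for illustration purposes.
HICCUPPS_F = {
    "History": "Is the system consistent with past versions of itself?",
    "Image": "Is it consistent with the image the organisation wants to project?",
    "Comparable Products": "Is it consistent with comparable systems?",
    "Claims": "Is it consistent with what important people say it should be?",
    "Users' Expectations": "Is it consistent with what users want?",
    "Product": "Are its elements consistent with comparable elements in the same system?",
    "Purpose": "Is it consistent with its explicit and implicit purposes?",
    "Statutes": "Is it consistent with applicable laws?",
    "Familiarity": "Is it free of the pattern of any familiar problem?",
}

def review(inference: str) -> None:
    """Print each oracle question as a prompt for examining one inference."""
    print(f"Inference: {inference}")
    for name, question in HICCUPPS_F.items():
        print(f"  [{name}] {question}")

review("I can search the web from the Google search bar")
```

Running each inference past the nine questions is a quick way to surface which oracle you were (perhaps unknowingly) leaning on.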
So back to the question of why I made those inferences that turned out to be incorrect:
- For the search function the Product heuristic would be relevant. For many other Google applications the search bar can be used to search the web, not just that particular application; &
- For the Google logo the Users’ Expectations and Comparable Products heuristics would be relevant. My expectation as a user would be that this image would in fact be a link back to google.com, and when observing the behaviour of a comparable product this is further confirmed.
Now what were my oracles? How did I recognise that these were in fact potential problems?
- Other Google applications (namely News and Gmail) were my oracles for the search function; &
- Bing.com (namely the Bing logo) was my oracle for the Google logo function (or lack thereof).
A simple, yet extremely valuable exercise. These principles and practices can be applied to your everyday testing with ease. It’s important to slow down and apply your critical thinking skills throughout. Making inferences and assumptions can save a bucket load of time, but if they are incorrect for whatever reason you may find you’re in a bucket load of…
Having said that, can we actually test all assumptions? Even if we could, would it be worth it? I would argue that you would very quickly end up testing assumptions that have a minimal impact, and the type of quality information you’re obtaining by doing so would not be value for money. A method I’ve found valuable in these situations is to apply a negative likelihood to the assumption.
So let’s take one of my inferences:
- I can search the web from the Google search bar.
This directly relates to an assumption; in fact, it is one. Now in reality it would be fairly quick to test this assumption, but for the purpose of my point let’s pretend that it would take up to one day to test. I’ll use an average contractor rate of $800 per day. So to test this assumption we’re looking at approximately $800 (and that doesn’t take into account lost project time, etc.).
If there was a high likelihood that users are going to make the same assumption, and therefore try to search the web from this search bar, spending $800 may be a wise move. However, is it the type of function that users will complain about if they cannot search the web? Are they more likely to just say, “Oh, you can only search the calendar from this search box,” and simply move on, effectively making it a non-issue?
Let’s say we don’t test it to save $800 because we believe it to be such a trivial matter. Then, when it’s released the users go mad with frustration and call/email Google with complaints and upgrade suggestions. This could cost a lot more than $800 when considering the time it would take for staff to handle these requests, and potentially update the function to now include a web search.
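The trade-off above can be framed as a simple expected-cost comparison: the cost of testing the assumption now versus the likelihood that it bites us multiplied by what it would cost to handle. A minimal sketch, where every probability and dollar figure other than the $800 contractor day is invented purely for illustration:

```python
# Compare the cost of testing an assumption now against the expected
# downstream cost of shipping it untested. All figures except the
# $800 contractor-day from the article are illustrative assumptions.
def expected_cost_of_not_testing(p_problem: float, remediation_cost: float) -> float:
    """Likelihood the assumption is wrong AND users care, times the
    cost of handling complaints and rework if they do."""
    return p_problem * remediation_cost

test_cost = 800.0          # one contractor-day, as in the article
p_problem = 0.3            # assumed chance users hit it and complain
remediation_cost = 5000.0  # assumed support-handling + rework cost

skip_cost = expected_cost_of_not_testing(p_problem, remediation_cost)
print(f"Test now: ${test_cost:.0f}; expected cost of skipping: ${skip_cost:.0f}")
if skip_cost > test_cost:
    print("Worth spending the day testing this assumption.")
```

With these made-up numbers the expected cost of skipping ($1,500) exceeds the day of testing, so we would test; drop the likelihood low enough and the sums flip the other way, which is exactly the judgment call being described.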
It’s important to recognise your assumptions, and even more importantly to determine which of those assumptions you will spend time and money on testing. You need to consider the context of your environment and how that relates to the product you’re testing.
So now here is a challenge for you. Over the coming weeks while you’re testing, I’d like you to think about observations, inferences, and oracles. By think, I mean really think. Stop and reflect on each of these tasks one by one (similar to the process I have used above). Once you have done that, I’d love to hear how you went and what difference you think it made. Email me at xtremedmgATgmailDOTcom
I’ll gather the responses and include them in a post on my Blog (with your permission of course).