Speaking Truth to Power

Testers are paid to deliver unwelcome news to more powerful people. Too often, the interaction doesn’t go as well as we’d like, and we have to deal with adverse reactions ranging from merely disbelieving to downright hostile. Delivering bad news well takes skill, as does dealing with many of the recipient’s reactions. For most people, the ability to do these things at all—let alone well—does not come easily.

In this full-day tutorial, we will practice speaking truth to power, examining and building relevant skills and knowledge along the way. Topics we’ll cover include:

  • Common pitfalls and how to avoid them
  • Questions to ask and things to do before we speak
  • Strategies, models, and techniques that can help us understand and handle difficult conversations successfully

Delivering bad news isn’t fun, but we can have fun exploring and practicing how to do it. This tutorial will consist primarily of experiential exercises and debriefs. We will practice with real situations that have happened for real testers, including current problems brought by participants.

Fiona Charles teaches organizations to match their software testing to their business risks and opportunities. With more than thirty years’ experience in software development and integration projects, she has managed testing and consulted on testing on many projects for clients in retail, banking, financial services, health care, and telecommunications.

Throughout her career Fiona has advocated, designed, implemented, and taught pragmatic and humane practices to deliver software worth having—in even the most difficult project circumstances. Her articles on testing and test management appear frequently in Better Software Magazine and on StickyMinds.com. She edited The Gift of Time, and guest-edited the January 2010 issue of Software Test & Performance magazine. Fiona is co-founder and host of the Toronto Workshop on Software Testing.

Scenario-Driven Testing

Scenarios are credible stories about something that could happen in the future. Scenario tests involve scenarios that are likely to motivate a stakeholder with influence to demand that the product be fixed if it doesn’t pass the tests. To achieve this, good scenarios convey human issues (why would people be unhappy, and how unhappy, if the program fails this test?) as well as the technical matters of software design.

The focus of this tutorial is how to design effective suites of scenario tests. The tutorial will start with a lecture that lays out several lines of analysis for creating scenarios. Each line will lead you to a different set of tests. Some are more productive for a given product (lead to more interesting tests) than others. The lecture will last about an hour. Then we’ll practice (in small groups) applying lines of analysis to different software products, tying what we find back into a couple of general-group presentations and a summary at the end of the day.

Cem Kaner has pursued a multidisciplinary career centered on the theme of the satisfaction and safety of software customers and software-related workers. With a law degree (practice focused on the law of software quality), a doctorate in Experimental Psychology, and 17 years in the Silicon Valley software industry, Dr. Kaner joined Florida Institute of Technology as Professor of Software Engineering in 2000. Dr. Kaner is senior author of three books: Testing Computer Software (with Jack Falk and Hung Quoc Nguyen), Bad Software (with David Pels), and Lessons Learned in Software Testing (with James Bach and Bret Pettichord). At Florida Tech, his research is primarily focused on the question: How can we foster the next generation of leaders in software testing? See TestingEducation.org for some course materials and his Proposal to the National Science Foundation for a summary of the course-related research.

Exploratory Test Automation

Exploratory testing emphasizes human creativity and thinking, but its effectiveness is limited if you can only do manual testing. Test automation focuses on speed and power, but it rarely finds interesting bugs and is usually relegated to regression test duty. Exploratory test automation blends the best of the two approaches, combining human judgment and computer horsepower to create testing that is thorough, robust, and flexible. This tutorial shows you how to extend the reach of your exploratory testing using creative problem-solving, lightweight automation, heuristic oracles, and common sense.

As economist Leo Cherne said in the 1970s: “The computer is incredibly fast, accurate, and stupid. Man is unbelievably slow, inaccurate, and brilliant. The marriage of the two is a force beyond calculation.”
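To give a flavor of the idea, here is a minimal sketch of exploratory test automation with a heuristic oracle. All names (`buggy_sort`, `heuristic_oracle`, `explore`) are illustrative, not from the tutorial, and the checks shown are one possible set of heuristics, not a definitive implementation:

```python
import random

def buggy_sort(items):
    # Stand-in for the real product code under test.
    return sorted(items)

def heuristic_oracle(original, result):
    """Heuristic checks: output is ordered and is a permutation of the input.

    These don't prove correctness, but a violation is almost certainly a bug
    worth a human's attention. (In real use the oracle would be independent
    of the function under test, unlike this toy example.)
    """
    ordered = all(a <= b for a, b in zip(result, result[1:]))
    same_items = sorted(original) == sorted(result)
    return ordered and same_items

def explore(trials=1000, seed=42):
    # Machine horsepower: hammer the function with many random inputs,
    # saving any oracle violations for human judgment afterwards.
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        data = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
        out = buggy_sort(list(data))
        if not heuristic_oracle(data, out):
            failures.append(data)
    return failures

print(len(explore()))  # prints 0: no oracle violations for this stand-in
```

The division of labor is the point: the computer supplies volume and repeatability, while the tester supplies the heuristics and investigates whatever the oracle flags.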

Harry Robinson is Principal SDET for Microsoft’s Bing team. He has twenty years of software development and testing experience at AT&T Bell Labs, Hewlett-Packard, Microsoft, and Google, as well as time spent in the startup trenches. While at Bell Labs, Harry created a model-based testing system that won the AT&T Award for Outstanding Achievement in the Area of Quality. At Microsoft, he pioneered model-based test generation technology, which won the Microsoft Best Practice Award. Harry holds two patents in software test automation. He coaches test teams throughout Microsoft and speaks and writes on software quality with a focus on innovative approaches to computer-assisted testing.
