Track Sessions

Adopting and Adapting Open Source Testing Tools

by: Scott M. Allman
A skill in high demand is the ability to automate your work using popular, free, open source testing tools. TestLink, Selenium, FitNesse and Chainsaw are four universal tools that will help automate any software testing project.

Downloading and installing each tool takes only a few minutes, but that is far from the most important step. Since every software project is different, a key to success is to think of unusual ways to adapt tools to your project’s needs. Who would have thought that Microsoft Excel – a spreadsheet engine – would become a popular tool for writing test procedures or test reports? We will show how to install each tool, give a simple example of using it, and explain how we adapted these tools to test embedded systems.

TestLink is a test case manager. Its test reports are available via the web, and results are easily compared between different builds. Whether testing is manual or fully automated, this tool organizes the test cases and test runs. Selenium was designed to automate the testing of browser-based applications. Its record-and-playback feature is invaluable for automating the set-up of any test relying on servers. The tables in FitNesse are great for designing and building integration tests. Almost all systems output strings of text – in logs, console messages or to files. Chainsaw is the tool for automatically analyzing large sequences of output messages.
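As a taste of how little driver code such automation can require, here is a minimal sketch using Selenium’s Python WebDriver bindings. The server URL and form field names are hypothetical placeholders, not taken from any real project; the script only illustrates the kind of set-up step that record-and-playback can generate for you.

    # Minimal sketch: automating the set-up of a server-backed test with Selenium.
    # The URL and element names are hypothetical placeholders.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    try:
        driver.get("http://testserver.example.com/login")             # hypothetical test server
        driver.find_element(By.NAME, "username").send_keys("qa_user")
        driver.find_element(By.NAME, "password").send_keys("secret")
        driver.find_element(By.NAME, "submit").click()
        # With the server session established, the real test (manual or automated) can begin.
        assert "Dashboard" in driver.title, "login did not reach the expected page"
    finally:
        driver.quit()

A script recorded in the Selenium IDE reduces to essentially this shape, which makes it easy to adapt the set-up step to different servers or builds.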

To get the most out of these tools you should join their open source community. Forums, FAQs and daily postings keep you informed. Each open source project is different but there is etiquette to follow. We will talk about our experience posting requests, filing bugs, contributing code patches and suggesting enhancements.

The presentation and conference paper were produced with Sun’s Open Office – a free, popular alternative to Microsoft Office products.

Scott Allman’s daily work as a QA/Test manager inspires his writings and presentations about software test automation. A software developer since the late 1960s, his career spans universities, startups, aerospace, consulting, and big corporations, working on four continents. He is a long-time member of SQuAD, the Software Quality Association of Denver, Colorado, USA.

Dealing with Passionless Testers

by: Henrik Andersson, Carsten Feilberg
How often have you been in a test team surrounded by passionless testers? What does that do to your passion for testing? There is a great risk that your fire will slowly fade away.

Being surrounded by passionless testers puts you at a crossroads: either you are drawn into trying to help and sort out the problems, or you leave it be. It is so easy to slowly decay into a zombie without realizing it, and that is a very dangerous situation to be in.

In this highly interactive and fun session we will try to find the causes and problems that lead to loss of passion. The game plan is to ask the audience to look for the different signs of lost passion or outright obstruction, and then have a role play on stage – a scenario that each of us could meet in our daily job. Through participation the audience will try to help out in the scenario, and together we will learn how to deal with the passionless tester and figure out how we could help find the passion again – for ourselves and our colleagues.

Henrik Andersson is one of the leading European consultants in the field of testing in Agile environments and the use of Exploratory Testing. In 2008 he co-founded House of Test, a testing consultancy and outsourcing company based in Sweden and China.

His main areas of focus are Test Improvement, Exploratory Testing, Session Based Test Management, Context-Driven Testing, Risk Based Testing and Agile Testing. Henrik helps organisations transition from traditional development processes like waterfall to Agile processes such as Scrum.

Carsten Feilberg is a senior consultant at Strand & Donslund A/S in Denmark. He has a wide set of interests ranging from enterprise architecture to process management, problem solving and testing – all the way to ancient Egyptian art and heading for the coast to do some fishing. He is also a regular conference speaker and blogs at http://carstenfeilberg.blogspot.com. He constantly advocates the context-driven point of view and encourages the use of exploratory testing and session-based test management. He loves to discuss testing (and any other of his many interests) and explore new options.

High Stakes “Bug reporting skills”

by: Fadi Bachaalani
Bug reporting is a central testing skill. It is in fact a crucial one, as it comes into play when a tester actually succeeds in finding a bug, which is the quintessential objective of testing.

Given its importance as the means to communicate to developers a problem found, and the means to reproduce it so it can be addressed, it is hard to believe that on many modern software projects bug reporting is a significant source of wasted time, quality and money. Obviously bad bug reports waste time and money by causing back and forth between developers and test teams to investigate, clarify and finally address them. More dangerous, though, are the reports that look like by-the-rule bug reports yet send a development team down a wrong solution track; in the long run these are costlier still, because they add quality waste to the waste of money and time.

The bug reporting skill is an area where addressing a simple problem would yield a significant payoff.

Fadi Bachaalani is a senior computer engineer with more than 13 years of experience in full-cycle application development and related management. His career has spanned multiple application domains on diverse technology platforms, from mission-critical distributed systems to mobile wireless computing solutions and web-based systems.

Using a Wiki for Communication and Collaboration

by: Marlena Compton
Marlena Compton wants to share her experiences of using wikis for software testing and she also wants to hear yours. This session will focus on using wikis in widely differing software environments to bring conversation and collaboration into the software development process. Marlena began using wikis in a waterfall environment and has since moved to using and testing wikis in an agile environment. She will share:

  • How wikis can help to gently introduce agile principles and values to non-agile teams
  • How she used James Bach’s Low-Tech Testing Dashboard to delight her boss and improve her exploratory testing behind the waterfall
  • How she currently uses a wiki as part of an agile testing process
  • What to watch out for when using wikis for testing

Testers are encouraged to bring their own experiences of using wikis for software testing to the session.

Marlena Compton is a software tester for Atlassian Software in Sydney, Australia, where she tests their wiki application, Confluence. She has been testing software for 4 years and recently completed a Master of Science in Software Engineering at Southern Polytechnic State University. Her thesis addressed the use of data visualization for software testing. She blogs about software testing at http://marlenacompton.com and participates in the Australia/New Zealand chapter of Weekend Testing. In her spare time she enjoys hiking the beaches of Sydney.

Communication Chameleons

by: Selena Delesie
Some testers enjoy working in isolation to critically explore software to find bugs. Other testers work in solitude to find bugs while repeatedly running the same test scripts. The very best testers fit neither of these groups. While they maintain intense focus in their test activities, they also invest time in talking with colleagues about their work and the product – and are able to do so beautifully, regardless of who they speak with.

These testers are chameleons. They communicate effectively with different stakeholders to engage in valuable conversations, for example:

  • Other Testers: In planning test focus and comparing notes on product observations
  • Programmers: In discussing design implementations and bug fix solutions
  • Management: In identifying risks, providing status, and sharing plans
  • Product managers: In providing results of research data to guide roadmaps toward viable and value-adding products
  • Executives: In summarizing project progress, research endeavours, business impact, and identifying technical boundaries for future product visions

In this session we will look at personal experiences to explore the approaches testers use to communicate with different stakeholders in more effective and value-adding ways.

Selena Delesie is a consulting software tester and agile coach with 10 years of experience testing, managing, and coaching in software, testing, and agile practices for leading-edge technologies. Her experiences ignited her passion for creating empowered and collaborative organizations, and in discovering more effective ways to create high quality products. She facilitates the evolution of good teams and organizations into great ones using individualized and team-based coaching and interactive training experiences.

Selena is co-founder and host for the Waterloo Workshops on Software Testing, past chair for the Kitchener-Waterloo Software Quality Association (KWSQA), and can be reached via her blog and website: www.selenadelesie.com.

The Art of Visualization

by: Selena Delesie
Good testers use their experiences and understanding of a product to apply oracles and heuristics that guide testing in uncovering problems and to communicate with a variety of stakeholders.

Great testers go beyond.

Great testers are skilled in employing different techniques to visualize requirements, user stories, designs, and problems to understand what they are working with. They use visual models to comprehend what the customer really wanted, brainstorm design solutions that are low risk yet high value implementations, and understand root causes for issues and risks. They recognize that the simplicity of a visual aid can disseminate information more accurately than the spoken and written equivalent. Great testers are talented in using such visual representations when collaborating and communicating with a variety of stakeholders so participants arrive at a shared understanding.

In this session we will investigate different methods testers can use to efficiently create visual representations of software requirements, solutions, and problems. These skills will help improve your effectiveness in testing, and allow you to add more value in conversations with stakeholders.

Selena Delesie is a consulting software tester and agile coach with 10 years of experience testing, managing, and coaching in software, testing, and agile practices for leading-edge technologies. Her experiences ignited her passion for creating empowered and collaborative organizations, and in discovering more effective ways to create high quality products. She facilitates the evolution of good teams and organizations into great ones using individualized and team-based coaching and interactive training experiences.

Selena is co-founder and host for the Waterloo Workshops on Software Testing, past chair for the Kitchener-Waterloo Software Quality Association (KWSQA), and can be reached via her blog and website: www.selenadelesie.com.

Nice words are not enough

by: Carsten Feilberg , Louise Perold
Though terms like session-based test management and exploratory testing are popular, in many situations they are not enough. Due to fear, insecurity and lack of knowledge, suppliers and customers both revert to waterfall thinking, which induces a false sense of security and safety. Customers want waterfall because they believe it will solve problems like scope creep and give them auditable test results and requirements traceability. They see it as a proven method for developing software that will ensure success. And suppliers tend to fuel this belief by setting up waterfall plans.

And when we enter the stage saying, “Let’s test it, but we won’t write down all the test cases in advance,” they get worried and fearful.

Our experiences, albeit from opposite ends of the world, have been about facing these fears and attempting different ways of dealing with them. We would like to share this experience through an example, following a test manager’s attempt to introduce session-based test management in a waterfall-minded environment. There were pros and cons to the chosen strategy, and some interesting observations on its effects, which we look forward to presenting and discussing with the audience.

Carsten Feilberg is a senior consultant at Strand & Donslund A/S in Denmark. He has a wide set of interests ranging from enterprise architecture to process management, problem solving and testing – all the way to ancient Egyptian art and heading for the coast to do some fishing. He is also a regular conference speaker and blogs at http://carstenfeilberg.blogspot.com. He constantly advocates the context-driven point of view and encourages the use of exploratory testing and session-based test management. He loves to discuss testing (and any other of his many interests) and explore new options.

Louise Perold first attended CAST in 2007, where she heard about session-based testing and decided to try it out for herself. Today, she is a passionate believer in context-driven testing and is enthusiastically refining her approach to session-based testing. Louise spends most of her time leading and mentoring test teams within the financial services industry in Johannesburg, South Africa. This provides the ideal platform to catch bugs with context-driven testing. (The rest of her time is spent delving into the fast-paced social life that living in Joburg demands.)

Downsizing, Offshoring, Outsourcing: What Skills are needed where?

by: Jane Fraser
With the pressure to downsize North American teams and offshore or outsource testing, there is a real need to determine what skill sets are needed in which locations. When deciding whom to lay off and whom to keep, you need to look at the inventory of skills each individual brings to the team, and determine which skills are more easily outsourced. It isn’t always the top testers that you need to keep; it’s the testers with good communication skills, knowledge of your practices, and the ability to train your new staff. By looking at what skills we have, what skills we are missing, and what skills are now surplus, we can determine what skill set is needed and where those skills should be located.

Jane Fraser, Test Director, Electronic Arts:
As an industry veteran with more than fifteen years of experience, Jane Fraser brought her expertise from the e-commerce and telecom industries to the online gaming world when she joined Pogo in 2004. In her role as QA director, Jane oversees the QA department, which covers Pogo.com, Club Pogo, Facebook games, iPhone games and a downloadable business. She has successfully launched more than sixty games in six territories, including Scrabble and Battleship. During her time with Pogo, Jane has provided leadership, established testing processes, and managed a team of testers, which she has grown from six to a robust team of eighty in six countries. Prior to joining Pogo, Jane tested products ranging from word processing and desktop publishing software to cell phone services, an e-commerce site, and a ticket forwarding and management product.

Technical versus non-technical skills in test automation

by: Dorothy Graham
There is a view that testers should be technical, especially if they are involved in test automation.

Although this can work well, particularly in an agile team, not all testers need or want to be technical, especially those with a business or application background – it is my belief that this should not preclude them from using test automation!

There are two levels of abstraction needed for good automation. A separation of testware from the detailed working of any particular tool is needed to minimise testware maintenance – this requires technical skill.

In order to gain widespread use of test automation, the testers’ view of the tests should be separated from the technical view, as in a keyword-driven approach (aka “hybrid”, “framework”, “Domain-Specific Test Language”). This abstraction level enables system or business acceptance testers to write automated tests without needing technical skills. Technically skilled test automators should support and enable non-technical testers to write and run automated tests.
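To make the two abstraction levels concrete, here is a minimal keyword-driven sketch in Python. The keywords, the application calls and the sample test are invented for illustration only; in a real project the technical layer would wrap whatever tool or API drives the product, and non-technical testers would edit only the keyword table.

    # Minimal keyword-driven sketch: the technical layer hides the tool,
    # and testers express tests as rows of (keyword, arguments).

    # --- technical layer, maintained by test automators -------------------
    def open_account(name):
        print(f"opening account for {name}")        # would call the product's API or UI driver

    def deposit(name, amount):
        print(f"depositing {amount} into {name}")   # hypothetical application call

    def check_balance(name, expected):
        actual = 100                                 # placeholder for a real query
        assert actual == int(expected), f"expected {expected}, got {actual}"

    KEYWORDS = {"open account": open_account,
                "deposit": deposit,
                "check balance": check_balance}

    # --- testers' view: a keyword table, no tool knowledge required -------
    test = [("open account", "Alice"),
            ("deposit", "Alice", "100"),
            ("check balance", "Alice", "100")]

    for keyword, *args in test:
        KEYWORDS[keyword](*args)

The same separation is what keyword-driven frameworks provide in practice: testware maintenance stays in the technical layer, while business-facing testers write and run tests in domain vocabulary.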

Dorothy Graham is an internationally renowned consultant, speaker and author, who has been in software testing for more than 30 years.

Dot is co-author with Tom Gilb of “Software Inspection”, co-author with Mark Fewster of “Software Test Automation”, and co-author with Rex Black, Isabel Evans and Erik Van Veenendaal of “Foundations of Software Testing”. She is currently working on a new book “Experiences in Test Automation” with Mark Fewster, to be published in 2010.

Dot was Programme Chair for EuroSTAR in 1993 and 2009. She has been on the boards of a number of conferences and publications in software testing. She was a founder member of the ISEB Software Testing Board and was a member of the working party that developed the ISTQB Foundation Syllabus. She founded Grove Consultants in 1989, and in 2008 returned to being an independent consultant.

She is a popular and entertaining speaker at conferences and seminars world-wide and holds the European Excellence Award in Software Testing. Her main hobby is choral singing.

Mining for Gold: Bug Isolation

by: Jean Ann Harrison, Molly Mahai
Software testers typically write bug reports. Testers see an error message or witness unacceptable behavior in applications and create bug reports. But is the error message a real bug? Why does a web page load slowly due to a single input? Is the input a bug?

This presentation will not only address general testing practices for finding bugs; testers will also learn to locate the mother lode of triggers. Go beyond symptoms like error messages and explore behavior patterns to discover gold mines of information. Molly will expand on the many benefits when software testing resources can provide more descriptive information to Development. Attendees will learn how to recognize the triggers of bugs rather than report just the symptoms.

Real-life situations will be shared with attendees, along with exercises to expand attendees’ skill sets. When an error condition exists, Jean Ann will explain what kinds of variables can be added or removed when reproducing the bug, exposing more information about the behavior.

Finally, what knowledge does a gold-mining bug reporter need? Throughout the session, Jean Ann and Molly will draw on personality traits and helpful technical skills to further expand upon the golden nuggets of bug isolation capabilities.

Jean Ann Harrison is a Lead Quality Assurance Engineer at CardioNet, Inc., which provides ambulatory cardiac monitoring services for physicians’ patients. Jean Ann is currently the software quality assurance lead on the next-generation mobile heart monitor device and has been the lead on all embedded software testing at CardioNet. Jean Ann’s background also includes a variety of projects involving large, multi-configured applications for client/server, web, Unix and mainframe systems. Her experience is primarily manual testing with occasional automation and a strong focus on building quality into design. Constantly working to perfect her craft, Jean Ann attends and presents at conferences, takes various courses, networks and actively participates in software testing forums. She believes software testing takes daily practice to contribute to a project’s success.

Molly Mahai has over 15 years in the software industry and is currently QA Manager at the Arizona State Retirement Systems. After obtaining a bachelor’s degree in computer science, Molly worked as a developer for 9 years before entering management. After a year of managing developers, Molly found her place managing QA engineers and testers. Molly holds a Bachelor’s in Computer Science and a Master’s in Business Administration.

Adopting and Adapting Open Source Testing Tools

by: Karen N. Johnson
There are three R’s that I learned from newspaper reporting that have helped me in becoming a better software tester: rapport, record, and report.

Rapport: I need to be able to gather information from a variety of people. I’ve become skilled at opening up channels of communication, sometimes with the most introverted of software developers and system architects.

Software testers need to gather information; we need to be able to interview our teammates to learn more about the products we’re testing. How do we gain the artful skill of interviewing people for information?

Record: Testers can benefit from taking good notes before, during and after testing. What are “good” notes? And how do we become adept at recording information?

Before we test, we’re likely in information gathering mode. While we’re talking with a developer or product designer, what types of information should we record? During testing, we might record our observations and additional ideas for testing. We might record details about a defect. After a testing session, we might record yet more information and ideas for further and future testing. How do we record notes through each of these scenarios in an effective way that doesn’t intrude upon our work?

Report: What do we report about product status – just the facts?

We test, we learn. And we gain opinions. How do we remain objective? How do we report project status objectively? Do we know when we’ve developed a bias? Does it matter if we have a product opinion? How do we deliver product news?

In my presentation, I want to talk about these three skills and offer ways for software testers to practice, acquire, and hone these skills.

Karen N. Johnson is an independent software test consultant and a frequent speaker at conferences. Karen is a contributing author to the book Beautiful Testing, published by O’Reilly. She has published numerous articles and blogs about her experiences with software testing. You can visit her website at http://www.karennjohnson.com. She is the co-founder of the WREST workshop; more information on WREST can be found at http://www.wrestworkshop.com/Home.html.

Joining the Scrum Team: A tester’s story

by: Johan Jonasson
Joining a Scrum team as the only test professional on that team is filled with challenges. Scrum doesn’t say much about testing; it’s up to the team to decide the most appropriate strategy. I will present an experience report from when I joined a newly formed Scrum team unfamiliar with all forms of structured testing. I will share the assumptions we made as a team fairly new to Scrum, as well as our failures and successes. The talk will center on the context-specific challenges from a test perspective and how to show the organization the value of having a dedicated test professional on the team.

Being fairly new to agile in general, the team members were faced with many questions. I’ll use these questions as a starting point in my talk, presenting our solutions and the pitfalls some of them led to. From a tester’s perspective, the main challenges were connected to finding the appropriate mix of scripted and freestyle testing, deciding what to automate, handling bugs, and teaching the organization the value of having dedicated test professionals.

Learning outcomes:

  • The role of the test professional on a Scrum team. Needed or not?
  • Defect management. Don’t let the bugs weigh your team down.
  • Adapting testing strategy to change as the project progresses.
  • Finding the right level of test automation.

Johan Jonasson works as an independent test consultant at House of Test, a testing services company based in southern Sweden. Originally coming from a strict, European style, scripted testing background, Johan started taking an interest in agile and exploratory approaches to testing a few years ago and has since then worked with several large organizations to help them transform their testing approaches and become more responsive and adaptive to change.

Unifying industrial and academic approaches to domain testing

by: Cem Kaner, Sowmya Padmanabhan
The most widely used technique in software testing is called Equivalence Class Analysis. Or Category Partitioning. Or Boundary Value Testing. Or Domain Testing. It’s a black-box test technique—except when it’s used as a glass-box technique. Its focus is on input variables. Or output variables. Or variables that hold results, or are impacted by variables that hold results. I’ve been trying to muddle through this literature for a decade, coming at it with a practitioner’s bias and wondering whether the academics really had anything useful (or comprehensible) to offer. Sowmya Padmanabhan did her M.Sc. thesis research on this and has just co-authored the Domain Testing Workbook with me, which will be published soon by AST Press. This talk will present a worksheet that we’ve developed for planning and creating domain tests of individual variables or multiple variables that are independent or linearly or nonlinearly related. I’ll brush on some of the theory underlying the approach, but mainly I want to present some of the lessons that this work brought home to me about skilled (contrasted with inexperienced or unskilled) domain testing.
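As a small illustration of the kind of analysis such a worksheet supports for a single variable, consider an input with a specified valid range; the field and the range below are invented for the example. A domain test picks one representative per equivalence class plus the boundary values and their nearest neighbours, rather than testing every possible value.

    # Sketch of a classic domain/boundary-value analysis for one variable.
    # Suppose (hypothetically) an "age" field is specified as valid from 18 to 65.
    LOW, HIGH = 18, 65

    def is_valid_age(age):
        return LOW <= age <= HIGH

    # One representative per equivalence class, plus boundaries and their neighbours.
    cases = {
        "below range (invalid)":  [LOW - 1],
        "lower boundary (valid)": [LOW],
        "interior (valid)":       [40],
        "upper boundary (valid)": [HIGH],
        "above range (invalid)":  [HIGH + 1],
    }

    for label, values in cases.items():
        for v in values:
            print(f"{label:24} age={v:3} -> valid={is_valid_age(v)}")

Multi-variable domain testing extends the same idea to combinations of classes, which is where the worksheet’s treatment of independent and linearly or nonlinearly related variables comes in.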

Cem Kaner has pursued a multidisciplinary career centered on the theme of the satisfaction and safety of software customers and software-related workers. With a law degree (practice focused on the law of software quality), a doctorate in Experimental Psychology, and 17 years in the Silicon Valley software industry, Dr. Kaner joined Florida Institute of Technology as Professor of Software Engineering in 2000. Dr. Kaner is senior author of three books: Testing Computer Software (with Jack Falk and Hung Quoc Nguyen), Bad Software (with David Pels), and Lessons Learned in Software Testing (with James Bach and Bret Pettichord). At Florida Tech, his research is primarily focused on the question, How can we foster the next generation of leaders in software testing? See TestingEducation.org for some course materials and this Proposal to the National Science Foundation for a summary of the course-related research.

Sowmya Padmanabhan currently works as a Program Manager at Hotmail, Microsoft. She has been with Microsoft for 5 years. Prior to that, she worked at Texas Instruments. She has over 8 years of industry experience spanning product management, development and testing. She has shipped several desktop, embedded and web-based consumer products that are international in scope. Sowmya has experience with varied product development methodologies ranging from waterfall to agile methods such as Scrum. She has a Master’s degree in Computer Sciences with a specialization in software testing. She did her M.Sc. thesis research on domain testing and is the co-author of the Domain Testing Workbook with Cem Kaner, to be published soon by AST Press.

Cutting the Mustard – Lessons Learned in Striving to Become a Superstar Tester

by: Nancy Kelln
As IT professionals we strive to deliver quality projects to our business stakeholders and end users. We focus on improving our project delivery by examining our development processes, encouraging continuous learning of team members, and spending time on project retrospectives to look for areas of improvement. Sometimes as we strive to improve things external to us we find that we cannot always effect change.

There is something we do have control over and the ability to change: ourselves. I have found that as I continually strive to become a superstar tester, I am continuously challenging myself to grow. This presentation will examine what it means to “Cut the Mustard” and will challenge attendees to cut the mustard in their own ways. By taking the continuous improvement skills we so often focus externally and refocusing that energy inward on ourselves, we can redefine our own excellence. By leveraging our role in the industry of software quality and focusing first on ensuring quality within ourselves and in all we do, we can effect positive change and transformation within the teams we work with.

Key Points of the Presentation include:

  • Examining traits of superstar testers and what it means to “Cut the Mustard”;
  • How the essence of continuous improvement can be leveraged for self-improvement;
  • Analyzing the pathways for growth;
  • Sharing inspirational stories of incremental to monumental team transformations by individuals through their actions.

Nancy Kelln is an independent consultant with 12 years of diverse experience within the IT industry. Nancy is motivated by working with teams who are implementing or enhancing their testing practices; providing adaptive testing approaches in both agile and traditional testing teams. She has coached test teams in various environments and facilitated numerous local and international workshops and presentations. She is an active member of the Calgary Software Quality Discussion Group, Association for Software Testing and the Scrum Alliance and has co-founded the Calgary Perspectives on Software Testing Workshop (POST) with Lynn McKee. Nancy has also been published in various software testing magazines. You can reach Nancy online at www.unimaginedtesting.ca.

Coping With Complexity: Lessons From a Medical Device Project

by: Yaron Kottler
Medical devices can involve a lot of risk, thousands of pages of standards, heaping crate-loads of documentation, traceability of everything to everything, and developers who want to skip along at mach speed as if they were developing “Google Buzz” instead of, say, a cardiac arc welding system. This is a talk about how one team dealt with that complexity. Hints: use dependency graph diagrams that summarize the testing problem on one page; avoid procedural test documentation and use testing playbooks instead; develop the playbooks progressively; use session-based test management with exploratory testing; insist on comprehensive function-level logging; don’t just question requirements – rewrite them (and while you’re at it, rewrite the standards, too); and, of course, tell management that your testing will be a lot more focused and streamlined if they help you understand the internals of the system.

Yaron Kottler is a senior test specialist and the CEO at QualiTest USA, part of the QualiTest Group. Yaron has over a decade of QA and testing experience in both technical and management roles. A speaker at international testing conferences, Instructor and hands-on mentor at dozens of organizations in the US, Europe and the Middle East, Yaron is an expert on such topics as: Test process improvement, Load & Performance testing, KDT Test Automation and Exploratory Testing.

He enjoys working with high-potential and driven testing professionals to explore new and better ways of testing, and believes that the most important quality in an engineer is the ability to learn.

Panel discussion on assessing testing skill during the job interview

by: David Liebreich
As testers, we want hiring managers to recognize our testing skills so that we can get and keep great testing jobs. As test managers, we want to assess a job candidate’s testing skills, so that we can hire and keep great testers. How does this play out in the real world?

At CAST 2010, we’ve assembled a panel of practicing tester interviewers, and we’ll ask them to share their experiences and practical advice. We won’t talk about topics such as how to find candidates, how to assess cultural fit, or how to end an interview early; instead, we’ll keep the focus on assessing testing skills, and talk about examples and concrete experiences.

David Liebreich has been interviewing testers for over 10 years, and has been interviewing as a tester for over 20. He’s been involved in the interviewing process at small startups and large companies, and loves to talk about making the interview process better. Dave also does some testing.

Assessing Your Value as a Tester

by: Lynn McKee
Testers aim to be valued, respected members of our teams and to be valued by our managers and organizations. We pursue excellence in our craft by honing our skills and diversifying our techniques. Our goal is to deliver value through the quality of the information we provide our stakeholders. How do we know if our work is viewed as valuable? What criteria are we applying to determine if we are adding value to our teams? Have we identified our diverse clients, and do we understand their needs and perspectives on testing value?

Value is subjective. Individual perspectives on value differ and can be influenced by organizational goals, project mandates, and functional roles. To assess the value we are providing it is imperative to gather feedback from our clients. As testers, our clients may include business analysts, designers, programmers, writers, trainers, support teams, project managers, customers, functional managers, and stakeholders. The needs and perspectives of each client will vary and we need to understand these differences in order to align ourselves and effectively deliver value.

Lynn McKee is an independent consultant with 15 years of experience in the IT industry and a passion for helping organizations, teams and individuals deliver valuable software. Lynn provides consulting on software quality, testing and building high-performing teams. An advocate of the context-driven perspective, her focus is on ensuring testing teams are enabled with effective, adaptive and scalable approaches aligned with the organization’s quality needs. Lynn is an active member of numerous software testing associations, speaks at conferences, writes articles and contributes to blogs and forums. Lynn is the co-founder and host of the Calgary Perspectives on Software Testing Workshop. You can reach Lynn online at www.qualityperspectives.ca.

A Framework to Evaluate Testing Skills Effectiveness

by: Alan McKellar
Testing is an empowering role because its inherent ambiguity allows the engineer to develop their own approach to the testing problem. In recent years, R&D projects at HP have shifted from large multi-year projects to targeted short-term projects. When multi-year projects were the norm, teams were built around a narrow range of critical skills. Recently, projects have become focused on short-term deliverables which can generate revenue quickly. In such environments, the value of having a flexible team of empowered individuals cannot be overlooked.

Building a team of individuals with strong traits in technical acumen, communication, collaboration, and leadership has been the basis of the HP NAS team’s framework.  The team has demonstrated their flexibility and usefulness through the delivery of high quality results supporting four different product families in just two years. The framework has been applied extensively in Agile environments.

Framework to assess testing skills: We discuss the Framework that evaluates engineers across four dimensions.

Bridge from Practical to Theory: We continue the discussion on how this framework has been successfully applied by a highly experienced test lead.

Theoretical Model to Measure Testing Skills Effectiveness: We then close with how the Program Manager measures the effectiveness of the framework.

Alan McKellar is a certified Project Management Professional (PMP) who has led technical teams for over 16 years. He began his career in the United States Navy, then led customer-focused IT projects at Procter & Gamble and Hewlett-Packard. Six years ago he joined HP’s R&D arm and now leads software testing efforts for the software arm of the StorageWorks Division.

Alan holds an MBA from the University of Notre Dame, Mendoza College of Business. When he and his teams are not “attacking” products, he enjoys leisure time with his family, photography and reading.

Kim Jensen has been with HP R&D for over 20 years. As a software engineer, she has done current product engineering, systems integration testing, development of graphics software, third-party management to enable customer specials, integration testing and enablement of Linux on HP Workstations. She has worked with remote teams in India and Taiwan as well as across the US. Most recently, she joined Alan’s QA team, leading functional and integration testing of Network Attached Storage.

Kim holds a Master’s degree in Computer Science from Colorado State University and a BS in Computer Science and Mathematics from the University of California at Davis. In her spare time, she enjoys mountain biking and skiing with her family, and gardening.

Stuart Bobb has 25 years of experience in the computer industry. As a software tester, he has tested business server and workstation operating systems.

As a software developer, he has delivered software ranging from Unix kernel code to system administration GUIs.  He has over 14 years of experience leading complex research & development projects as both a project and program manager.

Stuart’s project and program management experiences have included high level bundled software solutions as well as the lowest levels of the hardware and software interface.

In his most recent role, Stuart has been deeply involved in the work surrounding two small company acquisitions and their approaches to software quality.

He holds a BS in Computer Science and a PMP certification from PMI.

Testability and Technical Skill

by: Greg McNelly
This session draws from my testability experiences as both a tester and a developer, and promotes the idea that a tester’s technical skill substantially impacts system testability. My experiences include a futile attempt at convincing a development team to build a more testable system, and an eye-opening assignment as a developer in the same organization.

Testability is often defined as the ease with which a tester can observe and control a system. The importance and perception of testability can differ according to one’s role on a project. However, testability is not merely a system property; it describes a relationship between properties of the system, the test environment, and the tester.

I will present examples to illustrate how technical knowledge and skill expose testability in an ASP.NET web application. Other examples demonstrate how a system’s design can be leveraged to improve testability. I also offer advice for testers who seek to utilize and improve their technical skills.

Greg McNelly: Computer programming has been a passion of mine since 1982, and my profession since 1993. My programs have helped people insure automobiles, predict laboratory test results, precision-align machinery, process payrolls and practice math facts.

In 2003, I became fascinated with test automation as a type of programming; and, shortly thereafter, its limitations led me to a tremendous respect and passion for the cognitive challenges of testing.  Now I work with project teams seeking to leverage testing as an effective component of their overall software development process.

Currently, I am an in-house software development consultant at Progressive Insurance, in Mayfield Village, Ohio. This is also where I live with my wife and two daughters.

Communicating With Non-Testers

by: Catherine Powell
This session will help you communicate clearly, precisely and effectively by considering the context of the communication and the needs of your audience – even when that audience includes non-testers.

Talking to non-testers requires a lot more thought and a lot more explanation than talking to testers who share your background. Management wanting status reports, release teams asking about when something will ship, developers and semi-technical customers wanting analysis of a defect or risk: all these are non-testers who need information presented in a way that lets them make good decisions. Communicating test concepts and outcomes to non-testers is a skill that turns a good tester into an invaluable team member.

I will share examples of communications about the same (real) project intended for several different audiences, including:

  • two different status dashboards (one for development and internal management, one for release teams and upper management)
  • a defect summary of a technical issue for customers and developers
  • a risk analysis for a feature
  • a test estimate with reasoning and cost breakdown
  • two different test plans (one for internal use, one for client consumption)

For each example, the session will consider:

  • the intent of the communication
  • the audience
  • selecting useful content
  • what to leave out
  • conveying uncertainty, risk and estimates
  • what works
  • what doesn’t work

Catherine Powell has been testing and managing testers for about ten years. She has worked with a broad range of software, including an enterprise storage system, a web-based healthcare system, data synchronization applications on mobile devices, and webapps of various flavors. She is an author and a formal mentor to testers and test managers.

Catherine focuses primarily on the realities of shipping software in small and mid-size companies. Specifically, she highlights and works to explicate the “on-the-ground” pragmatism necessary for testers to work effectively with both software and humans from product definition through release, and in the field.

Exploratory Test Automation

by: Cem Kaner, Doug Hoffman
Cem’s keynote, to be held the second day of the conference, focuses on investment modeling and illustrates a specific example of exploratory test automation. There are many different test automation techniques that we can call exploratory. This talk supports the keynote and presents a conceptual framework for exploratory automation techniques that Cem and Doug have been organizing over the past 12 years. This talk will provide several examples that illustrate that framework. The paper will collect ideas that Doug or Cem have published in several slidesets but not yet in any citeable paper.

Cem Kaner has pursued a multidisciplinary career centered on the theme of the satisfaction and safety of software customers and software-related workers. With a law degree (practice focused on the law of software quality), a doctorate in Experimental Psychology, and 17 years in the Silicon Valley software industry, Dr. Kaner joined Florida Institute of Technology as Professor of Software Engineering in 2000. Dr. Kaner is senior author of three books: Testing Computer Software (with Jack Falk and Hung Quoc Nguyen), Bad Software (with David Pels), and Lessons Learned in Software Testing (with James Bach and Bret Pettichord). At Florida Tech, his research is primarily focused on the question, How can we foster the next generation of leaders in software testing? See TestingEducation.org for some course materials and this Proposal to the National Science Foundation for a summary of the course-related research.

Douglas Hoffman is a management consultant and trainer in strategies and tactics for software quality assurance with over 30 years experience. The President of the Association for Software Testing (AST) and a Fellow of the ASQ (American Society for Quality), he holds degrees including MBA, MSEE, and BACS. He is certified by ASQ as a Software Quality Engineer and as a Manager of Quality/Organizational Excellence. Douglas is a founding member, past Chair, and current Treasurer of SSQA (Silicon Valley Software Quality Association), past Chair of the Silicon Valley Section of ASQ, a founding member for AST, Invited Speaker Chair for PNSQC, and a member of ACM and IEEE. He has spoken at dozens of conferences and has been Program Chair for several international conferences on software quality. He has also been an active participant in the Los Altos Workshops on Software Testing (LAWST) and dozens of the offshoot workshops.

Internationalization and Localization Testing Skills

by: Matta Saikali
Internationalization and localization testing involves a unique blend of application domain skills, technology skills, language skills and testing skills.

This presentation reviews:

  • Test design
  • Test management
  • Test Workflow
  • Bugs, Reporting, Advocacy, Workflow

The presentation demonstrates how you can test in all target languages and on all target platforms in a cost-effective manner. You will learn how to balance domain, testing, technical and language skills in setting up a testing organization, along with the associated workflow, planning and management structures. You will learn about deciding what to test and how to test it.

Plenty of examples from real projects are used to illustrate the concepts presented.

Matta Saikali has more than ten years of experience in internationalization and localization testing. His testing experience covers more than 30 languages, including European and Asian languages, Arabic, and Hindi.

Formerly Director of Software Quality Assurance at Gemplus, Matta built up and managed a team of 50 SQA professionals responsible for testing globalized Windows applications and embedded systems in European and Asian languages.

As Director of SQA at Purkinje, Matta managed the testing team for a multilingual multi-user client-server application for clinical data entry.

Matta was also SQA team leader at ALIS where he was involved in testing all ALIS products, notably their Arabic/Farsi product line.

Experiences and Insights of a Novice Agile Software Tester

by: Zachary Spencer
Often at conferences, domain experts present brilliant insights and observations that help people understand how to effectively hone their skills as software testers. Instead, why not attend a session where a novice reviews the assumptions, challenges, and critical lessons he’s learned in his first few months as an agile software tester? In this session you will:

  • Learn about horrible horrible mistakes that have been made
  • Learn what skills a novice should develop in order to be more effective
  • Learn what mindsets and attitudes were helpful
  • Hear a grown man cry. Maybe.

Disclaimer: The presenter is a novice. He will be wrong. But he will be right, potentially at the same time. At the very least he promises to try to be amusing.

Zachary Spencer started off writing programs in BASIC at the age of 12. Since then he has learned that breaking and diagnosing software can be as much fun as creating it. He has over 5 years of experience as a professional software developer, creating everything from custom content management systems to RESTful APIs. He currently works at Pillar Technology as a software tester, writes a blog at http://www.zacharyspencer.com/, hosts a podcast, and spouts random gibberish as @zspencer on Twitter.

Testing in large-scale scientific computation: the short circuit method

by: Mónica Wodzislawski
We consider the problem of testing large-scale scientific computations, which are typical in the case of physics, weather prediction, computational fluid dynamics, etc. These are characterized by large programs which run for a significant time (e.g. 100 hours) usually in large/fast/parallel computers. The ones studied here are those which produce a relatively small output compared to the amount of input data and/or computing time.

For these problems we have no external way to decide whether the results are correct or not. In terms of software testing, we have no oracle. In some cases the errors may be discovered later, but their cost may be intolerable; for example, the design of an airplane wing does not tolerate errors, even if they may be discovered during usage.

We present a typical diagram of the main steps present in massive scientific computation. These many steps are typical of function optimization or the solution of differential equations. Most notably, these steps come from a variety of sources, many of them outside the control of the programmer.

The main idea is to verify some property of the results with the original equations. That is, connect the source description to the final results. Of course we are not going to recompute them; the idea is to verify properties that the results should satisfy given the original equations. This may require some ingenuity and some minor additional computing. Two examples of such short circuits are explained in some detail.
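As an invented example in the spirit of such a short circuit: if a long run reports the solution of a linear system derived from the original equations, a tester can check the residual of the reported answer against those equations directly, without recomputing anything expensive.

    # Hypothetical short-circuit check: verify a property the results must satisfy
    # (here, a small residual for A x = b) instead of recomputing the answer.
    import numpy as np

    def short_circuit_check(A, b, x_reported, tol=1e-8):
        residual = np.linalg.norm(A @ x_reported - b) / np.linalg.norm(b)
        return residual < tol

    # Toy stand-in for the large computation's output.
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    x_reported = np.linalg.solve(A, b)     # pretend this came from the 100-hour run

    assert short_circuit_check(A, b, x_reported), "results violate the original equations"
    print("short-circuit check passed")

The same pattern applies to properties such as conservation laws or residuals of differential equations: cheap to evaluate, yet connected directly to the source description of the problem.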

To be effective, this approach requires testers with strong domain and technical skills. The software, hardware, packages, libraries and parallel computing are all suspects. It is a very multidisciplinary scenario, where the programmers do not have a testing culture, maybe not even a software engineering culture. But they have the domain knowledge which is needed to build effective short circuits.

Mónica Wodzislawski has managed the Functional Testing Laboratory of the Software Testing Centre, Montevideo, Uruguay, since its creation in 2004. She has vast experience as a local and regional quality assurance and testing consultant, as well as managing software projects. Mónica teaches Software Engineering and Testing at the University.

Gaston Gonnet has been a computer science professor since 1977. He is best known for the creation of the Maple computer algebra system and an electronic version of the Oxford English Dictionary. Gonnet is presently a professor at the Institute of Scientific Computation, ETH Zurich, Switzerland.

He is the Director of the Computational Biochemistry Research Group, where he has developed the Darwin system and server for these computations.

 
