Sessions

Track Sessions

All Track Sessions take place on August 8-9, 2011

“agile” Test Teams (Even in Non-agile World)

Paul Holland
Paul Holland has been managing his team in an agile way for the past several years. Over this time he has developed a methodology that can be used by test teams in both agile and non-agile environments. After his company moved to an agile methodology, he was able to modify his techniques to continue to provide a high level of service while satisfying the needs of the new organization. He will share his methodologies, planning and tracking strategies, metrics, and documentation techniques. During an evening session at STAR East in May 2011, Michael Bolton called Paul’s strategies “brilliant” and stated that “you have to see how Paul is managing his team”.
Paul Holland has 15 years’ experience in testing telecommunications equipment and is currently the manager of a verification group at Alcatel-Lucent in Ottawa, Canada. He has an extensive background in test automation, DSL physical layer testing, and performance testing. Paul has been involved in the context-driven testing community for the past 10 years. He has been on the Association for Software Testing Board of Directors for the past three years and is currently the Treasurer of AST. Just over two years ago Paul started to provide training in Performance Testing and Exploratory Testing. Paul has been the primary facilitator at the past four CAST conferences and has become one of the most sought-after facilitators of LAWST-inspired peer conferences, acting as the facilitator at over 25 workshops.

Trouble in Mind: Troubleshooting Skills for Fortune and Glory

Chris Blain
Troubleshooting is something you’ve likely been pulled into from time to time, but you may not have thought of it as a specific set of skills and techniques. I will discuss troubleshooting methodologies, books, and tools that have helped me become a better troubleshooter. The emphasis will be on techniques that work in a wide variety of contexts.
Chris Blain has worked as a developer, support engineer, tester, and test manager for over 14 years in the software industry. He believes in the form of software development espoused by Gerald Weinberg, rather than the named processes so popular today, which secretly draw from Frederick Taylor and destroy the soul of the industry and those who work in it.

Paths For Self-Education In Software Testing

Markus Gärtner
Software Testing is the most controversial profession in computer science. Given the lack of credible classroom training in software testing, successful software testers have to educate themselves, often in their spare time. Because it’s crucial for software testers to know what they are doing, testers have recently evolved several paths to self-education. In this session, Markus Gärtner explains alternative paths to the knowledge a software tester needs. While traditional classroom training provides one way to teach a professional about software testing, several emerging techniques value collaborative learning approaches over certification, thereby forming communities of software testing professionals.
Markus Gärtner studied computer science until 2005. He published his diploma thesis on hand-gesture detection in 2007 as a book. In 2010 he joined it-agile GmbH, Hamburg, Germany, after having been a testing group leader for three years at Orga Systems GmbH. Markus is the co-founder of the European chapter of Weekend Testing, a black-belt instructor in the Miagi-Do School of Software Testing, and a contributor to the ATDD-Patterns writing community as well as the Software Craftsmanship movement. Markus regularly presents at Agile and testing conferences, and dedicates himself to writing about testing, foremost in an Agile context.

*Now* what’s your plan?

Henrik Andersson, House of Test
In this talk, I will speak about how context impacts your test plan, and why that matters. This will be an interactive session where we will look at a specific application and consider how to test it. During the discussion, new aspects of context will be revealed that could cause us to reconsider the plan. Maybe some “Best Practices” will not seem that “best” when considered in context. We will end this session with a brainstorm on context variables to consider when test planning.
Henrik Andersson is a consultant and founder of House of Test, a consultancy and outsourcing company based in Sweden and China. He helps companies increase their efficiency and reconstruct their testing. He provides leadership and consulting for managers and leads. He tests, coaches, consults, speaks, writes, manages, and thinks about software testing and problem solving. Henrik has worked in a broad range of fields such as medtech, defense, financial, pension, commercial web apps, logistics, SAP, search engines, and telecom. This has given him a deep understanding of the way differences in context influence testing.

Computer-Aided Exploratory Testing

Harry Robinson, Microsoft
Exploratory testing emphasizes creativity and thinking, but is often limited by how fast humans can think and type. Automated scripts provide speed and accuracy, but are often used only as regression tests. Computer-aided exploratory testing blends human judgment and computer horsepower to create testing that is thorough, robust, and flexible. This new approach brings new possibilities and new challenges, and requires us to re-think how we do our job as software testers. This session covers the basics of computer-aided exploratory testing and several examples of how we use it to test Bing.
Harry Robinson is a Principal Software Design Engineer in Test (SDET) for Microsoft’s Bing team, with over twenty years of software development and testing experience at AT&T Bell Labs, Hewlett-Packard, Microsoft, and Google, as well as time spent in the startup trenches. He currently works with Bing teams on developing effective test strategies across the product. While at Bell Labs, Harry pioneered a test generation system that won the 1995 AT&T Award for Outstanding Achievement in the Area of Quality. At Microsoft, he championed the model-based test technology which won the Microsoft Best Practice Award in 2001. He holds two patents for software test automation methods.  He speaks and writes frequently on software testing and automation issues.

Weekend Testing: Skilled Software Testing Unleashed

Ajay Balamurugadas
This is the story of Weekend Testing (WT), an innovative idea that is bringing passionate testers together to practice their skills. It’s a journey of how a practice of self-learning grew into a passion to serve the software community. WT is BY the testers, FOR the testers, TO the testers. WT is about sharing a passion to improve testing skills and contribute to the software community.

  • Testers are free to TEST their ideas in WT sessions; the TESTER is the TEST MANAGER.
  • Development of testing skills [Questioning, Bug Hunting, Observation, Recognizing and Clearing Traps, Bug Investigation, Note Taking, Collaboration, Rapid Learning, Time Management, etc.] is given prime importance.
  • WT is FREE of cost and is conducted every weekend across multiple chapters around the globe.
  • An opportunity to share a passion for SKILLED TESTING
Ajay Balamurugadas is a software tester passionate about learning to test any software. He has been awarded a scholarship from the Software Testing Club and is a brown-belt student of the Miagi-Do School run by Matt Heusser. He is a co-founder of “Weekend Testing”. Ajay shares his testing activities and learning through his blog http://EnjoyTesting.blogspot.com and tweets under @ajay184f.

What do Auditors Expect from Testers: Audit survival heuristics of an FDA-regulated exploratory testing team.

Griffin Jones, iCardiac
In FDA-regulated industries, compliance audits are fact-finding exercises used to determine the degree of compliance with rules and regulations. Executive management considers these audits high-stakes events. Exploratory Testing (ET) has clearly emerged as a test approach in these industries. An audit is often the impact point where the ET and auditor worlds collide. During an audit, there are many opportunities for mutual misunderstanding. These misunderstandings can trigger episodes of incongruent behavior. Incongruent audits are not successful audits for anyone involved. This presentation describes and applies one team’s set of audit survival heuristics (called “chocolate mousse”). These heuristics can be used by an ET team both to prepare for an audit and to maintain their composure and effectiveness during a stressful audit. Also highlighted are some of the common misconceptions and traps about ET in a regulated industry. We’ll conclude with some of the positive opportunities that can arise for the ET team during these audits.
Griffin Jones is currently the Director of Quality and Regulatory Compliance, and the former Senior Software Quality Lead, for iCardiac Technologies. iCardiac provides core lab services to the pharmaceutical industry for molecules in clinical development. These highly automated services assess the risk of sudden cardiac death caused by an elevated cardiac QT interval. Griffin specified the company’s initial software exploratory testing strategy. He is currently responsible for all matters relating to quality assurance and regulatory compliance for an FDA-, ICH-, and GCP-compliant Quality System, including presenting the verification and validation (testing) results to external auditors.

Economics of Automated Test Evaluation

Matt Heusser
While it might not be possible to automate sapient testing, it certainly is possible to have the computer go do some things in the corner and come back with a thumbs-up or a set of errors. Some key context-driven questions would be along the lines of “What should those things be? What is the opportunity cost of developing automation? How can we measure the value?” These are questions we don’t address enough in the world of software testing. Building upon the economic model from (T1), this talk will look at a very specific kind of test automation – “traditional” pre-scripted GUI automation – and provide some tools to analyze the value of that work. Matt will talk about maintenance cost, false errors, and what those tests actually describe; tell a few common failure stories along with a few well-grounded, reality-based long-term success stories; and offer comparisons to other forms of automation, such as model-driven and high-volume automated methods.
Matt Heusser is a member of the technical staff at SocialText, which he joined in 2008 after a decade or so developing, testing, and/or managing software projects. Along the way Matt has had the opportunity to do some interesting things, including serving as the lead organizer of the Great Lakes Software Excellence Conference (now in its fifth year), organizing the workshop on technical debt, and teaching information systems at night at Calvin College. Matt is probably best known for his writing, including the blog “Creative Chaos” (xndev.blogspot.com), and his role as a contributing editor for Software Test & Quality Assurance Magazine. Matt is also serving as lead editor for the “Cost of Testing” book, with an expected publication date of August 2011. You can find Matt on Twitter at @mheusser.

Telling a Compelling Testing Story

Ben Kelly, Cerego
As a tester, your role is not to “find all the bugs,” but to reveal information in a way that informs and benefits your clients. A key to that is understanding your audience. But your audience is likely a lot of different people all at the same time: project managers, programmers, C-level management, analysts, salespeople, subject matter experts, and so on. How do you know who needs to know what, and what format suits them? How do you find out what they need to know if they themselves don’t have a clear idea of what that is? How do you use your (probably very limited) time to make sure the most important questions are answered? This presentation will provide strategies to take the data that your testing gives you and turn it into compelling information that your audience can digest and act on.
Ben Kelly works for Cerego Japan in Shibuya, Tokyo, and holds the title ‘software test wizard’, which is handy only because he can pull out Gandalf’s ‘a wizard is never late or early’ line as needed. He has been testing for 8 years, with stints in the Internet measurement and insurance industries. When he is not stirring up lively debate in the comments of other people’s blogs, he sporadically blogs about his own testing experience at testjutsu.com. Outside of the testing arena, he is a practitioner of kendo (Japanese swordsmanship) and has represented Australia on several occasions at the world kendo championships. He is available on Twitter as @benjaminkelly.

Build a Deeper Community of Practice: How to Organize a Peer Conference

Paul Holland
One of the innovations of the Context-Driven School is the “peer conference.” Introduced to us by Cem Kaner in 1997, it galvanized a small community of testers in Silicon Valley and led directly to the concept of context-driven testing. A peer conference is different from an exhibition-style conference. In a peer conference, every attendee is an active participant and most often also a speaker. Everyone helps guide the direction of the meeting with their own ideas, questions, and energy. It’s a round table. It’s a facilitated discussion forum where experiences are shared, rather than best practices. In fact, CAST has many elements of a peer conference. This presentation tells you how you can organize and run your own small peer conferences, so that you can supercharge your own local testing community.
Paul Holland has 15 years’ experience in testing telecommunications equipment and is currently the manager of a verification group at Alcatel-Lucent in Ottawa, Canada. He has an extensive background in test automation, DSL physical layer testing, and performance testing. Paul has been involved in the context-driven testing community for the past 10 years. He has been on the Association for Software Testing Board of Directors for the past three years and is currently the Treasurer of AST. Just over two years ago Paul started to provide training in Performance Testing and Exploratory Testing. Paul has been the primary facilitator at the past four CAST conferences and has become one of the most sought-after facilitators of LAWST-inspired peer conferences, acting as the facilitator at over 25 workshops.

Vendor Meets User: The Hexawise Test Design Tool and a Tester who Tried to Use It in Real Life

Justin Hunter, Hexawise & Lanette Creamer
Dr. William G. Hunter helped manufacturers create small numbers of prototypes that were each carefully designed to reveal as much actionable information as possible. He did this using the Design of Experiments methods that he taught as a professor of Applied Statistics. Five years ago, while working at Accenture, Hunter’s son Justin began to apply some of these Design of Experiments-based methods to the software testing field. After seeing promising results from 17 pilot projects he helped manage at Accenture, Justin created Hexawise, a software test design tool that generates tests using Design of Experiments-based methods. Justin will introduce the tool. But this is not the typical vendor talk. Tester Lanette Creamer recently used Hexawise for the first time on a real project. She will share her experiences, covering both where it helped and where she experienced limitations of the tool and the test design technique.
Justin Hunter, Founder and CEO of Hexawise, is a test design specialist who has enjoyed teaching testers on six continents how to improve the efficiency and effectiveness of their test case selection approaches. The improbably circuitous career path that led him into the software testing field included working as a securities lawyer based in London and launching Asia’s first internet-based stock brokerage firm. The Hexawise test design tool is a web-based tool that is available for free to teams of 5 or fewer testers, as well as to non-profit organizations.
Lanette Creamer: After 10 years at Adobe, including working as a Quality Lead testing across the Creative Suites, Lanette is now a Senior Consultant with Sogeti. She is currently working as a Test Lead at Starbucks. Lanette has been evangelizing test collaboration and promoting advancement in human test ideas for the past 5 years. With a deep passion for collaboration as a way to increase test coverage, she believes it is a powerful solution when facing complex technical challenges. Lanette has presented at PNSQC, Better Software/Agile Development Practices, Writing About Testing, and STPCon in 2010. She’ll be participating at CAST 2011 in her home city of Seattle. She actively participates in the testing community and has written two technical papers and a published article on testing in ST&P Mag January 2010 (now ST&QA magazine).

How to Coach Testers Using Skype

Anne-Marie Charrett, Testing Times
Nothing beats actually performing testing exercises and challenges as a way to learn how to test. IM coaching uses this principle with Instant Messaging as the medium. Guided by an experienced coach, a tester is able to immerse themselves in a testing exercise. Discover what IM coaching is all about in this real-time, fast-paced session and learn how it can help you and your organisation learn more about testing. Watch as simultaneous online coaching sessions take place side by side. Have online coaching strategies explained as the coaching sessions take place. Each coaching session will consist of an exercise or puzzle which a tester will work through. The coaching sessions will be performed by Anne-Marie Charrett. As they take place, James Bach will narrate the sessions, identifying patterns in them. These and other patterns will be made available for all attendees to take home.
Anne-Marie Charrett is a professional software test consultant and runs her own company, Testing Times. An electronic engineer by trade, software testing chose her when, in 1990, she started conformance testing against European standards. She was hooked and has been testing ever since. She assists in the online BBST training courses run by the Association for Software Testing and provides Exploratory Testing workshops. She offers free IM coaching sessions on Skype (charretts). Anne-Marie also writes on her blog Maverick Tester (http://mavericktester.com). She runs a twitter account, @dailytestingtip, where she provides daily testing tips to the software community. Current projects she’s working on include an e-book, “If I were a test case I would…”, for the Chandru Fund and contributing the chapter “The cost of starting up a test team” to the Cost of Testing book.

Developing a Professional Testing Culture

Greg McNelly, Progressive
Culture happens. If you work with other testers, voila! You do so in a testing culture. But are you aware of it? Has it been shaped consciously? You may consider yourself a professional tester, but how is that attitude shared by your coworkers? How is it supported by your organization? And if you don’t like your answers to these questions, how would you change things? This is the story of our testing culture at Progressive Insurance, a testing culture comprised of hundreds of testers, a testing culture that is being shaped by the work of professional testers, by their sense of community, and by the support of their organization. In this talk, I will describe some of the key struggles, events, decisions, and people that have shaped our testing culture. What are our attitudes about testing, and how do they influence our work? How did we come to have these attitudes, and what were they before? Why did they change? How well are they supported? I will address these topics, and more, including the corresponding perspectives of several other participants in our culture.
Greg McNelly: Computer programming has been a passion of Greg’s since 1982, and his profession since 1993.  His programs have helped people insure automobiles, predict laboratory test results, precision-align machinery, process payrolls and practice math facts. In 2003, Greg became fascinated with test automation as a type of programming; and, shortly thereafter, its limitations led him to a tremendous respect and passion for the cognitive challenges of testing.  Now he works with project teams seeking to leverage testing as an effective component of their overall software development process.  Currently, he is an in-house software development consultant at Progressive Insurance, in Mayfield Village, Ohio. This is also where he lives with his wife and two daughters.

Panel: How to decrease the cost of testing

Matt Heusser
The constant pressure to do more with less, shrink timing cycles, increase efficiency, and decrease the cost of testing can be a total drag. After all, if we want to reduce cost, we could just not test at all and see what happens. But if it weren’t said as a vapid cliché – if there were some actual meat on the idea – maybe we could see some benefit. This panel discussion brings together experts in the field, the contributors to the “Reducing the Cost of Testing” book, to discuss how we can respond to the challenge of reduced cost with integrity and success. You’ll go home with more than ideas to reduce cost; you’ll have a balanced view of the real cost of those tradeoffs, and you’ll have tools to discuss them with senior management in a reasoned, articulate way.
Matt Heusser is a member of the technical staff at SocialText, which he joined in 2008 after a decade or so developing, testing, and/or managing software projects. Along the way Matt has had the opportunity to do some interesting things, including serving as the lead organizer of the Great Lakes Software Excellence Conference (now in its fifth year), organizing the workshop on technical debt, and teaching information systems at night at Calvin College. Matt is probably best known for his writing, including the blog “Creative Chaos” (xndev.blogspot.com), and his role as a contributing editor for Software Test & Quality Assurance Magazine. Matt is also serving as lead editor for the “Cost of Testing” book, with an expected publication date of August 2011. You can find Matt on Twitter at @mheusser.

Introducing Session-Based Test Management in Your Project

Carsten Feilberg
Session-based test management (SBTM) is a systematic way of controlling and accounting for softly structured processes such as ET. If you wonder how it works, or why it’s becoming popular, this is the session for you. In this talk, I present the basic elements of SBTM. But we will also take a good long look at all the things that threaten the process. Stakeholders’ expectations, time constraints, and just ‘letting go’ because ‘we know all this stuff’ can have a devastating effect. Making SBTM work now, and in the longer run, requires constant adaptation. In this talk I will draw on my own as well as others’ experiences and hopefully spark a lively discussion on this subject that can help us all survive and succeed with SBTM.
Carsten Feilberg has been testing or managing testing for more than a decade, working on various projects in insurance, pensions, public administration, retail, and other back-office systems, as well as a couple of websites. With more than 17 years as a consultant in IT, his experience ranges from one-person do-it-all projects to being delivery and test manager on a 70+ system migration project involving almost 100 people. He is also a well-known blogger and conference presenter, and a strong advocate for context-driven testing.

Going Against the Stream

Alexandru Rotaru, Altom
This is the story of my software testing company’s challenges over the past three years, of the testing experiences I’ve had, and of the changes I’ve made in my overall approach to software testing. Working with both clients and potential employees who know all about “best practices” in software testing, and offering offshore testing services when most of our potential customers think that our only advantage is our price, has at times been frustrating, but it has also been enlightening. This is the story of my company’s struggle to resist the temptation to cut corners ‘just this once’ and the lessons I’ve learned from this experience.
Alexandru Rotaru discovered testing six years ago after graduating from university, and realized that he actually loved it. Three years later he co-founded Altom, a software testing lab in Romania, and became more and more interested in how to get better at testing and how to pass on his enthusiasm to others. He thinks he still has a lot to learn, and that he took an important step when he became an AST member two years ago.

When Should A Tester Test Less?

Adam Yuret, Volunteermatch & Lanette Creamer, Sogeti
Ostensibly, the goal of testing is to provide test coverage of a software product to uncover and document bugs. What if a stakeholder doesn’t want you to report bugs? What if they want you to test less? Let’s discuss scenarios where the tester is explicitly asked to ignore most bugs, not because the product is so polished that the only probable defects are minor, but because the opposite is true: there are so many problems that documenting them all and acting on them would have a crippling effect on the project. What would you do in this scenario? Come join Lanette Creamer and Adam Yuret as they discuss how these types of dilemmas face them in their current and past projects. Share your experiences, ideas, and insights into these dilemmas as they host a spirited discussion on the potential hazards of filing bugs.
Adam Yuret: After 8 years at WebTrends testing an enterprise-level SaaS data warehousing product, which included building and maintaining a large-scale testing environment, Adam currently works as an “army of one” tester for VolunteerMatch. VolunteerMatch is a national nonprofit organization dedicated to strengthening communities by making it easier for good people and good causes to connect. Adam is a relative newcomer to the context-driven community and is currently working to build a testing process for a project that is transitioning to an agile/scrum methodology.
Lanette Creamer: After 10 years at Adobe, including working as a Quality Lead testing across the Creative Suites, Lanette is now a Senior Consultant with Sogeti. She is currently working as a Test Lead at Starbucks. Lanette has been evangelizing test collaboration and promoting advancement in human test ideas for the past 5 years. With a deep passion for collaboration as a way to increase test coverage, she believes it is a powerful solution when facing complex technical challenges. Lanette has presented at PNSQC, Better Software/Agile Development Practices, Writing About Testing, and STPCon in 2010. She’ll be participating at CAST 2011 in her home city of Seattle. She actively participates in the testing community and has written two technical papers and a published article on testing in ST&P Mag January 2010 (now ST&QA magazine).

Game Films: a Technique for a Reflective Tester

Grig Melnik, Microsoft
What can be gained from examining your own practice or someone else’s? A lot! I’ve experimented with using recordings of test sessions to help both aspiring and professional testers examine and rethink their actions, their techniques, their creative process… and most importantly, to learn! It turns out this practice, known as “game films”, has been successfully used by sports coaches, who spend hours and hours watching, dissecting, and analyzing games (both their own teams’ and competitors’) to polish their tactics and to draw up specific game plans based on those analyses. Musicians also listen to and analyze their own performances and those of others. In this talk, I’ll present my experiments with students and practitioners and share some lessons learnt about reflection-on-testing (while watching and analyzing a game film) as well as reflection-in-testing (while thinking out loud during testing).
Grig Melnik is a thinker and an enthusiastic learner. He currently leads software development teams at Microsoft and ships software. Previously, he educated future software professionals at several universities in Canada and conducted software engineering research.

Crafting Our Own Models of Software Quality

Henrik Emilsson, Qamcom Research & Technology
In late 2010, thetesteye.com published a poster called Software Quality Characteristics, which was the result of trying to invent the best model of quality characteristics. When we started this, there was only one realistic and thorough model available – Bach’s CRUSSPIC STMPL – but we didn’t think that this model was perfect for us. Instead we attacked this model and started to question it in order to come up with a model that we thought was more true and valid for us and our context. In this talk I will describe what we did during this interesting journey; a journey perhaps more important than the result itself.
Henrik Emilsson, Test Manager at Qamcom Research & Technology, started his testing career in 2000 and has been working as a test lead, tester, and team leader in a wide variety of business applications and business areas. He is one of the founders of www.thetesteye.com, which has become one of the greatest Swedish blogs on software testing, and he is chairman and co-founder of SAST Värmland, a local chapter of the Swedish Association for Software Testing. Henrik has been a speaker at EuroSTAR (2005), SAST Q15 (2010), and several other smaller conferences, and was one of two EuroSTAR TestLab Apprentices in Copenhagen in 2010. In 2011 he will manage the EuroSTAR TestLab together with a colleague. Henrik is also a part-time adjunct professor at Karlstad University for the course “Testdesign av programvara” (Software Test Design), which he also co-created.

Working on a Virtual Team

Karen N. Johnson
Working on a virtual team often means adjusting to time zone differences and cultural differences. There are other challenges in working on a virtual team as well – such as creating a working environment and building a working rapport with people you have possibly never met. How can you work transparently with people you cannot see? How do you lead a team when time differences are so large that you never work during the same hours? What happens when your team uses an Agile development process and you’re not collocated? How do you arrange Sprint planning sessions when your team is in different time zones? Webcams, Skype sessions, instant messaging, and web conferencing are some of the tools used to communicate with team members in different locations, but those are technical tools. This session focuses on skills to develop when you’re working on a virtual team.
Karen N. Johnson is a software test consultant. She is a frequent speaker at conferences. Karen is a contributing author to the book Beautiful Testing, published by O’Reilly. She has published numerous articles and blogs about her experiences with software testing. She is the co-founder of the WREST workshop; more information on WREST can be found at: http://www.wrestworkshop.com/Home.html. Visit her website at: http://www.karennjohnson.com

Tackling Barriers in Multi-Customer Contract Acceptance Testing

Maaret Pyhäjärvi, Ilmarinen Mutual
Exploratory testing is an effective and efficient way to organize testing in collaboration with the developers. However, the pension insurance sector is known for long waterfall-like projects, where a customer procures changes to the system from a contractor with fixed-price contracts. The customer’s contract-acceptance testing happens in a limited timeframe at the end of the project, with scripted test cases. In this presentation, I go through a project from the pension insurance sector in Finland, where we made an effort to change multi-customer contract acceptance testing from pre-scripted test cases created from specifications to data-oriented exploratory testing. Within the project we faced several objections and lost many battles over doing things our way. Keeping the goals of sector collaboration and good testing in mind, we sought compromises and learned valuable lessons for improving our testing.
Maaret Pyhäjärvi works as a test manager at Ilmarinen Mutual Pension Insurance in Finland, contributing to testing in customer-contractor settings in contract-driven development. She has been in testing since 1994 in various roles: tester, test manager, test consultant, test researcher, and teacher of software testing both at university and commercially. Maaret is a frequent speaker at Finnish testing seminars. She is currently the chair of the Finnish Association for Software Testing (FAST).

Understanding Gut Feelings in Testing

Sajjadul Hakim, Therap
Although most testers do not want to admit it, many times we do not find critical problems because of careful planning, or because we were intentionally looking for them, but rather due to luck. My study is basically about how I can try to make it more likely that luck favors me. One unlikely source for this is our gut feelings during testing or debrief sessions. I have quite a number of interesting stories to tell and would like to talk about when it might be a good idea to listen to your instincts. Of course, I would also like to touch on how certain biases can make us rely on the wrong gut feelings. This is something I am currently working on, and I hope to have it ready before the conference. I will probably be blogging and tweeting about it until then to get feedback to improve my talk.
Sajjadul Hakim is the Director of Software Testing at Therap (BD) Ltd. He is an insightful tester and influential testing coach. He is leading the Context-Driven Testing movement in Bangladesh. He is the founder and coordinator of SQABD, a voluntary nonprofit organization that is very well known in the software industry of Bangladesh. His blog is considered one of the very first bodies of writing on exploratory testing from the Indian Subcontinent. He has over 10 years’ experience in various roles in programming and testing at leading multinational companies in Bangladesh.

The BBST Experience

Doug Hoffman, Selena Delesie, Michael Larsen, Mimi Mendenhall
Four panelists who have been through the BBST series of courses will briefly present their experiences and no-holds-barred opinions about the classes and how they might be improved. The bulk of the discussion will be fielding questions and gathering experiences and opinions from the delegates in attendance.
Douglas Hoffman is a management consultant and trainer in strategies and tactics for software quality assurance with over 30 years’ experience. The President of the Association for Software Testing (AST) and a Fellow of the ASQ (American Society for Quality), he holds degrees including an MBA, MSEE, and BACS. He is certified by ASQ as a Software Quality Engineer and as a Manager of Quality/Organizational Excellence. Douglas is a founding member, past Chair, and current Treasurer of SSQA (Silicon Valley Software Quality Association), past Chair of the Silicon Valley Section of ASQ, a founding member of AST, Invited Speaker Chair for PNSQC, and a member of ACM and IEEE. He has spoken at dozens of conferences and has been Program Chair for several international conferences on software quality. He has also been an active participant in the Los Altos Workshops on Software Testing (LAWST) and dozens of the offshoot workshops.
Selena Delesie is a consulting software tester and agile coach who runs her own company, Delesie Solutions. Selena has been managing and coaching on software, testing, and agile practices for a range of leading-edge technologies for about a decade. She facilitates the evolution of good teams and organizations into great ones using individualized and team-based coaching and interactive training experiences. Selena is an active speaker, participant, and leader in numerous industry-related associations and conferences. Links to Selena’s published works, blog, and contact information can be found at DelesieSolutions.com.
Michael Larsen is a “Lone Tester” with SideReel.com in San Francisco, CA. He is a brown belt in the Miagi-do School of Software Testing, an instructor with the Association for Software Testing, facilitator for Weekend Testing Americas, and the producer of the Software Test Professionals “This Week in Software Testing” podcast. He can be found on Twitter at @mkltesthead and blogs at http://mkl-testhead.blogspot.com.
