With our first teaching of the new BBST:Domain Testing course (based on The Domain Testing Workbook) and our revision of BBST:Foundations with the Foundations of Software Testing workbook, Rebecca Fiedler and I have started to introduce the next generation of BBST. Recently, we’ve been getting requests for papers or interviews on where BBST came from and where it’s going.
- This note is a short summary of the history of BBST. You can find many more details in the articles I’ve been posting to this blog over the last decade, as I tried to think through the course design’s strengths, weaknesses and potential.
- My next post, and an upcoming article in Testing Circus, look at the road ahead.
What is BBST™?
BBST is a series of courses on Black Box Software Testing. The overall goal of the series is to improve the state of the practice in software testing by helping testers develop useful testing skills and deeper insights into the challenges of the field.
(Note: BBST is a registered trademark of Kaner, Fiedler & Associates.)
Today’s BBST Courses
Today, most people familiar with the BBST courses think of a four-week, fully online course. Rebecca Fiedler and I started working on the instructional design for the BBST online courses back in 2004, with funding from the National Science Foundation. The courses have gotten excellent reviews. We’ve taken Foundations through three major revisions. Bug Advocacy and Test Design have had two. We’re working on our next major update now. You’ll read more about that in my next post.
The typical instructor-led course is organized around six lectures (about six hours of talk, divided into one-hour parts), with a collection of activities. To successfully complete a typical instructor-led course, a student spends about 12-15 hours per week for 4 weeks (48-60 hours total). Most of the course time is spent on the activities:
- Orientation activities introduce students to a key challenge considered in a lecture. The student puzzles through the activity for 30 to 90 minutes, typically before watching the lecture, then sees how the lecture approaches this type of problem. The typical Foundations course has two to four of these.
- Application activities call for two to six hours of work. Each applies ideas or techniques presented in a lecture or developed over several lectures. The typical Foundations course has one or two of these.
- Multiple-choice quizzes help students identify gaps in their knowledge or understanding of key concepts in the course. These questions are tough because they are designed to be instructional or diagnostic (to teach you something, to deepen your knowledge of something, or to help you recognize that you don’t understand something) rather than to fairly grade you.
- Various other discussions that help the students get to know each other better, chew on the course’s multiple-choice quiz questions, or consider other topics of current interest.
- An essay-style final exam.
In the instructor-led course, students get feedback on the quality of their work from each other and, to a greater or lesser degree (depending on who’s teaching the course), from the instructors. Students in our commercial courses (which we offer through Kaner, Fiedler & Associates) get a lot of feedback. Students in courses taught by unpaid volunteer instructors are more likely to get most of their feedback from the other students.
So, that’s today (in the online course).
However, BBST has actually been around for 20 years.
Background on the BBST Course Design
I started teaching BBST in 1994, with Hung Quoc Nguyen, for the American Society for Quality in Silicon Valley. This was the commercial version of the course (taught to people working as testers). Development of the course was significantly influenced by:
- Detailed peer reviews of the live class and of the circulating slide decks. The reviews included detailed critiques from colleagues when I made significant course updates (I offered free beta-review classes to test the updates).
- Co-teaching the material with colleagues. We would learn together by cross-teaching material, often challenging points in each other’s slides or lectures in front of the students. For example, I taught with James Bach, Elisabeth Hendrickson, Doug Hoffman and (for the metrics material) Pat Bond, a professor at Florida Tech.
- Rational Software, which contracted with me to create a customized version of BBST to support testing under the Rational Unified Process. They criticized the course in detail over several pilot teachings, and allowed me to apply what I learned back to the original course.
In 1999, I decided that if I wanted to learn how to significantly improve the instructional value of the course, I was going to have to see how teachers help students learn complex topics and skills in university. My sense was, and is, that good university instruction goes much deeper and demands more from the students than most commercial training.
Florida Tech hired me in 2000 to teach software engineering and encouraged me to evolve BBST down two parallel tracks:
- a core university course that would be challenging for our graduate students and a good resume-builder when our students looked for jobs
- a stronger commercial course that demanded more from the students.
We correctly expected that the two tracks would continually inform each other. Getting feedback from practitioners would help us keep the academic stuff real and useful. Trying out instructional ideas in the classroom would give us ideas for redesigning the learning experience of commercial students.
By 2003, I realized that most of my students were doing most of their learning outside the classroom. They claimed to like my lectures, but they were learning from assignments and discussions that happened out of the classroom. In 2004, I decided to try taping the lectures. The students could watch these at home, while we did activities in the classroom that had previously been done out of class. This went well, and in 2005, I created a full set of course videos.
I used the 2005 videos in my own classes. I put a Creative Commons license on the videos and posted them, along with other supporting materials, on my lab’s website. Rebecca Fiedler and I also started giving talks to educators about our results, such as two papers in 2005 (at the Association for Educational Communications and Technology conference and the Sloan Conference on Asynchronous Learning Networks).
These days, what we were doing has a name (“flipping”) and the Open Courseware concept is old news. Back then, it was still hard to find examples of other people doing this. Even though many other people were experimenting with the same ideas, not many were yet publishing, and so we had to puzzle through the instructional ideas by reading way too much stuff and thinking way too hard about way too many conflicting opinions and results. We summarized our own design ideas in the 2005 presentations (cited above). A good sample of the literature we were reading appeared in our applications for funding to the National Science Foundation, such as the one that was funded (2007), which gave us the money to pay graduate students to help with evaluation and redesign of the course (yielding the current public version).
For readers interested in the “science” that informed our course design, I’m including an excerpt from the 2007 Grant Proposal at the end of this article.
The Collaboration with AST
While we were sketching the first BBST videos, we were also working to form AST (the Association for Software Testing). AST incorporated in 2004. Perhaps a year later, Rebecca and I decided that the academic version of online BBST could probably be adapted for working testers. The AST activists at that time were among my closest professional friends, so it was natural to bring this idea to them.
We began informally, of course. We started by posting a set of videos on a website, but people kept asking for instructor support—for a “real” class. By this point (late 2006), the Florida Tech course was maturing and I was confident (in retrospect, laughably overconfident) that I could translate what was working in a mixed online-plus-face-to-face university class to a fully online course for practitioners located all over the world. The result worked so badly that everyone dropped out (even the instructors).
We learned a lot from the details of the failure and looked more carefully at how other university instructors had redesigned traditional academic courses to make them effective for remote students who had full-time jobs and who probably hadn’t sat in an academic classroom for several years (so their academic skills were rusty). After a bunch more pilot-testing, I offered the first BBST:Foundations as a one-month class (essentially the modern structure) in October 2007.
We offered BBST:Foundations through AST, adding BBST:Bug Advocacy in 2008, redoing BBST:Foundations (new slides, videos, etc.) in 2010, and adding BBST:Test Design in 2012.
AST was our learning lab for commercial courseware. Florida Tech’s testing courses, and my graduate research assistants at Florida Tech, were my learning lab for the academic courseware. I would try new ideas at Florida Tech and bring the ones that seemed promising into the AST courses as minor or major updates. All the while, I was publishing the courseware at my lab’s website, testingeducation.org, and encouraging other people to use the material in their courses.
We trained and supervised a crew of volunteer instructors for AST’s BBST, but other people were teaching the course (or parts of it) too. This included professors, commercial trainers, managers teaching their own staff how to test, etc. Becky created an instructor’s course (how to teach BBST), which we offered as an instructor-led course through AST but which we also offered as a free learning experience on the web (study it yourself at your own pace). In 2012, we published a 357-page Instructor’s Manual for BBST. We published the book as a Technical Report (a publication method available to university professors) so that we could supply it to the public for free.
Underlying much of the AST collaboration was a hope that we could create an open courseware community that would function like some of the successful open software communities.
- In the open source software world, many of the volunteers who maintain and enhance open source software are able to charge people for support services. That is, the software (courseware) is free but if you want support, you have to pay for it. The support money creates an income stream that makes it possible for skilled people to spend time improving the software.
- We hoped that we could create a similar type of structure for open source courseware (the BBST courses). You can see the thinking, for example, in a 2008 paper that Rebecca and I wrote with Scott Barber, Building a free courseware community around an online software testing curriculum.
It turns out that this is a very complex idea. It is probably too complex for a small professional society that handles most of its affairs pretty informally.
For now, Rebecca and I have formed Kaner, Fiedler & Associates to sustain BBST instead. That is, KFA sells commercial BBST training and the income stream makes it possible for us to make new-and-improved versions of BBST.
AST might also create its own project to maintain and enhance BBST. If so, we’ll probably see the evolution of contrasting designs for the next generations of the courses. We think we’d learn a lot from that dynamic and we hope that it happens.
An Excerpt from our 2007 Grant Proposal
This is from our application for NSF Award CCLI-0717613, Adaptation and Implementation of an Activity-Based Online or Hybrid Course in Software Testing. (When we acknowledge support from NSF, we are required to remind you that the National Science Foundation does not endorse any opinions, findings, conclusions or recommendations that arose out of NSF-funded research.) The full application is available online, but it is very concisely written, structured according to very specific NSF guidelines, and packed with points that address NSF-specific concerns. Here is the most relevant section of that 56-page document, in terms of explaining our approach and literature review for the course’s instructional design.
3. Our Current Course (Black Box Software Testing—BBST)
We adopted the new teaching method in Spring 2005 after pilot work in 2004. Our new approach spends precious student contact hours on active learning experiences (more projects, seminars and labs) that involve real-world problems, communication skills, critical thinking, and instructor scaffolding [129, 136] without losing the instructional benefits of polished lectures. Central to a problem-based learning environment is that students focus on “becoming a practitioner, not simply learning about practice” [122, p. 3].
Anderson et al.’s update to Bloom’s taxonomy is two-dimensional: knowledge and cognitive process.
- On the Knowledge dimension, the levels are Factual Knowledge (such as the definition of a software testing technique), Conceptual Knowledge (such as the theoretical model that predicts that a given test technique is useful for finding certain kinds of bugs), Procedural Knowledge (how to apply the technique), and Metacognitive Knowledge (for example, the tester decides to study new techniques on realizing that the ones s/he currently knows don’t apply well to the current situation).
- On the Cognitive Process dimension, the levels are Remembering (such as remembering the name of a software test technique that is described to you), Understanding (such as being able to describe a technique and compare it with another one), Applying (actually doing the technique), Analyzing (from a description of a case in which a test technique was used to find a bug, being able to strip away the irrelevant facts and describe what technique was used and how), Evaluating (such as determining whether a technique was applied well, and defending the answer), and Creating (such as designing a new type of test).
For most of the material in these classes, we want students to be able to explain it (conceptual knowledge, remembering, understanding), apply it (procedural knowledge, application), explain why their application is a good illustration of how this technique or method should be applied (understanding, application, evaluation), and explain why they would use this technique instead of some other (analysis).
3.1 We organize classes around learning units that typically include:
- Video lecture and lecture slides. Students watch lectures before coming to class. Lectures can convey the lecturer’s enthusiasm, which improves student satisfaction, and provide memorable examples to help students learn complex concepts, tasks, or cultural norms [47, 51, 115]. They are less effective for teaching behavioral skills, promoting higher-level thinking, or changing attitudes or values. In terms of Bloom’s taxonomy [11, 20], lectures would be most appropriate for conveying factual and conceptual knowledge at the remembering and understanding levels. Our students need to learn the material at these levels, but as part of the process of learning how to analyze situations and problems, apply techniques, and evaluate their own work and the work of their peers. Stored lectures are common in distance learning programs. Some students prefer live lectures [45, 121], but on average, students learn as well from video as from live lecture [19, 139]. Students can replay videos, which can help students whose first language is not English. Web-based lecture segments supplement some computer science courses [34, 44]. Studio-taped, rehearsed lectures with synchronously presented slides (like ours) have been done before. Many instructors tape live lectures, but Day and Foley [30-34] report that their students prefer studio-produced lectures over recorded live lectures. We prefer studio-produced lectures because they have no unscripted interruptions and we can edit them to remove errors and digressions.
- Application to a product under test. Each student joins an open source software project (such as Open Office or Firefox) and files work with the project (such as bug reports in the project’s bug database) that they can show and discuss during employment interviews. This helps make concepts “real” to students by situating them in the development of well-regarded products. It facilitates transfer of knowledge and skills to the workplace, because students are doing the same tasks and facing the same problems they would face with commercial software. As long as the assignments are not too far beyond the skill and knowledge level of the learner, authentic assignments yield positive effects on retention, motivation, and transfer [48, 52, 119, 153].
- Classroom activities. We teach in a lab with one computer per student. Students work in groups. Activities are open book, open web. The teacher moves from group to group asking questions, giving feedback, or offering supplementary readings that relate to the direction taken by an individual group. Classroom activities vary. Students might apply ideas, practice skills, try out a test tool, explore ideas from lecture, or debate a question from the study guide. Students may present results to the class in the last 15 minutes of the 75-minute class. They often hand in work for (sympathetic) grading: we use activity grades to get attention and give feedback, not for high-stakes assessment. We want students laughing together about their mistakes in activities, not mourning their grades.
- Examples. These supplementary readings or videos illustrate application of a test technique to a shipping product. Worked examples can be powerful teaching tools, especially when motivated by real-life situations. They are fundamental for some learning styles. Exemplars play an important role in the development and recollection of simple and complex concepts [23, 126, 146]. The lasting popularity of problem books, such as the Schaum’s Outline series and more complex texts like Sveshnikov’s, attests to the value of example-driven learning, at least for some learners. However, examples are not enough to carry a course. In our initial work under NSF Award EIA-0113539 (ITR/SY+PE: Improving the Education of Software Testers), we expected to be able to bring testing students to mastery of some techniques through practice with a broad set of examples. Padmanabhan [113, 132] applied this to domain testing in her Master’s thesis project at Florida Tech, providing students with 18 classroom hours of instruction, including lecture, outlines of ways to solve problems, many practice exercises, and exams. Students learned exactly what they were taught: they could solve new problems similar to those solved in class, doing the same things well, in almost exactly the same ways. However, in their final exam, we included a slightly more complicated problem that required them to apply their knowledge in a way that had been described in lecture but not specifically practiced. They all failed to notice problems that should have been obvious to them and that required only a small stretch from their previous drills. This result was a primary motivator for us to redesign the testing course from a lecture course heavy with stories, examples, and practice to one that more heavily emphasizes more complex activities.
- Assigned readings.
- Assignments, which may come with grading rubrics. These are more complex tasks than in-class activities. Students typically work together over a two-week period.
- Study guide questions. At the start of the course, we give students a list of 100 questions. All midterm and final exam questions come from this pool. We discuss the use and grading of these questions in a separate paper and make that paper available to students. We encourage group study, especially comparison of competing drafts of answers. We even host study sessions in a café off campus (buying cappuccinos for whoever shows up). We encourage students to work through the relevant questions in the guide at each new section of the class. These help self-regulated learners monitor their progress and understanding—and seek additional help as needed. They can focus their studying and appraise the depth and quality of their answers before they write a high-stakes exam. Our experience with our students is consistent with Taraban, Rynearson, & Kerr’s findings—many students seem not to be very effective readers or studiers, nor very strategic in the way they spend their study time—and as a result, they don’t do as well on exams as we believe they could. Our approach gives students time to prepare thoughtful, well-organized, peer-reviewed answers. In turn, this allows us to require thoughtful, well-organized answers on time-limited exams. This maps directly to one of our objectives (tightly focused technical writing). We can also give students complex questions that require time to carefully read and analyze, but that don’t discriminate against students whose first language is not English, because these students have the questions well in advance and can seek guidance on the meaning of a question.
Excerpt from the Proposal’s references:
11. Anderson, L.W., Krathwohl, D.R., Airasian, P.W., Cruikshank, K.A., Mayer, R.A., Pintrich, P.R., Raths, J. and Wittrock, M.C. A Taxonomy for Learning, Teaching & Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives. Longman, New York, 2001.
19. Bligh, D.A. What’s the Use of Lectures? Jossey-Bass, San Francisco, 2000.
20. Bloom, B.S. (ed.), Taxonomy of Educational Objectives: Book 1, Cognitive Domain. Longman, New York, 1956.
23. Brooks, L.R. Non-analytic concept formation and memory for instances. in Rosch, E. and Lloyd, B.B. eds. Cognition and categorization, Erlbaum, Hillsdale, NJ, 1978, 169-211.
25. Clark, R.C. and Mayer, R.E. e-Learning and the Science of Instruction. Jossey-Bass/Pfeiffer, San Francisco, CA, 2003.
29. Dannenberg, R.P. Just-In-Time Lectures, undated.
30. Day, J.A. and Foley, J. Enhancing the classroom learning experience with Web lectures: A quasi-experiment. GVU Technical Report GVU-05-30, 2005.
31. Day, J.A. and Foley, J., Evaluating Web Lectures: A Case Study from HCI. in CHI ’06 (Extended Abstracts on Human Factors in Computing Systems), (Quebec, Canada, 2006), ACM Press, 195-200. Retrieved January 4, 2007, from http://www3.cc.gatech.edu/grads/d/Jason.Day/documents/er703-day.pdf
32. Day, J.A., Foley, J., Groeneweg, R. and Van Der Mast, C., Enhancing the classroom learning experience with Web lectures. in International Conference of Computers in Education, (Singapore, 2005), 638-641. Retrieved January 4, 2007, from http://www3.cc.gatech.edu/grads/d/Jason.Day/documents/ICCE2005_Day_Short.pdf
33. Day, J.A., Foley, J., Groeneweg, R. and Van Der Mast, C. Enhancing the classroom learning experience with Web lectures. GVU Technical Report GVU-04-18, 2004.
34. Day, J.A. and Foley, J.D. Evaluating a web lecture intervention in a human-computer interaction course. IEEE Transactions on Education, 49 (4). 420-431. Retrieved December 31, 2006.
43. Felder, R.M. and Silverman, L.K. Learning and teaching styles in engineering education. Engineering Education, 78 (7). 674-681.
44. Fintan, C., Lecturelets: web based Java enabled lectures. in Proceedings of the 5th annual SIGCSE/SIGCUE ITiCSE Conference on Innovation and technology in computer science education, ( Helsinki, Finland, 2000), 5-8.
45. Firstman, A. A comparison of traditional and television lectures as a means of instruction in biology at a community college., ERIC, 1983.
47. Forsyth, D.R. The Professor’s Guide to Teaching: Psychological Principles and Practices. American Psychological Association, Washington, D.C., 2003.
48. Gagne, E.D., Yekovich, C.W. and Yekovich, F.R. The Cognitive Psychology of School Learning. HarperCollins, New York, 1994.
51. Hamer, L. A folkloristic approach to understanding teachers as storytellers. International Journal of Qualitative Studies in Education, 12 (4). 363-380, from http://ejournals.ebsco.com/direct.asp?ArticleID=NLAW20N8B16TQKHDEECM.
52. Haskell, R.E. Transfer of learning: Cognition, instruction, and reasoning. Academic Press, San Diego, 2001.
53. He, L., Gupta, A., White, S.A. and Grudin, J. Corporate Deployment of On-demand Video: Usage, Benefits, and Lessons, Microsoft Research, Redmond, WA, 1998, 12.
60. Kaner, C., Assessment in the software testing course. in Workshop on the Teaching of Software Testing (WTST), (Melbourne, FL, 2003), from http://www.kaner.com/pdfs/AssessmentTestingCourse.pdf
113. Kaner, C. and Padmanabhan, S., Practice and transfer of learning in the teaching of software testing. in Conference on Software Engineering Education & Training, (Dublin, 2007).
115. Kaufman, J.C. and Bristol, A.S. When Allport met Freud: Using anecdotes in the teaching of Psychology. Teaching of Psychology, 28 (1). 44-46.
118. Lave, J. and Wenger, E. Situated Learning: Legitimate Peripheral Participation. Cambridge University Press, Cambridge, England, 1991.
119. Lesh, R.A. and Lamon, S.J. (eds.). Assessment of authentic performance in school mathematics. AAAS Press, Washington, DC, 1992.
121. Maki, W.S. and Maki, R.H. Multimedia comprehension skill predicts differential outcomes of web-based and lecture courses. Journal of Experimental Psychology: Applied, 8 (2). 85-98.
122. MaKinster, J.G., Barab, S.A. and Keating, T.M. Design and implementation of an on-line professional development community: A project-based learning approach in a graduate seminar. Electronic Journal of Science Education, 2001.
126. Medin, D. and Schaffer, M.M. Context theory of classification learning. Psychological Review, 85. 207-238.
129. National Panel Report. Greater Expectations: A New Vision for Learning as a Nation Goes to College, Association of American Colleges and Universities, Washington, D.C., 2002.
132. Padmanabhan, S. Domain Testing: Divide & Conquer Department of Computer Sciences, Florida Institute of Technology, Melbourne, FL, 2004.
134. Paris, S.G. Why learner-centered assessment is better than high-stakes testing. in Lambert, N.M. and McCombs, B.L. eds. How Students Learn: Reforming Schools Through Learner-Centered Education, American Psychological Association, Washington, DC, 1998.
136. Project Kaleidoscope. Project Kaleidoscope Report on Reports: Recommendations for Action in support of Undergraduate Science, Technology, Engineering, and Mathematics. Investing in Human Potential: Science and Engineering at the Crossroads, Washington, D.C., 2002. Retrieved January 16, 2006, from http://www.pkal.org/documents/RecommentdationsForActionInSupportOfSTEM.cfm.
138. Rossman, M.H. Successful online teaching using an asynchronous learner discussion forum. Journal of Asynchronous Learning Networks, 3 (2), from http://www.sloan-c.org/publications/jaln/v3n2/v3n2_rossman.asp.
139. Saba, F. Distance education theory, methodology, and epistemology: A pragmatic paradigm. in Moore, M.G. and Anderson, W.G. eds. Handbook of Distance Education, Lawrence Erlbaum Associates, Mahwah, New Jersey, 2003, 3-20.
141. Savery, J.R. and Duffy, T.M. Problem Based Learning: An Instructional Model and Its Constructivist Framework, Indiana University, Bloomington, IN, 2001.
146. Smith, D.J. Wanted: A New Psychology of Exemplars. Canadian Journal of Psychology, 59 (1). 47-55.
148. Sveshnikov, A.A. Problems in Probability Theory, Mathematical Statistics and Theory of Random Functions. Saunders, Philadelphia, 1968.
149. Taraban, R., Rynearson, K. and Kerr, M. College students’ academic performance and self-reports of comprehension strategy use. Reading Psychology, 21 (4). 283-308.
153. Van Merrienboer, J.J.G. Training complex cognitive skills: A four-component instructional design model for technical training. Educational Technology Publications, Englewood Cliffs, NJ, 1997.
158. Williams, R.G. and Ware, J.E. An extended visit with Dr. Fox: Validity of student satisfaction with instruction ratings after repeated exposures to a lecturer. American Educational Research Journal, 14 (4). 449-457.