On Testing and Quality Engineering

The other day I read an article on how Quality Engineering is something beyond testing. In the course of reading it, it struck me that the author had a totally different understanding of those two terms.

Here then, is my response…



On Testing and Quality Engineering

A common view of testing, perhaps what some consider the “real” or “correct” view, is that testing validates behavior. Tests “pass” or “fail” based on expectations, and the point of testing is to confirm those expectations.

Introducing the concept of “Quality” into this conception of testing brings in other problems. The question of “Quality” is often tied to a “voice of authority.” For some people that “authority” is the near-legendary Jerry Weinberg: “Quality is value to some person.” For others the “authority” is Joseph Juran: “fitness for use.”

How do we know about the software we are working on? What is it that gives us the touch points to be able to measure this?

There are the classic measures used by advocates of testing as validation or pass/fail:

• percentage of code coverage;
• proportion of function coverage;
• percentage of automated vs. manual tests;
• number of test cases run;
• number of passing test cases;
• number of failing test cases;
• number of bugs found or fixed.

For some organizations, these may shed some light on testing or on the perceived progress of testing. But they say nothing about the software itself or the quality of the software being tested, in spite of the claims made by some people.
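For illustration (a minimal sketch of mine, not anything from the article), notice how easily such numbers fall out of a test run, and how little they say:

```python
# A minimal sketch: the classic pass/fail metrics are trivially
# computable from a test run, which is part of their appeal.
# Test names and outcomes are made up for illustration.
results = {
    "test_login": "pass",
    "test_export": "fail",
    "test_search": "pass",
}

total = len(results)
passed = sum(1 for outcome in results.values() if outcome == "pass")

print(f"test cases run: {total}")
print(f"passing: {passed} ({passed / total:.0%})")
print(f"failing: {total - passed}")
```

Every one of these numbers can look healthy while the software still fails the people using it.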

One response, a common one, is that the question of the “quality of the software” is not a concern of “testing,” that it is a concern for “quality engineering.” Thus, testing is independent of the concerns of overall quality.

My view of this? 

Hogwash.
Rubbish. 

When people ask me what testing is, my working definition is:

Software testing is a systematic evaluation of the behavior of a piece of software,
based on some model.

By using models that are relevant to the project, epic or story, we can select appropriate methods and techniques in place of relying on organizational comfort-zones. If one model we use is “conformance to documented requirements” we exercise the software one way. If we are interested in aspects of performance or load capacity, we’ll exercise the software in another way.
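As a concrete illustration (again my sketch, not the author's), here is the same hypothetical function exercised under two different models:

```python
# A minimal sketch: one function, two models of testing.
# The function and the latency budget are hypothetical.
import time

def search(catalog, term):
    """Stand-in for the software under test."""
    return [item for item in catalog if term in item]

# Model 1: conformance to documented requirements.
def test_search_returns_all_matching_items():
    catalog = ["red shirt", "blue shirt", "red hat"]
    assert search(catalog, "red") == ["red shirt", "red hat"]

# Model 2: performance / load capacity.
def test_search_stays_within_latency_budget():
    catalog = [f"item {i}" for i in range(100_000)]
    start = time.perf_counter()
    search(catalog, "item 99999")
    assert time.perf_counter() - start < 0.5  # illustrative budget
```

Same software, two models, two very different kinds of evidence.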

There is no rule limiting a tester to a single model. Most software projects will need multiple models to be considered in testing, and there are some concepts that are important to making this work.

What does this mean?

Good testing takes disciplined, thoughtful work. Precisely following the steps you were given is not testing; it is following a script. Testing takes consideration beyond the simple, straightforward path.

As for “documented requirements,” they serve as information points, possibly starting points, for meaningful testing.

Good testing requires communication. Real communication is not documents being emailed back and forth. Communication is bi-directional. It is not a lecture or a monologue. Good testing requires conversation to help make sure all parties are in alignment.

Good testing looks at the reason behind the project, the change that is intended to be seen. Good testing looks to understand the impact of that change on the system itself and on the people using the software.

Good testing looks at these reasons and purposes for the changes and compares them to the team and company purpose and values. Are they in alignment with the mission, purpose and core values of the organization? Good testing includes a willingness to report variances in these fundamental considerations beyond requirements and code.

Good testing can exercise the design before a single line of code is written. Good testing can help search out implied or undocumented requirements to catch variances before design is finalized.

Good testing can help product owners, designers and developers in demonstrating the impact of changes on people who will be working with the software. Good testing can help build consensus within the team as to the very behavior of the software.

Good testing can navigate from function-level testing to broader aspects of testing by following multiple roles within the application and evaluating what people using or impacted by the change will experience.

Good testing can help bring the voice of the customer, internal and external, to the conversation when nothing or no one else does.

Good testing does not assure anything. Good testing challenges assurances. It investigates possibilities and asks questions about what is discovered.

Good testing challenges assumptions and presumptions. It looks for ways in which those assumptions and presumptions are not valid or are not appropriate in the project being worked on.

Good testing serves the stakeholders of the project.

What some people describe as “quality engineering” is, in my experience, part of good software testing.

 

Book review – Threat Modeling

This post is quite long; it certainly came out longer than I intended.


A while ago, just before I started writing this blog, we got to that point in the year where we should update our threat model (one of the nice things about working in a large company is that there are some mandatory procedures, and for some of them there is actually someone responsible for verifying they get done). By what seems to me sheer coincidence, I was the most suitable person to lead the threat modeling: I was the only one who had participated in the process last year besides our dev team leader, who is awfully occupied (so my being able to muster some free time trumped his superior knowledge and experience). Full of motivation to dive into this new and shiny field, I started out only to find that I didn’t have the slightest clue where to begin, even though all I had to do was update an existing document with the changes we had made over the past year. So I started looking for information and instructions: I read the fourth chapter of this book that was rolling around the office, read an internal wiki page that was supposed to help me, somehow, and then sat down to talk with the guy who had composed the original document I was updating (who, despite having moved to another team, remained within arm’s reach and was happy to help). His knowledge and expertise helped me tremendously, and I was able to actually start updating our model and even add some improvements to the document (as the document was the product of his self-education, there were some things that could have been done better, and unlike him, I had help from the first moment I dove into the matter). Anyway, once we both found out that I was enjoying the process, maybe more than I should, he lent me his copy of Adam Shostack’s Threat Modeling: Designing for Security.
It took me a while, but I read this book front to back (excluding some of the appendices), and in short: it’s a great book. It’s a must-read for anyone who wishes to learn a bit about the world of threat modeling, and a good source of knowledge for those who wish to learn about software security in general. On top of that, it seems to my inexperienced eye to be useful even for those who are already familiar with threat modeling. Finally, even if you believe that your product has nothing to do with software security, I think you’ll find reading this book worthwhile.
So, what’s in it? 

  • The book has five parts, each containing several chapters. The first part, which was the most important for me, being completely inexperienced in the field, is labeled “Getting Started”. The first sentence in the book (excluding the thirty-odd introductory pages explaining what the reader can expect to gain, who the intended audience is, and how to use the book; still, it’s the first sentence of the first chapter, so there you have it) is “Everyone can learn to threat model, and what’s more, everyone should“. With such encouragement, how can we not start threat modeling immediately?
    And indeed we do. Very quickly we are drawing a diagram of our software, and discussing how to add information to the diagram without making it unreadable or overly detailed. Once we have our diagram (in our case, a Data-Flow Diagram, or DFD), we go on to find threats by playing “Elevation of Privilege”, a card game that was developed along with this book and is freely available for download (you can find it here). Naturally, you can buy a proper deck and save yourself the fuss of printing it. Once we are done playing, the book reminds us that we have four options for dealing with a threat we found (sketched in code after the list):
    • Mitigate the threat – add control mechanisms and defenses that make exploiting the weakness harder, limit the amount of damage an attack can do via this avenue, and so on, reducing the risk enough that one of the other options can be chosen comfortably.
    • Eliminate the threat – delete the relevant piece of code or block the vulnerable functionality; do something that makes sure this vulnerability is simply not there anymore.
    • Transfer the risk – you can transfer the problem to someone or something else. For instance, you can decide to leave authentication to the OS, or just notify the user that “the software does not deal with this type of risk” (a great example is Mailinator, a temporary email service, whose FAQ page states its privacy policy as “There is no privacy”). Obviously, such decisions should be business decisions, and you should be careful when deciding to transfer a risk, as you can only do so to a limited extent.
    • Accept the risk – sometimes it is acceptable to acknowledge a risk and respond with “I’ll take my chances”. This should not be your go-to approach, but if the risk is improbable enough (e.g. the server drowning if the Netherlands dam breaks and the sea floods everything), or if the impact is low enough compared to the cost of a fix (e.g. physically protecting your server could cost a small fortune to maintain), then just living with the risk might be acceptable.
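To make the four options concrete, here is a toy sketch, mine and not the book’s, of how threats and the chosen responses might be recorded:

```python
# Hypothetical sketch (not from the book): threats and the four
# response strategies as simple Python data structures.
from dataclasses import dataclass
from enum import Enum, auto

class Response(Enum):
    MITIGATE = auto()   # add controls that make exploitation harder
    ELIMINATE = auto()  # remove the vulnerable code or block the feature
    TRANSFER = auto()   # hand the risk to the OS, a vendor, or the user
    ACCEPT = auto()     # consciously live with the risk

@dataclass
class Threat:
    component: str   # the data-flow diagram element it applies to
    description: str
    response: Response
    rationale: str

threat_log = [
    Threat("login service", "credentials replayed over plain HTTP",
           Response.MITIGATE, "enforce TLS on every endpoint"),
    Threat("debug endpoint", "leaks internal state to any caller",
           Response.ELIMINATE, "strip the endpoint from release builds"),
]
```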
    Done with that? Great. That’s all, folks; let’s go home. The rest of the book is just elaboration and expansion of what was done up to this point. The book then continues with a question I learned to like – “what’s your threat model?”. This short question can teach us quite a lot, despite having a short answer as well. Answers can be “a single attacker with a standard laptop”, “a disgruntled employee”, “cyber-criminals” or “the NSA, Mossad, MI5 and every other intelligence agency”. Once you know what threat you want to protect against, you can make good choices that will help defend against it. As the book mentions, a common answer to this question is “huh?”, and that answer provides a lot of information as well: specifically, it tells us that we need to start thinking about and identifying the threats we are concerned about. At this point, we get into more detail and discuss approaches to threat modeling. The book covers both unstructured and structured approaches (with a focus on Data-Flow Diagrams, or DFDs, which seem to be the author’s strong recommendation). While reading this, we encounter useful terminology and concepts such as assets, stepping stones and trust boundaries.
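Since DFDs and trust boundaries keep coming up, here is a tiny sketch, again mine rather than the book’s, of representing diagram elements and spotting boundary crossings in code:

```python
# Hypothetical sketch: the building blocks of a data-flow diagram,
# enough to note where data crosses a trust boundary.
from dataclasses import dataclass

@dataclass(frozen=True)
class Element:
    name: str
    kind: str        # "process", "data store", "external entity"
    trust_zone: str  # e.g. "internet", "internal network"

@dataclass(frozen=True)
class DataFlow:
    source: Element
    target: Element
    payload: str

    def crosses_trust_boundary(self) -> bool:
        # Flows that cross zones deserve the most scrutiny.
        return self.source.trust_zone != self.target.trust_zone

browser = Element("browser", "external entity", "internet")
api = Element("api server", "process", "internal network")

flow = DataFlow(browser, api, "login credentials")
print(flow.crosses_trust_boundary())  # True: a place to look for threats
```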
    • The 2nd section deals extensively with the analysis part or, as it is named, “Finding Threats”. It starts by focusing on the STRIDE model (on which I already wrote a bit), then goes on to discuss other methods such as attack trees and attack libraries. Finally, just as a reminder, the section shifts its focus to privacy threats and some tools that may help us notice them.
      The quality of the different chapters varies greatly – while the STRIDE chapter is well organized and very readable, with several examples of how to apply each part of the model (see the toy sketch below), the one about attack trees left me not much better off than when I first began. Perhaps that is simply proof by example of the claim at the end of the chapter: “It’s very hard to create attack trees”. The chapter on attack libraries is somewhere in the middle – it’s very clear, but a bit boring, since using lists such as the OWASP Top 10 or CAPEC requires very little explanation. The book reminds us that working with such a detailed list can turn out to be quite a bit of work.
      The chapter about privacy is great – it gave me the feeling that I now have some basic mental tools to address the issue, but also that there is much more to learn by following the hints there (the one I liked best is the LINDDUN acronym).
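As promised, here is a toy sketch, mine and not the book’s, of using the six STRIDE categories as brainstorming prompts against each element of a data-flow diagram:

```python
# Hypothetical sketch: the six STRIDE categories as brainstorming
# prompts, applied to each element of a data-flow diagram.
STRIDE = {
    "Spoofing": "Can someone pretend to be this user or component?",
    "Tampering": "Can its data or code be modified in transit or at rest?",
    "Repudiation": "Can an actor plausibly deny an action they performed?",
    "Information disclosure": "Can data leak to someone unauthorized?",
    "Denial of service": "Can this element be made unavailable?",
    "Elevation of privilege": "Can someone gain capabilities they lack?",
}

def brainstorm(dfd_elements):
    """Yield one prompt per (element, category) pair."""
    for element in dfd_elements:
        for category, question in STRIDE.items():
            yield element, category, question

for element, category, question in brainstorm(["login form", "user DB"]):
    print(f"{element} / {category}: {question}")
```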
    • The third part deals with “Managing and addressing threats”, with all five chapters written at about the same quality. The section covers a lot of ground, from managing the threat modeling process, to defense strategies, to tools that can be used while threat modeling.
      In this section I want to dwell on a short anecdote that demonstrates why I find this book so readable: when dealing with approaches to threats, the author mentions the story about Alice and Bob, who were running from a bear. Bob stops to put on his running shoes, justifying this on the grounds that he does not need to run faster than the bear, only faster than Alice. At this point the author breaks the metaphor by stating that in the world of software vulnerabilities, “not only are there multiple bears, but they have conferences in which they discuss techniques for eating both Alice and Bob for lunch”.
      The tenth chapter is the one where I found myself nodding the most, as it dealt with the question “How can I make sure my work is complete and that threats are dealt with?”. Does that sound a bit like testing to you? It sure did to me. In this chapter we find the oft-repeated truth “you can’t test quality in” (which leads to “and neither can you test security in”). Perhaps the thing that got me most was when my gut feeling matched the book’s claim that testers are “close allies” in threat modeling, as they are already trained to think about “what might go wrong?”
    • The fourth section deals with “Threat modeling in technologies and tricky areas”. It is actually a mixture of various subjects that have very little in common except being significant enough to merit a chapter of their own. Among its chapters we can find a “requirements cookbook”, which contains possible examples of security requirements for us to write; common web and cloud threats; some tough questions to consider when dealing with identities and accounts; a content-heavy discussion, with tons of references, about usability; and some basic cryptography concepts and common attacks (the one I liked most was rubber-hose cryptanalysis, which is a fancy term for “beat the cr** out of someone until they tell you what you want to know”). We also get another reminder about privacy, this time about how encryption can help with it.
    • The fifth and last section is more of a summary. It has three chapters, two of which are somewhat intangible, speaking of ideas such as experimental approaches to threat modeling (a game called FlipIt, for example) or of notions like flow theory and cognitive load. The most concrete chapter in this section is about bringing threat modeling into your organization, with practical advice on “selling” the idea to management or to the engineers who’ll be doing it with you, and on dealing with common objections. And if you are not yet convinced that threat modeling is very much like software testing, and the reminder in this chapter was not enough, note that one of the objections mentioned is “No one would ever do that”.
      My impression was that the intangible chapters are meant for threat modeling experts looking for ideas to stimulate their thoughts; for me, most of it went way over my head.
    At the end of the book there are five appendices, some short, some a bit longer:

    • “Helpful tools” – This appendix contains some answers to “what is your threat model” and some of the common assets a software project might have. 
    • Threat trees – This is an extension of chapter 4 (part of the 2nd section) and is really taxing to read. 
    • Lists of potential attackers – along with several generic lists, we also get the idea of using attacker personas, with some examples. It’s fun to read, too.
    • Rules for the card game “Elevation of Privilege” that was linked above.
    • Four “case studies”. They are not real, but are interesting to read. 
    That’s about it, and it definitely came out longer than I expected, but writing this post gave me an opportunity to go over the book again, and this by itself was totally worth it. 
    Also, if you are interested, or think you might be interested, the first sixty-odd pages of the book are free to read on Google Books.
    And most important, as the book states: Go Threat model, and make things more secure.

Selecting Platform Configuration Tests

I’ve been developing a GUI acceptance test suite to increase the speed of specific types of feedback about our software releases. In addition to my local environment, I’ve been using Sauce Labs to extend our platform coverage (mostly by browsers and operating systems) and to speed up our tests by running more tests in parallel. This […]
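For flavor, here is a minimal sketch of that general approach (my illustration, not the author’s actual suite; the URL, credentials and capability values are placeholders), assuming Selenium’s Python bindings pointed at a remote grid such as Sauce Labs:

```python
# Minimal sketch: run the same check across a matrix of platforms
# via a remote Selenium grid. Credentials and URLs are placeholders.
from selenium import webdriver

PLATFORMS = [
    {"browserName": "chrome",  "platform": "Windows 10"},
    {"browserName": "firefox", "platform": "Linux"},
    {"browserName": "safari",  "platform": "macOS 10.15"},
]

def run_smoke_check(capabilities):
    driver = webdriver.Remote(
        command_executor="https://USERNAME:ACCESS_KEY@ondemand.saucelabs.com/wd/hub",
        desired_capabilities=capabilities,
    )
    try:
        driver.get("https://example.com")
        assert "Example" in driver.title
    finally:
        driver.quit()

# Run sequentially here; in practice each platform would go to a
# separate parallel worker to get feedback faster.
for caps in PLATFORMS:
    run_smoke_check(caps)
```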

Bug-Free Software? Go For It!

This post is a prettied-up version of the notes for my talk at the second Cambridge Exploratory Workshop on Testing last weekend. The topic for the workshop was When Testing Went Wrong.

Cold fusion is a type of nuclear reaction that, if it were possible, would provide a cheap, clean and safe form of energy. In 1989 two scientists, Fleischmann and Pons, made worldwide headlines when they claimed to have generated cold fusion in a test tube in their lab. Unfortunately, subsequent attempts to replicate their results failed, other scientists started to publicly doubt the experimental methodology, and the evidence presented was eventually debunked.

Cold fusion is a bit of a scientific joke. Which means that if you are a researcher in that field – now also called Low Energy Nuclear Reactions – you are likely to have a credibility problem before you even start. And, further, that kind of credibility issue will put many off from even starting. Which is a shame, because the potential payoff in this area is high and the cost of it being wrong is relatively low.

In a fascinating article in Aeon magazine, Huw Price, a philosophy professor at Cambridge University, writes about how, even if unlikely, cold fusion is not theoretically impossible, and that the apparent orthodox scientific opinion on it is not based on science:

Cold fusion is tainted, and the taint is contagious … So the subject is stuck in a place that is largely inaccessible to reason – a reputation trap, we might call it.

This is echoed by Harry Collins in Are We All Scientific Experts Now?:

There is always enough room to interpret data in more than one way … We need to know motivations as much as we need to know results if we are to understand science.

Science is not pure. It is not driven only by evidence. Collins observes that, particularly at the cutting edge of research, scientists can easily split into camps. These camps agree on the result, but don’t agree on what it means. When this is the case, when there is room for more than one interpretation, then – since scientists are human – it’s natural for there to be human biases and prejudices at play. And those factors, those frailties, those foibles include things like reputation, preconception and peer pressure.

You might have seen Bob Marshall blogging and tweeting about whether we really need testers, and using the hashtag #NoTesting? He is provocative:

So, do we have to test, despite the customer being unkeen to pay for it? Despite it adding little or no value from the customer’s point of view?

And he provokes, for example, reactions like this from Albert Gareev:

Recently I’ve been observing some new silly ideas about testing – on how to do as less of it as possible or not do it at all. I don’t bother posting links here – reading those isn’t worth the time.

To me, there can be value in wondering what Marshall is getting at (which Gareev also does). Engaging with someone with an apparently fundamental opposition to you can be elucidating. A contrary position can make us see our own position in a new light and it’s healthy to do that sometimes.

There was an interesting (to most testers anyway, I’d hope) headline out of Princeton earlier this year: Computer scientists launch campaign to guarantee bug-free software. What’s your gut reaction to that? Something like this, perhaps? You can’t get rid of bugs … and it’s stupid to even think you might be able to!

But read behind the headline only a little way and you will find that the project is trying to write formal (mathematical logical) specifications for software building blocks, such as a C compiler or an OS, and then chain together such components using consistent specifications.

Doesn’t a formal spec just shift the specification problem? It still has to be written, right? Perhaps, but a formal language can be reasoned about; proofs can be created for aspects of it; other tools can be brought to bear on it in a way that they cannot with user stories or other (natural language) devices for specification.
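To make “proofs can be created” concrete, here is a toy machine-checked specification (my illustration, written in Lean 4; it is not the Princeton project’s actual tooling):

```lean
-- Toy illustration: a formal specification that the proof assistant
-- itself checks, unlike a natural-language requirement.
def double (n : Nat) : Nat := n + n

-- The spec: double agrees with multiplication by two, for every input.
theorem double_spec (n : Nat) : double n = 2 * n := by
  unfold double
  omega  -- linear-arithmetic decision procedure closes the goal
```

Chaining verified components, as the project proposes, then amounts to making such specifications line up at the interfaces.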

For sure, it’s a non-trivial problem. And perhaps it won’t work. And perhaps it will even prove to have been misguided. And, absolutely, it won’t catch the class of bugs that are due to the specification being for something other than what the users actually want. But should that mean that we shouldn’t pursue this line? A line that has (relative to all the research being done) low cost and potentially high benefit.

James Bach might put this kind of effort into the Analytical School. For example:

The Analytical School way is to limit themselves to laboratory contexts where the numbers apply or trying to change projects to fit the assumptions of the numbers […] I have a fondness for the Analytical School, but I’m not an academic, so I have to live in a world where I must solve the problems that come to me, rather than the ones I choose.

He and Cem Kaner, founders of the Context-Driven School of testing, have publicly disagreed here. Kaner says:

I think it’s a Bad Idea to alienate, ignore, or marginalize people who do hard work on interesting problems.

Bach speaks to this:

One of the things that concerns Cem is the polarization of the craft. He doesn’t like it, anymore. I suppose he wants more listening to people who have different views about whether there are best practices or not. To me, that’s unwise.

And Kaner responds:

I’ve learned a lot from people who would never associate themselves with context-driven testing.

And, in fact, he actively engages folk outside of the context-driven community, such as Rex Black, whom many would regard as a Factory Schooler.

When thought leaders like Bach and Kaner, both of whom contribute so much to the community and craft of testing, say these kinds of things, it’s wise to listen. They clearly fall into two different camps on this topic, but they would both, I’m sure, encourage us to think critically about what we are hearing from them, and to take our own view, for ourselves.

So, to the question that CEWT #2 is posing: when does testing go wrong? Maybe in ways like this:

• When we look inwards too much: if we stay in our own bubble we risk lack of exposure to useful information, to things that can help us make connections.
• When we don’t apply critical thinking: we should strive to understand our sources and the degree of confidence we have in them, and in which areas we think that confidence is justified.
• When we don’t consider human factors: we should ask ourselves why something is being claimed.
• When we create reputation traps: we should be wary of closing off topics for others. Sure, we may legitimately have nothing to learn; but others might.

Like the scientists mentioned up top, testers are human, and we have made, do make, and will continue to make these kinds of mistakes. Testing will always go wrong, because it is done by people.

But that’s also the good news: people have the capacity to observe this happening and attempt to take action to avoid or mitigate it. I want to give myself a chance of spotting approaches that are appropriate to whatever context I find myself in, and I think (and perhaps this is my bias) that a sensible way to go about this is to be open to information from anywhere.

This doesn’t mean that I have to accept everything, or even that I shouldn’t be sceptical of everything. Nor that I have to give equal time, effort or respect to everything. It doesn’t mean that I can’t take someone else’s word for something, but I challenge myself to consider whether that’s sensible this time, for this thing.

So, if you want to tell me that you’re going to find a way to guarantee bug-free software, I say go for it. But when you do, explain what you did, show me the results you got, and don’t be surprised if I question them and your motivation.

Here are my slides:

Some kick ass blog posts from last week #8

Here’s the new portion of kick-ass blog posts from the last week: a great post by Michael Fritzius on continuous integration and how to prioritize our tests into portions, how to run them separately, in parallel, and much more cool stuff: Automate Your Automation. An interesting set of advice from Simon Knight on how to leave the comfort […]
