Archives

More Than One Way to Confer – CAST2014 (Rhythm of Testing)

On August 30, 2014, in Syndicated, by Association for Software Testing

In August I was in New York City for CAST 2014.  This was the ninth installment of the Conference of the Association for Software Testing.  Like many non-profit conferences, it runs on a mix of staff and volunteers making sure things run smooth…

A tester tribute to Ruby ecosystem (zagorski software tester)

On August 30, 2014, in Syndicated, by Association for Software Testing

Four years ago, I saw my first lines of Ruby, written by Zeljko Filipin in the interactive Ruby shell:

require 'watir-webdriver'
b = Watir::Browser.new
b.goto 'bit.ly/watir-webdriver-demo'

What? A tester is in control of a Firefox browser!? Software testin…

On auditing, standards, and ISO 29119 (Markus Gärtner)

On August 26, 2014, in Syndicated, by Association for Software Testing

Disclaimer: Since I am publishing this on my personal blog, this is my personal view, the view of Markus Gärtner as an individual. I think the first time I came across ISO 29119 discussion was during the Agile Testing Days 2010, and probably also during Stuart Reid’s keynote at EuroSTAR 2010. Remembering back that particular […]

How Not to Standardize Testing (ISO 29119) (James Bach’s Blog)

On August 25, 2014, in Syndicated, by Association for Software Testing

Many years ago I took a management class. One of the exercises we did was on achieving consensus. My group did not reach an agreement because I wouldn’t lower my standards. I wanted to discuss the matter further, but the other guys grew tired of arguing with me and declared “consensus” over my objections. This […]

August, 1914 and Confirmation Bias (Rhythm of Testing)

On August 24, 2014, in Syndicated, by Association for Software Testing

People following me on Twitter know that I regularly, though not always, tweet something about an event that occurred in history that day.  People paying attention have noticed that this month, August, I have paid particular attention to August of…

Something is fishy with this user scenario (zagorski software tester)

On August 23, 2014, in Syndicated, by Association for Software Testing

In this post, I will try to answer what services a software tester could offer. I see a lot of job ads seeking software testers who can code. Nothing wrong with that, but the trap is that you will most likely hire a developer that is very…

A TESTHEAD Wayback Machine Find: ALM Forum Talk From April 2014 (TESTHEAD)

On August 20, 2014, in Syndicated, by Association for Software Testing

I am grateful for a variety of friends and acquaintances in the testing world who keep me alert to things they discover. What makes it even more fun is when I’m alerted to things I did and forgot about, or someone discovers something I didn’t know was …

Takeaways from the Continuous Automated Testing Tutorial at CAST2014 (Testing Bites)

On August 19, 2014, in Syndicated, by Association for Software Testing

I had the opportunity to attend Noah Sussman’s tutorial on Continuous Automated Testing last week as part of CAST2014. It was a great tutorial, with most of the morning spent on the theory and concepts behind continuous automated testing, and the afternoon spent with some hands-on exercises. I think that Noah really understands the problems associated with test automation in an agile environment, and the solutions that he presented in his tutorial show the true depth of his understanding of, and insight into, those problems. Here are some of the main highlights and takeaways that I got from his tutorial at CAST2014.

Key Concepts

  • Design Tools – QA and testing are design tools, and the purpose of software testing is to design systems that are deterministic
  • Efficiency-to-Thoroughness-Trade-Offs – (ETTO) We do not always pick the best option, we pick the one that best meets the immediate needs
  • Ironies of automation – Automation makes things more complex and, while tools can make the process safer or faster, they cannot make things simpler
  • Hawthorne Effect – Productivity (temporarily) goes up when you get a new process or tool
  • Goodhart’s Law – Simplified for the tutorial, the law states that people will game the system. Period.
  • Diseconomies of scale – The opposite of economies of scale, producing services at an increased unit cost
  • Conway’s Law – Simplified for the tutorial, the law states that software looks like your organization
  • Bikeshedding – It’s hard to build a complex, multipart system, but building a bike shed is easy, so organizations tend to spend too much time on trivial items

Automated Monitoring

In 2007, an engineer at Google proposed that sufficiently advanced monitoring is indistinguishable from testing. This statement highlights the relationship between monitoring and testing, and we can certainly use advanced monitoring to help our testing efforts. For example, we can use statsd to instrument production code and gather high-volume data with minimal or no performance impact. The statement also highlights the tension between monitoring and testing. Noah provided a list of four things we should be doing as part of our monitoring efforts:

  • We should monitor all things
  • We should build real-time dashboards
  • We should deploy continuously
  • We should fix production bugs on the fly 

We should do these four things while keeping in mind that monitoring provides visibility into implementation but has nothing to do with design, so it does not replace QA and testing, which are design tools. Monitoring and testing are each necessary, but only when practiced jointly are they sufficient.
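As a rough sketch of why statsd-style instrumentation is so cheap (the metric names and address here are hypothetical, and this hand-rolls the wire format rather than using a statsd client library): statsd metrics are small plain-text datagrams sent fire-and-forget over UDP, so instrumenting production code costs little more than formatting a string.

```python
# Minimal sketch of statsd-style instrumentation (hypothetical metric names).
import socket

def statsd_packet(name, value, metric_type):
    """Format one metric in the statsd wire format: <name>:<value>|<type>."""
    return f"{name}:{value}|{metric_type}".encode()

def send_metric(sock, addr, packet):
    """Fire-and-forget UDP send; a dropped datagram never blocks the caller."""
    try:
        sock.sendto(packet, addr)
    except OSError:
        pass  # monitoring must never break the production code path

counter = statsd_packet("checkout.completed", 1, "c")    # counter increment
timer = statsd_packet("checkout.latency_ms", 320, "ms")  # timing sample

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_metric(sock, ("localhost", 8125), counter)
send_metric(sock, ("localhost", 8125), timer)
```

Because nothing waits for an acknowledgement, the instrumented code keeps its performance profile even when the metrics collector is down.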

The Problem of Abstraction

We use abstractions as a means of hiding information, and we layer abstractions on top of the universe around us in an attempt to make things appear simpler than they really are. Eventually, however, we reach a point of complexity at which, even with multiple layers of abstractions to hide the information from us, our brains cannot process any more information. The Law of Leaky Abstractions, coined by Joel Spolsky, states that all non-trivial abstractions are, to some degree, leaky. When an abstraction leaks in software, the result is bugs, commonly at the points where the abstractions integrate with each other.
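A small illustration, not from the tutorial: Python floats abstract real-number arithmetic, and the binary representation underneath leaks through at exactly the kind of edges where bugs tend to appear.

```python
# Floats look like real numbers, but the binary abstraction leaks:
# 0.1 and 0.2 have no exact base-2 representation.
a = 0.1 + 0.2
print(a == 0.3)              # False: the abstraction leaks
print(abs(a - 0.3) < 1e-9)   # True: calling code must account for the leak
```

Code that forgets the leak and compares floats with `==` is a bug waiting at the integration point between "real numbers" and their implementation.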

Conway’s Game of Life

One approach to addressing the risk introduced into a system by leaky abstractions is to limit complexity. Limiting complexity was something that Noah stressed several times, saying that “systems are safer if people keep the system under control” and “simple rules take you a lot further.” He also said that safety is derived from being able to predict system behavior, and suggested using Conway’s Game of Life as a learning environment, especially Golly. Golly serves as a good learning tool because it can be used like a Read-Eval-Print Loop (REPL): you predict what the output of the next step will be, execute that step, and, since program state is maintained, toggle back and forth to better understand and refine your predictions.
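To make the predict-then-execute loop concrete, here is a minimal sketch of a Game of Life step in plain Python (this is not Golly itself): call step once per prediction, and the live-cell set carries the state between predictions.

```python
# Minimal Game of Life step: live cells are a set of (x, y) coordinates.
from itertools import product

def neighbors(cell):
    """The eight cells surrounding a given cell."""
    x, y = cell
    return {(x + dx, y + dy) for dx, dy in product((-1, 0, 1), repeat=2)
            if (dx, dy) != (0, 0)}

def step(live):
    """Apply Conway's rules once: survive with 2-3 neighbors, birth with 3."""
    counts = {}
    for cell in live:
        for n in neighbors(cell):
            counts[n] = counts.get(n, 0) + 1
    return {c for c, k in counts.items()
            if k == 3 or (k == 2 and c in live)}

blinker = {(0, 0), (1, 0), (2, 0)}  # horizontal bar: predict, then execute
print(sorted(step(blinker)))        # the bar flips vertical: [(1, -1), (1, 0), (1, 1)]
```

The simple rules make each step predictable, yet the global behavior is complex enough to make prediction a genuine exercise, which is exactly what makes it a good learning environment.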

Jenkins for Testing and Monitoring

The hands-on portion of the tutorial walked through setting up Jenkins on your local machine, taking it beyond just a tool used for continuous integration to show how it could be used as a sort of “fancy cron” for scheduling test execution and other tasks. This is especially useful in continuous automated testing as Jenkins can be set up as a Read-Eval-Print loop for use in the rapid development of automation scripts. The tutorial continued by showing how to take advantage of other useful aspects of Jenkins, such as manipulating the URL to access the API documentation, using JSON and manipulating the URL to create real-time dashboards, and using Jenkins as a database for historical records of test executions.
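As a sketch of the URL-manipulation trick (the host, port, and job name here are hypothetical): appending /api/json to nearly any Jenkins page URL returns that page’s data as JSON, and the tree query parameter trims the response to just the fields a real-time dashboard needs.

```python
# Sketch of querying the Jenkins JSON API (hypothetical host and job name).
import json
from urllib.request import urlopen

def api_url(page_url, tree=None):
    """Turn a Jenkins page URL into its JSON API endpoint."""
    url = page_url.rstrip("/") + "/api/json"
    if tree:  # 'tree' limits the response to just the fields requested
        url += "?tree=" + tree
    return url

def last_builds(base, job):
    """Fetch recent build results, e.g. to feed a real-time dashboard."""
    url = api_url(f"{base}/job/{job}", tree="builds[number,result]")
    with urlopen(url) as resp:
        return json.load(resp)["builds"]

print(api_url("http://localhost:8080/job/nightly-tests/"))
# → http://localhost:8080/job/nightly-tests/api/json
```

The same historical build data behind this API is what lets Jenkins double as a database of past test executions.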

Lightweight Automation

Automation in an agile environment is often too cumbersome, too brittle, and created too late in the sprint to be of any benefit to development activities in the current iteration. But it doesn’t have to be. Automated monitoring, REPLs, and less complex automated scripts can be utilized as a lightweight automation “framework” for continuous automated testing without the overhead typically associated with traditional automation techniques. Implementing an automation strategy in this way allows not only for much more agility in our automation efforts, but it also allows us to use automation as a design tool alongside our other testing activities.

Examples of wrong Croatian metrics (zagorski software tester)

On August 16, 2014, in Syndicated, by Association for Software Testing

Taken from iruler.net. This week was CAST2014. I managed to attend the CASTLive session “Looking to social science for help with metrics” by Justin Rohrman, which was aired over USTREAM. Metrics are all around us. From Google PageRank to how tall we are (very …

