The following is a Guest post by Hans Buwalda, CTO of LogiGear, and is a part of a guest blog post exchange with LogiGear.
An important, but often underestimated, part of software development is testing. Testing is by definition challenging: if bugs were easy to find, they wouldn’t be there. A tester has to think outside the box to find the bugs that others have missed. And in many cases, understanding the business domain of an application is more crucial for effective testing than detailed knowledge of the application itself.
Automation adds another dimension of complexity. In Agile and DevOps processes, tests need to be both easily repeatable and effective, making automation a necessity. At the same time, the automated tests should be robust enough to spot bugs creeping in as a system evolves.
The testing pyramid, proposed by Mike Cohn in his book “Succeeding with Agile,” positions the UI as the smallest part of testing. Most of the testing should focus on the unit and service or component levels. I agree that this is a good strategy: it makes it easier to design tests well, and automation at the unit or component/service level tends to be easier and more stable. However, from what I observe in projects, UI testing remains an important part. In the web world, for example, techniques like Ajax and frameworks like AngularJS allow designers to create interesting and highly interactive user experiences, in which many parts of the application under test come together. I therefore like to leave some more room at the top of the picture.
Even for UI automation, the technical side is fairly straightforward. Tools like Selenium, Coded UI and our own TestArchitect can take care of interfacing with a UI, typically by emulating the interaction of an end user with the application under test. For Selenium in particular, there is also a variety of frameworks available to make scripting easier. Tests through the UI are often mixed with non-UI operations as well, such as service calls, command-line commands and SQL queries.
The problems with UI tests come with maintenance. A small change in a UI design or UI behavior can knock out a large number of the automated tests that interact with it. Common causes are interface elements that can no longer be found, or unexpected waiting times for the UI to respond to operations. UI automation is then avoided for the wrong reason: the inability to make it work well. Let me describe a couple of steps you can take to alleviate these problems.
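One common mitigation for the “element can no longer be found” problem (my own illustration, not a technique named in this article) is to keep all locators in a single page-level class, so a UI change is fixed in one place rather than in every test. The page class, locator strings and the `FakeDriver` stand-in below are hypothetical; a real test would pass in a Selenium-style driver instead.

```python
# Sketch: centralizing UI locators so a UI change is fixed in one place.
# LoginPage, its locator strings and FakeDriver are hypothetical examples.

class LoginPage:
    # Single source of truth for locators: if the UI changes,
    # only this mapping needs updating, not every test.
    LOCATORS = {
        "user": "id=username",
        "password": "id=password",
        "submit": "id=login-button",
    }

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type(self.LOCATORS["user"], user)
        self.driver.type(self.LOCATORS["password"], password)
        self.driver.click(self.LOCATORS["submit"])


class FakeDriver:
    """Stand-in for a real WebDriver, used so the sketch is runnable."""
    def __init__(self):
        self.log = []

    def type(self, locator, text):
        self.log.append(("type", locator, text))

    def click(self, locator):
        self.log.append(("click", locator))


driver = FakeDriver()
LoginPage(driver).login("alice", "secret")
print(len(driver.log))  # three UI interactions recorded
```

If the login button’s identifier changes, only the `LOCATORS` mapping is touched; every test that calls `login` keeps working unchanged.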
A good basis for successful automation is test design. How you design your tests has a big impact on their automation. In other words: successful test automation is not so much a technical challenge as it is a test design challenge. As I see it, there are two major levels that come together in a good test design:
- the overall structure of the tests
- the design of individual test cases
For the structure of tests we follow a modularized approach, similar to how applications are designed. Test cases are organized into “test modules”; think of them as the chapters of a book. We have detailed templates for how to do that, but at the very minimum, try to distinguish between “business tests” and “interaction tests”. The business tests look at the business objects and business flows, hiding any UI (or API) navigation details. Interaction tests look at whether users or other systems can interact with the application under test, and consequently care about UI details. The key goal is to avoid mixing interaction tests and business tests, since the detailed level of interaction tests will make the mixed tests hard to understand and maintain.
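The separation of the two levels can be sketched in a few lines of Python. This is my own minimal illustration, not LogiGear’s implementation: the function names (`create_rental`, `fill_field`, `press_button`) and the car-rental example are hypothetical. The business-level function expresses intent only; the interaction level owns the UI details.

```python
# Sketch of the two levels: a business test expresses intent
# ("create a rental") and never touches UI details; the interaction
# level translates that intent into concrete UI steps.
# All names here are hypothetical illustrations.

def fill_field(ui, field, value):      # interaction level: knows UI details
    ui.append(f"type {value} into {field}")

def press_button(ui, button):          # interaction level: knows UI details
    ui.append(f"click {button}")

def create_rental(ui, customer, car):  # business level: pure business flow
    fill_field(ui, "customer", customer)
    fill_field(ui, "car", car)
    press_button(ui, "save")

steps = []
create_rental(steps, "J. Doe", "compact")
for step in steps:
    print(step)
```

If the save dialog gains an extra confirmation click, only `create_rental`’s interaction-level helpers change; the business tests stay readable and stable.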
Once the test modules have been determined, they can be developed whenever it is convenient. Typically, business tests can be developed early, known as “shift left”, because they depend more on business rules and transactions than on how an application implements them. Interaction tests can be created once a team is defining the UIs and APIs.
Another effective step is to use a domain language approach, like BDD or actions (keywords). In BDD (Behavior Driven Development), scenarios are written in a format that comes close to natural language. “Actions” are predefined operations and checks that describe the steps to be taken in a test. In our “Action Based Testing” approach they’re written in a spreadsheet format, making them easier to read and maintain than tests scripted in a programming language. Since I noticed that actions are more concrete and easier to manage than sentences, I created a tool to flexibly convert back and forth between the two formats, combining the best of both worlds. You can read more about it in my Techwell article on BDD and actions.
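The core of a keyword-driven approach is a small interpreter that maps each spreadsheet row, an action name plus arguments, onto a registered operation. The sketch below is my own minimal illustration of that idea, not TestArchitect’s engine; the action names (`enter`, `check`) and the rows are invented, and a real tool would dispatch to UI automation code rather than a dictionary.

```python
# Minimal keyword-driven interpreter: each row is an action name
# plus arguments, as it would appear in a spreadsheet.
# Action names and the sample rows are hypothetical illustrations.

ACTIONS = {}

def action(name):
    """Decorator registering a function as a named action."""
    def register(fn):
        ACTIONS[name] = fn
        return fn
    return register

@action("enter")
def enter(state, field, value):
    # A real implementation would type into a UI field here.
    state[field] = value

@action("check")
def check(state, field, expected):
    # A real implementation would read the value back from the UI.
    assert state[field] == expected, f"{field}: {state[field]!r} != {expected!r}"

rows = [
    ["enter", "first name", "John"],
    ["enter", "last name", "Doe"],
    ["check", "first name", "John"],
]

state = {}
for name, *args in rows:      # interpret the "spreadsheet" row by row
    ACTIONS[name](state, *args)
print("all rows passed")
```

The point of the design is that testers compose rows without programming, while automation engineers implement each action exactly once.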
This picture gives an overview of how testing is organized in ABT, with the test modules and the actions in them. Note the difference between interaction tests and business tests. The automation focuses exclusively on automating the actions.
Another major factor in automation success is known as “testability”: your application should facilitate testing as a key feature. Agile teams are particularly well suited to achieve this, since product owners, developers, QA people and automation engineers cooperate. Aspects of testability are:
- overall design of the application, with clear components, tiers, services etc.
- specific features, like API hooks or “ready” properties to help with timing, clear and unique identifying properties for UI elements, and white-box access to data and events (before they’re even displayed)
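A “ready” property is worth a brief illustration, since timing problems were named above as a common cause of broken UI tests. The sketch below is an assumption-laden example of my own: `AppStub` and its `ready()` method stand in for a hypothetical testability hook an application might expose. The test polls the hook instead of sleeping a fixed amount, which is both faster and less flaky.

```python
import time

# Sketch of using a testability hook: the application exposes a
# "ready" flag, and the automation polls it instead of using a
# fixed sleep. AppStub and its ready() method are hypothetical.

def wait_until_ready(app, timeout=5.0, interval=0.05):
    """Poll app.ready() until it is True or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if app.ready():
            return True
        time.sleep(interval)
    return False

class AppStub:
    """Stand-in application that becomes ready after a few polls."""
    def __init__(self, ready_after_calls):
        self.calls = 0
        self.ready_after_calls = ready_after_calls

    def ready(self):
        self.calls += 1
        return self.calls >= self.ready_after_calls

app = AppStub(ready_after_calls=3)
print(wait_until_ready(app))  # True once the flag flips
```

Without such a hook, tests fall back on fixed sleeps, which are either too short (flaky failures) or too long (slow suites).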
Automation can be challenging, in particular via the UI. However, it cannot be avoided, nor should it be avoided because it is “difficult”. Cooperation between all participants in a project can lead to good results that are stable and maintainable in the long term, and efficient to achieve.