After reviewing reporting on the difficulties with results in the Iowa Caucus, it is clear to the Association for Software Testing that the software project to tabulate votes electronically was neither well conceived nor well planned. While there were clearly issues in conception, coding, deployment, and testing, it would be a mistake to focus on the project’s execution when its circumstances and context made success unlikely from the start.
“Most software deployments are inherently complex and risky to some extent.”
Software almost always includes complexity that is very hard to master, even before considering the environments it must run in after deployment. When software is deployed to heterogeneous environments – such as the 1,700 mixed-model, mixed-operating-system mobile devices belonging to Iowa precinct captains – the difficulty multiplies. In consumer-facing technology companies, this complexity is a constant challenge that makes even modest software updates risky, and releases of new products riskier still.
“Testing can help to identify and mitigate risks in even the most complex software projects.”
Testing can help identify and mitigate risks in even the most complex software projects during conception, planning, and execution. After code is written, testing can reveal unexpected and potentially harmful failures, enabling diagnosis and repair of code where appropriate. In nearly every context, no amount of testing will guarantee perfectly functioning software in all situations. But with thoughtful preparation and planning, skilled software engineers can model and examine most of the riskiest, most impactful, and most likely to be encountered software states and predict their outcomes.
“Risk assessment and mitigation should be appropriate to the software’s context.”
Software is invisible, so it can be difficult for customers to assess what they are buying and how the project is progressing. The amount of care taken in purchasing software – expressed in time, budget, and flexibility in the schedule – should vary considerably by context. Risk assessment and mitigation should be appropriate to the software’s context. A mobile game malfunctioning is unlikely to harm a person, though it may pose a risk to the financial results of the company that released it.
“When the stakes are higher, the need for high-quality testing is greater.”
When the stakes are higher, the need for high-quality testing is greater. In contexts such as aeronautics, medicine, traffic control, and mass transit, extensive and careful practices around conceiving, building, deploying, and running software are part of the culture. The people working in those contexts know that they have to get the creation and operation of their software right, because the risks include losing money, regulatory certification, personal privacy, time, and/or lives. Such practices include, among many others:
- Procurement/vendor evaluation and contractual acceptance terms
- Requirements examination
- Coding standards
- Change management
- Testing of functionality, security, scalability, recoverability, and other factors
- Failover/Recovery planning
- Observability of the deployed system
- Failure contingencies
It appears that various national and Iowa Democratic officials – the customers of Shadow, the software company that created the vote tabulation software – did not plan this project effectively, or with respect for its context: the first real contest in a very high-stakes election cycle. The budget cited ($70,000), the time allocated (less than three months), and the experience of the developers in creating mobile applications (very little) were each, separately, enough to make success unlikely. In combination, these deficits made success improbable.
While examination of the project’s execution may yield some lessons, the crucial mistake in this situation was the decision to operate the election on unproven software hastily written by an unqualified vendor. The people who built the software almost certainly did their best, given the context they found themselves working in. The people who participated in the decision to commission and rely on this new software should answer for the outcome.
“We ask the public to skeptically examine any and all claims about the reliability, security, dependability, and operation of electronic voting and voting machines.”
The Association for Software Testing urges significant caution when choosing to use software to manage, count, track, or tabulate votes, and/or capture and store voter registration data. Skilled testing and operational planning are essential before deploying any such system, as are secured, reliable paper records for both fallback and auditing. We ask the public to skeptically examine any and all claims about the reliability, security, dependability, and operation of electronic voting and voting machines.