On a whim, with hardly any forethought and with even less expectation that it’d turn up some gold, at the last Cambridge Lean Coffee I asked whether it was possible to quantify user experience and whether any of the testers there had tried.
Some of the things you’d expect were suggested, including A/B testing, wireframes, and putting the thing in front of a handful of tame users. Of those, only A/B testing quantifies in any meaningful, statistical sense (the other approaches described were essentially qualitative), but it has a significant flaw: possessing data about user behaviour without understanding the intent behind it is only half the story.
To the extent that I’d given it any consideration as I walked to the meeting that day, what I’d been wondering about was the possibility of rating a design before it gets in front of users. There are many toolkits that provide sets of components for building a user interface, and it might be reasonable to think that these have been crafted to suit the kinds of use they’re expected to be put to; that they’re consistent across the set, so that the same kinds of gestures have similar functions; that accessibility concerns have been taken into account; and that conventions about, say, when to display tooltips are common.
But are there guidelines about ways to put them together?
Questions. One of the things I get out of this kind of event is questions. Here are a few more:
- Can a design be scored for usability based on some general principles?
- Are there – or could there be – design principles which give a good “average design”?
- If so, how do they differ across applications? (Usability considerations in the design of a paper cup are probably quite different from those in the design of an airport.)
- Like quality, is the concept of good design so context-dependent that there’s, in general, little chance of evaluating it objectively?
- Even if so, are there areas where there is a chance of getting such a measure?
- There are trends in design and – just like trends elsewhere in the world – their visual appeal changes over time. But I’m less bothered about visual appeal and more about usability. How related are the two concepts of visual appeal and usability?
- Is there something like the golden rectangle for designs? (Side question: is there in fact even much evidence for a golden rectangle with appealing aesthetic properties?)
One of the other things I get out of events like this is new references, new perspectives, new vistas. And that’s what turned up while we were talking afterwards. We were trying to get to the bottom of a disagreement we’d had over analogies between software development and the civil engineering and construction industries. That turned into a conversation about design patterns, and eventually resulted in David suggesting that I might be interested in Steve Krug’s book, Rocket Surgery Made Easy.
I borrowed it from Roger at work and read it in two sittings this weekend.
The book doesn’t answer the kinds of questions I’m asking above. It doesn’t even pretend to try. In fact, it’s not a book about usability or design considerations at all. It’s a book about how to test for usability in a cheap, convenient way and then how to decide what to fix and when and how to get buy-in for the whole process. And it is very explicitly not about quantification.
It’s a book written by a practitioner for practitioners, written by someone who’s been there and done it. And done it over. And seen it done well. And seen it done badly. And paid very careful attention to what actions tend to result in what outcomes. And then boiled that down. And experimented with what he found. And repeated that. And then boiled that down. And then looked for only the essential stuff in what’s left. And applied that pragmatically, with a keen eye to the need for context to be at the fore. And turned that into a book.
So: an idle thought led to some interesting thoughts and then to much food for thought. Not what I expected, but very, very welcome. File this one next to Lessons Learned in Software Testing.