One of the more interesting aspects of my recent move over to Socialtext is that I’m dealing with a product that is used in many corporations and, as I recently discovered, within the U.S. federal government. Because of these customers, I’ve come face to face with an interesting new aspect of everyday testing that, so far, I had heard about, and heard of other people doing, but had never done myself.
For those who are not familiar with the whole idea of “adaptive software”, it refers to any range of software enhancements that make it possible for users to interact with a computer in ways they might not otherwise be physically able to. To start off this little series of articles, I’ll be talking about a screen reader application that is part of our testing suite. That tool is called JAWS.
JAWS is an interesting application in that it is designed to take any text that would appear on a screen and read it aloud to the user. More than just reading out the text on a page, it also announces the elements that are appearing on the screen. Think of the challenge that this might represent. Imagine, if you can, what it would be like to interact with your computer if you were sight impaired or, more dramatically, completely blind. To experiment with this, I make the following suggestions:
1. Blindfold yourself so that you cannot see your screen at all.
2. Grab hold of your mouse or keyboard and try to open up a common application that many of us might take for granted (Google Chrome to access Facebook, as an example).
Just with that, what would you do? How do you know where to find the application? How can you be sure you are able to open the application and navigate? With JAWS, the challenge is broken down into screen elements that are “spoken”. There are certain hot keys that can be set, and those hot keys can correspond to frequently used applications. Within those applications, the screen reader then “speaks” to you and tells you what element it is able to interact with at any given moment.
Once you get to the element you are interested in, hitting the Tab key or Enter key will focus the screen reader on the next element to be accessed. If this next element is a large block of text, you can then have the contents “read to you” by one of the voice synthesis engines used in the application. On the surface, this sounds cool, and it is. Having said that, I’m also finding that, as a sighted user, these can be frustrating tools. This is why I’m doing my best to turn off my “sighted biases” and experience how these tools behave when I don’t have the luxury of my sight to work with. The blindfold test really draws a distinction as to how challenging this testing can be, not just from the perspective of “does the app work?” but “what experience is delivered in the process?”.
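To make the “speaks each element” idea concrete, here is a deliberately simplified sketch of how a screen reader might decide what to announce as focus moves through a page. JAWS itself follows the far richer W3C accessible-name computation; this toy parser (built on Python’s standard `html.parser`) only checks a few common label sources (`aria-label`, `alt`, `value`, or the element’s text) and is purely illustrative:

```python
# Simplified illustration of what a screen reader might announce for
# each focusable element. The real accessible-name computation (and
# JAWS's behavior) is much more involved; this is only a sketch.
from html.parser import HTMLParser

class FocusOrderSketch(HTMLParser):
    """Collects a rough 'spoken' label for common focusable elements."""
    FOCUSABLE = {"a", "button", "input", "select", "textarea"}

    def __init__(self):
        super().__init__()
        self.announcements = []
        self._open = None  # (tag, label-so-far) while inside a focusable tag

    def handle_starttag(self, tag, attrs):
        if tag not in self.FOCUSABLE:
            return
        attrs = dict(attrs)
        # Prefer an explicit accessible label if the author provided one.
        label = attrs.get("aria-label") or attrs.get("alt") or attrs.get("value") or ""
        if tag == "input":  # void element: announce immediately
            self.announcements.append(f"input: {label or '(unlabelled)'}")
        else:
            self._open = (tag, label)

    def handle_data(self, data):
        # Fall back to visible text when no explicit label exists.
        if self._open and not self._open[1]:
            self._open = (self._open[0], data.strip())

    def handle_endtag(self, tag):
        if self._open and self._open[0] == tag:
            self.announcements.append(f"{tag}: {self._open[1] or '(unlabelled)'}")
            self._open = None

page = '<a href="/home">Home</a><button aria-label="Close dialog"></button><input value="Search">'
parser = FocusOrderSketch()
parser.feed(page)
print(parser.announcements)  # ['a: Home', 'button: Close dialog', 'input: Search']
```

Notice that an element with no label at all would be announced as “(unlabelled)”, which is roughly the dead-end a blindfolded user hits when markup lacks accessible names.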
When I work with these tools in a “sighted” environment, it’s easy to get impatient and overlook the various clues given, but put a blindfold on, and those clues become very important. It still feels very awkward, since JAWS reads everything on the screen. Every punctuation mark is called out. Parenthetical statements (which, I have to admit, are part of my writing style) suddenly become very tedious. I ran a timed test of my blog posts being spoken by JAWS, and wow, maybe I need to practice a little more brevity.
Sometimes all it takes is a change in perception, or a change in the way we view the world, or in this case, when we can’t view the world, to really see how difficult it can be to accomplish what we ultimately see as simple tasks. They’re simple because we have adapted over time to understand them. However, all it takes is a change in reality (or a blindfold) to help one realize that our neat and ordered world can be thrown into complete chaos, and the tools that we have at our disposal, while they may work, might be tremendously foreign and intimidating. I’ve found this new approach of dealing with software from a non-sighted perspective to be fascinating, and I will most likely be doing a lot more of it in the coming weeks and months.