Last week, I hosted STP’s Online Performance Summit: three half-days, nine sessions, all live and interactive. As far as I know, this was the first multi-presenter, multi-day, live webinar by testers for testers. The feedback I have seen from attendees and presenters has all been very positive, and personally, I think it went very well. On top of that, I had a whole lot of fun playing “radio talk show host”.
The event sold out early at 100 attendees, with more folks wanting to attend but unable to. Since this was an experiment of sorts in terms of format and delivery, we had committed to the smallest and least expensive level of service from the webinar technology provider, and by the time we realized we had more interest than “seats”, it was simply too late to make the necessary service changes to accommodate more folks. We won’t be making that mistake again for our next online summit, to be held October 11-13 on the topic of “Achieving Business Value with Test Automation”. Keep your eyes on the STP website for more information about that and other future summits.
With all of that context, now to the point of this post. During Eric Proegler’s session (Strategies for Performance Testing Integrated Sub-Systems), a conversation emerged in which it became apparent that many performance testers conduct some kind of testing that involves real users interacting with the system under test while a performance/load/stress test is running, for the purposes of:
- Linking the numbers generated through performance tests to the degree of satisfaction of actual human users.
- Identifying items that human users classify as performance issues that do not appear to be issues based on the numbers alone.
- Convincing stakeholders that the only metric we can collect that can be conclusively linked to user satisfaction with production performance is the percent of users satisfied with performance during production conditions.
The next thing that became apparent was that everyone who engaged in the conversation called this practice something different. So we didn’t do what one might justifiably expect a bunch of testers to do (i.e., have an ugly argument about whose term came first or is more correct, continuing until no decision is made and all goodwill is lost). Instead, we held a contest to name the practice. We invited the speakers and attendees to submit their ideas, from which we’d select a name for the practice. The stakes: the submitter of the winning entry would receive a signed copy of Jerry Weinberg’s book Perfect Software, and the speakers and attendees would use and promote the term.
The speakers and attendees submitted nearly 50 ideas. The speakers voted that list down to their top 4, and then the attendees voted for their favorite. In a very close vote, the winning submission from Philip Nguyen was User Experience Under Load (congratulations Philip!).
So, the next time you, or someone in your organization, proposes putting users on a system that is currently under load, you can say, “Let’s run a User Experience Under Load test to assess end-user satisfaction.”
I strongly encourage you to do that… and to follow it up by mentioning that Philip Nguyen of Citrix Systems coined the phrase. I, for one, will from this moment forward use this name to refer to this practice (a practice I fully support in a wide variety of contexts, and believe is widely underutilized) in my writing, speaking, and consulting, and I will be diligent about attributing the name to Philip.
I hope all of you see the value in what happened here. “Testerland” is so filled with overloaded terms coined for marketing purposes, neither widely understood nor agreed upon, that using such terms often hinders communication more than it helps. In this case, a group of 100 performance testers first agreed on a description of a practice, and then agreed on what to call it — with no agenda other than naming a good practice to make it easier to talk about.
I hope this inspires other groups of diverse individuals to stop debating which existing term is “better” than the others, and instead identify a practice through description (not definition) and then jointly agree on a term for it that makes sense. Personally, I’m tired of watching a bunch of consultants, trainers, and/or vendors get into heated debates over whose term should be *the* term to force down the throats of everyone in “Testerland” because it supports their business agenda. If that reality bugs you as much as it bugs me, take the first step. Adopt User Experience Under Load as the name of this practice, and tell the story of how Philip Nguyen’s proposed name won out in a democratic vote of everyday performance testers who just happened to be in the right webinar at the right time.
Chief Technologist, PerfTestPlus, Inc.
“If you can see it in your mind…
you will find it in your life.”