There are significant methodological and philosophical differences between ethnography and laboratory-based processes in the product development cycle, and these differences often set users of the two data collection methods at odds with one another. Frequently, these debates occur less within the user research community than among the people using or responding to the findings and solutions presented. Whenever the argument comes up, each side endlessly debates methodological purity, ownership, and expertise. One side fears a lack of scientific rigor; the other worries that a contextually detached environment will yield irrelevant results. Both make valid points, but the debate draws attention away from the fundamental question of product design: does the product work, in the broadest sense of the term? Can the people for whom the product is designed use it in the contexts where it will actually be used?
To defuse the debate and get back to this primary question requires an approach that blends the rigor of laboratory-based processes with the contextual richness of ethnography. This article focuses on the rationale for using a blended method for testing and the basic principles of such a method.
Why We Bridge Methodological Boundaries
In the iterative product design process, what typically shapes the design are findings from in-lab usability testing. However, while the data are reliable in a controlled situation, they may not be valid in a real-world context. It is possible to obtain perfect reliability with no validity when testing; perfect validity, on the other hand, would assure perfect reliability, because every test observation would yield the complete truth. Unfortunately, perfection does not exist in the real world, so the reliable data recorded during laboratory testing must be supported with valid data that are best gathered through field research.
Consider RCA’s release of the eBook in 2000. The product tested very well, but no one asked where, when, and how people actually read. Consequently, the UI did not match users’ real-world needs. Had it been tested in context, the company might have avoided millions of dollars in losses.
To ensure validity, an anthropologist or ethnographer can spend time with potential users to understand how environment and culture shape what they do. When these observations inform the design process, the result is product innovation and improved design.
At this point, however, the field expert is frequently removed, and the product moves forward with little cross-functional interaction. The UI designers and usability researchers take responsibility for ensuring that the product meets predetermined standards of usability. While scientific rigor is a noble goal, the history of science includes countless examples of hypothesis testing and discovery that would fail to satisfy modern rules of scientific method, including James Lind’s discovery of the cure for scurvy and Henri Becquerel’s discovery of radioactivity. Arguably, both scientists conducted bad science from the standpoint of sample size and environmental control, but that does not negate the value to the millions of people who have benefited from their discoveries. Similarly, by allowing more testing in the field, we can gain insight into a product’s usability that might go undiscovered in a strictly controlled environment.
If we fail to account for the context in which the product will be used, we may overlook the real problem. A product may conform to every aspect of anthropometrics, ergonomics, and established principles of interface design. It may meet every requirement and have every feature potential users asked for. It may even have improved participants’ response time by a second or two in a lab study. But what if someone using the product is chest deep in mud while bullets fly overhead? Suddenly, something that was well designed and well tested becomes useless because no one accounted for shaking hands, awkward positions, and the decline in cognitive performance that accompanies physical and psychological stress. Admittedly, some conditions can be simulated in a lab. However, it would not be cost effective or ethical to recreate the heat, dirt, fear, and general discomfort described in the example above. Furthermore, users in their natural environment have less need to provide answers that would placate the researcher. Context, and how it affects performance, is of supreme importance, and knowing the right question to ask and the right action to measure becomes central to accurately assessing usability.
Field Testing: Getting Dirty
So what should be done? Designers should detach themselves from controlled environments and from the belief, often held by people outside the user research and design departments, that the job is to produce the same sort of data that would be used in designing, for example, the structural integrity of the space shuttle. The reality is that most of what we design depends more on context than on squeezing out another one percent of efficiency.
Consequently, for field usability to work, the first step is being honest about what we can do and being able to articulate it to the other groups within the business. A willingness and ability to adapt to new methodologies is one of the principal requirements for testing in the field, and it is one of the primary considerations when deciding which team members should be directly involved. I point to a colleague at the Jet Propulsion Laboratory; while he is a brilliant engineer and designer, field testing is simply too uncomfortable for him, though he recognizes its value.
The process begins with identifying the various contexts in which a product or UI will be put to use. This may involve taking the product into a participant’s home and having both the participant and other members of their social network use it with all the external stresses going on around them. It may mean performing tasks as bullets fly overhead and sleep deprivation sets in. The point is to define the settings where use will take place, catalog the stresses and distractions present, and then learn how these factors affect cognition and performance.
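One lightweight way to keep such a catalog consistent across settings is to record each context as a structured entry. The Python sketch below is purely illustrative; the field names and the e-reader examples are assumptions made for this article, not part of any published protocol, and a shared spreadsheet would serve the same purpose.

```python
from collections import Counter
from dataclasses import dataclass, field
from typing import List

@dataclass
class UsageContext:
    """One real-world setting in which the product will be used."""
    setting: str                                            # e.g., "crowded subway car"
    stressors: List[str] = field(default_factory=list)      # noise, vibration, poor light, fear
    distractions: List[str] = field(default_factory=list)   # announcements, other people, phone calls
    tasks_observed: List[str] = field(default_factory=list)

# Hypothetical catalog for an e-reader study
contexts = [
    UsageContext(
        setting="subway commute",
        stressors=["vibration", "one-handed use", "variable lighting"],
        distractions=["announcements", "crowding"],
        tasks_observed=["open book", "turn page", "adjust brightness"],
    ),
    UsageContext(
        setting="reading in bed",
        stressors=["dim light", "awkward posture"],
        distractions=["partner asleep nearby"],
        tasks_observed=["turn page", "set bookmark"],
    ),
]

# Tally which stressors recur across contexts to decide what to measure or simulate first
stressor_counts = Counter(s for c in contexts for s in c.stressors)
print(stressor_counts.most_common())
```

Tallying which stressors recur across contexts gives a quick sense of what is worth measuring, or attempting to simulate, first.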
For example, if you’re testing an electronic reading device such as the Kindle, it would make sense to test it on the subway or when people are lying in bed, because those are the situations in which many people read. Does reading position in bed influence the necessary screen brightness or button size? Do people physically shrink in on themselves when using public transportation, and how does this affect use?
Products should be tested under the real conditions in which they will be used, and external variables should be included in the final analysis and recommendations. It is not possible to document every variable and context in which a product or application will be used, but the bulk of these situations will be uncovered in the field. As with most research, the range of methods is defined by product cost and impact. For example, testing a simple entertainment-oriented phone application retailing for $2.99 will require less depth than designing an interface for a complex medical software package retailing for $50,000.
During the actual testing, designers do not forgo measurement of success and failure, time on task, click path, and perhaps even physiological responses such as body temperature and heart rate. When testing in the field, we strive to retain the same level of scientific rigor we would have in a lab while also understanding how context shapes usability. Make no mistake, this is not easy, and it requires a significant amount of time. However, the idea that fieldwork reduces rigor is, frankly, a myth. The problem with most fieldwork, and the perceived lack of rigor, is frequently a lack of training and/or preparation. Science is a systematic enterprise of gathering knowledge about the world and organizing and condensing that knowledge into testable laws and theories. It includes careful observation, experiment, measurement, and replication. To be considered a science, a body of knowledge must stand up to repeated testing by independent observers, which is precisely what is being advocated here.
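Keeping field measurements in the same shape as lab measurements is what makes the two comparable. The Python sketch below is a hypothetical illustration of that idea; the record fields and the CSV output are assumptions chosen for the example, not a prescribed toolset.

```python
import csv
import time
from dataclasses import dataclass, asdict
from typing import List, Optional

@dataclass
class FieldTrial:
    """One task attempt observed in the field, mirroring the measures taken in the lab."""
    participant_id: str
    task: str
    context: str                           # setting drawn from the context catalog
    success: bool
    time_on_task_s: float
    click_path: List[str]
    heart_rate_bpm: Optional[int] = None   # physiological measures are optional
    notes: str = ""                        # contextual events: interruptions, posture changes, etc.

def save_trials(trials: List[FieldTrial], path: str) -> None:
    """Write trials to a CSV so field and lab sessions can be analyzed side by side."""
    rows = [asdict(t) for t in trials]
    for r in rows:
        r["click_path"] = " > ".join(r["click_path"])   # flatten the path for a spreadsheet
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)

# Example: one timed trial observed on the subway
start = time.monotonic()
# ... participant performs the task ...
trial = FieldTrial(
    participant_id="P07", task="adjust brightness", context="subway commute",
    success=True, time_on_task_s=time.monotonic() - start,
    click_path=["home", "settings", "display", "brightness"],
    heart_rate_bpm=92, notes="train braking mid-task",
)
save_trials([trial], "field_trials.csv")
```

Because every trial carries its context and any disruptive events alongside the usual lab measures, differences in time on task or error rate can later be examined against the conditions under which they occurred.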
Once the initial test is done, the user research team should leave the product with the participant for a period of time (for example, two weeks for consumer goods, longer for more complex or infrequently used products). During this time, participants are asked to document everything they can about their interactions with the product and what is going on in the environment. The researchers then return, measure learnability, and gather feedback on the participants’ experience with the product or application and on how their behavior changed as a result of using it. For example, the research team tries to determine whether using the product has changed how users sit or how much time they spend in front of their computers. There are times when a product is perfectly usable, but users reject it because it is too disruptive to their normal activities.
Conclusion
A product or UI design’s usability evaluation is only fully relevant when it is taken outside the lab into the real-world context where the product will be used. Some of what occurs in the real world can be replicated in the lab, but in the end, it is still a staged environment, devoid of the complexities of real contexts; social interactions and cultural practices are often lost. Rather than separating exploratory and testing processes into two discrete activities that have minimal influence on each other, efforts can be maximized by employing a mixed field method that bridges the gap between ethnographic and laboratory approaches. Innovation and great design will follow.