In a 2008 article for User Experience (Volume 7, Issue 1), Jerrod Larson described how comparisons of enterprise software (e.g., portals, expense software, learning management systems) conducted by major IT research firms may not always adequately account for usability or user experience. The net result, Larson suggested, is that these firms might recommend software that is feature-rich yet usability-poor. Purchasing companies, confident in the quality of these recommendations, may be unwittingly procuring enterprise software that is unnecessarily difficult to use, potentially reducing or eliminating the desired benefits.
At the same time, others have recognized that usability tends to be an Achilles heel for enterprise software. As K. Vinh provocatively put it in 2007, “Enterprise software, it can hardly be debated, is pretty bad stuff.” In 2009, K. Finstad, writing in Interactions, charted a more cautious course, outlining some of the reasons behind the usability problems that tend to plague enterprise software and ways in which these problems can be addressed over time. While we encourage the creation of better quality enterprise software, we also see an immediate need for low-overhead approaches for evaluating existing enterprise software on a usability dimension prior to purchase. Put simply, if a given piece of software is not easy to use, companies should know that before purchasing and deploying it. In this article, we present a simple yet relatively rigorous approach companies can use to gather this information.
The Problem with Current Approaches
Experience in trying to improve the user experience of enterprise software systems has taught us that early intervention is the key to success. For software developed in-house, this requires the integration of user experience (UX) activities from the start of an iterative design process. But how can a UX team add value when executives pick software systems “off-the-shelf?” Since many organizations prefer to buy and configure enterprise software rather than to build custom applications, we argue that UX teams must extend the role of usability evaluation to the software procurement process itself. This is particularly true if the usability of a software product is critical to its success, as in the case of self-service systems destined to be used by large numbers of employees with minimal training.
The burden for evaluating and comparing enterprise software products falls on the purchasing company. With regard to usability, this is often done (if at all) through activities such as reviewing vendor documentation, reading the results of usability studies the vendor has conducted (if available), receiving product demonstrations, and the like. Unfortunately, it can be difficult to compare products via these methods and tools. A more robust approach may be to evaluate products in-house using traditional usability evaluation techniques, but this requires purchasing companies to install and configure the prospective software and then conduct usability studies on those products. In the majority of cases this is untenable because such activities tend to be costly, time-consuming, and complex.
Adapting a Familiar Practice
To make a case for participation during the procurement phase, UX teams need low-overhead methods that address genuine concerns and help procurement teams make informed decisions. One way to do this is by adopting quick testing methods to assess the out-of-the-box usability of demonstration products provided by vendors, and offering these tests as a way to manage the business risk of purchasing an inappropriate or low-quality product.
In this article we describe one such testing method, building off the notion of “User-Centered Procurement” advanced by Lif, Göransson, and Sandbäck in 2005. The core idea is to establish a common and collaborative way by which a company can undertake these pre-purchase software evaluations, one that does not require a usability lab or the purchasing company to have the prospective software in-house. In short, the method marries usability evaluation techniques with the traditional software demo.
Software vendors routinely conduct demos for prospective clients, but these demos tend to be problematic for evaluating the usability of products because software vendors themselves walk clients through their software. Having a software expert demonstrate a product does not provide an indication of how difficult it will be for new or novice users, as the demo always follows the “happy path” to task completion. Ultimately what is important for a usability evaluation is not how easy it is for the vendor to use their own software; instead, it is how easy-to-use the software will be for target users at the customer site.
The method we advance appropriates vendor software demonstrations for informal usability testing. It is similar in many ways to any usability test, differing primarily in its context and participants. Further, our method taps into these existing demonstration activities without significantly increasing the time or resources required to conduct them. The method also helps the procuring organization clarify its core requirements and product vision, and can reduce the likelihood that a great product demo will satisfy a need that exists only in the heads of a few business executives. Lastly, many software vendors can benefit from this activity too: the focus on user tasks and goals, and the experience of seeing a customer navigate their product, helps them better understand their customers’ needs and design more usable products.
What follows are the outlines of the testing method. We acknowledge that many usability practitioners may be conducting a similar process presently (indeed, the authors of this article came together because we were each doing something similar and wanted to learn from each other’s experiences); our goal is to simply put this method into writing and allow it to be further refined by other practitioners.
The Method
First, the method assumes that the procurement team will include a usability representative who will be able to guide the rest of the procurement team through this process. The method then proceeds as follows:
1. Plan the evaluation and agree on the goals
The first step is to get the procurement team on the same page with respect to the evaluation process. As part of that team, usability professionals should facilitate an initial discussion in which the team outlines the goals for the demo software they will receive (i.e., decides what they want to learn) and how the information uncovered will be used in the evaluation process. For example, it is rare that the results of a usability test alone will drive an entire procurement decision, so the rationale, value, and outputs of the evaluation must be clear to all stakeholders. It is also essential that the usability professionals associated with this effort understand the broader motivations, product strategy, and critical decision factors that drive other members of the team, because there are likely to be some battles that cannot be won.
Regarding the weight usability should have in the overall procurement decision, a simple calculus may be all that is needed: if the usability of the application is crucial (as in a self-service application intended for a lay audience), the study results should carry substantial weight. If the usability of the application is not crucial (as in an application meant for expert users who will receive extensive training), the weight of the study results can be minimal.
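To make this weighting concrete, here is a minimal sketch of how a team might fold a usability score into an overall vendor score. The criteria, weights, and numbers are purely hypothetical; the method prescribes no particular formula, only that the weight given to usability reflect how crucial it is to the product’s success.

```python
# A minimal sketch of the weighting calculus described above.
# All criteria, weights, and scores are hypothetical examples.
# Weights sum to 1.0; scores use the same 1-5 scale as the
# evaluation form introduced in step 3.
criteria_weights = {"usability": 0.4, "features": 0.3, "cost": 0.3}

vendor_scores = {
    "Vendor A": {"usability": 4.2, "features": 3.5, "cost": 4.0},
    "Vendor B": {"usability": 2.8, "features": 4.6, "cost": 4.5},
}

for vendor, scores in vendor_scores.items():
    overall = sum(criteria_weights[c] * scores[c] for c in criteria_weights)
    print(f"{vendor}: weighted overall score = {overall:.2f}")
```

For an expert-facing system with extensive training, the same sketch applies with the usability weight dialed down accordingly.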
2. Describe and prioritize key tasks or use cases
As in almost any usability evaluation, this method requires defining use cases or key tasks, tied to the important roles the software will be asked to support. For expense reporting software, this might include a task to “add an expense to a previously created expense report.” It is important that these tasks, while not exhaustive, represent the gamut of high-priority tasks the software will be expected to accommodate, including those that will be executed most frequently. Prioritizing the tasks keeps demos manageable (time-wise) and focused on the most important features. These tasks should be co-created by members of the procurement team.
These key tasks may have already been produced as part of research or feasibility activities. Even where there is no prior work to draw on, it should be possible for project stakeholders to identify a handful of user goals that the system must support. (If these cannot be articulated we would argue that it is too early to engage in the procurement process anyway.)
3. Create an evaluation form
The team must create an evaluation form. This form should list the key tasks, with a space next to each task for observers to record a score of some type (see Figure 1 for an example).
In addition to the quantitative data captured through the score, the form should include areas for capturing qualitative data. In Figure 1 we present a very simple form with one scale and one comment field per task. Teams may choose to use more complex forms—for example, multiple questions per task—but should make sure the form does not become unwieldy or unusable.
Also, there may be a large degree of subjectivity in rating something like “success” on a five-point scale, as imagined in Figure 1. As part of this step it is important to agree on guidelines for rating consistently or, perhaps, to make it clear that the ratings of the usability professional will carry more weight than those of observers without much usability experience.
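For teams that would rather generate the form than lay it out by hand, the sketch below produces a simple spreadsheet-ready version of the form described above: one five-point success rating and one comment field per task. The task list is a hypothetical expense-reporting example; a real form would use the prioritized tasks from step 2.

```python
import csv

# A minimal sketch of the evaluation form described above: one
# five-point success rating and one comment field per task.
# The tasks below are hypothetical expense-reporting examples.
key_tasks = [
    "Create a new expense report",
    "Add an expense to a previously created expense report",
    "Submit an expense report for approval",
]

with open("evaluation_form.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Task", "Success rating (1-5)", "Comments"])
    for task in key_tasks:
        # Blank cells for each observer to fill in during the demo.
        writer.writerow([task, "", ""])
```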
4. Assign test participants
The team needs to decide who will actually test the vendors’ software. In a pinch this person could be a member of the procurement team, but ideally this person would, as in any usability study, be representative of a user in the primary role for the software under evaluation. If the system under evaluation must satisfy tasks for people in multiple user groups or roles, the team may want to consider the typical workflow and cover key tasks for each, in the order in which they would typically occur. Each participant should be given a list of the tasks to review ahead of time to ensure they are realistic and understood, but obviously participants should not be provided access to the software prior to the evaluation.
5. Distribute the evaluation form
Distribute the evaluation form to all members of the team who will be observing the tests and rating the performance of the software. We recommend that all members of the procurement team be included in this process (and use the evaluation form) to increase awareness of, and involvement in, usability issues.
There will likely need to be one form per piece of software under evaluation, and possibly one per user role as well.
Additionally, we have found a short (half-hour) orientation session helpful for the procurement team and the participants to get comfortable with the evaluation form and the study prior to the arrival of the vendor. This orientation smooths out wrinkles that would otherwise disrupt the actual sessions.
6. Set up a meeting with each vendor; advise the vendor of the intent
The team must then set up meetings with the vendors. Each meeting should include a vendor representative, a working copy of the software under evaluation, members of the procurement team, and the participant(s). The meeting should take place in person or through conferencing software, since it is crucial that everyone be able to view and interact with the vendor’s offering.
When setting up the meeting with the vendor, it is important to let them know that the team will be attempting to use the vendor’s software in real time during the meeting. (The team and the vendor may wish to have a separate sales-oriented demonstration as well; this is fine, but it should be separate from the usability evaluation.) At this time the team should also provide the vendor with the list of key tasks under evaluation. The vendor will likely be familiar with conducting demos, but ceding control to a prospective client may be new to them. They may also need time to set up an environment that allows the team to do this.
It may be that a vendor is unable (or unwilling) to readily set up its product for this activity; although unfortunate for the sake of the usability evaluation, this discovery may give the procurement team some indication of the complexity of that software’s setup, which might itself be an important finding.
7. Conduct the meeting, and observe the participant
In the meeting, ask each participant to run through each task using the vendor’s software. Each of the procurement team members should rate, on their own evaluation form, the performance of the software based on the experience of the participant. Be sure to remind any vendor representatives to act as observers rather than support specialists during the actual evaluation. It may also make sense to ask the participant to think aloud as he or she negotiates the various tasks.
Repeat steps 5-7 for each product under review.
8. Analyze the results
After all the evaluations have been conducted, the team should meet to aggregate, analyze, and discuss the results recorded on the forms. How a team approaches the analysis largely depends on the data the team decided to collect. Still, we can offer some advice. In our own evaluations, we have presented the mean success rating per task per system across all observers. The average across all tasks per system can also be presented, but this makes sense only if all tasks are equally important. All qualitative data (for example, any comments recorded by the observers) could be made available to all members of the team, and the usability professional could choose to highlight patterns in comments across observers.
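As an illustration of this aggregation, the short sketch below computes the mean success rating per task per system across observers; the ratings shown are hypothetical.

```python
from collections import defaultdict
from statistics import mean

# A minimal sketch of the aggregation described above. Each record
# is (system, task, one observer's 1-5 success rating); the data
# below are hypothetical.
ratings = [
    ("Vendor A", "Add expense to existing report", 4),
    ("Vendor A", "Add expense to existing report", 5),
    ("Vendor A", "Submit report for approval", 3),
    ("Vendor B", "Add expense to existing report", 2),
    ("Vendor B", "Add expense to existing report", 3),
    ("Vendor B", "Submit report for approval", 4),
]

# Group observer ratings by (system, task), then report the mean.
by_system_task = defaultdict(list)
for system, task, score in ratings:
    by_system_task[(system, task)].append(score)

for (system, task), scores in sorted(by_system_task.items()):
    print(f"{system} | {task}: mean = {mean(scores):.1f} (n={len(scores)})")
```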
9. Factor the results into the other considerations for procurement
The results of the study should at least indicate which product or products seem most able to support the key tasks with the least end-user difficulty. These results should then be factored into the purchasing decision alongside the procurement team’s other considerations, weighted as agreed in step one.
Conclusions
Our approach seeks to extend the use of common usability evaluation methods into areas traditionally dominated by IT and procurement departments. Purists may balk at the somewhat quick-and-dirty approach, but the method is lightweight by design and is used in an environment where usability professionals have not often had much involvement. Certainly it makes sense to use more robust usability methods when appropriate or when given the opportunity, yet this method is a far better alternative than no involvement at all. In many large organizations it is a significant achievement for UX teams to get even a small stake in the procurement process, and we have found that engaging in activities such as the one we’ve outlined has influenced software purchase decisions. Perhaps more importantly, by using these methods we have helped identify the user experience as a critical success factor for any software product, whether it is developed in-house or purchased from a vendor.