Without web analytics data, you are missing part of the picture when it comes to planning and analyzing usability testing of a website. While testing typically provides deep insight into the behavior of a small sample of people, analytics complements this by providing data about what large groups of users have done. Prior to launching a usability study, analytics can help identify areas to explore and prioritize your research questions. After the study, analytics can help better understand findings from a usability test and supply evidence to support them.
This article provides an overview of how to approach combining web analytics with user testing. Although the techniques discussed are applicable regardless of the web analytics tool you choose, our examples are based on the widely used and free Google Analytics.
Using Analytics to Design a Study
Prioritizing Tasks with Page Views
In formative usability testing, analytics data can help you think of tasks that may uncover problems. When prioritizing which tasks make it into your test, the number of page views can help determine how important a page is to users (see Figure 1). If a page gets a lot of traffic, that might be a signal that it is an important page to test. If you think a page should be getting more page views than it currently does, you can create tasks that test whether users are able to find the page and what they do once they get there.
For example, in a project for a university, we noticed that the number of page views for the registrar’s office section seemed low. This page provided a link to the tool students used to register for classes. The page wasn’t in the website’s primary navigation, and analytics showed that pages in the primary navigation received far more page views than pages that could not be reached directly from it. While it’s possible that the registrar’s office page was getting an appropriate number of page views based on how often students needed to access it, the situation prompted questions that could be adapted into tasks and questions for a user test, such as how students were actually getting to the registrar’s office page and what other steps they were taking to register for classes.
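If you want to do this kind of triage systematically, a short script over an exported page-views report can rank pages and flag the ones you expected to be more prominent. The sketch below is a minimal illustration only; the file name, column names, and page paths are assumptions, not a real Google Analytics export format.

```python
import pandas as pd

# Hypothetical export of a page-views report; the file name and column
# names ("page", "pageviews") are assumptions for illustration.
pages = pd.read_csv("pageviews_export.csv")

# Rank pages by traffic to see which ones matter most to visitors.
ranked = pages.sort_values("pageviews", ascending=False).reset_index(drop=True)
print(ranked.head(20))

# Pages we expected to be important (made-up paths); flag any that fall
# outside the top 20 as candidates for findability tasks in the test.
expected_important = ["/registrar", "/registrar/how-to-register"]
top_pages = set(ranked.head(20)["page"])
for page in expected_important:
    if page not in top_pages:
        print(f"{page}: less traffic than expected, consider a findability task")
```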
Using Outliers to Identify Problem Areas
Sometimes in web analytics, examining high-level measures known as “behavioral metrics” can raise questions about user behavior. The metrics worth looking at are:
- Time on Page: How long do users stay on a page relative to other pages? Do any pages have an unusually long or short time on page compared to the rest of the site?
- Bounce Rate: How many people are viewing just one page and leaving before going to any other pages? Are there any pages that have an unusually high or low bounce rate in comparison to other pages?
- Exit Rate: From which pages are most people exiting the site?
For these three metrics you should be looking for outliers—values for a given page that are radically different from values for other pages. You can explore what’s behind these measurements by constructing tasks that lead participants to interact with the pages in question. There may be good reasons for an outlier measurement. For example, a navigational “hub,” a page with no real content that mainly leads users to other pages, will probably have a short time on page since users have no reason to stay there.
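One way to make “radically different” concrete is to compute a standard score for each metric and flag pages that sit far from the site-wide average. The sketch below assumes a hypothetical page-level export; the file name, column names, and the two-standard-deviation threshold are all assumptions to adjust for your own data.

```python
import pandas as pd

# Hypothetical page-level export; file and column names are assumptions.
df = pd.read_csv("behavioral_metrics.csv")  # page, avg_time_on_page, bounce_rate, exit_rate

for metric in ["avg_time_on_page", "bounce_rate", "exit_rate"]:
    # z-score: how many standard deviations a page sits from the site-wide mean.
    z = (df[metric] - df[metric].mean()) / df[metric].std()
    outliers = df.loc[z.abs() > 2, ["page", metric]]
    print(f"\nPossible outliers for {metric}:")
    print(outliers.to_string(index=False))
```

A page flagged this way is not necessarily broken; as noted above, a navigational hub will naturally show a short time on page, so treat the output as a set of questions to explore in testing rather than a list of problems.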
Time-on-page data can also help you come up with a benchmark for your first summative test. You can find the pages involved in completing a task and add up the average time users spend on them. Clearly, these are not perfect data because users may be on those pages for any number of reasons, but you at least have a starting point based on actual measurement.
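As a minimal sketch of that arithmetic, assuming the same hypothetical export used above, you can sum the average time on page for the pages a successful task passes through; the page paths here are made up for illustration.

```python
import pandas as pd

# Same hypothetical export as above; column names are assumptions.
df = pd.read_csv("behavioral_metrics.csv")

# Pages a participant passes through to complete the task (made-up paths).
task_pages = ["/courses", "/courses/search", "/registrar", "/registrar/register"]

# Summing average time on page gives a rough, imperfect time-on-task benchmark.
benchmark_seconds = df.loc[df["page"].isin(task_pages), "avg_time_on_page"].sum()
print(f"Starting time-on-task benchmark: about {benchmark_seconds:.0f} seconds")
```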
Examining User Paths
Data about user paths can help you construct more realistic tasks. A useful report is the Landing Pages report in Google Analytics, which shows where users entered the site. The experience of visiting a website is different for a user who landed on the homepage than it is for a user who landed on an internal page. Therefore, depending on the goal of the usability test, you may wish to have users start on a common non-homepage landing page for a task.
Analytics data also tell you which pages users came from immediately before reaching a page, and where they went upon leaving a page (in Google Analytics, this is the Navigation Summary report). With this report you can assess how many users are actually following the paths you expect them to take, and identify unexpected ways in which they move through the site.
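If your analytics tool lets you export session-level page-view data (the format below, one row per page view with a session ID, timestamp, and page path, is an assumption rather than a specific Google Analytics export), you can approximate this kind of report yourself: for a page of interest, count which pages visitors arrived from and which they moved to next.

```python
import pandas as pd

# Hypothetical session-level export: one row per page view, with assumed
# columns session_id, timestamp, and page.
hits = pd.read_csv("pageviews_by_session.csv", parse_dates=["timestamp"])
hits = hits.sort_values(["session_id", "timestamp"])

# Previous and next page within the same session.
hits["prev_page"] = hits.groupby("session_id")["page"].shift(1)
hits["next_page"] = hits.groupby("session_id")["page"].shift(-1)

target = "/our-philosophy"  # hypothetical page of interest
views = hits[hits["page"] == target]

# NaN means the session started (no previous page) or ended (no next page) here.
print("Where visitors came from:")
print(views["prev_page"].value_counts(dropna=False).head(10))
print("\nWhere visitors went next:")
print(views["next_page"].value_counts(dropna=False).head(10))
```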
Usability testing can help you better understand why people go from one page to another. We recently conducted usability testing for a childcare provider with almost a thousand locations in the U.S. According to the analytics data, users researching childcare options tended to use the main site navigation rather than the subnavigation of the individual childcare centers. To understand why the subnavigation was underutilized, we planned tasks that involved open-ended exploration, as well as finding specific pieces of information. Most participants did not identify the main navigation and center-specific subnavigation as two separate systems. Also, the items in the main navigation seemed to correspond better with users’ information needs. Combining the results from web analytics and user testing allowed us to make a stronger case and justify design changes.
Using Analytics Goals
If you have goals configured in your analytics tool (that is, if you have identified key user actions on the website that contribute to your company’s success), those goals form an excellent basis for deciding what to include in a user test. Goals that involve a sequence of steps, such as a shopping cart checkout or a multi-page registration, give you a starting point for estimating a completion-rate benchmark because web analytics can measure how many users start the goal and how many complete it.
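A minimal sketch of that estimate, using made-up step names and counts rather than real funnel data, looks like this: divide the number of sessions that complete the final step by the number that start, and report the drop-off between consecutive steps to see where test tasks should focus.

```python
# Hypothetical counts of sessions reaching each step of a multi-step goal;
# the step names and numbers are made up for illustration.
funnel = [
    ("Cart", 5400),
    ("Shipping details", 3100),
    ("Payment", 2450),
    ("Order confirmation", 1980),
]

starts, completions = funnel[0][1], funnel[-1][1]
print(f"Completion-rate benchmark: {completions / starts:.1%}")

# Drop-off between consecutive steps shows where test tasks should focus.
for (step, count), (next_step, next_count) in zip(funnel, funnel[1:]):
    print(f"{step} -> {next_step}: {next_count / count:.1%} continue")
```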
Using Analytics in Your Findings
Verifying Your Findings
After testing, web analytics is valuable for confirming findings and giving you a better sense of how common a problem may be. Begin by reviewing your initial analysis of the analytics data in light of what you learned from usability testing. For example, web analytics may have revealed an unusually short time on page for a particular page. If you assumed that was because the content did not correspond to users’ needs, data from the user test can verify whether your hypothesis was correct.
Sometimes, analytics data can contradict your findings from usability testing. In the usability test of the childcare provider’s website mentioned earlier, 60 percent of participants went straight from their initial starting page to the Our Philosophy section. However, when we dug into the analytics data, we found that fewer than 1 percent of users went directly from the homepage to the Our Philosophy page, and only 9 percent of the site visitors ever got around to visiting the Our Philosophy page. What could explain this inconsistency in findings? We concluded that this was an instance of the lab setting influencing participant behavior. Our participants thought that, as good parents, they were supposed to show interest in the childcare center’s philosophy, when in fact they were much more interested in the tuition costs.
Questions to ask could include whether a heavily visited page was really all that important to the participants and, if so, in what way. For pages that your participants visited, do the average time on page, bounce rate, and exit rate reported by analytics make sense compared to what you learned from participants? Can these measures help explain how participants interacted with the site?
Of course, there is always the possibility that analytics data will make you question something from a usability test or lead you to different recommendations. When analytics data appear to contradict user test results, look for an interpretation that tells a story resolving the contradiction rather than simply discarding one kind of data or the other. Even when analytics data minimize the impact of a usability problem observed during user testing, you still observed that problem; it just may not affect as many users as the test suggested, which means you should lower its priority rather than dismiss it.
Making a Stronger Case
You can bolster your report with hard numbers that speak to stakeholders skeptical of the small sample sizes typical of usability testing. One of the recommendations to come out of an earlier round of usability testing of the daycare website was to modify the site’s navigation. Instead of users having to click on a main navigation item and go to a hub page to see what subpages were in that section, we suggested removing the hub page and implementing menu rollovers (see Figure 2).
We were able to make our case by presenting page views for these content-light hub pages to show how many users would be affected by the change. Figure 3 shows that these hub pages were among the most viewed pages on the website, despite providing little value to users. In contrast, Figure 4 shows that after our recommended design changes were implemented, these pages no longer received as many visitors.
The best approach to integrating web analytics into usability test reports is to fold references to analytics data into the narrative that explains the uncovered problems and proposed solutions. You can also discuss how design recommendations are intended to change specific metrics, such as increasing or decreasing time on page or increasing the number of visitors to a page. The risk of breaking analytics data out into their own section is that doing so makes it harder for stakeholders to understand the full story of how all of your findings tie together.
After the Usability Test
Looking beyond user test planning and analysis, web analytics also provides a way to measure the effectiveness of design changes after they have been made. You can look at the metrics you targeted when creating your design recommendations to see whether they have changed in the way you intended. This creates an opportunity for greater accountability: the ability to change course more quickly when design changes do not work out as planned, and a way to better demonstrate the benefits of your work.
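As a rough sketch of that follow-up, assuming hypothetical before-and-after exports of the same page-views report, you can compare the targeted metric across the two periods; in practice you would also account for seasonality and overall traffic changes before crediting the redesign.

```python
import pandas as pd

# Hypothetical exports of the same page-views report for equal-length
# periods before and after the change; column names are assumptions.
before = pd.read_csv("pageviews_before.csv").set_index("page")["pageviews"]
after = pd.read_csv("pageviews_after.csv").set_index("page")["pageviews"]

# Pages the recommendation targeted (made-up hub-page paths).
for page in ["/programs", "/locations", "/why-us"]:
    b, a = before.get(page, 0), after.get(page, 0)
    change = (a - b) / b if b else float("nan")
    print(f"{page}: {b} -> {a} page views ({change:+.1%})")
```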