Directing users to make out-of-context privacy and security decisions hurts them and software companies alike. It’s not just that users don’t understand the trust dialogs that computers present; it’s that they also use other clues to determine trustworthiness, and make one-off decisions based on their desire to complete the task that the dialog interrupted.
Understanding users’ behaviors leads to a handy acronym, SECRET, for a set of criteria designers should follow when developing trust- and privacy-related interfaces.
SECRET – a Scoped, Equitable, Contextual, Responsible, Emotional, and Timely user experience for trust and privacy decisions
(Re) Learning about Trust
Back in the early 2000s, several nasty viruses hit the Internet in quick succession: ILoveYou, Sircam, Code Red, and Nimda all spread across the globe thanks in part to poor end-user security.
To some extent, these viruses and privacy-reducing spyware products propagated because users found it difficult to understand the implications of their online actions. At Microsoft, we started work on Service Pack 2 for Windows XP to address these issues. From a technical perspective it was easy to throw up warning dialogs and quarantine certain downloads, but the largest issues lay in getting users to respond to the warnings produced by the operating system.
As technologists, we knew our perception of trust differed from that of regular users; so as part of Microsoft’s Trustworthy Computing Initiative, we went back to first principles to learn how consumers thought about privacy and security.
First, we encouraged a group of usability study participants to create “trust maps” using craft supplies (see Figures 1 and 2). The maps helped us to understand what trust meant to them and formed the basis for subsequent one-on-one interviews. We asked about previous trust incidents, how participants recovered from those incidents, and how the incidents made them feel.
Asking users to recall previous trust incidents elicited some very powerful emotions. Interviewees used terms such as frustrated, violated, preyed upon, and exposed to describe events that had occurred to them.
During the course of the interviews, however, it became apparent that the experiences that users described contradicted their earlier statements about whom they trusted with various levels of personal information. In other words, what participants said they would do differed from their actual behavior. (See articles by Caroline Jarrett and Kelly Bouas Henry in this issue of UX.)
For example, when asked whether he cared whether people could see where he’d been online, one participant stated, “No. It’s just the idea that they’re in there. But it’s not as much a privacy deal as your credit card or letters.” However, ten minutes later, when reading an online privacy statement, the same participant said, “Why would you want to be tracked? You’ve lost your freedom. I’m not happy with that.”
It appeared that users’ rational thoughts about their behavior were overpowered by emotional criteria such as who had suggested they visit a site, or how much they wanted the thing the software offered them. They would make one-off emotional trust decisions without necessarily considering the rational consequences. Users were happy to give information when it suited them, and they regretted those decisions later, when the consequences no longer did.
Interestingly, participants did not see the computer as an actor in the trust decision. Computers themselves were neither trusted nor distrusted; they were simply seen as a conduit for information. Any recommendations the computer might make were easily overpowered by the user’s emotions.
The interviews and trust maps yielded some basic research findings:
- Trust decisions are emotional; computers are logical
- Users’ trust decisions are one-off rather than general
- Users don’t—or don’t want to—consider the consequences
Next we tested some prototype user interfaces based on our findings.
Setting Privacy Preferences
Although users typically do not read privacy policies, we studied their responses when they were made to read through the text. We found that the broader and more inclusive the language in the privacy policy, the less credible users found the policy to be. Their understanding of terms such as cookies, third parties, and aggregate data was often wildly inaccurate, leading them to assume behaviors worse than the policy actually allowed.
As a reaction to that credibility gap, we wondered whether allowing users to set their preferred privacy settings up front, in plain language, would allow us to quickly compare any new software or service against their existing preferences and simply highlight the differences. That way we’d help users with their one-off decisions by presenting a short list of exceptions rather than a whole policy. We deliberately created a “Trust Advisor” label for this feature in an attempt to make it a proxy for a person rather than just “the computer.” (See Figure 3.)
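In essence, the Trust Advisor idea was a diff between a user’s stated preferences and a service’s declared practices, surfacing only the exceptions. The sketch below illustrates that comparison; the preference names and sample data are hypothetical, not taken from the actual feature.

```python
# Hypothetical sketch of the "Trust Advisor" comparison idea: diff a
# service's declared data practices against the user's stated
# preferences and surface only the exceptions. Illustrative names and
# data, not the actual implementation.

USER_PREFERENCES = {
    "share_email_with_third_parties": False,
    "use_cookies_for_tracking": False,
    "share_aggregate_purchase_data": True,
}

BOOKSTORE_POLICY = {
    "share_email_with_third_parties": True,   # more permissive than the user wants
    "use_cookies_for_tracking": False,
    "share_aggregate_purchase_data": True,
}

def find_exceptions(preferences: dict, policy: dict) -> list[str]:
    """Return only the practices where the policy is more permissive
    than what the user said they were comfortable with."""
    return [
        practice
        for practice, allowed in policy.items()
        if allowed and not preferences.get(practice, False)
    ]

if __name__ == "__main__":
    exceptions = find_exceptions(USER_PREFERENCES, BOOKSTORE_POLICY)
    if exceptions:
        print("This service differs from your preferences:")
        for practice in exceptions:
            print(" -", practice.replace("_", " "))
    else:
        print("This service matches your stated preferences.")
```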
It didn’t work quite as we had planned. When shown a set of privacy controls separate from the context of use, the majority of users chose the most restrictive privacy settings. They were not aware of the consequences of those settings, such as being unable to access certain sites or use certain products, and were liable to be confused later when their computer appeared to be “broken.”
For example, users like Amazon’s “similar book recommendations” feature. Their friends’ and family members’ addresses are only a click away. They like the convenience of having books delivered by UPS. Yet when shown a privacy clause that describes sharing aggregate purchase information with other users and address book data with third parties, they claim they would refuse to use the service. They trust Amazon, not the abstract concept of “sharing data.”
That means it’s hard to turn the computer into a trust agent working on the user’s behalf, because the user can’t—or can’t sensibly—instruct the computer up-front, and because the operating system can’t make emotional decisions based on who is being trusted.
We had inadvertently broken all of our own principles in an attempt to fix them: we made users consider the outcomes of their trust decisions, made them do so out of context, and presented them with situations where the logical option was to deny access entirely.
It turns out that the concept of a trust center for reviewing or rescinding privacy preferences is fine, but it should not be the users’ first port of call. Once we moved to a system of smaller in-context decisions, each of which defaulted to a recommended trustable action (a “smart default”), study participants became much more reasoned in their interactions with the software.
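To illustrate the “smart default” pattern, here is a minimal sketch of a single in-context question whose recommended, trustable answer is preselected. The prompt wording, options, and site name are hypothetical.

```python
# Hypothetical sketch of an in-context "smart default" prompt: the
# question appears only when the action is attempted, and the
# recommended (trustable) choice is the default. Illustrative only.

def prompt(question: str, options: list[str], recommended: str) -> str:
    """Ask one scoped question; pressing Enter accepts the recommended option."""
    labels = [f"[{o}]" if o == recommended else o for o in options]
    answer = input(f"{question} ({' / '.join(labels)}) ") or recommended
    return answer if answer in options else recommended

def share_location_with(site: str) -> bool:
    # Asked at the point where location is actually needed,
    # defaulting to the safer choice.
    choice = prompt(
        f"{site} wants to use your location to show nearby stores. Allow?",
        options=["Allow once", "Don't allow"],
        recommended="Don't allow",
    )
    return choice == "Allow once"

if __name__ == "__main__":
    if share_location_with("example-books.com"):
        print("Showing nearby stores...")
    else:
        print("Continuing without location.")
```

The user can accept the recommendation with no effort, and overriding it becomes a deliberate act taken in the context of the task at hand.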
Some additional findings from the study:
- Users make trust decisions based on real-life context, not abstract concepts like “aggregate data.”
- Users make more reasoned trust decisions at the point in time where the decision is necessary than they do in advance.
New Trust Dialogs
The interface that had led to many users’ frustration was the ActiveX installation dialog (see Figure 4). This dialog box would appear, sometimes without user intervention, when users visited web pages that required additional software to run. Because the dialog was so unhelpful, most users just clicked whichever button they thought would remove the distraction and allow them to continue with their task. As a result, many users ended up installing software that reduced their online privacy or opened them up to malware attacks.
We had learned from our Trust Advisor studies that when we had to ask users to make a decision, the best place to do it was at the point when they were taking the action the trust decision was related to.
Users’ strategies for dealing with the ActiveX dialog box were quite polarized. About 40 percent of users would consistently click the “No” button. Another 40 percent would always click the “Yes” button, stating that they’d never had issues with previous software, so this download should be okay. The remaining 20 percent would choose either “Yes” or “No” depending on how they felt at the time, swayed in part by the information in the dialog box.
Unfortunately, the dialog box was particularly uninformative. Its talk of “signing” and “authenticity” presented users with a dilemma rather than with data they could use to make an informed decision. Both well-intentioned and shady software companies had taken to giving their software long names like “WidgetWorks please click the YES button below,” because this was the only place they could insert a message into the dialog.
The initial redesign (Figure 5) tested marginally better. The dialog’s question was clearer, button labels used verbs related to the action that users must take, and VeriSign was called out as the trust-providing entity rather than “the computer.”
However, some issues remained. The option to always trust content from a provider was the opposite of what people needed; an option to never trust a provider would have let users block the same malware products that appeared across many sites. Also, people didn’t know who VeriSign and other certificate authorities were. Users typically assumed that if software was signed, it was somehow trustable, when in reality a certificate merely asserts that the software was produced by the publisher named on it.
The tests suggested that the best solution would be to replace VeriSign and other unknown entities with trustable organizations that users could subscribe to for recommendations, such as Consumer Reports, Good Housekeeping, SlashDot, or whoever else they happened to trust. Unfortunately, building the infrastructure required for this solution just wasn’t possible in the short timeframe available.
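For illustration only, the sketch below shows roughly how such a subscription model might have worked: a signature yields only a publisher name, that name is checked against recommendation lists the user has chosen to subscribe to, and unknown publishers default to caution. The data and function names are hypothetical; nothing like this was actually built.

```python
# Hypothetical sketch of the recommendation-subscription idea that the
# schedule didn't allow. A signature only asserts who published the
# software, so the publisher name is checked against lists the user
# subscribes to, and unknown publishers get a cautious answer.

# Recommendation lists the user has subscribed to (illustrative data).
SUBSCRIBED_LISTS = {
    "Consumer Reports": {"WidgetWorks Inc.", "Acme Software"},
    "SlashDot": {"Acme Software"},
}

def recommenders_for(publisher: str) -> list[str]:
    """Which of the user's subscribed sources vouch for this publisher?"""
    return [name for name, publishers in SUBSCRIBED_LISTS.items()
            if publisher in publishers]

def install_advice(signed: bool, publisher: str) -> str:
    if not signed:
        return "Block: the publisher cannot be verified."
    vouched = recommenders_for(publisher)
    if vouched:
        return f"Recommended: vouched for by {', '.join(vouched)}."
    # Signed but unknown: the signature only tells us *who* made the
    # software, not whether it is trustworthy, so stay cautious.
    return f"Caution: {publisher} is verified but not on any list you trust."

if __name__ == "__main__":
    print(install_advice(signed=True, publisher="WidgetWorks Inc."))
    print(install_advice(signed=True, publisher="Unknown Toolbar Co."))
    print(install_advice(signed=False, publisher="Unknown Toolbar Co."))
```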
Data, Not Dilemmas
Our research confirmed that users’ responses to system security dialogs were based more on convenience than reason. Users would do whatever was necessary to dismiss the dialog and get on with their task.
It didn’t help that most security dialogs asked users to make decisions without supplying the underlying information they needed, resulting in dilemmas, not data. The dilemma in this case was the tradeoff between users’ stated aims of staying secure and not revealing personal data, and their emotional attachment to completing a “risky” task.
After iterating the interface through several additional user test sessions, we arrived at the shipping version shown in Figure 6. Although it included some compromises, our user test data indicated that many more users understood the reason for the dialog box and could make informed decisions based on what they read.
The dialog box starts with a question that sets the scope of the interaction. It then presents the data that we have about the interaction, namely what we know about the software and its developer. The help text is placed in a “snap-off” area under the main dialog so that it doesn’t interfere with the main task but is still accessible. A set of red/yellow/green trust icons gives a quick visual indication of the risk level. Even participants who skipped past the text and went straight to the buttons hesitated once they saw the labels, realizing that “Install” had bigger implications than just dismissing a yes/no dialog box.
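The structure of that dialog can be thought of as a scoped question plus the data we actually have, a risk level, verb-labeled buttons, and help kept out of the main flow. The sketch below models those pieces with hypothetical names and sample content; it is not the shipped dialog.

```python
# Hypothetical model of the dialog structure described above: a scoping
# question, the data we have, a red/yellow/green risk level, verb-labeled
# buttons, and a "snap-off" help area. Illustrative only.
from dataclasses import dataclass, field
from enum import Enum

class Risk(Enum):
    GREEN = "low risk"
    YELLOW = "caution"
    RED = "high risk"

@dataclass
class TrustDialog:
    question: str                  # sets the scope of the interaction
    facts: dict                    # data, not a dilemma
    risk: Risk                     # quick visual indication
    buttons: list = field(default_factory=lambda: ["Install", "Don't Install"])
    help_text: str = ""            # "snap-off" area, out of the main flow

    def render(self) -> str:
        lines = [self.question, ""]
        lines += [f"  {key}: {value}" for key, value in self.facts.items()]
        lines += ["", f"  Risk: {self.risk.value}"]
        lines += ["  " + "   ".join(f"[{label}]" for label in self.buttons)]
        if self.help_text:
            lines += ["", f"  (?) {self.help_text}"]
        return "\n".join(lines)

if __name__ == "__main__":
    dialog = TrustDialog(
        question="Do you want to install this software?",
        facts={"Name": "Photo Viewer Plug-in", "Publisher": "WidgetWorks Inc."},
        risk=Risk.YELLOW,
        help_text="Only install software from publishers you trust.",
    )
    print(dialog.render())
```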
The iterations on the trust interfaces allowed us to identify some additional findings:
- Users don’t want to make trust decisions; they just want to “be secure”
- Users don’t want to reveal personal data without clear benefits
- Trust questions should present data, not dilemmas
Implications
As technologists we often push responsibility for trust and privacy issues onto end users without giving them a suitable environment or the tools to make smart decisions.
We can either tell people not to share their email address, or we can create better spam filters. We can warn people that a downloaded app might be dangerous, or we can sandbox it to stop it from doing bad things. At least part of the responsibility for smart trust solutions lies with software developers.
Privacy and security settings are a bit like sausages. People like them, but they don’t want to know what goes into making them. Similarly, people love the features that software provides, but if they are asked up front for all the permissions the software needs just to do its job, they panic and shut down access, often without considering the longer-term implications. By asking just-in-time questions, we can keep trust decisions within the context of the tasks that people perform.
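As a minimal sketch of that just-in-time approach, assuming a hypothetical app with camera and contacts features, each permission below is requested only when the feature that needs it is first used, and the answer is remembered.

```python
# Hypothetical sketch of just-in-time permissions: each permission is
# requested in context, when the feature that needs it is first used,
# rather than all of them up front. Illustrative only.

GRANTED: dict[str, bool] = {}

def request_permission(permission: str, reason: str) -> bool:
    """Ask once, in context, with the reason spelled out; remember the answer."""
    if permission not in GRANTED:
        answer = input(f"Allow access to your {permission} to {reason}? (y/N) ")
        GRANTED[permission] = answer.strip().lower() == "y"
    return GRANTED[permission]

def attach_photo() -> None:
    if request_permission("camera", "attach a photo to this message"):
        print("Opening camera...")
    else:
        print("You can still attach a saved file instead.")

def find_friends() -> None:
    if request_permission("contacts", "find friends who already use this app"):
        print("Matching contacts...")
    else:
        print("Skipping friend suggestions.")

if __name__ == "__main__":
    attach_photo()   # camera permission is requested here, not at install time
    find_friends()   # contacts permission is requested here, not at install time
```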
Computers are good at two things: remembering and doing sums. They are very bad at understanding emotions. Yet many of the trust decisions that people make have a large emotional component. Even when the computer calculates that an item is untrustworthy, that item may hold great emotional significance for the user.
This leads to the SECRET acronym:
- Scoped: Present users with just the data they need to make decisions, not with unmanageable dilemmas.
- Equitable: Demonstrate the benefits that users will get in return for sharing their information.
- Contextual: Let users make trust decisions in context. Make exchanging information an explicit part of using software, rather than hiding it in a privacy statement.
- Responsible: Stop making users take responsibility. Recommend and default to trusted options; use technology to prevent trust issues.
- Emotional: Users consider emotional factors that the computer can’t understand. Always respect their decision.
- Timely: Present trust decisions at the time they need to be made, rather than bundling them up in advance.
Following these design principles is a good first step in creating trust interfaces that users will understand. The more they understand the decisions they are making, the more they will trust the company that is asking the questions.