The other day, I connected a new phone to my car’s “Communications Console” for the first time. It all went smoothly, with the technology working pretty much as I expected.
Then I got this message on my phone:

Figure 1: In other words, do you trust your car to have access to everything on your phone?
I had to think about this for a minute…literally. The message is hard to parse: What am I adding? How do I know what VW BT 9314 is? How far away is 100 meters?
Once I’d picked apart the syntax, recognized “VW BT” as Volkswagen Bluetooth, and mentally paced off a distance about as far as I park from the front door of my grocery store, I realized that this is a much bigger question than a simple, “Are you sure?” in a software dialog. The risks here aren’t just whether I have accurately understood the question. To answer accurately, I need to understand what might happen in an area around my car, not just now, but in the future.
Even worse, I don’t really know what that question is. The message in the interface is asking me whether I trust my car, but I have no idea what I’m trusting my car to do.
So, I asked a geeky friend to translate for me. He said, “I don’t know what your car can do. It might be asking you if you trust that when your car talks to your phone, it really is your car.” That seemed fair. My car is a big physical object, and I usually know whether I’m near it or not. But his first point kept nagging at me: what can my car do that I might not want it to do…or even know about?
Trusting Technology
Our social lives are based on many levels of trust. We are asked to trust other people, organizations, institutions, and governments in both explicit agreements and tacit assumptions. Even flipping a light switch is an act of trust: that doing so will turn on a light; that it will turn on the same light as last time; that there will be electricity to power it; that it will not spark an explosion.
At one level or another we make these leaps of faith in almost everything we do.
As technology gets more and more embedded in our lives, we increasingly inhabit a world where the boundaries are invisible. How do we design interfaces that help people understand what they are agreeing to every time they answer an “Are you sure?”
Every new technology brings new social challenges as human beings learn to negotiate the communication, trust, and security issues implicit in that technology. Behind every new feature and every interaction, there is a team of people setting up those decisions for millions of users to make.
There’s a lot of evidence that, too often, we make it too hard for users to reach a decision they are comfortable with. That’s partly because it’s hard to balance explaining things clearly against drowning people in information. But we also get the conversation itself wrong: we make assumptions about what terminology users know, or how well they can imagine how a new technology really works.
It’s not just that with the Internet of Things we have to decide whether we can trust devices like our cars with our information. We also must ask whether our devices have been designed to be good citizens. Like The Sorcerer’s Apprentice, we may find that these devices are out of our control and can affect the “commons” with new possibilities for mischief.
For example, in October 2016 one of the largest Denial of Service attacks on record shut down much of the Internet in the US. The culprit turned out to be baby monitors. Yep, baby monitors and a lot of other devices with cameras built into them—simple Internet-connected cameras that were infected with malware. Part of the problem is that we want these devices to be easy to set up and use. In fact, they are so easy to use that almost anyone can break into them. In this case, the problem turned out to be a single brand of webcam that—wait for it—had a password written into the firmware, essentially spoon-feeding access to hackers.
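The hardcoded-password flaw described above is worth spelling out. The sketch below is purely illustrative (it is not the actual firmware code, and the constant shown is invented): a credential compiled into firmware means every unit ships with the same key, so one leak compromises the whole fleet, whereas generating a unique credential per device at setup time removes that single point of failure.

```python
import secrets

# Hypothetical constant standing in for a password baked into firmware.
# Every device shipped accepts this same string, so attackers only need
# to learn it once to control all of them.
HARDCODED_PASSWORD = "admin1234"

def login_insecure(password: str) -> bool:
    # The vulnerable pattern: compare against a fleet-wide constant.
    return password == HARDCODED_PASSWORD

def provision_device() -> str:
    # The safer pattern: mint a unique, random credential for each
    # device during first-time setup, so no shared default exists.
    return secrets.token_urlsafe(16)

if __name__ == "__main__":
    # The fleet-wide default opens every unit ever shipped.
    print(login_insecure("admin1234"))
    # Two provisioned devices get different secrets.
    print(provision_device() != provision_device())
```

The design point is the same one the articles below make: convenience (one default password) and security (per-device credentials plus a forced change at setup) pull in opposite directions, and the manufacturer chose convenience.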
The result was, as CNET put it, that “an army of DVRs and cameras kept you off Reddit for most of [a] day” as hackers turned 100,000 vulnerable devices into a malicious botnet, or a zombie army of “things.”
You can Google this yourself, but here are three articles that explain what happened in the denial of service attack.
Hackers Used New Weapons to Disrupt Major Websites Across U.S. New York Times, October 22, 2016
Why it was so easy to hack the cameras that took down the web. CNET, October 24, 2016
Chinese firm recalls webcams used in last week’s massive cyber attack as experts warn poor device security may lead to another major hack Daily Mail, October 24, 2016
Designing for Trust
Like usability, accessibility, and quality, trust must be built into a system. Just because something is possible doesn’t make it a good idea. Big data algorithms are a particular ethical concern, so much so that the ACM US Public Policy Council has created a set of principles for transparency and accountability.
As Eric Meyer and Sara Wachter-Boettcher wrote so elegantly in their book Design for Real Life, it’s critical to think—right from the beginning—about what might go wrong and how different people might use (or misuse) a new feature. What might the impact be on the very real people who will use what you create?
- Will they understand the question each interaction asks?
- Will they know the risks of each action they take?
- Can they imagine the consequences to themselves and others?
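As a thought experiment, here is what a pairing prompt might look like if it tried to answer those three questions up front. Everything in this sketch is invented for illustration (the function, the wording, and the permission list are not from any real car or phone UI); the point is that the dialog names the device, enumerates what access is being granted, and states the ongoing consequence, rather than asking a bare “Do you trust this device?”

```python
# Hypothetical helper that composes a consent prompt answering:
# (1) what is being asked, (2) what access is granted,
# (3) what the ongoing consequences are.
def pairing_prompt(device_name: str, permissions: list[str]) -> str:
    lines = [
        f'"{device_name}" (your car) wants to connect to this phone.',
        "If you allow it, the car will be able to:",
    ]
    lines += [f"  - {p}" for p in permissions]
    lines += [
        "It will reconnect automatically whenever it is in Bluetooth "
        "range (roughly 100 meters).",
        "You can revoke this access later in Settings.",
    ]
    return "\n".join(lines)

if __name__ == "__main__":
    print(pairing_prompt(
        "VW BT 9314",
        ["read your contacts", "see your call history"],
    ))
```

Compare this with the real dialog in Figure 1, which asks the user to trust “VW BT 9314” without ever saying what that trust entails.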
These aren’t rhetorical questions. News reports are filled with stories that range from people who thought their email was a private conversation to new features in social media that have had devastating unintended consequences, to dark patterns designed to manipulate the people who use our products.
- Can users trust us to work thoughtfully and ethically, so they can trust the devices we work on?
- How can we make human-centered ethics, trustworthy transparency, and security for people part of every UX decision?
Each of us has to answer these questions for ourselves. It’s a topic for conversation with our teams, our colleagues, and with the people whose lives we touch with our work.
Discuss among yourselves.
Retrieved from http://oldmagazine.uxpa.org/navigating-the-internet-of-things/
