Trust and Blame

Universal Usability

Putting people at the center of design

A column by Whitney Quesenbery
February 20, 2006

I lost my address book recently. It was one of those near-death computer experiences where you see your data pass before your eyes and start searching through the trash, then the Web, hoping to find the information you need right now. The experience made me think about blame—and trust.

Here’s what happened. I was running late for a meeting and plugged in my Palm for a quick HotSync. You know the drill: one hand on the mouse, the other stuffing things into my briefcase, all while shrugging on my coat. Then I got an error message: something about having too many records, suggesting that I delete a few and try again. Distracted, I tried removing old, completed tasks. A few quick clicks, and I was HotSyncing again. That’s when it all went wrong, and I lost all of the information in my address book.

Okay. Before we go any further, did any of the following thoughts pass, however fleetingly, through your mind?

“I bet you didn’t have a good backup.”

“Why would you do anything like that when you are in a hurry?”

“What did you really click? Maybe you made a mistake.”

“Are you sure you don’t have a recent backup?”

If they did, I’m sure you are not alone in having these thoughts. They certainly passed through my mind—along with a few choice curses, imprecations to higher powers, and condemnation of all things silicon.

Before we all get out our hankies, let me tell you how this ended. I did not junk my electronic organizer and go back to paper. And I managed to restore most of my contacts’ names and addresses from a backup, even if not one as recent as I might have liked. We could say that this little episode ended happily.

But did it?

Yes, I got my contacts data back, but two other things happened. First, I endured another episode of “blame the user.” Second, I was given another lesson in why electronic devices can’t be trusted.

Blaming the User

Look at those reactions again. Each and every one of those thoughts blames the user—me—for the problem. But I really didn’t do anything except try to use an expensive piece of electronics for the purpose for which it was intended: carrying information with me in a convenient package. What I didn’t do was make it the center of my attention, so some might attribute this problem to human error.

But why is it that the only humans who seemingly make errors are the people who are trying to use a product? As David Aragon of Voter March said, “All errors are human error.” Why not point the finger at all the other people who had a hand in the situation: programmers, designers, product managers, and quality testers? Whose human error is it, anyway?

My error: I put too much information into my mobile organizer.

In other words, I used the thing. A lot. If this had been my old DayRunner notebook, it would have been bristling with slips of paper, addresses scribbled in margins, and directions pasted into the calendar. Instead of all that mess, my data is stored neatly on a chip. But where my DayRunner would visibly burst at the seams when it got too full, there is no meter on the Palm to show me how full it is.

I’m sure that, somewhere in the documentation, there is a statement about how many records the device can hold. But what good does it do to have this information buried somewhere? Even if I had read it and remembered the number, what good would it have done me if I were not reminded of it in a timely manner?

Their error: The warning came too late.

The time to warn a user is before a problem happens, not after it occurs. The faster things happen, the earlier the warning needs to come. Think about how early you would need to warn someone approaching the edge of a cliff, depending on whether they are stepping carefully, walking, or running. If there are any hard-coded limits, warnings should appear while there is still room to spare.

My error: I didn’t pay enough attention to the error message.

I treated the message like an annoying child, giving it just enough attention to quiet it, but not enough to really understand what it was saying. But if I had received the message from a person, there would have been some easy-to-recognize change in tone to warn me that this was serious. Instead, the device showed me a little window that looks almost identical to the window that says it has completed its task successfully.

I barely read the message, of course. I just glanced at it and clicked OK in the same instant. So, by the time I realized that I had not really gotten the message, it was too late. The message was gone forever. If there was another way to find out what had happened, I missed it, too.

Their error: Important information that should be visible wasn’t.

If it’s important for users to be aware of a condition, it’s essential that they have a way to monitor it. We figured this out with battery status warnings. Why not with disk or memory space? This applies to messages as well. If an error message contains critical information, don’t make it look just like the message that says everything is okay.

My error: I tried to fix their problem without really focusing on it.

I remembered “remove some records” and tried to find something I could remove quickly. The calendar offered me only the options of removing items older than one week, two weeks, three weeks, or a month. Not far enough back. I decided to remove completed tasks. I think I did it right, but who knows. I didn’t spend much time on this decision and certainly didn’t look up any instructions.

Their error: Failure to protect user data.

I don’t really know what happened here, but I’ll bet it’s a bug. Perhaps the error was in not testing boundary conditions carefully enough—or in assuming something will never happen. But when it does happen, don’t punish the poor human who did nothing more than buy your product and try to use it.

It will, of course, take a complete culture change to make the humans on the product-creation side take responsibility for their human errors and the product defects they cause. This change, however, is critical if we are going to create good user experiences rather than just user experiences that work okay as long as nothing bad happens.

This brings us to the second lesson: trust.

Learning (Not to) Trust

When a product—computer, mobile device, or whatever—is just a toy, it doesn’t matter so much if it works well. It might be annoying if your favorite game dumps your playing history just as you’ve reached the highest level, but not much more. However, if you’ve stored all of your financial data in an application, losing it has real consequences.

The more we rely on our electronic devices, the more we are trusting them to be there when we need them and to safeguard our information and our privacy. And the more we rely on them, the greater the consequences of any failure.

I don’t know anyone who has not been through at least one catastrophic failure. Some failures have long-lasting consequences, others are more short-term, but the pattern of what follows is the same. After going through denial and anger, we make a bargain with ourselves: we will never let this happen again! During the depression phase that follows, we are more conscientious. We back up our data. We don’t push the system so hard. But, over time, the memory softens, we accept what happened, and we fall back into our old habits.

I’m not talking about “average” users, but about people who work with computers regularly and understand them well enough to know their limitations. We have a strong affinity for these devices and start with a high degree of trust, so it takes a lot to whittle it away.

I just trusted that the Palm would not trash my data without warning me. After all, this is one of the few user interfaces that doesn’t ask me if I want to save the data I’ve just put effort into creating, but assumes that, of course, I want to keep my work. This incident left me a bit shaken, but in the end, I kept on using my Palm. I back up my data a bit more often, and I don’t trust HotSync not to destroy both sets of data, but I can already feel myself slipping into resigned acceptance.

There’s a saying: “Fool me once, shame on you. Fool me twice, shame on me.” How many times will people be fooled by technologies before they give up and decide that they can’t be trusted? Or will we make them trustworthy before that happens? 

Principal Researcher at WQusability

Co-founder of Center for Civic Design

New York, New York, USA

Whitney is an expert in user research, user experience, and usability, with a passion for clear communication. As Principal Consultant at Whitney Interactive Design, she works with large and small companies to develop usable Web sites and applications. She enjoys learning about people around the world and using those insights to design products where people matter. She also works on projects with the National Cancer Institute / National Institutes of Health, IEEE, The Open University, and others. Whitney has served as President of the Usability Professionals’ Association (UPA), on the Executive Council for UXnet, on the board of the Center for Plain Language, and as Director of the UPA Usability in Civic Life project. She has also served on two U.S. government advisory committees: the Advisory Committee to the U.S. Access Board (TEITAC), updating the Section 508 regulations, and as Chair for Human Factors and Privacy on the Elections Assistance Commission Advisory Committee (TGDC), creating requirements for voting systems for U.S. elections. Whitney is proud that one of her articles has won an STC Outstanding Journal Article award and that her chapter in Content and Complexity, “Dimensions of Usability,” appears on many course reading lists. She wrote about the use of stories in personas in the chapter “Storytelling and Narrative,” in The Personas Lifecycle, by Pruitt and Adlin. Recently, Rosenfeld Media published her book Storytelling in User Experience Design, which she coauthored with Kevin Brooks.