Testing Your Own Designs: Bad Idea?

Envision the Future

The role UX professionals play

A column by Paul J. Sherman
September 21, 2009

This column was spurred by a simple question I posted to Twitter in mid-August: Can designers effectively usability test their own designs?

This isn’t just an academic question. With the current state of the economy and many UX teams downsizing, it’s entirely probable that your company will call upon you both to create a UX design and to do usability testing to validate it. And as the field of user experience progresses, agile UX becomes more common, and functional disciplines become more blended, I think this will occur more and more often.

People have often likened doing both design and usability testing on the same project to defendants serving as their own counsel in a court of law. How does that saying go? Something like this: A lawyer who defends himself has a fool for a client. Is testing one’s own design a similarly bad idea? What are the pitfalls? Are there any advantages? And most important, if you must do it, how can you avoid those pitfalls?

In this column, I’ll answer these two questions:

  • Is it possible to do both design and usability testing effectively?
  • If so, how can we test our own designs well?

UX literature is rife with cautionary tales about designers testing their own designs. The objections to doing so typically follow this line of reasoning: Designers are emotionally invested in their designs; they believe in their rightness and are loath to change them or to bear criticism of their baby.

I think this is a specious argument. Certainly, some designers—particularly those of the genius-design bent—brook little criticism of their designs. However, most designers I’ve encountered are more interested in actually solving users’ problems than in maintaining the typically illusory artistic integrity of their designs. So I don’t think most designers are naturally resistant to criticism of their designs, particularly when that criticism comes from the people who are the intended users of the products they’re designing.

When I posted this question on Twitter, Facebook, and my own UsabilityBlog, I received some interesting responses. Here’s a sampling of some of the best, most pithy opinions. (Keep in mind that folks who replied via Twitter were limited to 140 characters, while those who replied via Facebook or my own site had a bit more room to expound.)

“Designers can test [their] own stuff. Everyone has bias whether [they’re a] designer or not. [You] just need to be aware of your biases.”—an interaction designer/usability analyst, via Twitter

“[Designers] can effectively perform [usability] testing to gather additional insights and new ideas, but to be fully unbiased and seek out real UX issues, and to do it effectively…it’s a struggle. If you are playing the role of both researcher/analyst and designer, you have to be fully aware, at all times, of how you are forming your conclusions. For example, am I just seeking insights that prove my design solutions? It’s best to partner with an unbiased—yet collaborative—researcher.”—a user experience manager, via Facebook

An experienced user researcher who works for a large developer of both desktop and Web applications had this to say:

“Yes [designers can test their own designs], but they have to be actively trying to ‘dis’ [them].”—via Twitter

A hardware designer and usability analyst expanded on this theme:

“I think it is very challenging for UX [professionals] to objectively test designs they’ve created. While [some] designers are accustomed to having a number of ideas and [going through a] critique process, [others] tend to select one option and focus on it. This generally results in significant bias. I know some companies separate the design folks from the test folks. I suspect many organizations see that as too cost prohibitive these days, though.”—via Facebook

You may have noticed that I’ve been holding back my opinion so far. Well, here it is: I generally agree with the consensus these comments have demonstrated. Regardless of whether we like it or think it’s a good idea, designers will increasingly be testing their own designs.

Potential Pitfalls of Designers’ Testing Their Own Designs

I do think there’s a more subtle argument to be made about the potential pitfalls of designers’ testing their own designs. My own take is that it’s entirely possible for designers to test their own designs effectively. However, there’s one catch: the designs they’re testing have to be close to the right solution, because, in testing their own designs, designers are likely to concentrate more on fixing the design as it exists than on remaining open to the possibility that their design is not an appropriate solution at all. In other words, my hypothesis is that designers would be less willing than a third party to throw out their faulty design concepts and, instead, more likely to try to patch their flaws. Why do I say this?

Think back to your Psych 101 class. In its social cognition unit, you probably learned a bit about a psychological phenomenon called confirmatory bias. I won’t go into the foundational research behind confirmatory bias, because the Wikipedia definition is serviceable:

“Confirmation bias is an irrational tendency to search for, interpret, or remember information in a way that confirms one’s preconceptions or working hypotheses…. The bias appears, in particular, for issues that are emotionally significant—such as personal health or relationships—and for established beliefs [that] shape the individual’s expectations.”
Wikipedia

What this means in a design context is this: By taking a stand and creating a design in the first place, you have articulated a design hypothesis and instantiated that hypothesis in the form and function of your design. It is going to be difficult to keep yourself from wanting to confirm your design hypothesis, because you’re hardwired to preferentially seek out confirming rather than contradictory evidence. Even if you’re able to criticize your own designs and recognize when a fundamentally sound design needs some adjustment, confirmatory bias makes it hard for you to realize that your design is the wrong approach entirely.

Guidelines for Testing Your Own Designs

So, enough opinion. Here are some guidelines for testing your own designs:

  1. When testing your own designs, always concentrate on the negative.

Yes, it’s normally part of good, balanced testing practice to look for both what works and what doesn’t. However, knowing what we know about confirmatory bias, it’s probably a bad idea for designers who are testing their own designs to try to be balanced. Instead, you should always focus on failing your design, because once you start looking for the good in your design, you’re likely to weight that information more heavily than the negative.

This, of course, raises the question of how you can put yourself in a frame of mind to focus on what’s wrong. So, here’s guideline number 2.

  2. To concentrate on the negative aspects of your design, try to keep yourself focused on your long-term goal, which is to solve your users’ problems.

The real outcomes you’re aiming for are to make your client happy—if you’re a consultant—and to make your target users happy by providing them with a tool that helps them solve a problem. It might help you to be objective if you visualize what would happen if you gave your design a pass, your client launched it, and users then found it lacking. Keeping this in mind has often helped me criticize my own designs—knowing that, if I don’t get it right, both the users and my client will be unhappy.

Guidelines 1 and 2 have been about motivation and focus. Guideline 3 is all about helping you recognize when you should scrap your design.

  3. If your users are unable to grasp the task at hand or are experiencing repeated failures and missteps when navigating your design, you should consider rethinking the entire design.

When you see users endlessly hunting for functional access points or repeatedly struggling to map the task you’ve asked them to do to anything in your user interface, you should take this as a sign that you should redesign your information architecture, navigation, or interactions from the ground up. In this case, don’t patch, scrap.

A final thought—as I write these words, it occurs to me that these guidelines are equally applicable to third-party usability professionals. What do you think? I welcome you to add your thoughts, affirmations, or disagreements in the comments. 

Founder and Principal Consultant at ShermanUX

Assistant Professor and Coordinator for the Masters of Science in User Experience Design Program at Kent State University

Cleveland, Ohio, USA

ShermanUX provides a range of services, including research, design, evaluation, UX strategy, training, and rapid contextual innovation. Paul has worked in the field of usability and user-centered design for the past 13 years. He was most recently Senior Director of User-Centered Design at Sage Software in Atlanta, Georgia, where he led efforts to redesign the user interface and improve the overall customer experience of Peachtree Accounting and several other business management applications. While at Sage, Paul designed and implemented a customer-centric contextual innovation program that sought to identify new product and service opportunities by observing small businesses in the wild. Paul also led his team’s effort to modernize and bring consistency to Sage North America product user interfaces on both the desktop and the Web. In the 1990s, Paul was a Member of Technical Staff at Lucent Technologies in New Jersey, where he led the development of cross-product user interface standards for telecommunications management applications. As a consultant, Paul has conducted usability testing and user interface design for banking, accounting, and tax preparation applications, Web applications for financial planning and portfolio management, and ecommerce Web sites. In 1997, Paul received his PhD from the University of Texas at Austin. His research focused on how pilots’ use of computers and automated systems on the flight deck affects their individual and team performance. Paul is Past President of the Usability Professionals’ Association, was the founding President of the UPA Dallas/Fort Worth chapter, and currently serves on the UPA Board of Directors and Executive Committee. Paul was Editor of and contributed several chapters to the book Usability Success Stories: How Organizations Improve by Making Easier-to-Use Software and Web Sites, which Gower published in October 2006. He has presented at conferences in North America, Asia, Europe, and South America.
