
This Is Not Just a Test—It’s Primetime

Insights from Research

Walking in your customers’ shoes

May 10, 2011

When engaging in any form of product usability testing, there are certain very important guidelines to keep in mind. One guideline that user researchers commonly overlook is testing with a version or mockup that is free of glitches, bugs, or known errors. In essence, you want what you’re testing to be ready for primetime. We have found it is very common for companies to test with incomplete builds of a product that are rife with known issues. We always advocate using a clean build or mockup of a product because of the negative consequences we’ve encountered in the past. Of course, it is always possible to test with a buggy build, but it is important to be aware that testing a product with known issues can extend a usability study’s schedule, compromise the accuracy of its results, and inflate its cost.


Users Fixated on Glitches, Bugs, and Errors

When you put a product that has glitches, bugs, or errors—as represented in Figure 1—in front of users, they usually discover them. When they do, they tend to fixate on them and miss actual usability issues, especially those that are more subtle. It is because of this phenomenon that it is important to do iterative testing to uncover all of a user interface design’s critical usability issues. However, even if you are using an iterative testing approach, participants’ encountering many glitches could increase the number of test iterations you’ll need to do. If you are relying on just one or two iterations of usability testing to get your product ready, you are almost certainly going to miss quite a few issues if participants encounter errors.

Figure 1—Encountering a glitch

When you are doing concept testing, errors can distort participants’ true reactions to a concept. So, rather than responding to the concept’s value proposition, participants instead respond to its implementation. When discussing barriers to adoption in a previous column, “Barriers to Adoption and How to Uncover Them,” we defined confidence as users’ believing that a product can deliver the value it promises. When you perform concept testing with a build that has obvious errors, participants’ confidence in the product suffers. In turn, their reaction to the concept suffers. Even if participants attempt to compensate for the errors by ignoring them, their reaction is still somewhat tainted, and they can end up overcompensating or imagining a product that differs radically from the intended design. In that situation, your best course of action is to ask participants about the design they imagine. But keep in mind that they are then describing untested, vague design ideas rather than providing true feedback on the concept.

Increased Costs and Extended Schedules

Whenever we test with a buggy build, we know that the testing will take 25–50% longer than it would have with a stable build, because of the troubleshooting that tends to occur during and between test sessions. When a product freezes or crashes during a test, you must stop the session to solve the problem. This can often result in sessions’ extending far beyond their scheduled time. An hour-long session can easily become a 90-minute session, pushing back the start time for the following sessions, during which other participants may experience the same problems. At times, we’ve had to cancel sessions shortly after they started, because the complete failure of the software required that we wait for an engineer to repair or reinstall the build.

Such failures are time consuming and costly because they extend the timeline for acquiring adequate, usable data; require additional engineering support; and can result in the loss of paid participants. The need to replace lost participants means you’ll accrue additional costs for recruiting, participant compensation, and session moderation. In addition, adding more sessions to replace lost participants can put a research deliverable date in danger of slipping. That slip could shift the development schedule, which is very costly, or result in design decisions’ being made without research findings, which is risky.

Obviously, it’s not always possible to get a perfectly clean, workable build. When you cannot, the best way to overcome the hurdles we’ve mentioned is to be as prepared as possible before starting usability testing.

We always test a build thoroughly prior to starting a usability study to determine its stability. It’s very helpful to know ahead of time the ways in which a product can break and whether there are incomplete sections you should avoid during testing. It’s also imperative to know whether a build could require a cold boot or even a reinstall. When we know that we are dealing with a buggy build, we can

  • compensate by scheduling additional buffer time between sessions—This allows each session to run as long as necessary.
  • recruit additional participants in case sessions get cancelled—If the extra sessions turn out to be unnecessary, we can cancel them. It’s much easier to cancel a session than to scramble to find suitable replacement participants.
  • have a research partner on a study if the budget allows—This lets one person aggregate data while the other collects it, or troubleshoot while the other performs a post-session interview or preps the next participant.

Anything that you can do to streamline your process and ensure you meet your deadlines is a sensible enhancement to your test plan.

Testing with Mockups or Prototypes

For usability testing, if you don’t have software to test, fake it. In most cases, testing with mockups or prototypes can provide excellent, actionable data. This holds true whether you are using a simple, clickable Flash demo for usability testing or paper prototypes for concept testing. By limiting development to a product’s front end, you can quickly create a user-interface prototype that is adequate for usability testing.

If a designer or researcher on your team is familiar with simple Flash development or HTML/CSS prototyping, you can develop a prototype with minimal support from Engineering. There are also some software solutions available from companies like Balsamiq and Napkee that allow just about anyone to produce a clickable HTML prototype like the one in Figure 2.

Figure 2—A prototype made using Balsamiq Mockups
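
To give a sense of how lightweight such a prototype can be, here is a minimal sketch of the first screen of a hypothetical two-screen clickable prototype, written by hand in plain HTML. The file names, the Acme Photo Library content, and the styling are purely illustrative; they are not the output of any particular tool:

<!-- home.html: the first screen of a hypothetical two-screen prototype.
     The Search button is just a link to a second static page,
     results.html, so there is no back-end code or scripting to break
     during a test session. -->
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Prototype: Home</title>
  <style>
    body    { font-family: sans-serif; margin: 2em; }
    .screen { width: 480px; padding: 1em; border: 1px solid #999; }
    .button { display: inline-block; padding: 0.5em 1em;
              border: 1px solid #666; background: #eee;
              color: #000; text-decoration: none; }
  </style>
</head>
<body>
  <div class="screen">
    <h1>Acme Photo Library</h1>
    <p><input type="text" size="30" placeholder="Search your photos"></p>
    <!-- The only interactive element: a link styled as a button -->
    <p><a class="button" href="results.html">Search</a></p>
  </div>
</body>
</html>

A second static page, results.html, showing a canned search result completes the task flow. Because every click just loads another page you’ve authored, participants can walk through the core task while all the back-end code where bugs tend to lurk stays out of the session entirely.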

When taking this kind of approach, you should match your test-session design to the fidelity of the mockup or prototype. If you have paper prototypes or fairly simple clickable prototypes, focus primarily on core features, brand messaging, and perceived value. If you have a more complete prototype, you can progress to a more rigorous test of additional features and their added value.

When devising an end-to-end research plan, we typically start with need-finding research such as ethnography, home visits, or interviews, then do concept testing using paper prototypes to assess the value proposition, brand messaging, and feature set. As the design progresses, we test with paper prototypes, incorporating simple test tasks that address a product’s core functionality. Next, we transition to testing core functionality using low-fidelity, clickable mockups, then move on to more robust usability testing with medium-to-high-fidelity prototypes. Finally, we test a reasonably stable build of the actual product, doing in-depth usability tests or following more advanced testing methods such as competitive benchmarking. We’ve found that this kind of iterative testing schedule is extremely effective in providing actionable design intelligence.

Conclusion

It’s never a great option to test with buggy or unstable builds, because doing so can compromise your data collection, complicate your study’s logistics, and potentially impact your study’s budget and schedule. You can test mockups or prototypes of various types as alternatives to testing incomplete builds, but it is important to design your study to be compatible with the fidelity of the mockup or prototype you are using.

When a prototype simply won’t do the job and you need to use a build that you know has errors, it’s important to plan for the problems that are likely to arise. Before starting your study, test the build as extensively as you can, note the areas in which you encounter difficulties, and plan for troubleshooting. In your test plan, it’s also important to accommodate the possibility of cancelled or extended sessions by recruiting extra participants, including extra buffer time between sessions, and working with a research partner or team of researchers.

As user research professionals, our goal is always to provide accurate, actionable research findings on schedule and on budget. For testing, we recommend to our clients that they provide a build that is ready for primetime. But, if that can’t happen, we rely on the tools we’ve described and a little creativity, so we can anticipate problems, quickly solve those we do encounter, and keep our research objectives on track.

Demetrius Madrigal

VP, UX & Consumer Insights at 30sec.io

Co-Founder and VP of Research & Product Development at Metric Lab

Redwood City, California, USA

Demetrius truly believes in the power of user research—when it is done well. With a background in experimental psychology, Demetrius performed research in a university setting, as well as at NASA Ames Research Center, before co-founding Metric Lab with long-time collaborator Bryan McClain. At Metric Lab, Demetrius enjoys innovating powerful user research methods and working on exciting projects—ranging from consumer electronics with companies like Microsoft and Kodak to modernization efforts with the U.S. Army. Demetrius is constantly thinking of new methods and tools to make user research faster, less costly, and more accurate. His training in advanced communication helps him understand and connect with users, tapping into the experience that lies beneath the surface.

Bryan McClain

President & Co-Founder at Metric Lab

Strategic UX Adviser & Head of Business Development at 30sec.io

Redwood City, California, USA

Bryan is passionate about connecting with people and understanding their experiences and perspectives. Bryan co-founded Metric Lab with Demetrius Madrigal after doing research at NASA Ames Research Center for five years. While at NASA, Bryan worked on a variety of research studies encompassing communication and human factors, interacting with hundreds of participants. As part of his background in communication research, he received extensive training in communication methods, including certification-level training in police hostage negotiation. Bryan uses his extensive training in advanced communication methods in UX research to help ensure maximum accuracy and detail in user feedback. Bryan enjoys innovating user research methods that integrate communication skills, working with such companies as eBay, Kodak, Microsoft, and BAE Systems.
