In this edition of Ask UXmatters, our experts discuss how to bridge user research into design.
Every month in Ask UXmatters, our panel of UX experts answers our readers’ questions about a broad range of user experience matters. To get answers to your own questions about UX strategy, design, user research, or any other topic of interest to UX professionals in an upcoming edition of Ask UXmatters, please send your question to us at: [email protected].
The following experts have contributed answers to this edition of Ask UXmatters:
Carol Barnum—Director and Cofounder, Usability Center at Southern Polytechnic State University; author of Usability Testing Essentials: Ready, Set…Test!
Dana Chisnell—Principal Consultant at UsabilityWorks; coauthor of Handbook of Usability Testing
Leo Frishberg—Principal Architect, User Experience at Tektronix Inc.
Gerry Gaffney—Founder and Lead Consultant at Information & Design
Adrian Howard—Generalising Specialist in Agile/UX
Mike Hughes—User Assistance Architect at IBM Internet Security Systems; UXmatters columnist
Jordan Julien—Independent Experience Strategy Consultant
Tobias Komischke—Director of User Experience, Infragistics
Whitney Quesenbery—Principal Consultant at Whitney Interactive Design; Past-President, Usability Professionals’ Association (UPA); Fellow, Society for Technical Communication (STC); UXmatters columnist
Daniel Szuc—Principal and Cofounder of Apogee Usability Asia Ltd.
Jo Wong—Principal and Cofounder of Apogee Usability Asia Ltd.
Q: How do you bridge user research into design?—from a UXmatters reader
“I hardly know where to begin with such a big question,” exclaims Whitney. “The metaphor I used years ago is that starting a design is like reading fortune-telling cards. If you took some of the cards out of the deck, you wouldn’t get the right answer.
“In a non-UX process, we so often see business, marketing, and technical needs represented in design decisions, while the perspectives of actual users are completely absent. UX research brings those perspectives into the process, completing the deck. Design is not a mechanical process. UX design takes all the insights and skills of a good designer or design team. But what makes it UX is ensuring that design focuses on the user experience—context, activities, goals, and emotions.”
If you want to learn more about bridging user research into design, Whitney recommends that you read the works of Steve Portigal, Josh Seiden, and Indi Young. She also recommends Tomer Sharon’s upcoming book, It’s Our Research, “which stresses engaging the whole team in the research process.”
“This is a very broad question,” answers Jordan, “and the answer comes down to the type of user research and the type of design. I’ll assume we’re talking about designing a digital platform or program and will quickly explain how to use initial user research, goal-based user research, and ongoing user research.
Initial User Research—Discovery
Platform—User personas, journeys, and scenarios
Use—These personas, user journeys, and scenarios should inform what feature sets and platform requirements get defined. It’s these feature sets and specifications that define what gets designed.
Program—User insight framework
Use—This framework should focus on a single important insight and expand on how it influences behavior. This insight should be the guiding principle for all ideation sessions. Ideation, of course, should lead to the design solution.
Goal-Based User Research—Testing
Platform—Card sorts, usability testing, focus groups, and surveys
Use—This type of research is helpful in refining design options during the design process. It is useful not only for getting user feedback on different design options, but also for gathering up-front insights.
Program—Heuristic testing and informal testing
Use—Generally, programs require a more emotional response than platforms. In this case, testing should focus on documenting users’ emotional state and emotional response to different design options.
Ongoing User Research—Multivariate Testing / Goal Tracking
Platform—A/B testing, multivariate testing, and goal tracking
Use—This research can help you adapt your designs to how users are actually interacting with your platform. It’s best to plan what elements you want to test and what insights you’d like to get from the tests.
Program—A/B testing, multivariate testing, and goal tracking
Use—Often, programs don’t have a long enough lifespan to perform meaningful A/B or multivariate testing. That said, you should perform these types of tests and compare the results across multiple programs whenever possible. These learnings can help you optimize and refine designs.”—Jordan Julien
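For platforms with enough traffic, the ongoing testing Jordan describes usually reduces to comparing conversion rates between variants and checking whether the difference is real. The following is a minimal sketch, not a production analytics pipeline; the variant names and counts are hypothetical, and it uses a standard two-proportion z-test.

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare conversion counts for variants A and B.

    Returns the z-score and two-sided p-value for the null
    hypothesis that both variants convert at the same rate.
    """
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    # Two-sided p-value from the normal CDF, via the error function.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical results: variant B converted 250 of 2,000 visitors,
# variant A 200 of 2,000.
z, p = two_proportion_z_test(200, 2000, 250, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

As Jordan notes, deciding up front which elements to test and which insights you want is what makes such numbers actionable rather than merely interesting.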
“I think this is a great question,” replies Tobias. “There’s nothing worse for a UX professional than standing in front of an empty, white wall. You have all your inputs, all the contextual knowledge, and yet there are so many design alternatives! How do you start?
“Well, I just start with the first design alternative that comes to mind: think brainstorming with a pencil. Don’t constrain yourself; just sketch out as many ideas as you have. You can eliminate those that don’t make sense later. If you can show many design alternatives to your stakeholders, they can identify the good and not-so-good characteristics of each of them. Then, in the next iteration, you can try to incorporate all of the good aspects into just one or two design alternatives and take it from there.”
The Boundary Between Research and Design
“Research that doesn’t bridge into design is useless,” asserts Whitney. “The research team may have learned a lot, but unless the whole product team gets immersed in the research findings—even if they don’t become deeply engaged in the whole research process—it remains siloed knowledge.”
Leo agrees: “In my process, there is no difference between research and design. Good design begins with good research—it is all part of one continuum. When designers are on your research team, the diversity of the questions and insights you can glean from interactions with users increases substantially, leading to a very rich understanding of the problem space. When you have designers on your research team, the way they frame the questions and interpret, or code, the answers already sets the stage for actionable results.
“Because design is inherently a rapid, iterative process, designers with a sensitivity to user-centered processes not only expect to participate in the initial research, they expect to continue the process of research even as design artifacts are emerging.”
“If the challenge is to have the right inputs for the design—that is, the right outputs from research—I advise you to be conscious of the fact that your research methods pretty much determine how easy or hard it will be to start designing seamlessly,” says Tobias. “At a minimum, you need the following four inputs for design:
Data elements—the objects users refer to, present, manipulate, and act upon—for example, records, stocks, or photos
Functional elements—the operations users can execute on the data elements—for example, create, edit, or copy
A taxonomy—the basis for organizing the data elements and functional elements and relating them to each other—for example, users need to be able to delete a photo
Scenarios, use cases, or user stories—the paths through the taxonomy that users follow when accomplishing a task—for example, users first upload a photo, then map it to a customer record
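Tobias’s four inputs can be captured as lightweight data structures, which makes gaps in your research visible early. This is only an illustrative sketch, reusing his photo and customer-record examples; all class and instance names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class DataElement:            # objects users refer to and act upon
    name: str

@dataclass
class FunctionalElement:      # operations users can execute on data elements
    name: str
    operates_on: DataElement

@dataclass
class Taxonomy:               # relates data and functional elements
    relations: dict = field(default_factory=dict)

    def relate(self, data: DataElement, func: FunctionalElement) -> None:
        self.relations.setdefault(data.name, []).append(func.name)

@dataclass
class Scenario:               # a path through the taxonomy for one task
    name: str
    steps: list

photo = DataElement("photo")
record = DataElement("customer record")
upload = FunctionalElement("upload", operates_on=photo)
map_to = FunctionalElement("map to record", operates_on=photo)

taxonomy = Taxonomy()
taxonomy.relate(photo, upload)
taxonomy.relate(photo, map_to)

scenario = Scenario("attach photo",
                    steps=["upload photo", "map photo to customer record"])
print(taxonomy.relations)   # {'photo': ['upload', 'map to record']}
```

A scenario step that references a data element or operation missing from the taxonomy is a sign that your research has not yet unveiled all four inputs.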
“Your research must unveil all four of these inputs. If you look at taxonomy, it becomes clear that there’s no clear distinction between research and design. A taxonomy can be the result of research—for example, when you use card sorting to understand how the users of a Web site envision the distribution of content across Web pages. But the taxonomy can also be the result of design. For example, you might change the taxonomy that resulted from your research by consolidating the content on two pages into one page.
“Let’s assume that you have all four of these inputs,” Tobias continues. “What’s next? I strongly believe in breadth before depth—meaning that you need to design for the full functional breadth of your product before you design for functional depth. If you don’t, you may learn about the need for some crucial parts of the system too late in the game and find that your already established design framework doesn’t accommodate those parts. It’s like building a house: You want to plan out all of the floors and rooms before you worry about the pictures on the walls. Oh, you want to add another room on the second floor without changing the existing rooms on the first floor? Not a good idea.
“The taxonomy should allow you to determine what data elements and functional elements should go together on one screen—or page or window—based on the scenarios. With that knowledge, you can create a map on a whiteboard that includes all of those screens. The taxonomy and scenarios should also allow you to understand which screens should be close to others in terms of navigation—to support users’ various workflows. You can draw lines with arrows between screens to show those paths. The result might look similar to the map in Figure 1. (I had to remove the screen names. DE = data element; FE = functional element. The color coding of screens represents different user groups.)
“You should validate your map with stakeholders and users. Once this map is stable, you have the equivalent of a floor plan for a new house. The next step is to work within the rooms—your screens—to make sure that it’s clear how to get in, how to get out, and what to do within them.
“What’s the interplay between users and the product on any specific screen or within any area of a screen? To determine that, I recommend that you check whether there are already existing ways of doing that. The magic word is patterns—generic, reusable solutions to user-interface design challenges. You can buy Jenifer Tidwell’s book, Designing Interfaces, or you can use Quince, Infragistics’ free pattern explorer. All other user-centered design methods apply, including screen design best practices such as alignment, contrast, and grid-based design, as well as formative usability testing.”
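The screen map Tobias describes is, in effect, a directed graph: screens as nodes carrying their data elements (DE) and functional elements (FE), navigation paths as edges. A minimal sketch follows; the screen names, elements, and paths are hypothetical stand-ins for what your own taxonomy and scenarios would produce.

```python
# Screens as nodes, each listing its data elements (DE) and
# functional elements (FE). All names here are hypothetical.
screens = {
    "Photo Library":  {"DE": ["photo"], "FE": ["upload", "delete"]},
    "Customer View":  {"DE": ["customer record", "photo"], "FE": ["map photo"]},
    "Report Builder": {"DE": ["report"], "FE": ["create", "export"]},
}

# Directed navigation paths derived from the scenarios.
paths = {
    "Photo Library": ["Customer View"],
    "Customer View": ["Report Builder", "Photo Library"],
}

def reachable(start):
    """Screens a user can reach from `start` by following navigation paths."""
    seen, stack = set(), [start]
    while stack:
        screen = stack.pop()
        if screen not in seen:
            seen.add(screen)
            stack.extend(paths.get(screen, []))
    return seen

print(reachable("Photo Library"))
```

Checking reachability this way surfaces dead-end screens or workflows a scenario requires but the map does not support—before anyone sketches a single room of the house.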
The Problem Space
“User research findings can come in a lot of different forms. Generally, they describe a current state, but in some cases, a desired state,” offers Mike. “Findings from contextual inquiries take this form: Users act this way when they are engaged in such and such a task. Findings from interviews might look like this: Users say they want.... Survey results might take this form: x% of the users prefer y. And so on. So the most important thing is to extrapolate problem statements from these findings. Designs are solutions, so research needs to illuminate the problem. I use scenarios that start with a context paragraph that compiles the facts that I’ve gathered into a description of what’s wrong, as in the following example:
Context—Mary is the Chief Security Officer for ABC bank, a large account. She meets weekly with Frank, the Security Analyst who is assigned to that account. Although she finds many of the standard reports useful, she feels that, on the one hand, she is lacking a concise look at her operations, but also lacks very focused reports that target her specific needs. Frank tries his best to mash up data by exporting and combining them outside our product, but this is time consuming, and he doesn’t always have access to the data he knows is in our system. He has asked that we issue some of ABC’s reports as standard reports and has made a credible case that other customers would benefit from them. In one case, he got what he wanted, but it took three months for his request to get into a development cycle, three months to develop, and three months to test. By the time the report was available, it was no longer as relevant to ABC as when he originally made the request.
“Then, I write a single imperative sentence, defining the goal from a user’s perspective—for example:
Goal—Produce ad hoc or customer-specific reports—from scratch and by combining elements of existing reports—without having to go through Engineering.
“Finally, I describe the solution through a narrative that describes how using the solution might play out in a realistic example. I iterate and expand on that description with subject-matter experts and developers, and usually end up with some illustrative wireframes. So, for me, the important step is using research to understand the problem space, then letting that drive the design.”
Look at the Big Picture
“There’s a lot that might be bundled up into that question,” replies Adrian. “Are the researchers having problems getting their research results integrated into the product? Are the designers having problems understanding the research? Is the business owner pushing the design in another direction, because he doesn’t value the research? Are implementation constraints causing the design to veer away from the design direction the research indicates would meet users’ needs? Is it something else? Well, just for the fun of it, I won’t answer, It depends!
“The best bit of general advice I can give is to take a step back and look at the bigger picture. If there are problems bridging user research into design, the business doesn’t value the research. If the business doesn’t value the research, it’s not seeing its connection to the bottom line. Focusing just on the bridge between research and design won’t solve the larger problem.
“Unfortunately, the connection between user research and the bottom line is a distant and tortuous one in many organizations. Fixing that is a two-stage process:
Take ownership of the feedback. Your research doesn’t end when you hand over the conclusions to the design team. Your research ends when the product the research has shaped is in the hands of customers and you have evidence that it’s doing what you said it would.
Tighten the feedback loop. The shorter the period between doing the research and getting the feedback that the results were useful, the better. This makes it easier to demonstrate that it was the research that was significant—rather than some random change in the economy, the latest set of TV advertisements, or the phase of the moon.
“Once you have a solid, repeatable feedback loop, you can demonstrate the value of your research to the company. Once you can do that, you’ve solved your biggest problem.”
The Entire Design Team Should Take Part in Research
Dana recommends the following process for bridging research into design:
“Make sure that everyone who has any input to design decisions is part of the user research. When everyone on the team understands the experiences of the users, it’s much easier to generate good design decisions.
“Work on analyzing observations together with everyone who attended user research sessions.
“Step through a deliberate process, collaboratively looking at observations. An approach that has worked well with teams I’ve worked with has been to run a KJ (Kawakita Jiro) analysis first—also known as affinity diagramming—to identify the priority observations to work on. Once you have the top three to five things, hold a workshop with all the team observers. In the workshop:
Start by having everyone offer what they heard and what they saw relating to the first priority item. No interpreting, yet. Just put the observations out there.
Next, everyone brainstorms inferences from those observations—it’s a game of Guess the Reason. Why did those things happen? What’s the gap between what the user interface was designed to do and what the users were doing?
Then, look at the weight of the evidence. If you’re doing exploratory field research, what happened during the session to support or refute any of the inferences or reasons that you brainstormed? If you did usability testing, what performance measures support or refute the inferences from your brainstorm?
From this, the team should be able to develop theories about what to do to solve the observed issues, creating a design direction that the team can then prototype and test. And the cycle starts again.
“Teams tell me they love this process because it gets them out of the opinion wars, creates a solution collaboratively, and there’s no need for a written report.”
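The first step of Dana’s process—using KJ analysis to surface the top three to five observations—amounts to clustering notes under themes and ranking the themes by weight of evidence. The sketch below simulates that tally in code; the observation notes and theme tags are entirely hypothetical, and in practice the clustering happens with sticky notes and people, not a script.

```python
from collections import Counter

# Hypothetical observation notes, each tagged with the theme the team
# clustered it under during a KJ / affinity-diagramming session.
observations = [
    ("User scrolled past the save button twice", "findability"),
    ("User asked where their draft went", "findability"),
    ("User re-entered the same data on two screens", "redundant entry"),
    ("User hesitated at the export dialog", "findability"),
    ("User copied data between forms manually", "redundant entry"),
    ("User misread the error message", "feedback"),
]

def top_themes(notes, k=3):
    """Rank themes by how many observations cluster under them."""
    counts = Counter(theme for _, theme in notes)
    return counts.most_common(k)

print(top_themes(observations))
# [('findability', 3), ('redundant entry', 2), ('feedback', 1)]
```

The ranked themes are only the starting point; as Dana describes, the workshop then weighs each inference against the session evidence before the team commits to a design direction.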
Dan and Jo suggest sharing the following:
“Videos—Get the users to show you how they use products or services and record it.
Photos—Show photos to get a deeper understanding of users’ context—home, city, neighborhoods, friends, and family, to name a few.
Stories—Tell stories, along with the videos, about how users use products or services.
Observations and insights—As the designers watch the videos, look at the photos, and listen to the stories, get them to write down observations and insights that they can bridge into the design.
“Overall, it’s about involving designers in the research and making them own the insights. An important component of success in designing meaningful experiences is inviting teams into environments that foster a culture of collaboration, creativity, and openness, in which it’s safe to critique, there are good energies, and team thinking helps you to move beyond your product of today toward a road map of possibilities.” For further discussion, see Dan and Jo’s UXmatters article “The Design Workshop: Bringing It All Together.” Dan and Whitney also cover some of these ideas in their forthcoming book, Global UX: Design and Research in a Connected World.
“By the way,” adds Whitney, “this question also ties into making design sessions fun by
acting out or embodying research results
writing stories
using design studio and rapid sketching methods
“These and all the other design activities aim not only to make collaborative design fun—and by doing so to free up our minds to be creative—but also to bring research insights into the design process in an organic way.”
Involve Stakeholders
“Involve your stakeholders in the research,” recommends Carol. “If it’s usability testing, get buy-in from at least one key stakeholder or designer on every aspect of planning, testing, and analysis of the findings. While delivering a report about the results from a usability study documents what you have discovered, it doesn’t make the results come alive. Nothing beats seeing users in action. And if you can get your key stakeholders to help plan a study, you will forestall any questions about the test protocol that might otherwise come up later.
“I find that involving stakeholders in the process—from start to finish—means I have the chance to educate them as we move through the steps. This education process ensures that they understand and are committed to all aspects of the study—from the screener for recruiting participants to the task scenarios to the post-test questions.
“In addition to helping you create a study plan that will produce results matching the stakeholders’ goals, their involvement in the process gives them the results as soon as the study ends. Whether you meet at breaks throughout the day or at the end of each day, a quick meeting to discuss findings turns research into actions without delay. There’s no need to wait for a report. The results of a findings meeting go out the door with the designers. If the designers aren’t involved in the research, they are left waiting days or even weeks for your report, which they will then need to discuss to decide what actions to take. By then, it may be too late to make changes.”
Personas, Scenarios, and Stories
“The answer to your question depends on several factors,” responds Gerry. “I think of the process as operationalizing the user research data. It’s a horrible word, but it does capture the fact that, at this point, you are trying to make the switch from knowing to doing. Personas and scenarios, in their various formats, can be particularly useful, because they enable you to tell stories about what you’ve found and the implications for the product you’re designing.
“To get to the stories, you need to conduct some data analysis. I still find that the best way to do this, with any large set of data, is to use index cards or Post-it notes and conduct an affinity diagramming exercise—grouping related items and content. This allows you and your team to clearly identify key themes and issues, which, in turn, lets you create the personas and scenarios that will drive the design.
“Once you’ve got some solid data—and perhaps draft personas and scenarios—it’s a good idea to run design workshops, during which you can both present your findings and begin to discuss and get agreement on their implications. A very important factor is the need to share the journey with your project team—getting their buy-in will have a profound effect on both the design process and the final product.”
Whitney also recommends “using personas and stories to explore different experiences. Then, experience maps or other models like Indi Young’s mental models are a good way of getting from the messiness of specific stories to a broader—or higher—view.”
When You Are Tight on Time
“If you’re on a really tight development schedule and have got the commitment of your designers to participate in a study, your project is a candidate for the RITE method,” suggests Carol. “Developed by the usability testing team at Microsoft’s Games Studios, RITE stands for Rapid Iterative Testing and Evaluation. The process provides a way to make fast design changes as soon as you identify a problem.
“Using the RITE method, you can schedule a participant—or a few participants—then debrief to diagnose whatever problems the design team has observed. As soon as the team reaches agreement that the problem exists, it gets fixed. You can test your change with the next one or two participants. Testing and changing the design so often requires the full commitment of the design team, but the beauty of this commitment is that either the next participants immediately confirm the strength of the design changes, or you continue to make more changes and do more testing until the design is right for users.”
Dr. Janet M. Six helps companies design easier-to-use products within their financial, time, and technical constraints. For her research in information visualization, Janet was awarded the University of Texas at Dallas Jonsson School of Engineering Computer Science Dissertation of the Year Award. She was also awarded the prestigious IEEE Dallas Section 2003 Outstanding Young Engineer Award. Her work has appeared in the Journal of Graph Algorithms and Applications and the Kluwer International Series in Engineering and Computer Science. The proceedings of conferences on Graph Drawing, Information Visualization, and Algorithm Engineering and Experiments have also included the results of her research.