How many times have you come away from a user interview thinking That was a complete waste of time? That the person with whom you just spent an hour or more of your life revealed perhaps two interesting observations, but the interview didn’t contribute anything particularly worthwhile to your project? This probably happens at least once during every design research study: you just have an interview that, for whatever reason, is a dud.
Design research can be a costly exercise and, with small sample sizes, every minute with every person counts. Some people know exactly what they want to talk about. With others, it’s like peeling layers off an onion or cracking a safe. Rather than relying on participants’ being good talkers who know how to respond to your lines of questioning, realize that everybody has something to say, but you have to know what buttons to push to find what it is they care about and what they know that can inform your design.
Sometimes when we have a poor interview, we blame the person we’ve interviewed. That person might be a design stakeholder or current or potential customer. You might be conducting behavioral or stakeholder interviews or running usability test sessions. The interviews may have been at participants’ offices or homes, in your offices, or in the field. Regardless of the situation, you may be tempted to label a participant unengaged, inappropriate, inarticulate, or worse. But there is one constant in all of these different interview scenarios: you.
Bad interviews can result in missing data, incomplete detail, misleading results, partial insights, and lost opportunities. Your reports, presentations, and recommendations document what you’ve learned from your research and the decisions you’ve made based on it, so you need to ensure your research is the best it can be—that you get good interviews.
What a Good Interview Sounds Like
NPR, the public radio network in the United States, shared an interview it thought was so bad it was comical on its Bryant Park Project YouTube channel. The interview was with the popular Icelandic band Sigur Rós. Watch the footage in Figure 1 and consider what went wrong.
What NPR thought went wrong was Sigur Rós. In its description of this YouTube video, NPR claimed the band gave the worst interview in the history of electronic media and suggested you never invite them to an interview. But the interviewer broke nearly every rule in the book by
posing hard questions first
asking closed questions
overloading questions
asking boring questions
answering his own questions
posing questions in a random sequence
doing little follow-up on responses
using negative body language
failing to build rapport
In some cases, your research participants can feel like the proverbial stone from which you’re trying to draw blood. So what constitutes a good interview? In his book InterViews: An Introduction to Qualitative Research Interviewing, Steinar Kvale outlines some criteria for evaluating an interview:
To what extent are the participant’s answers spontaneous, rich, specific, and relevant?
Are the interviewer’s questions shorter and the participant’s answers longer? The longer, the better.
What is the degree to which the interviewer follows up and clarifies the meanings of relevant aspects of the participant’s answers?
Did the interviewer interpret the meaning of the participant’s answers throughout the interview? Ideally, this should occur to a large extent.
Did the interviewer attempt to verify his or her interpretations of the participant’s answers during the course of the interview?
Was the interview self-communicating? Did it communicate a self-contained story that requires hardly any additional description or explanation?
These criteria relate to characteristics of both the interviewer and the research participant, so let’s consider what is within your control, as an interviewer, to make an interview successful—regardless of whether your interview is with stakeholders or users.
Preparation is key to ensuring that, when you sit down with that complete stranger, you know exactly what you need to get out of the conversation.
Immersing Yourself in the Problem Space
To design appropriate research and tease out new insights, you need to understand the status quo and a project’s goals. You can think of this as validating what you think you know, recognizing what you actually know, and becoming aware of what you don’t know. This starts with fully understanding a project’s context by building domain knowledge and being crystal clear about why there is a need to do research and how the results will inform the project. However, this is not about becoming an expert in the subject matter you’re investigating. Instead, it’s about getting familiar with the boundaries or frameworks within which to explore the subject matter.
Domain knowledge includes the product, Web site, or system; the industry basics; a product’s business goals and value proposition; industry jargon; competitor offerings; the product’s past success and failures; and previous research findings. Amassing this knowledge equips you to write an appropriate interview script, gain insights during the interviews, and be nimble with your questioning.
Your immersion in a domain extends to understanding the design context. You don’t want your interviews to focus on solving design problems others have already solved. Nor should you ask questions about issues that widely reported industry research has fully explored. Instead, investigate how others have tackled the same design challenges by building functional knowledge. Become familiar with best practices and alternative approaches.
Functional knowledge includes concepts, interactions, processes, vocabulary, taxonomies, and design patterns. Securing functional knowledge helps you to ascertain where to probe further when getting feedback on a user interface, as well as recognize what is trivial and not worth discussing, because tried-and-true solutions apply.
Through your full immersion in relevant domain and functional knowledge, you can confidently develop a set of research questions that can help you find the answers to what you and the project team really don’t know, providing enormous value and, hopefully, findings that are of genuine interest.
Getting Access to the Right People
Finding people who have sufficient knowledge, experience, and interest can be difficult, especially for stakeholder interviews. You may have to think beyond the organizational chart and an organization’s usual relationships to get truly useful interviews. Staff who interact with the most departments, who come into contact with people of varying levels of seniority, and who their colleagues listen to and respect are often gold mines of information.
Once you know who you’ll be talking to, make sure they know what the interview is going to be like, so they’ll be in the right head space when you meet. If you’ve ever had a job interview or prepared a presentation that you thought would follow one format, but actually followed another, you know how disconcerting this can be. Your anxiety level immediately goes up, making it harder to communicate confidently. This is equally true for both stakeholder and user interviews. You also want to avoid situations in which a stakeholder answers every question by saying, “I’ll have to find out for you / follow up on that with you / run a report to find out.” Often such supplementary information never materializes.
You can prevent these kinds of scenarios through some clear and timely communication. Set the right expectations for the people you’ll interview by letting them know in advance what participating in your study will be like and about any priming activities you’d like them to do. Don’t leave it to a participant recruitment firm or your client to craft this message for you. Supply the copy yourself.
Finally, plan your interview schedule to ensure an even exposure to different participant types. Mix up experts, intermediates, and novices or participants from different market segments or business units, so you’ll hear a broad set of perspectives each day during your study. Ensuring this breadth is especially important if you’re reporting observations during the research or iterating designs between sessions. You don’t want to report skewed results or modify your design inappropriately as a result of participant-type bias.
Setting the Stage
Both the settings in which you conduct interviews and your presentation as an interviewer influence the way interviews proceed and the results of your research.
Ideally, select a meeting place that has some relevance to the research topic or participant. This gives a greater sense of context and makes participants more relaxed and open. The environment needs to be conducive to chatting intimately, so consider its physical layout, ambience, potential distractions, and the social warmth of the room. If you’re in a lab setting, get any mirrors, video and audio equipment, and in-room observers out of a participant’s field of vision.
You also need to craft your own presence by planning your attire. Here your aim is to make yourself credible and participants comfortable. It might be necessary to imitate your participants’ dress code. It’s also worth thinking about any accessories you’re wearing that might indicate hierarchy, status, or belief. You don’t want participants to have their defenses up in reaction to a religious or political statement.
Deciding on Your Script’s Structure and Scope
Once you’ve determined who you’re meeting with and the meeting setting, it’s time to turn your attention to what you’ll talk about.
Depending on who your research participants are, a one-size-fits-all interview script or moderator’s guide might not be appropriate. If you’re interviewing stakeholders from business units whose terminology, culture, level of interest, responsibility, or key performance indicators are different, write different versions of your script. In such a situation, you might tailor your questions to reflect their subject-matter expertise, touchpoints with design, or their relationship to a project or client. This can be especially useful if a project might be besieged by internal politics. Alternative scripts can help you uncover and tackle the long-held assumptions, fiefdoms, and decision-making power that emerge when new knowledge threatens the status quo.
In a user interview, you can use conditional and branching questions to cover a multiplicity of possible scenarios. Since you are unlikely to have much information about participants up front, ask questions early in the interview that can reveal what line of questioning you should take for the rest of the session.
Phrasing Your Questions
Once you have identified what questions are appropriate for different research participants and decided whether you need different versions of your script, it’s time to take your research questions and turn them into demonstrations, exercises, tasks, and interview questions.
You’re aiming to get a mix of responses to your questions—including facts, anecdotes, opinions, attitudes, feelings, perceptions, and values. In addition to eliciting good answers, a good question contributes to your interviews in two other ways:
It topically supports the knowledge you’re gathering.
It helps foster good interaction with participants by building on the flow of your conversation.
To get such results, you’ll need to experiment with the phrasing of your questions until each question elicits the type of response you intend. (This does not mean eliciting a specific answer, but instead ensures participants interpret your questions as you had intended.) This requires that you expand the vocabulary you use when asking questions. The most powerful questions are open-ended questions such as:
When could you see yourself doing that?
How does that make you feel?
Why were you expecting that?
Such questions help you avoid yes-or-no responses and keep you from leading participants. They can prompt participants to think about a topic afresh and stimulate more thoughtful, meaningful answers.
Bloom’s Taxonomy of the Cognitive Domain can be useful in making you aware of the impact of your question vocabulary. This taxonomy, shown in Table 1, identifies the six levels of cognitive skill that let people think about information in different ways. The lowest-order thought process is Knowledge: memorizing, remembering, and recognizing information. We tap into this kind of thinking when we ask who, what, when, where, and how questions. The highest-order domain is Evaluation: judging and resolving. Questions that ask people to select, decide, prioritize, rate, or discuss information trigger this kind of thinking process.
Table 1—Bloom’s Taxonomy of the Cognitive Domain
DOMAIN        | CRITICAL THINKING PROCESSES          | USEFUL QUESTION VERBS
Knowledge     | Remembering, Memorizing, Recognizing | Who, What, When, Where, How
Comprehension | Interpreting, Translating, Describing | Explain, Outline, Distinguish, Compare, Define
Application   | Problem solving, Applying information | Show, Use, Complete, Classify, Relate
Analysis      | Separating, Finding structure         | Contrast, Categorize, Identify, Separate, Diagram
Synthesis     | Creating, Combining                   | Create, Imagine, Design, Propose, Invent
Evaluation    | Judging, Resolving                    | Select, Decide, Prioritize, Rate, Discuss
You can consciously choose the cognitive domain within which you want participants to operate by crafting questions that use vocabulary that elicits a certain kind of thinking. This taxonomy also gives you insights into how your questions might increase a participant’s cognitive load. Observe participants’ energy level during sessions and judge at what time it is appropriate to include higher-order questions.
Steve Portigal, of Portigal Consulting, also suggests a range of question types that can help you construct a script that probes topics from different perspectives:
sequence—“Walk me through a typical day.” “Then, what do you do next?”
specific examples—“What did you make for dinner last night?”
peer, product, or activity comparison—“Do other cashiers do it this way?”
projection—“What do you think it will be like in five years’ time?”
look back—“How did it use to be?”
quantity—“How many of your customers fall into that category?”
changes over time—“How are things different now, from the way they were three years ago?”
suggestive opinion—“Some people have really negative feelings about X. What are your feelings about that?”
clarification—“And when you say X, you mean X, right?”
hypothetical—“What would you do if X happened?”
reflective—“When you say X, it seems that you’re XYZ. Tell me more about that.”
other viewpoint comparison—“What do you think younger people might think about that?”
native language—Point at object X and ask “What do you call this?”
exhaustive list—“Write down everything that comes to mind when you think of X.”
relationships, organizational structure—“Draw the different groups, indicate the size of each group, and show whether they overlap.”
naïve outsider perspective—“How would you explain this to someone who had never heard of this or done this before?”
Using a mix of question types gives variety to your questioning, keeps participants engaged, and helps you collect different types of data. Working on question types and syntaxes is important, because it increases the chance of your framing questions in ways that resonate with participants and operate within their constructs. Too often, we frame questions in ways that make sense to and seem important to design teams or ourselves. If you’d like to learn more about avoiding this kind of construct bias and understand what constructs are important to your audience, you might like to investigate Personal Construct Theory and the use of a Repertory Grid. These tools give you a solid foundation for asking salient questions.
Determining the Length of Your Questions
Review your interview questions for length, keeping in mind that short questions are easier for participants to remember and interpret. You can also assess question length in terms of the media you’ll use for your final research deliverables. If you need sound bites for a highlights video, two-part questions might result in longer answers that play well. If you’re writing a report and need some pithy verbatim quotations, use a series of shorter questions. If you’re sharing lengthy audio recordings, think about whether you’re likely to need to insert verbal encouragement between questions, with the result that your recording wouldn’t be clean.
Planning Sequence and Flow
Once you’re happy with your range of questions and their phrasing, it’s time to examine the order in which you’ll be asking them. Aim to do the following:
Ask easy, sociable, warm-up questions first to get a participant talking.
Craft a natural order for your questions, with segues and transitions between them.
Once you’ve had a chance to build rapport and trust, slot in your harder questions.
Build a sense of closure at the end of an interview, so participants leave with a sense of accomplishment and feeling good about themselves for disclosing helpful information.
Instead of simply writing “Thank participant” at the end of your script, note that you should mention how the information a participant has provided contributes to the bigger picture, acknowledge their sharing their time and valuable feedback, ask whether they have any questions, and advise them of any next steps or need for further communication. You may even want to debrief participants and ask how the interviewing experience was for them.
Getting Comfortable with a Style of Speech
Once you’ve drafted your script, you need to read it aloud—both to check for any awkward spoken language and to help gauge an interview’s duration. If there are multiple interviewers, make sure everyone can comfortably read the script out loud. Tripping over someone else’s verbiage can slow you down during an interview and break the flow of conversation. After a few language tweaks, you now have version 1.0.
Ideally, after a few readthroughs, you’ll start to remember the question sequence. This familiarity lets you follow whatever natural path a conversation takes—jumping around from topic to topic—but still get full coverage of your questions by looping back smoothly. Mastering this semi-structured approach enables you to follow up on responses with deeper probes, while avoiding unintentionally skipping questions or losing your place in the script.
Taking Notes
It’s also valuable to plan what data you’re going to record before interviewing participants. You should already know whether you’ll be taking notes yourself, whether there will be any observers, or whether someone will transcribe every word afterward. Unless transcription is in your budget—lucky you!—discuss what information is worth writing down. There’s nothing worse than entrusting someone else with taking notes, then finding out their notes are scant, vague, or inconsistent.
Contemplate the format of your research deliverable and the nature of your research questions to determine what kinds of responses to capture—for example:
verbatim quotations
just errors
just successes
both successes and errors
a scoring system for task completion
steps people take
cues for follow-up questions
references to particular areas of interest
body language
Agree on symbols, codes, and other forms of shorthand to expedite synthesis of everyone’s notes afterward.
Preparing for Success
Interviewing well is not easy, but it’s a learnable skill—and it’s vital to get it right. Lauded basketball coach Bobby Knight once said, “The will to succeed is important, but what’s more important is the will to prepare.”
Your first step should always be finessing everything that’s within your control before the interview—the problem, people, presence, phrasing, and so on. It helps to create moments that are conducive to reflection and the sharing of relevant information.
In Part II, I’ll address what you can do during an interview—in the heat of the moment—to make it a success, as well as some ways to continue honing your interview skills.
Freelance UX Design Consultant at MM Communications
Melbourne, Victoria, Australia
Mia has over 10 years of experience designing digital user experiences. She applies her passion for finding the sweet spot between user needs and business objectives, working for clients as diverse as Medibank, NAB, RMIT, eBay, Telstra, Ford, Merrill Lynch, Coles Group, EMC, and ANZ. Formerly, Mia worked for a who’s who of Australian Internet companies, including Fairfax Digital, Sensis, and SEEK, as well as the interactive agency Razorfish and the consultancy Symplicit. Her favorite part of UX projects is conducting UX research that illuminates the ideas that make the biggest impact on design.