Chances are that, if you do user research, you conduct a fair number of user interviews. When conducting interviews, our training tells us to minimize bias by asking open-ended questions and choosing our words carefully. But consistently asking unbiased questions is a constant challenge, especially when you’re following a participant down an important line of questioning and haven’t prepared your questions ahead of time. Also, if you do a lot of interviews, you might fall into a pattern of asking the same types of questions across different studies. This might not bias participants, but you can bias yourself if you always investigate the same types of issues. Finally, are you sure you are asking the right questions? Your interview questions might be relevant to you and your project team, but are they the questions that will get at important issues from a user’s perspective?
In an effort to address some of these considerations, I’ve experimented with the Repertory Grid method—an interview technique that originated in clinical psychology and is useful in a variety of domains, including user experience design.
Personal Construct Theory
The Repertory Grid is a data extraction and analysis technique that has as its basis the Personal Construct Theory, which George Kelly developed in the 1950s. The central theme of the Personal Construct Theory is that people organize their experiences with the world into conceptual classifications that we can differentiate and describe using attributes of those classifications called constructs. Often, these constructs manifest themselves as polar opposites on a scale, so we can easily classify the elements of our world. For example, based on our experiences with people, we know that some are shy and others are outgoing. When we meet new people, we may consciously or subconsciously categorize them according to that construct.
An important element of the Personal Construct Theory is that each individual has his or her own unique set of constructs that are important to that person. Taking my example further, whether a new person is shy or outgoing might not be important to you in your categorization scheme, but it might be very important to someone else. George Kelly hypothesized that people are constantly challenging and growing their construct systems, but those systems remain unique to the individual, and the sum of each person’s experiences shapes them. In addition, the differences in people’s construct systems contribute to our different perceptions of the world and our behavior in it.
For example, when choosing a place to live, one person might organize her choices using a construct that rates locations according to how easy it is to get to work, because she’s experienced tough commutes in the past. Another person might organize his choices by climate or some other factor. According to the Personal Construct Theory, each person has his or her own unique system and prioritization of constructs, or way of construing the world.
It is this inherent difference in construct systems between people that introduces bias in research: The researcher has one set of constructs, and each participant has another. Especially in survey or structured-interview research, a researcher might ask questions he feels are important. Participants can answer those questions, but are they really the most relevant questions? For example, in evaluating the user experience of a Web site, we can ask participants whether they think the site is trustworthy and why. Each participant might be able to answer these questions, but is a trustworthy site important to that participant for that domain?
Even the most well-intentioned researcher, drafting questions that are as open-ended and unbiased as possible, still might lead some participants down an irrelevant path. Kelly developed the Repertory Grid as an interview technique that attempts to minimize the construct bias of the interviewer and systematically extract constructs for a particular domain that are important to participants. Why is it called the Repertory Grid? First, Repertory comes from the word repertoire, which refers to a participant’s repertoire of constructs. The Grid refers to the data extraction and analysis procedure researchers use to gather and compare information from a number of participants in a study.
The Repertory Grid Process
Traditionally, researchers conduct a Repertory Grid study by choosing several examples in a particular domain with which participants interact. Ideally, there will be 6–12 different examples that represent a wide variety of approaches and potential constructs. A Repertory Grid study proceeds according to the following four general steps:
Selection
Triading
Rating
Analysis
1. Selection
During each session, either the participant or the researcher chooses to work with three random examples from the initial set. (Ideally, there are multiple participants in the study, and each participant works independently, with a different set of examples.)
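For researchers who want to script this step, here is a minimal sketch in Python. The example names are hypothetical; the sketch shows both how many triads a set of six examples yields and how to draw one triad at random for a session:

```python
import random
from itertools import combinations

# Hypothetical set of six examples for the study.
examples = ["Site A", "Site B", "Site C", "Site D", "Site E", "Site F"]

# Every possible triad from the set: C(6, 3) = 20 combinations.
all_triads = list(combinations(examples, 3))
print(len(all_triads))  # 20

# For one session, draw a single random triad without replacement.
triad = random.sample(examples, 3)
print(sorted(triad))
```

With more examples, the number of possible triads grows quickly, which is why each participant works through only a handful of them.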
2. Triading
This is the core aspect of eliciting constructs without introducing bias from the researcher. The researcher asks the participant to identify how two of the three examples are different from the third. The researcher does not provide a starting point, but just asks the participant about the constructs that are important from his or her perspective. Often the constructs that are most important to the participant are surprising—and sometimes not related to the topic that the researcher intended. However, this is the key aspect of the exercise—to uncover what is important to the participant.
Once the participant identifies a construct, or how two of the examples are different from the third, the participant names its two polar opposites, identifies which is good and which is bad, then writes the contrasting poles at opposite ends of a row in the grid.
The participant continues the process of triading examples to identify additional constructs for the domain. Participants can change which two examples are alike and which are different for different constructs. The key is to elicit as many constructs as possible, without any suggestions from the researcher. The researcher can ask probing questions and ask the participant to think aloud, but suggesting dimensions for constructs introduces the bias that this method seeks to avoid.
3. Rating
After identifying and naming the contrasting poles for constructs during the triading step of this process, the participant rates all of the original examples in the study—that is, the 6–12 examples, including the three the participant used in triading—basing his or her ratings on the constructs the participant developed during triading. For each individual construct, the participant rates an example on a scale of 1 to 5, where 1 represents one end of the pole and 5 represents the other.
For example, if a participant identified a construct with the two poles organized and cluttered, the researcher would ask the participant to rate each example on a scale from 1 to 5, where 1 is organized and 5 is cluttered.
Depending on the number of examples and constructs the participant identified during the triading step, this rating process can take some time, so be sure to allow for it in your scheduling.
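One way to picture the grid at this point is as a simple matrix: one row per construct, one column per example, and a 1–5 rating in every cell. Here is a minimal sketch in Python, with hypothetical constructs and ratings:

```python
# Hypothetical data: four examples and two elicited constructs.
examples = ["Site A", "Site B", "Site C", "Site D"]

# Each key is a construct's pair of contrasting poles; each value holds
# one rating per example on a 1-5 scale (1 = first pole, 5 = second pole).
grid = {
    ("organized", "cluttered"): [1, 4, 2, 5],
    ("playful", "formal"): [3, 1, 5, 2],
}

# Sanity-check the grid: one rating per example, all on the 1-5 scale.
for poles, ratings in grid.items():
    assert len(ratings) == len(examples)
    assert all(1 <= r <= 5 for r in ratings)
print("grid OK")
```

Capturing ratings in this shape during the session makes the later analysis step straightforward, because each example ends up with a complete rating profile across all constructs.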
4. Analysis
You can analyze the results of a Repertory Grid study both qualitatively and quantitatively. Often, a qualitative analysis is enough to develop a good understanding of the constructs that are important to the target audience. By reviewing notes from the triading sessions and conducting affinity diagramming sessions to assess the various participants’ constructs and language, researchers can identify themes that can inform their decision making for the domain. In addition, to statistically identify which constructs are most relevant and most clearly distinguish the selected examples, a researcher can apply hierarchical cluster analysis or factor analysis to the participants’ ratings of the examples. The result is a dendrogram, or tree diagram, like that shown in Figure 1, which is similar to what you would get from a card sort exercise and shows
which examples are most closely associated with one another
the selected examples’ most differentiating characteristics
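As a rough illustration of the quantitative side, the sketch below runs a hierarchical cluster analysis over a small, hypothetical ratings matrix using SciPy; the resulting linkage tree is what SciPy’s `dendrogram` function would draw as the tree diagram described above. The site names and ratings are invented for the example:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Hypothetical ratings: rows = examples, columns = constructs (1-5 scale).
examples = ["Site A", "Site B", "Site C", "Site D"]
ratings = np.array([
    [1, 3, 2],  # Site A
    [1, 4, 2],  # Site B -- rated much like Site A
    [5, 1, 4],  # Site C
    [4, 1, 5],  # Site D -- rated much like Site C
])

# Agglomerative clustering over pairwise distances between examples.
# scipy.cluster.hierarchy.dendrogram(tree) would plot the tree diagram.
tree = linkage(pdist(ratings), method="average")

# Cut the tree into two clusters to see which examples group together.
clusters = fcluster(tree, t=2, criterion="maxclust")
print(dict(zip(examples, clusters)))
```

In this toy data set, Sites A and B fall into one cluster and Sites C and D into another, because participants rated them similarly across the constructs.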
Applying the Repertory Grid to User Experience Design
George Kelly was a clinical psychologist, so his application of the Repertory Grid was to help identify the constructs his patients used in interacting with those around them. In his work, the 6–12 examples for the domain were significant people in the lives of the patients. Kelly used the Repertory Grid method to help patients understand their issues with interacting with those people.
In user experience design, the subject matter is obviously different, but we can use the same process to identify the key constructs or considerations people have when interacting with systems.
In my work, I’ve applied the Repertory Grid method to user experience design in two different ways:
Getting feedback on user interface paradigms during conceptual modeling
Understanding a product’s competitors and positioning in its marketplace
Getting Feedback on User Interface Paradigms
During the conceptual modeling phase of a project, when I am developing paradigms for how a user might interact with a particular system—whether I am redesigning a particular page or an entire process—I’ve used the Repertory Grid to get feedback on which paradigm would work best.
In this case, the examples for construct elicitation are three different user interface paradigms I’ve either sketched, drawn as low-fidelity wireframes, or developed as visual comps. Generally, I ask participants to show me how they would complete particular tasks with each user interface—similar to the task scenarios participants follow during a usability study. Because participants’ interactions with the three user interfaces help them develop some familiarity with each paradigm, they can then complete a Repertory Grid exercise comparing the three.
The triading phase of this process is key. Instead of simply asking the participants which user interface they like best and hoping they have good answers when I ask them Why?, the triading process brings out the specific attributes that differentiate the user interfaces in the minds of the participants. Additionally, the triading phase is important when comparing wireframes or other prototypes, because two things generally limit the rating and factor analysis:
The number of user interface paradigms we can create
The number of user interfaces participants can interact with and be able to retain in memory during an interview session
Analyzing a Competitive Marketplace
During early phases of projects, I’ve used a more traditional application of the Repertory Grid method—with ratings and statistical analysis of constructs—to develop a strong understanding of a product’s competitors and its current positioning in a market.
For example, during the business intelligence phase of a project, when a product team is defining the business goals and objectives for a product, business stakeholders usually have their own perspectives on how a Web site or application fits in the marketplace and what customers’ perceptions are. However, their experiences and constructs are likely different from those of their customers. Therefore, what stakeholders think is important might not be important to customers at all.
By conducting a Repertory Grid study, using competitive sites or products as examples and choosing participants who are familiar with those competitive products, a researcher can develop a strong understanding of customers’ perspectives on what is important. In this application of the Repertory Grid, I’ve either asked participants to interact with the example systems to bring them back to top of mind or simply shown them images of a Web site, brand, or application to trigger their memories. Participants complete the triading process using three examples, then rate all of the examples according to the constructs they’ve developed. The resulting factor analysis helps identify the differentiating characteristics of the product domain and positive characteristics on which we should focus. Additionally, the statistical aspect of the factor analysis is another tool that can aid in the presentation of the results to stakeholders—especially if they respond positively to quantitative analysis.
Conclusion
In applying these variations of the Repertory Grid to user experience design, I’ve found the method to be fun and engaging for participants and easy for researchers. The Repertory Grid method has a number of benefits for user experience research and design evaluation. Repertory Grid studies
quickly generate a large number of attributes, or constructs, that are useful in comparing different examples
elicit differentiating attributes in the participants’ vocabulary rather than the researcher’s vocabulary
identify constructs that are important to the participants rather than the researcher
provide a structured process for eliciting feedback that is easy for participants to understand
The most significant limitations of this method concern when you can use it effectively. Triading is an effective technique you can use when you have just a few examples for comparison. However, to use the Repertory Grid to its full potential, it is best to have a larger set of examples, and participants must develop some familiarity with all of them.
Also, as with any other qualitative interviewing method, there is potential for bias from a researcher who proposes constructs or leads participants during follow-up questions. However, when applicable, the Repertory Grid method helps researchers minimize bias while developing an understanding of a particular domain from the customer’s perspective. I recommend you use this method as a component of your user-centered design toolkit.
Additional Resources
Kelly, George. The Psychology of Personal Constructs. New York: Norton, 1955.
Jankowicz, Devi. The Easy Guide to Repertory Grids. New York: Wiley, 2003.
This article introduces an interesting topic in the design of interviews, but it would be helpful to see more concrete examples of how the UX research applications are actually executed. Thanks for the ideas!
Michael, an excellent article. And clearly well worth implementing the Repertory Grid method in user research techniques. Have you used this technique on non-Web site GUIs—in particular, stand-alone, click-once applications? I would like to hear your opinion on how one could obtain examples for these types of UIs when you cannot get your hands on competitors’ UIs.
This article also reminded me of a technique I use wherein Six Sigma and user-centered design methods are used in conjunction to quantitatively measure usability performance during usability testing sessions.
Thank you for this article. It really was informative.
Jamie and Sandhya, thanks for your comments. What I like about the repertory grid technique is that, theoretically, you can apply it to anything—click-once applications, informational Web sites, transactional applications, brands, shopping experiences, and so on. The key is that you have to get participants who have interacted with the examples before the interview session. Perhaps they have experienced the different sites or used the applications in a usability test environment before answering the repertory grid questions. As long as the participants have enough experience with the different examples to compare them, you can use the repertory grid.
For example, in the early phases of a project, you might be interested in how the experiences with competitor brands compare. Recruit participants who have experience with the different brands or give them an exercise to become familiar with the examples, then use printed copies of the brand logo or a screen shot of the home page for the repertory grid interview.
If you are limited in the number of examples or have limited access to participants with the right experience, consider the triading technique for interviewing. You don’t get all the benefits of the repertory grid, but it is a constructive exercise for a different perspective on the customer experience.
The repertory grid method is gaining in popularity in UCD and related disciplines. Here are a few examples.
In the UCD domain, the repertory grid method has been used to evaluate the personality of Web sites (Hassenzahl, 2003), to elicit knowledge from experts (Crowther & Hartnett, 1996), to elicit requirements (Hudlicka, 1996; Sutcliffe, 2002), and to understand the vocabularies and concerns of different groups of users: for example, how patients, doctors, and pharmacists view medications. Hassenzahl and Trautmann (2001) used a variation of the repertory grid technique to determine how an old and a new design for a German banking Web site compared with six other public banking sites. Verlinden and Coenders (2000) used the technique to compare different pages from within a site and determine which pages needed improvement. Tan and Tung (2003) used the repertory grid method to investigate what factors Web designers consider important when developing business-to-consumer (B2C) Web sites. Read, MacFarlane, and Casey used a simple repertory grid to get feedback on the usability of various text input methods for children. Steed and McDonnell (2003) used the repertory grid method to evaluate the effectiveness of six different virtual environments. The repertory grid method was especially useful in the context of virtual systems, because its focus is holistic, in contrast to experiments in which only a few explicit variables are measured in any given study.
Michael, I too think this is an excellent article providing a clear summary of an interviewing technique that is difficult to explain.
I would like to make two comments. Firstly, it is legitimate in grid to focus the attention of the participant on what the interviewer is most interested in by appending a small number of in terms of… qualifiers to the triading question. Thus, we get: How are A and B similar and different from C in terms of…? This still leaves the participant in control, providing unbiased constructs, but focused on what you are attempting to reveal or prove.
Secondly, the interview does not have to finish at the cluster analysis stage. The analysis may reveal some constructs or examples—I call them elements—that appear to be very similar, and further triading based on those can provide further constructs or elements, thus providing more depth in the interview results. It is possible for the interview to go on indefinitely until either the participant or the interviewer is exhausted or has gained sufficient material for the purpose.
My Web site is focused on the application of grid in business and has some market research examples. This article is so good that I plan to link back from there.
Thanks for the comments. You make an excellent point about focusing the participant on the domain you are studying. If you leave the triading totally open ended, you may get perspectives that aren’t at all related to what you are looking to learn about. I try to leave it as open ended as possible to get the participant’s initial reaction, then narrow the scope as necessary.
Also, another good point about continuing the triading process. Often a participant will stop after a few elements are identified, but further probing can be revealing.
Finally, thanks for linking to your Web site with its wealth of resources regarding the repertory grid technique.
I’m still confused about how to analyze a repertory grid. Could you explain further, please? I want to analyze it using a qualitative method. Can I use content analysis? Thank you for your information.
Thanks for your question. If you don’t want to do a full quantitative cluster analysis of the results, you can still get significant value from the study with a qualitative analysis, as you suggest. To do this, I would start by making a list of all of the attributes the different participants identified, then organize the list in two ways. First, if you organize by frequency, you will find that participants identified some dimensions more frequently than others. Second, see which dimensions the different participants mentioned first. Looking at the frequency and immediacy of the attributes participants identified will tell you which ones are most salient to the designs you are studying. These should be the focus of your design efforts. For example, in a comparison of ecommerce Web sites, if you find that warm or welcoming is a key attribute, you should focus on this direction in subsequent design.
In addition to the frequency and immediacy of attributes, you can also do a traditional content analysis of the comments made by participants during the exercise. Take notes as the participants add commentary or think aloud during the triading process, and then apply a content analysis to those notes. For a simple content analysis, I like to use a spreadsheet to list points made by a participant. As I repeat the process for additional participants, if more than one mentioned a particular point, I will note it. At the end of the process, I would look for patterns or frequent themes to inform my overall findings.
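The frequency and immediacy tallies described above can be sketched in a few lines of Python. The construct lists below are hypothetical, standing in for three participants’ elicited attributes after they have been normalized to a shared vocabulary:

```python
from collections import Counter

# Hypothetical attributes elicited from three participants, listed in
# the order each participant mentioned them.
participants = [
    ["organized", "trustworthy", "welcoming"],
    ["welcoming", "organized", "fast"],
    ["welcoming", "fast", "modern"],
]

# Frequency: how often each attribute came up across all participants.
frequency = Counter(attr for p in participants for attr in p)
print(frequency.most_common(3))

# Immediacy: which attribute each participant mentioned first.
first_mentions = Counter(p[0] for p in participants)
print(first_mentions)
```

Here, welcoming is both the most frequent attribute and a common first mention, which would flag it as a salient dimension for the design work.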
Good luck and feel free to post any additional questions or comments.
Thanks for the article; very helpful. However, I am still a bit confused. What would be included in the data analysis—qualitative and quantitative—section of a proposal for research involving the repertory grid? Would the assumption testing be similar to factor analysis?
Thanks for the interesting article and nice method for interview data collection. But I have a question: can I use this method to gather interview data regarding service design steps/procedures for services or products to get constructs about these steps/procedures even if these steps/procedures are in chronological order?
This method seems to me to be interesting and very valuable for researchers. I should admit that I don’t know how to go about it. Anyone ready to assist?
Thanks for the interesting method of interview data collection. How can I use this method to conduct an interview aiming at getting participants’ lived experience?
I am a bit late to join this discussion I guess, but I need your feedback on a repertory grid analysis I am about to do very shortly. You see, I have designed eight Web sites with various changes and want to see how the Web users view these sites. I am confused about how to conduct the triading process. With eight Web sites, how many possible combinations of threes can be generated? How many will be enough? Another question is: for how long should I let my participants elicit constructs? Will too many constructs be difficult to analyze? As for the rating of the constructs, should it be a 1-5 or 1-10 scale? Please clarify.
Chief Design Officer at Mad*Pow Media Solutions LLC
Adjunct Professor at Bentley University
Boston, Massachusetts, USA
As Chief Design Officer at Mad*Pow, Mike brings deep expertise in user experience research, usability, and design to Mad*Pow clients, providing tremendous customer value. Prior to joining Mad*Pow, Mike served as Usability Project Manager for Staples, Inc., in Framingham, Massachusetts. He led their design projects for customer-facing materials, including e-commerce and Web sites, marketing communications, and print materials. Previously, Mike worked at the Bentley College Design and Usability Center as a Usability Research Consultant. He was responsible for planning, executing, and analyzing the user experience for corporate clients. At Avitage, he served as the lead designer and developer for an online Webcast application. Mike received an M.S. in Human Factors in Information Design from Bentley College McCallum Graduate School of Business in Waltham, Massachusetts, and has more than 13 years of usability experience.