A common activity at the outset of many design projects is a competitive review. As a designer, when you encounter a design problem, it’s a natural instinct to try to understand what others are doing to solve the same or similar problems. However, like other design-related activities, if you start a competitive review without a clear purpose and strategy for the activity, doing the review may not be productive. One risk is that you may find you’ve wasted your time reviewing and auditing other sites, because you end up with findings that don’t help you design your own solution. Another risk is that the design and interactions of competitor offerings might influence your solution too heavily, whether you intend them to or not. Once you’ve seen how others have solved a particular problem, their solutions may subconsciously affect your own thinking.
But while competitive reviews pose some risks, I contend that doing them is still valuable. Designing without first understanding what others are doing in the same competitive space means you’ll miss out on an opportunity to leverage others’ experience, and you might not be cognizant of possible threats to your strategy. To differentiate your Web sites and applications in the marketplace, you must be aware of what others are doing. The keys to a successful competitive review are having a clear objective and minimizing the risk of bias in your own designs. In this column, I’ll discuss a structured approach to competitive reviews I’ve used successfully to help my team understand the competition. This approach focuses on identifying opportunities for differentiation.
Other Approaches to Competitive Reviews
Before outlining my new approach to competitive reviews, let’s look at some other approaches I’ve tried. To appreciate the process and the value my approach brings, it’s important to understand what it is not.
Usability Analysis
As a designer with a background in usability and human factors, my initial inclination when doing competitive reviews was to perform a usability analysis of competitor sites. Such a usability analysis could take many forms—for example, doing usability testing with representative users or a heuristic analysis of competitor offerings with one or more reviewers. The key objective of these types of analysis methods is to identify the usability problems of a given design. I was familiar with these techniques, and it seemed natural to apply them to a review of competitor sites.
When analyzing competitor sites, I found that identifying usability problems in their designs was encouraging. Seeing all of their problems gave me confidence I could design a better solution that would avoid those problems. It seemed logical that, if I understood the usability issues in other designs, I could avoid them in my own. But this approach had a couple of limitations. First, my designs were inherently different from those of competitors. My designs had their own unique sets of usability considerations, and the issues I found in competitors’ designs were not necessarily relevant to my own. Second, and more important, simply identifying usability problems in other designs did little to help me understand unique opportunities for differentiation.
Feature Audits
Instead of focusing my competitive review efforts on usability evaluations of competitor sites, I’ve also tried a functionality or feature audit. With this approach, my goal was not necessarily to find usability problems or rate the user experience of other sites. Instead, the idea was to document features, functionality, and activities on competitor sites. With a list of the different functionality available in the marketplace, I could evaluate which features might be appropriate for my design. This process did not always result in a formal report. Often, I could do this exercise informally, in a workshop setting or brainstorming with colleagues. In each case, the idea was the same: Review what other companies are doing to solve a particular problem and leverage any insights to inform my design.
This is a natural process, and I found it valuable to a certain extent. However, while it is helpful to understand what others are doing, I often found this approach limited me or my team to the solutions competitors had already designed. Seeing other solutions would bias my thinking and limit my creativity in envisioning unique solutions, no matter how much I tried to separate myself from them.
A Different Approach
Based on the lessons I’ve learned from the usability-analysis and feature-audit approaches to competitive reviews, I decided to try a different approach. My goal was to develop enough understanding of competitors’ offerings to recognize opportunities to set my designs apart, yet minimize the influence of competitors’ designs on my thinking. My new approach follows these steps:
Identify the salient dimensions that distinguish competitors in a competitive space.
Ideally, a competitive review is just one of a variety of user research and strategy activities a team performs during the business intelligence phase of a design project—for example, contextual inquiries, focus groups, or stakeholder interviews. During these activities, I solicit feedback on the dimensions that best define a particular competitive space. It should be possible to express these dimensions on a scale, but that scale does not necessarily have to have positive and negative polarities. For example, tone of voice might be a dimension that differentiates two competitor Web sites. One could be friendly, while the other might be business-like. Neither is inherently good or bad, but they differ along a scale.
If there is no budget or time for user or stakeholder research, I develop my own set of dimensions that I feel define the competitive landscape. But hopefully, I’ll have the opportunity to conduct some kind of research. By doing so, I can avoid making assumptions about the set of dimensions that defines a competitive space. Instead of biasing participants by presenting a set of attributes for them to comment on, I solicit participants’ perspectives on the dimensions that are important to them. The repertory grid interview technique is an excellent mechanism for doing this. (For information about the repertory grid technique, see my article on UXmatters, “The Repertory Grid: Eliciting User Experience Comparisons in the Customer’s Voice.”)
This step is the most important part of the approach to competitive reviews I’m outlining here. When trying to understand the dimensions that could distinguish a competitive space, I consider a wide range of attributes, including the types of activities a site supports, approaches to tasks, design perspectives, content style, design sense, usability, audience targeting, and navigational models.
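To keep track of these dimensions as I gather them, a lightweight, structured record is enough. Here is a minimal sketch in Python, purely illustrative and not part of the method itself, of one way to capture each dimension along with labels for the two ends of its scale. All of the dimension names and pole labels are hypothetical examples.

```python
# A minimal, hypothetical representation of differentiating dimensions.
# Each dimension is a scale with two labeled ends; neither end is
# inherently good or bad.
from dataclasses import dataclass

@dataclass
class Dimension:
    name: str        # for example, "Tone of voice"
    left_pole: str   # one end of the scale, such as "Business-like"
    right_pole: str  # the opposite end, such as "Friendly"

dimensions = [
    Dimension("Tone of voice", "Business-like", "Friendly"),
    Dimension("Customer community", "Non-existent", "Active"),
    Dimension("Content depth", "Overview only", "In-depth"),
]

for d in dimensions:
    print(f"{d.name}: {d.left_pole} <-> {d.right_pole}")
```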
Plot the defining dimensions in a diagram.
After identifying the dimensions that can distinguish competitors in a given domain, I plot these dimensions in a diagram and, on each axis, label the opposite ends of each spectrum, or scale. As I noted earlier, there does not have to be a good end and a bad end of each spectrum. The important thing is that each dimension be relevant. If possible, I position related dimensions near each other in the diagram.
The example shown in Figure 1 illustrates a competitive differentiator base diagram for a domain-specific content site. Notice that each axis in the diagram correlates to a relevant dimension and each axis is labeled with the scale for that dimension. For example, I measured the Customer Community dimension of the site on a scale from Active to Passive to Non-existent.
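If you prefer to generate the base diagram programmatically rather than drawing it by hand, the following is a minimal sketch using Python and matplotlib. The article doesn’t prescribe any particular tool, and the dimension labels here are hypothetical placeholders.

```python
# A minimal sketch of a base diagram rendered as a radar chart.
# The dimension labels are hypothetical; real axes would come
# from your own research.
import numpy as np
import matplotlib.pyplot as plt

dimensions = ["Tone of voice", "Customer community", "Content depth",
              "Audience targeting", "Navigation model"]
angles = np.linspace(0, 2 * np.pi, len(dimensions), endpoint=False)

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.set_xticks(angles)
ax.set_xticklabels(dimensions)
ax.set_yticks([])  # the ends of each scale are labeled per axis, not numerically
ax.set_title("Competitive differentiator base diagram")
plt.show()
```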
Score the selected competitor sites along the various dimensions and plot them visually.
Once I’ve identified the attributes I will examine and plotted them on a base diagram, I review each competitor and estimate where it falls on each axis. These estimations, or ratings, are subjective, so if possible, I try to get others to contribute ratings. I repeat this exercise for all of the competitors and for any existing version of the design I’m working on, so I can see how it compares to the others.
As I score each competitor, I plot my estimations on the base diagram, then complete the diagram by connecting the plotted points into a shape that resembles a spider web, as shown in Figure 2. This visual aspect of the diagram is important, because it lets me make quick comparisons between competitors during the final step of my analysis.
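Continuing the hypothetical matplotlib sketch above, this is one way you might overlay a single competitor’s subjective scores on the base diagram to produce the spider-web shape. The ratings are invented for illustration.

```python
# A minimal sketch of plotting one competitor's scores as a spider web.
# The dimensions and the 1-5 ratings below are hypothetical.
import numpy as np
import matplotlib.pyplot as plt

dimensions = ["Tone of voice", "Customer community", "Content depth",
              "Audience targeting", "Navigation model"]
scores = [4, 2, 5, 3, 4]  # hypothetical ratings for one competitor

angles = np.linspace(0, 2 * np.pi, len(dimensions), endpoint=False)
# Close the polygon by repeating the first point.
angles_closed = np.concatenate([angles, angles[:1]])
scores_closed = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles_closed, scores_closed, marker="o")
ax.fill(angles_closed, scores_closed, alpha=0.25)
ax.set_xticks(angles)
ax.set_xticklabels(dimensions)
plt.show()
```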
Compare the diagrams for different competitor solutions and identify the gaps.
Once my subjective analysis of all relevant competitors is complete, I compare all of my diagrams, looking for trends and opportunities to design something different. In some cases, there will be a particular aspect of the competitive space in which there are no competitors. These gaps provide obvious opportunities that I can evaluate for my design. In other cases, the comparison may be more complex. Together, the competitors may have addressed most of the dimensions. However, there may be certain combinations or patterns that are lacking in the competitive space. The visual nature of the diagrams helps me understand the competitive threats and opportunities and develop a strategy that identifies the opportunities on which I want to focus in my design.
Figure 3 shows my subjective ratings of four different competitors across a number of dimensions. A quick visual scan of their different scores reveals one dimension—the vertical axis—in which all of the competitors are on the same end of the spectrum. Envisioning a solution that corresponds to the other end of that spectrum may present an opportunity to differentiate my solution.
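The visual comparison is the heart of this step, but you can also scan the ratings data for the simplest kind of gap: a dimension on which every competitor clusters at the same end of the scale. Here is a minimal sketch of that scan; the competitors, scores, and thresholds are entirely hypothetical.

```python
# A minimal sketch of scanning subjective 1-5 ratings for gaps:
# dimensions where every competitor sits at the same end of the scale.
ratings = {
    "Competitor A": {"Tone of voice": 4, "Customer community": 1, "Content depth": 5},
    "Competitor B": {"Tone of voice": 5, "Customer community": 2, "Content depth": 3},
    "Competitor C": {"Tone of voice": 4, "Customer community": 1, "Content depth": 2},
}

LOW, HIGH = 2, 4  # hypothetical thresholds for "one end of the spectrum"

dimensions = next(iter(ratings.values())).keys()
for dim in dimensions:
    scores = [r[dim] for r in ratings.values()]
    if all(s <= LOW for s in scores):
        print(f"Gap: every competitor scores low on {dim!r}; the high end is open.")
    elif all(s >= HIGH for s in scores):
        print(f"Gap: every competitor scores high on {dim!r}; the low end is open.")
```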
Conclusion
At the outset of any design project, one goal of business intelligence activities is to develop a focus for your design and provide direction for your ideas. User research is one component of business intelligence that can help you develop a sense of empathy for the users for whom you’re designing a solution. However, a design will not be successful if it does not also meet business goals. One business goal is to differentiate your offering within the competitive landscape. I’ve found the approach to doing competitive reviews I’ve outlined in this column to be very helpful in achieving that goal. This method helps me see opportunities for new activities, new approaches to tasks, alternative presentation styles, and features that can differentiate my design. I recommend it as one component of a design toolkit.
Great article! The method is very useful and practicable.
The only thing I worry about is whether it effectively mitigates the problem of getting primed by the designs of the analyzed systems. With me, once I see another design, it influences me so much that I just cannot start from scratch.
So it may be an interesting variant to first draw some quick-and-dirty design sketches and only then carry out the competitive reviews.
@Tobias: Contact with competitors could certainly ground you in pre-existing solutions, but it could also be an opportunity to help evidence a decision to differentiate a design proposal. Tricky.
@All: This article inspired me to extend Ahava Leibtag’s “Creating Valuable Content Checklist” into a competitor landscape analysis template for Web content—radar charts and all. =)
Chief Design Officer at Mad*Pow Media Solutions LLC
Adjunct Professor at Bentley University
Boston, Massachusetts, USA
As Chief Design Officer at Mad*Pow, Mike brings deep expertise in user experience research, usability, and design to Mad*Pow clients, providing tremendous customer value. Prior to joining Mad*Pow, Mike served as Usability Project Manager for Staples, Inc., in Framingham, Massachusetts, where he led design projects for customer-facing materials, including e-commerce and Web sites, marketing communications, and print materials. Previously, Mike worked at the Bentley College Design and Usability Center as a Usability Research Consultant, responsible for planning, executing, and analyzing user experience research for corporate clients. At Avitage, he served as the lead designer and developer for an online Webcast application. Mike received an M.S. in Human Factors in Information Design from the Bentley College McCallum Graduate School of Business in Waltham, Massachusetts, and has more than 13 years of usability experience.