As UX professionals, we’re all familiar with the need to test user experience designs. Testing content, however, might be a different story. Most companies haven’t given testing content the attention it deserves—partly because it’s challenging. One challenge is that time and budget usually do not allow us to test every single piece of content. Another challenge is that gathering too much unfocused feedback can freeze our projects in analysis paralysis. To meet these challenges, try testing your content concepts—and start testing them early in your projects.
I have found surprisingly little advice about testing content that is integral to, rather than supportive of, the user experience. Also scarce is advice about testing content for more than usability. A good starting point for understanding the need to test content is a blog post by Ginny Redish, “Usability Testing: Be Sure to Test Content as Well as Navigation.” According to Redish:
“Too many usability tests focus only on finding information—not on how the information itself works for people.”
This column explains the value of testing content with real people and offers tips on evaluating content concepts.
The Value of Testing Content
Similar to testing UX designs, testing content at important points throughout a project lets you get feedback from people and ensure your content works. If your content isn’t working, you can adjust the content and test again until it does. By testing throughout your project, you save yourself the pain, frustration, and cost of reaching the end of a project only to find the content does not work for people.
Beyond Usability
For content, usability is whether someone can find, read, and understand the content. To accomplish our business goals, we often need content to be more than findable, readable, and understandable. Content might need to engage, influence, or support decisions.
Balancing What with Why
Perhaps the greatest value of testing content is that it lets you gather qualitative data to complement your quantitative data, such as analytics. It is difficult, if not impossible, to measure the effectiveness of content using quantitative data alone. [1] If you test content throughout a project, you build a more complete understanding of whether it’s achieving your goals. While analytics can tell you plenty about what is happening with your content, qualitative testing helps you understand why it is happening. For example, analytics can tell you that a certain page has a high bounce rate. Interviews with people can tell you that the content doesn’t cover the topics they need, that its tone is lifeless, or that the images look unprofessional, giving you insights that can help you fix the problems.
Testing content with people throughout a project lets you triangulate qualitative feedback with expert review and quantitative data—from analytics, call metrics, or multivariate testing. Where you see similarities, patterns, or connections between all three, you’ll find the strongest insights. Triangulation is a widely recognized research principle in the social sciences, where strict, empirical tests that follow the scientific method—complete with a control group, variable group, and statistically significant results—are often not possible. Instead of trying to answer a question through one scientific experiment, triangulation answers the question through the combined results of several methods. Figure 1 shows how triangulation can apply to content.
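To make triangulation concrete, here is a minimal sketch, in Python, of combining findings from the three sources. The pages, findings, and decision logic are hypothetical illustrations, not data from any actual study.

```python
# Toy triangulation: flag content where analytics, interviews, and
# expert review all point to a problem. (All data here is hypothetical.)
analytics = {"sign-up page": "high bounce rate", "FAQ": None}
interviews = {"sign-up page": "tone feels lifeless", "FAQ": None}
expert_review = {"sign-up page": "missing key topics", "FAQ": "minor style issues"}

for page in analytics:
    findings = [source[page] for source in (analytics, interviews, expert_review)]
    if all(findings):
        # All three sources agree: the strongest kind of insight.
        print(f"{page}: triangulated problem -> {findings}")
    elif any(findings):
        # Only some sources flag a problem: gather more data first.
        print(f"{page}: weak signal -> {[f for f in findings if f]}")
```

The pattern is simple: act with confidence where all three sources agree, and treat single-source signals as prompts for more research.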
What Is a Content Concept?
A content concept is a mockup or draft of your content. I find testing at three different levels of fidelity useful, depending on what qualities of the content I want to test. The higher the fidelity, the more variables you can test; however, it can be harder to identify which variable is causing your results. (See “How Do I Test Content Concepts?” later in this column.)
Table 1—Content concept fidelities

Content only: content independent of the design and, possibly, formatting. Most helpful for testing:
- tone and style
- comprehension
- usefulness
- some readability
- some organization
- some engagement and influence

Content in a low-fidelity design: content in rough wireframes or prototypes, including basic formatting. Most helpful for testing:
- tone and style
- comprehension
- usefulness
- some readability
- organization
- navigation
- some interaction
- some engagement and influence

Content in a high-fidelity design: content in polished wireframes or prototypes, including polished page formatting. Most helpful for testing:
- tone and style, integrated with the overall page design
- comprehension
- usefulness
- readability
- organization
- navigation
- interaction
- engagement and influence
What Content Should I Test?
You probably can’t test every single piece of content that might be a part of your project, so you’ll need to test a sample. I find the best approaches to sampling content are critical case sampling and common case sampling.
Critical Case Sampling
Select the instances of content that are most important to your project goals. For instance, if your project involves convincing people to sign up for something, you should test the content for the sign-up module or landing page.
Common Case Sampling
Choose the types of content that make up the majority of the content for your Web site or interactive user experience. For example, if the goal of your Web site is to provide credible healthcare information to improve people’s healthcare decisions, test the content for a typical healthcare topic.
When Should I Test Content Concepts?
Test content concepts at the beginning of your project. Rachel Lovinger, Content Strategy Lead at Razorfish, offers a helpful visual, shown in Figure 2, of how content strategy evolves throughout the project process. [2] Concept is part of the early discovery phase.
I find testing a concept works best once you have a clear understanding of your project goals and have gained enough insights from other data sources—such as analytics, call metrics, best practices, content analysis, market research, or competitive analysis—to form a reasonable concept.
How Do I Test Content Concepts?
The specific approach you should take to testing concepts might vary, depending on your project, but I find these basic steps always apply. If you have solid experience in testing designs, you simply need to make a few adjustments to focus your testing on content. However, if you have not conducted such testing before, I recommend that you read a book, attend a conference, or take a class about usability or similar types of testing.
Identify your testing goals.
The overall purpose of testing a content concept is to figure out what content strategy you should follow throughout your project. Consequently, make sure your specific testing goals align with the project goals. For instance, if the goal of your project is to help people make better health decisions as they travel, make sure your test covers related content and explores whether it influences people’s decisions.
Choose critical or common content cases to test.
When choosing the samples of content you’ll test, use the sampling method that makes the most sense for your project and testing goals. (See “What Content Should I Test?”)
Create concepts that explore the right qualities.
Whether your team is crafting the content from scratch or adapting it from existing source content, you need to ensure the concepts of the sample content reflect the qualities you want to test. For example, if you want to test the tone and style of the content, ensure the content concept actually has a distinct tone and style.
For comparison, you could test more than one concept, with different tones and styles. I like using comparison to test less tangible qualities, because it elicits more and better-quality feedback from people. [3] Usually, however, testing two or more concepts does not result in a winner, so do not position a comparison test as a contest when communicating with your stakeholders, client, or boss. Rather, testing multiple concepts reveals the best and worst of each concept. Most likely, you’ll try to combine the best—while avoiding the worst—into your final content approach.
Plan the right method to test the content qualities.
Usually, you’ll ask people to complete some tasks and participate in an interview. Table 2 offers a breakdown of what methods are best for testing what quality.
Table 2—Methods for testing content qualities

Task completion: observe and, optionally, ask participants to think aloud.
Content qualities to test:
- comprehension
- usefulness
- readability
- organization
- navigation
- interaction
- some tone and style
- some engagement and influence
Tips:
- Ensure task scenarios do not lead participants. For example, do not use exactly the same wording as in the content concept.
- Focus observations of tasks on how people work with the content.
- Consider the full content experience—for example, moving from an email or social media message to a Web site.

Interview
Content qualities to test:
- tone and style
- comprehension
- usefulness
- engagement and influence
- more of the reasons why behind qualities you’ve discovered through tasks
Tips:
- Ensure interview questions do not lead participants.
- Focus discussion on the content.
- For engagement and influence, don’t press too hard on why.
Regarding other logistics, you can conduct test sessions in person or remotely. I prefer testing with eight or nine people to help ensure that patterns emerge. For a good, detailed resource about testing logistics, see Handbook of Usability Testing, by Jeffrey Rubin and Dana Chisnell.
Measure the appropriate results.
Evaluating the test results in the right way is important. Table 3 offers some basics, and Handbook of Usability Testing offers helpful insights as well. (The sketch after the table shows one way you might tally such measurements.)
Table 3—Basic measurements for testing content

Task completion:
- success finding and interacting with content
- success reading content
- success understanding content
- success remembering key messages or facts

Interview:
- preference for and opinion about content tone and style
- opinion about the usefulness of the content
- opinion about the effectiveness of other content qualities
- opinion about and rating of the brand, topic, product, or decision, both before and after seeing the content
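As a rough illustration of tallying the task-completion measurements above, here is a short Python sketch. The participant records and measure names are invented for the example.

```python
# Tally basic task-completion measurements across test participants.
# Each record notes whether a participant succeeded on each measure.
# (Participant data and measure names are invented for illustration.)
sessions = [
    {"find": True, "read": True, "understand": False, "remember": False},
    {"find": True, "read": False, "understand": True, "remember": True},
    {"find": False, "read": True, "understand": True, "remember": False},
]

for measure in ("find", "read", "understand", "remember"):
    successes = sum(session[measure] for session in sessions)
    rate = successes / len(sessions)
    print(f"{measure}: {successes}/{len(sessions)} succeeded ({rate:.0%})")
```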
A Special Challenge: Measuring Influence and Engagement
We need to figure out the right way to assess intangible qualities such as engagement and influence. The difficulty is that people cannot necessarily articulate why something does or does not have these qualities. What’s more, if we press people to explain why something does or does not engage or influence them, they are likely to rationalize—and perhaps even talk themselves into a response that differs completely from their real reaction. [3]
Some have used measurements from the gaming industry to help assess engagement. My concern about using such measurements is that games are designed to be uniquely immersive, making someone feel lost in, or completely absorbed by, an experience. Most content-focused experiences strive for engagement, not complete immersion.
Demetrius Madrigal and Bryan McClain’s UXmatters article, “Testing the User Experience: Consumer Emotions and Brand Success,” explores subjective and objective measures for emotion—a close cousin of influence and engagement. The objective measures involve highly specialized techniques such as face reading. In a similar vein, some marketers actually scan people’s brains to assess their emotional reactions and preferences for products or marketing materials. [4] While I find these approaches fascinating, they are currently impractical for most user experience projects.
So, what approach to assessing content’s influence and engagement is practical now? We should capture people’s first reactions to content. Then, through some expertly moderated discussions with participants, we can try to understand their reactions, ensuring we don’t press too hard and, thus, end up with misleading rationalizations. One way of avoiding such rationalizations is to discuss two different concepts in comparison. Finally, we need to understand whether content affects people’s decisions or perspectives. For example, an opening questionnaire could ask participants about what their decision might typically be in a particular case and why. A closing questionnaire could ask them whether they would now make a decision that differs from their typical decision and to rate how well the content informed their decision.
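Here is a minimal Python sketch of that pre/post questionnaire idea, assuming a hypothetical 7-point likelihood scale and invented ratings for eight participants.

```python
# Compare pre- and post-test ratings of a typical decision, on a
# hypothetical 7-point scale (1 = very unlikely, 7 = very likely).
pre_ratings = [3, 4, 2, 5, 3, 4, 3, 2]    # before seeing the content
post_ratings = [5, 4, 4, 6, 5, 5, 4, 3]   # after seeing the content

shifts = [post - pre for pre, post in zip(pre_ratings, post_ratings)]
mean_shift = sum(shifts) / len(shifts)
changed = sum(1 for shift in shifts if shift != 0)

print(f"Mean shift in decision rating: {mean_shift:+.2f}")
print(f"Participants whose rating changed: {changed} of {len(shifts)}")
```

A positive mean shift would suggest the content influenced decisions in the intended direction; pairing the numbers with participants’ stated reasons keeps the why attached to the what.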
Conclusion
The UX community has amassed extensive knowledge about testing interactive designs. Applying that knowledge to content has exciting potential to help companies and organizations answer the question Does our content work for people? One valuable approach is to test content concepts early in a project lifecycle, enabling you to form a clear content direction.
Testing how content influences or engages people is both a tough challenge and a big opportunity. We need practical yet reliable approaches to help us understand whether content has an impact on people’s perceptions, decisions, and more. I am personally exploring ways of addressing this need, and I look forward to the rest of the UX community doing so, too. When I think about our accomplishments in the realm of testing, I’m confident our collective brainpower can meet the challenge.
References
[1] Halvorson, Kristina. Content Strategy for the Web. Berkeley, CA: New Riders, 2009.
[2] Lovinger, Rachel. “Content Gone Wild.” MIMA Summit, October 5, 2009. Retrieved from SlideShare on December 7, 2009.
[3] Lehrer, Jonah. How We Decide. Boston: Houghton Mifflin, 2009.
I didn’t understand your point. Even if you have really big, enterprise-level content, that doesn’t mean you won’t audit and analyze that content. It requires a lot of time, true, but it has to be done. Audits and analyses are the core of building a solid strategy for business and user objectives.
Good reading, and I appreciate the focus on the importance of content in the user experience with a product.
During user research, it would be interesting to ask people to recall any words that stood out to them after they completed a task and to log the action words they clicked while working toward their goal.
Another approach might be to get users to walk through how they would expect to complete a task and listen to the words they use prior to using the product, to see how it matches their view of the world.
Fmachs—This column did not focus on audit and analysis. I 100% agree with you that those activities are critical to content strategy. See my previous column, Content Analysis: A Practical Approach.
Daniel—Thanks for sharing your wise suggestions, especially useful for testing a task-focused experience.
(See, when UX professionals think about testing content, good things will happen!)
My first experience with user testing was doing think-aloud protocols to evaluate content. It would be great for the UX community to hear from more technical writers, because I know they were testing content with readers before we started calling them users! Colleen, this is a fantastic piece, and I hope more content strategists and UX professionals add content testing to their bag of tricks. Linda Flower and Karen Schriver both have written on testing content with readers and would be a good place to start for more information.
Colleen Jones—Thanks for the reply, and thanks for pointing me to your other article. Sorry for not replying back earlier. I thought I’d get a mail back when someone commented here.
About my previous comment: I thought you were talking about skipping auditing the entire content in favor of small samples of it.
So I read again. I guess I understood your point, but I don’t agree.
“Sampling the content” sounds like a “content persona.” A persona is a useful conceptual framework, until you have real people to test your hypotheses with.
Probably you didn’t want to sound like that, but that’s the way it sounded to me.
So even if we are dealing with big content, we can’t skip auditing and analyzing all of it. (A persona isn’t a real person; a sample isn’t the content.) If that’s too much, show the project-management triangle to your client (cost × time × scope) and inform him that, given the content’s size, his 3-day project needs to be a 3-month project.
Well, sorry if I am being rude or persistent: the first is my poor English; the second is constitutional.
Karen McGrane—Thanks for the reference. I found it.
Thank you, Karen, and excellent points! One of my first introductions to usability testing was Karen Schriver’s book Dynamics in Document Design. I still love that book and keep it handy.
Great points. Testing content separate from visual and audio design is very important in the design of learning games and apps for kids. It helps assess age-appropriateness and successful learning design for the product. Pre- and post-assessment of knowledge is critical here. Calling out participants’ use of keywords, and their frequency, is a great idea and can be displayed in reports via wordles! Thank you for this article.
A pioneer of content strategy, Colleen is author of The Content Advantage: The Science of Succeeding at Digital Business Through Effective Content and founder of Content Science, an end-to-end content company that turns content insights into impact. She has advised and trained hundreds of leading brands and organizations to help them close the content gap in achieving their digital transformation. A passionate entrepreneur, Colleen has led Content Science in developing the content-intelligence software ContentWRX, publishing the online magazine Content Science Review, and offering certifications through its online Content Science Academy. She has earned recognition as a top instructor on LinkedIn Learning and as a Content Change Agent by the Society for Technical Communication’s Intercom magazine. She is also one of the Top 50 Most Influential Women in Content Marketing and one of the Top 50 Most Influential Content Strategists. Colleen holds a B.A. in English and Technical Writing and an M.A. in Technical Communication from James Madison University.