
How to Navigate UX Research for GenAI

November 4, 2024

UX researchers now have the important job of ensuring that generative artificial-intelligence (GenAI) products deliver real value to actual people. This involves addressing unique challenges that are inherent to the current state of GenAI technology—challenges such as managing unpredictability, inaccurate outputs, and new interactions—ultimately ensuring that this technology is worthy of human trust. Given this reality, how can UX researchers successfully navigate this new frontier?

Based on our experience leading UX research for GenAI products across a variety of personas and use cases, we’ve identified a few things that you can do to become better prepared to conduct or lead UX research for GenAI products. Remember, you represent the human user and might be the only voice representing the user in creating a completely new experience!


Expanding Your AI Expertise

The first step in preparing to conduct UX research for GenAI products is growing your overall AI expertise—especially your understanding of unique considerations of GenAI that impact the user experience. We’ll share some GenAI resources at the end of this article, but first, let’s consider why you need to expand your knowledge about AI and, specifically, GenAI. Having a basic understanding of and vocabulary for discussing the new GenAI experiences for which you’re conducting research lets you better recognize their differences and engage in meaningful conversations with your cross-functional product team.

Since you are already an expert in UX research, we highly recommend that you start by researching the differences between GenAI and classical AI and how GenAI and large language models (LLMs) work, including the data and training for models of different sizes. Then experience and compare some GenAI user interfaces yourself. Keep in mind that there are distinct costs to the use of GenAI—both environmental and financial, in terms of calls and tokens—so an awareness of those costs should inform your ability to assess whether and when to use GenAI as well.

Learning About a Product or Feature’s GenAI Experience

As we discussed in the first article in this series, “Reframing UX Research for GenAI,” GenAI presents unique considerations that are a departure from traditional, deterministic software and user experiences. Therefore, you need to have basic fluency in topics relating to GenAI technology—particularly regarding the type of product or experience for which you are conducting research—and how it might present to your target persona.

Now, let’s consider some topics and example questions that you’ll need to answer relating to each topic—at least at a high level:

  • Model and training:
    • Is this an LLM? Is it open source? Has your team fine-tuned it? On what data was it trained? Have you considered biases in the training data?
    • What GenAI capability are you using for the feature or product—for example, summarization or Q&A?
    • Is the algorithm using retrieval-augmented generation (RAG) to optimize the output of the large language model by referring to an authoritative knowledge base outside its training-data sources before generating a response?
    • Do users need to input prompts? Is prompt engineering necessary?
  • General performance of the model:
    • How accurate is the model? Specifically, what is the risk of hallucinations for the model—that is, the generation of inaccurate, made-up, missing, or misinterpreted information?
    • What is the model’s latency—that is, how long the model takes to return a response?
    • How often does the model provide an answer? What happens if it cannot determine an answer?
    • Are there certain parameters in place that you should convey to users?
  • Output variance:
    • Given users’ increased awareness of and experience using consumer GenAI products—for example, ChatGPT, Gemini, or Perplexity—how does your experience fit users’ expectations?
    • Does the user have control? Can the user modify the tone? Length? Can the user select and verify sources?
  • Onboarding and guidance:
    • How do you introduce users to the experience? How do you teach them the boundaries and limits of the experience?
    • Does the product provide examples or templates?
    • Does the product offer follow-up recommendations?
  • User data privacy:
    • Does the product require personally identifiable information (PII) or any other personal user information? Would the user be aware of this and have control over it?
    • What data would the product collect from the user to train the model? Would the user be aware of and have control over this? (Check out the recent publicity regarding LinkedIn automatically opting users in to the use of their personal data for training its GenAI.)
  • User transparency and control:
    • How much control does the user have? Must they use an AI feature? Are they aware that they are using an AI feature? Does this align with their expectations? Can they opt in or out of using it?
    • Can they modify the output? Edit it? Select sources to use or verify them afterward?
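To make the retrieval-augmented generation pattern in the list above more concrete, here is a minimal Python sketch. Every name in it is an illustrative placeholder rather than a real library’s API, and the naive keyword-overlap scoring stands in for the vector search that production systems typically use.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# All names are illustrative placeholders, not a real library's API.

def retrieve(query, knowledge_base, top_k=3):
    """Return the documents that best match the query, by naive keyword overlap."""
    def score(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(knowledge_base, key=score, reverse=True)[:top_k]

def build_prompt(query, documents):
    """Ground the model's answer in retrieved context, not just its training data."""
    context = "\n".join(documents)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

def answer(query, knowledge_base, llm):
    """The RAG loop: retrieve authoritative context, then ask the model."""
    return llm(build_prompt(query, retrieve(query, knowledge_base)))
```

Here, `llm` would be a call to whichever model the team uses. The UX-relevant point is that the answer’s accuracy now depends on both retrieval quality and the model itself, which is why the questions above about sources, hallucinations, and verification matter.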

These are just some of the topics that might be relevant to you as a UX researcher. Your cross-functional team should be able to help you answer these questions. Never feel bad about asking for help, especially if you have only a rudimentary grasp of the overall technology. Convey why you are asking the questions. If your team does not know the answers, your asking helps identify areas that need further investigation by the team. The answers might seem purely informational, but they matter because all of them can hugely impact the desirability and usability of a GenAI experience and, ultimately, the trust users place in it and the value they derive from it. You must increase your knowledge enough to communicate how the team’s choices and the risks of certain approaches would impact the user.

Changing Your UX Research Methods and Approach

Beyond learning about GenAI technology, successfully navigating GenAI UX research (UXR) requires a different mindset regarding how we approach and execute our work. Although traditional UX research methods are absolutely foundational and helpful, we have found that tweaking these methods provides more meaningful insights and informs the creation of a better user experience. Some helpful approaches include the following:

  • Being flexible with timelines:
    • GenAI development moves at breakneck speeds, so product teams must move fast to adapt to this environment. Therefore, we need to adapt our UX research approaches to meet the needs of our teams.
    • Rather than relying on traditional, fixed-timeline studies, incorporate iterative, agile research methods that can provide quick pulse checks to learn how users adapt to AI features over time.
  • Incorporating longitudinal research approaches:
    • Users’ perceptions and beliefs about AI change over time with exposure to the technology. Do not make the mistake of thinking their needs won’t be different in even just three months.
    • Ensure that you’re checking in with representative users regularly and finding ways to incorporate longitudinal research methods to track the changing relationships between humans and AI.
    • If you can’t conduct diary studies or capture within-subjects repeated measures, consider simpler pulse surveys or benchmarking.
  • Not relying solely on populations with no prior exposure to AI when concept testing:
    • To mitigate any bias and expose any differences in acceptance or understanding across user groups, focus on recruiting users with various levels of AI comprehension.
    • Be aware that it is incredibly hard for users to tell you whether a concept makes sense to them when they’ve never experienced GenAI or AI before or are experiencing GenAI in a new context of use.
  • Owning responsibility for experience outcomes—that is, responsible AI:
    • You must own your role as the voice of the user within the context of these experiences.
    • By clearly communicating the human risks of AI experiences, you can uphold the tenets of responsible AI. Document these risks for your team.
    • Your research read-outs, reports, and presentations should reflect this level of responsibility and focus on actionable steps that let you foresee and mitigate poor experience outcomes.
    • In general, everyone wants to do the right thing. If you have concerns about an AI experience, don’t be timid—communicate them.
  • Understanding cognitive aspects of trust and how trust varies by the user’s role and context of use:
    • With AI experiences, a key piece of the UX puzzle is how users perceive, gain, and lose trust through interactions with an AI.
    • As a UX researcher, you are best positioned to not only bring awareness of the role that trust plays in these experiences but also to understand, measure, and track users’ trust in AI and your GenAI experience over time.
  • Acknowledging that users might not even exist or could emerge suddenly:
    • The hardest part of managing UX research for an AI experience might be that users of these experiences don’t yet actually exist. You might be researching a brand new experience or be the first person exposing users to AI within their context of use. This presents unique challenges in interviewing participants and introducing them to GenAI experiences.
    • Plus, new personas or job profiles could emerge suddenly and without much warning, especially if your experiences serve other businesses or providers. While we often think of technology’s replacing tasks, in the case of AI, we’re also seeing new tasks, new jobs, and new roles.
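As one way to operationalize the pulse checks and benchmarking described above, the following Python sketch computes mean trust ratings across repeated survey waves. The participant data, the 1-to-7 trust scale, and the wave structure are all hypothetical examples, not a prescribed instrument.

```python
# Sketch: tracking mean trust ratings across repeated pulse-survey waves.
# The responses, scale, and wave structure are hypothetical examples.
from statistics import mean

# Each record: (participant_id, survey_wave, trust_rating on a 1-to-7 scale)
responses = [
    ("p1", 1, 3), ("p2", 1, 4), ("p3", 1, 2),
    ("p1", 2, 4), ("p2", 2, 5), ("p3", 2, 4),
    ("p1", 3, 5), ("p2", 3, 5), ("p3", 3, 6),
]

def trust_trend(responses):
    """Return the mean trust rating per wave, to spot shifts over time."""
    waves = sorted({wave for _, wave, _ in responses})
    return {w: mean(r for _, wave, r in responses if wave == w) for w in waves}

# A rising mean across waves suggests growing trust as users gain AI exposure.
print(trust_trend(responses))
```

Even a lightweight trend like this gives your team a repeated measure of the human–AI relationship, which a single fixed-timeline study cannot provide.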

Shifting Your Communication Strategy

We’ve found that the way in which we communicate insights has evolved, and we expect that is the case for most UX researchers working within the context of AI. The audience for our research and the impacts of our research insights are different. Ensuring that our presentations and insights meet the needs of our stakeholders is more important than ever, especially with extremely cross-functional teams that have varying degrees of AI experience and ownership of different parts of the product’s experience.

  • Cross-functional partners must make decisions based on your UX research, often in real time. Plus, these decisions are often riskier in terms of their impact on the user experience, your UX strategy, the overall business strategy, and cost.
  • The pace at which you deliver insights and notify your team of changes over time requires more rapid cycles of delivery with an emphasis on actionability.
  • Conducting research while supporting the pace of innovation can test your limits as a UX researcher. You might need to make sacrifices or face burnout if you try to do it all. Instead, find ways to incorporate bigger questions and strategic initiatives into quicker cycles. Regularly link your findings back to the bigger picture for your team.
  • Different messages and levels of insights are necessary for different teams—for example, AI Applied Research versus Design versus Product versus UX Research. Understand what each team needs to make decisions or take their next step and tailor your findings accordingly for maximum impact.

Applying Your Knowledge and Elevating the User

As you navigate this new frontier, please know that you likely possess all the necessary skills and abilities to thrive, grow your knowledge, and positively impact users. Although many of the tools in the GenAI UXR toolbox are familiar, you might find that the ways you use them and the impacts of your work are very different.

When working on a cross-functional product team, you might be the only person present who can reflect the actual user’s needs and ensure that the experience you create adds real value and is worthy of users’ trust. Do not be afraid to ask questions, speak up, and share your findings. More than ever, we need to leverage users’ feedback to make important decisions and influence adoption and investment across the entire organization. Represent users responsibly and with confidence!

Resources for Growing Your AI Expertise

NN/g’s “AI Glossary.”

Coursera course from DeepLearning.AI, “Generative AI for Everyone.”

Nvidia’s “What Is Generative AI?”

“Principles of Generative AI: A Technical Introduction” (PDF), by Karan Singh, Assistant Professor of Operations Research at Carnegie Mellon’s Tepper School of Business.

McKinsey & Company’s “What Is Generative AI?”

Senior UX Research Manager, AI/ML, at ServiceNow

Ogden, Utah, USA

Katie Schmidt

After graduating with a Master’s in Experimental Psychology and publishing in the field of psychology and law, Katie began her UX career at Northrop Grumman, where she was a lead UX researcher for enterprise experiences. She helped form the company’s first enterprise UX team, then went on to manage several cross-functional teams focusing on internal and external products and experiences. Katie joined ServiceNow in 2022 as the manager of the Artificial Intelligence/Machine Learning (AI/ML) UX Research team. Under her leadership, the team has grown in size and business influence, participating in history-making product rollouts for generative AI. Her team has also emerged as a strong voice for the role of UX research in responsible AI and human-centered AI ethics. Katie values transparency, human connection, and loyalty as both a people leader and a voice in the field of AI.

Senior UX Researcher, AI/ML, at ServiceNow

Montreal, Quebec, Canada

Hayley Mortin

As a Senior UX Researcher on the Platform Artificial Intelligence/Machine Learning (AI/ML) team at ServiceNow, Hayley started her journey in AI working as a data annotator, where she learned about the AI development lifecycle while creating datasets for training computer vision. This foundational experience paved the way for her transition into UX research, a move that was inspired by her academic background in psychology and behavioral science. Today, she focuses on understanding how users perceive and approach adopting AI/ML technologies, and she explores ways to build trust with users through explainable AI design.

Manager & Strategist of AI UX Research at ServiceNow

San Diego, California, USA

Jessa Anderson

Jessa has over 15 years of experience researching human behaviors and needs, with a PhD in Health & Human Behaviors, and nearly five years focusing specifically on helping to understand the user experience of artificial intelligence (AI) in enterprise settings. She is a champion for humans and for elevating the role of the human in the unique interplay between AI technology and users, across a variety of personas, from nontechnical to highly technical. Outside of work, she is busy being a mom and soaking up the sun in San Diego.
