Qualitative user interviews are a core method of user research with which UX professionals likely feel very confident. However, when running user interviews for products that require users to interact with Generative AI (GenAI) experiences, there are some differences, so be prepared to ask certain key questions.
In this article, we’ll provide a definitive collection of qualitative interview questions that you can incorporate into your next user-research project for GenAI products. We’ve designed these questions to uncover insights at different stages of the product lifecycle, ensuring that your research remains human-centered and actionable. Maze AI’s excellent blog post on UX research interview questions categorizes these questions into three types:
Questions about the problem—The purpose of these questions is to understand the user’s core pain points and goals.
Questions about the people—These questions let you gain insights into the users themselves, including their behaviors and preferences.
Questions about the product—Asking these questions lets you explore how users interact with and perceive the product your team is building.
These categories of questions have provided a helpful framework for adapting our own tried-and-true user-research questions to the unique context of researching GenAI products. These questions are best suited to people who use GenAI tools in their daily work, rather than to those who are involved in setting up and configuring AI within an organization.
While this is not an exhaustive list, these questions will help you get started and enable you to think about different types of questions to ask. Some of these questions won’t make sense for all users or all scenarios, but we wanted to provide a range of examples for you to leverage and adapt.
1. Questions About the Problem
Ensuring that GenAI features address real user needs rather than incorporating AI for novelty’s sake is critical. Our previous articles on UXmatters have addressed the recurring need to avoid the trap of designing AI for AI’s sake. Making sure that your product is addressing a true user issue is something you can verify during user interviews.
When to Ask These Questions
Asking the following questions is most effective during the early Discovery phase, or product-ideation phase, of your project, when defining the user problem is key.
The Questions
Here is our list of example questions that help define the problem you’re solving:
What are your primary goals when conducting [a specific task]?
What tasks do you find most repetitive or time-consuming?
Can you describe the most frustrating part of trying to accomplish [a specific task] using your current tools?
What does success look like for you when completing [a specific task]?
What is the most enjoyable or satisfying part of completing [a specific task]?
If you could wave a magic wand and solve one problem in your workflow—whether using AI or not—what would it be?
Are there any aspects of your current process that you’d prefer to remain completely manual? Why?
What concerns do you have about using AI for [a specific domain or task]—for example, accuracy, security, or user control?
What risks or challenges do you foresee in relying on AI to perform this task?
2. Questions About the People
Understanding your users—including their experiences, mental models, and expectations—is especially important when designing for GenAI, which introduces probabilistic outputs and a need for trust. Previous AI experience—whether for personal tasks or at work—can significantly impact the ways in which users receive an AI experience.
Note—At this point, it is important to define GenAI and explain how you are using the term. Provide a simple, clear definition and an example. Here is an example definition that you can use:
Generative AI (GenAI) is a type of artificial intelligence (AI) that can create new content such as text, images, music, or videos based on simple instructions or prompts. Common examples include tools such as ChatGPT, for writing or answering questions; DALL-E, for generating images from descriptions; and GitHub Copilot, for helping write computer code.
When to Ask These Questions
Ask the following questions during the Discovery phase of your project, when conducting persona research.
The Questions
What is your previous experience with AI tools such as ChatGPT or Microsoft Copilot?
Overall, to what degree do you trust GenAI tools? Does the context and purpose of AI tools—for example, work versus personal, finance versus customer support—make a difference in your degree of trust?
How frequently do you use GenAI tools, either for work or personal purposes? Please provide examples of tasks for which you find AI tools helpful or unhelpful.
Did your company introduce you to working with AI tools, or did you take the initiative to incorporate AI tools into your workflows?
How confident are you that you understand how AI tools work? Is it important for you to understand how AI makes decisions?
What level of transparency do you expect from AI tools regarding how they generate their outputs?
How do you currently incorporate AI tools into your workflows?
How do you feel about AI tools taking on creative versus analytical tasks in your work?
What level of control or customization do you want over an AI’s outputs—for example, over the tone, length, or format?
Do you feel more or less confident in your work when using an AI assistant? Why?
How do you typically learn to use AI tools? What has worked well for you in the past and helped you onboard effectively?
3. Questions About the Product
Once a GenAI product or feature has been developed, evaluating its usability, functionality, and user trust becomes essential. These questions focus on how users perceive and interact with the product and on how well human-centered AI guidelines—for example, ServiceNow’s Responsible AI Guidelines—have informed its design, with an emphasis on trust, transparency, and control.
When to Ask These Questions
Next, we’ll cover which questions to ask when, according to the various phases of product development, as follows:
Concept phase—To inform the early stages of your Discovery phase and ideation, focus your questions on users’ expectations, needs, and concerns for the purpose of validating your ideas before Design and Development.
Prototyping phase—During Design and iteration, ask questions that help you evaluate usability, transparency, and control and refine functionality and interactions. You’ll typically conduct usability testing using a semi-clickable, interactive prototype, such as one you can create in Figma, or a Wizard of Oz demo.
Released product—Once the product is live, ask questions that help you to assess real-world usability, trustworthiness, and alignment with user expectations to inform future improvements to the product.
The Questions: Concept Phase
Ask the following questions during the Concept phase:
Trust-focused questions:
What would make you feel more confident in the AI’s recommendations or decisions?
Have there been any moments when AI tools have made you doubt their accuracy or reliability? What happened?
What level of accuracy would the AI tool need to achieve for it to be helpful to you?
Explainability and transparency-focused questions:
How important is it for you to know what data the AI uses to generate its responses?
Would you want to know if the AI made a mistake? How should it communicate errors?
Questions focusing on ethical and bias considerations:
How important is it for you to understand how the AI was trained and on what data?
How would you react if you discovered the AI had a bias against certain groups of people or perspectives?
The Questions: Prototyping Phase
Ask the following questions during the Prototyping phase:
Control-focused questions:
How easy was it to complete the task using the AI tool’s user interface?
Was it easy to correct, edit, or refine the AI’s output if it wasn’t what you wanted?
Did you feel in control of the AI’s behavior, or were there moments when you felt the AI was doing something unexpected?
To what degree were you able to provide feedback on the AI’s output and performance? How willing are you to do that?
Transparency-focused questions:
How clear was the AI’s explanation of its process or output? What could make it clearer?
Was the AI upfront about what it could and could not do? Why or why not?
Usability and feature-enhancement questions:
What did you like most about the AI’s output, and what could be improved?
Were there any parts of the AI’s response that were unclear or irrelevant?
On a scale of 1–10, how well did the AI understand your inputs or questions?
The Questions: Released Product
Ask the following questions when you’re conducting UX research after the product has been released:
Trust-focused questions:
Do you trust the outputs the AI tool generated? Why or why not?
If the AI made an error, how would that impact your willingness to use it in the future? Are you able to correct the error?
Were you able to give feedback on the AI’s outputs? On what aspects could you give feedback—for example, quality, accuracy, or helpfulness? When would you want the AI to give you feedback?
Questions focusing on ethics and bias considerations:
Did you notice any biases or inaccuracies in the AI’s responses?
Does the AI tool treat all users or inputs fairly? Why or why not?
What steps could the product take to improve its fairness or reduce bias?
Usability and feature-enhancement questions:
Did the AI tool’s outputs align with your expectations? If not, what was missing?
If this AI tool could offer you three additional features, what should they be?
When using this AI tool, were there moments when it did not meet your expectations? What were those moments?
Conclusion
GenAI presents a unique set of challenges and opportunities for UX researchers. When conducting qualitative studies, considering GenAI’s nuances and tailoring your questions to the problem, the people, and the product can help you discover meaningful insights that drive better design and foster trust in AI-driven experiences. Using this approach can help UX researchers gain comprehensive insights and put them into action, driving the design of trustworthy, human-centered AI experiences and increasing their usage and value.
After graduating with a Master’s in Experimental Psychology and publishing in the field of psychology and law, Katie began her UX career at Northrop Grumman, where she was a lead UX researcher for enterprise experiences. She helped form the first team for enterprise UX at the company, then went on to manage several cross-functional teams focusing on internal and external products and experiences. Katie joined ServiceNow in 2022 as the manager for the Artificial Intelligence/Machine Learning (AI/ML) UX Research team. Under her leadership, the team has grown in size and business influence, participating in history-making product rollouts for Generative AI. Her team has also emerged as a strong voice for the role of UX research in responsible AI and human-centered AI ethics. Katie values transparency, human connection, and loyalty as both a people leader and a voice in the field of AI.
As a Senior UX Researcher on the Platform Artificial Intelligence/Machine Learning (AI/ML) team at ServiceNow, Hayley started her journey in AI working as a Data Annotator, where she learned about the AI development lifecycle while creating datasets for training computer vision. This foundational experience paved the way for her transition into UX Research, a move that was inspired by her academic background in Psychology and Behavioral Science. Today, she focuses on understanding how users perceive and approach adopting AI/ML technologies, and she explores ways to build trust with users through explainable AI design.
Manager & Strategist of AI UX Research at ServiceNow
San Diego, California, USA
Jessa has over 15 years of experience researching human behaviors and needs, with a PhD in Health & Human Behaviors, and nearly five years focusing specifically on helping to understand the user experience of artificial intelligence (AI) in enterprise settings. She is a champion for elevating the role of the human in the unique interplay between AI technology and users, across a variety of personas, from non-technical to highly technical. Outside of work, she is busy being a mom and soaking up the sun in San Diego.