UXmatters has published 70 editions of the column Practical Usability.
User research consists of two core activities: observing and interviewing. Since we’re most interested in people’s behavior, observing is the most important of these activities because it provides the most accurate information about people, their tasks, and their needs.
While interviewing is also very important, the information people provide during interviews isn’t always accurate or reliable. Often, research participants don’t know why they do things, what they really need, what they might do in the future, or how a design could be improved. To really understand what people do, you can’t just ask them; you have to observe them.
But exactly what is observation, and what does it entail? Though we all know what the word observation means and everyone knows how to look and listen, there is more to it than just pointing your eyes in a particular direction, listening, and taking notes. When I did a little research, I found many books and articles about interviewing, but surprisingly few about how to observe research participants. So, in this column, I’ll first explore what observation is and the different types of observation methods, then focus on one particularly useful yet underused UX research method: naturalistic observation.
In the old days, card sorting was simple. We used index cards, Post-it notes, spreadsheets, and buggy software—USort and EZCalc—to analyze the results, and we liked it! But this isn’t another article about how to do card sorting. Nowadays, there are multiple techniques and tools, both online and offline, for generative and evaluative user research for information architecture (IA), which provide greater insight into organizing and labeling information.
In this column, I’ll summarize and compare the latest generative and evaluative methods for IA user research. The methods I’ll examine include open card sorting, Modified-Delphi card sorting, closed card sorting, reverse card sorting, card-based classification evaluation, tree testing, and testing information architecture with low-fidelity prototypes. I’ll cover the advantages and disadvantages of each method, explain when it makes sense to use each, and describe an ideal combination of these methods.
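Whatever the tool, analysis of open card sort results typically starts from a co-occurrence matrix: a count of how often participants placed each pair of cards in the same group. Here’s a minimal sketch of that counting step in Python; the card names and participant data are hypothetical, for illustration only:

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_matrix(sorts):
    """Count how often each pair of cards lands in the same group.

    `sorts` is a list of card sorts, one per participant; each sort is a
    list of groups, and each group is a list of card labels.
    """
    counts = defaultdict(int)
    for sort in sorts:
        for group in sort:
            # Sort the pair so ("A", "B") and ("B", "A") count together.
            for a, b in combinations(sorted(group), 2):
                counts[(a, b)] += 1
    return dict(counts)

# Two hypothetical participants sorting four cards:
sorts = [
    [["Shipping", "Returns"], ["Careers", "Press"]],
    [["Shipping", "Returns", "Press"], ["Careers"]],
]
matrix = cooccurrence_matrix(sorts)
# Both participants grouped "Shipping" with "Returns", so that pair
# scores 2; "Press" and "Shipping" co-occurred only once.
```

Pairs with high counts are strong candidates for living under the same category in the IA; tools then typically feed this matrix into a cluster analysis to suggest groupings.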
Scoping a project’s user-research phase is a classic Catch-22 situation. Before a project even begins, you must plan the research activities and the time necessary to perform them, but you’ll rarely have enough information to make these decisions optimally until after the project begins. Estimate too much time and money, and you might scare clients away; estimate too little, and you’ll either go over budget or won’t have enough time to do the research properly.
To accurately scope user research, you must have a somewhat detailed understanding of the project’s business goals, the users, and their tasks. While you can usually get an overview of this information by talking with your clients, it’s difficult to obtain accurate, detailed information until after a project’s kickoff meeting and initial stakeholder discussions. At that point, you might realize that the research methods you’ve planned aren’t the ones that would let you best understand the problem. You might need more or different participants, and there might not be enough time to conduct and analyze the research. In this column, I’ll discuss some of the problems you may encounter when scoping user research and provide some advice about how to make scoping more accurate.