UXmatters has published 19 articles on the topic Product Reviews.
Conducting traditional synchronous, or moderated, usability testing requires a moderator to communicate with test participants and observe them during a study, either in person or remotely. Unmoderated, automated, or asynchronous usability testing, as the name implies, occurs without a moderator and takes place remotely. A usability testing tool that automatically gathers participants’ feedback and records their behavior makes this possible. Such tools typically let participants view the Web site they are testing in a browser, with test tasks and related questions in a separate panel on the screen.
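To make the automatic data capture concrete, here is a minimal TypeScript sketch of the kind of record such a tool might keep for each participant: the task prompt shown in the panel, timestamps captured without a moderator present, self-reported success, and answers to follow-up questions. The names Task, TaskResult, and SessionRecorder are hypothetical and do not reflect any particular product’s API.

```typescript
// Hypothetical sketch of per-participant data in an unmoderated test.
// Not the design of any specific usability testing product.

interface Task {
  id: string;
  prompt: string;            // shown in the side panel next to the site
  followUpQuestion?: string; // optional question asked after the task
}

interface TaskResult {
  taskId: string;
  startedAt: number;   // epoch milliseconds, recorded automatically
  completedAt: number;
  success: boolean;    // self-reported by the participant
  answer?: string;     // response to the follow-up question, if any
}

class SessionRecorder {
  private results: TaskResult[] = [];

  // Records one task outcome; timestamps are captured without a moderator.
  record(task: Task, success: boolean, answer?: string, durationMs = 0): void {
    const completedAt = Date.now();
    this.results.push({
      taskId: task.id,
      startedAt: completedAt - durationMs,
      completedAt,
      success,
      answer,
    });
  }

  // Aggregates the metrics a researcher would review after the study.
  summary(): { completionRate: number; meanTimeOnTaskMs: number } {
    const n = this.results.length || 1;
    const successes = this.results.filter((r) => r.success).length;
    const totalTime = this.results.reduce(
      (sum, r) => sum + (r.completedAt - r.startedAt),
      0
    );
    return { completionRate: successes / n, meanTimeOnTaskMs: totalTime / n };
  }
}

// Example: one participant completes a task in 42 seconds.
const recorder = new SessionRecorder();
recorder.record(
  { id: "find-pricing", prompt: "Find the price of the basic plan." },
  true,
  "Found it under Plans.",
  42_000
);
console.log(recorder.summary());
```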
Recently, there has been a surge in the number of tools available for conducting unmoderated, remote usability testing, and this surge is changing the usability industry. Whether we want to or not, it forces us to take a closer look at the benefits and drawbacks of unmoderated testing and decide whether to incorporate it into our usability toolbox.
I recently bought a Toyota Prius and was surprised to notice my driving behavior shift to a more economical style. Doing some research, I learned that I wasn’t alone in this. Much has been written about “the Prius Effect”: how the Prius and other hybrid vehicles change driving behavior by providing feedback that shows drivers how their actions affect their gas mileage. Some people view this as a positive effect, while others, who are annoyed by slow Prius drivers, view it negatively.
What causes Prius drivers to change their behavior? I believe that it’s the feedback the Prius’s Multi-Information Display provides. This display consists of several screens, showing the current gas mileage, the average gas mileage over various periods of time, and whether the gas or electric motor is currently powering the car. In this column, I’ll discuss the Prius’s information displays in terms of the effects they have on drivers, the usefulness of the information they provide, and the effectiveness of their design.
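As a rough illustration of the arithmetic behind those readouts, the sketch below derives a current-mileage figure from instantaneous speed and fuel flow, plus an average over a period of equally spaced samples. The sample values, the one-second interval, and the function names are assumptions made for illustration, not Toyota’s actual implementation.

```typescript
// Illustrative arithmetic only: how a mileage display could derive the
// numbers it shows. Sample data and sampling interval are assumptions.

interface Sample {
  speedMph: number;        // vehicle speed at this instant
  fuelGalPerHour: number;  // instantaneous fuel flow
}

// Current gas mileage: miles per hour divided by gallons per hour.
function instantMpg(s: Sample): number {
  return s.fuelGalPerHour > 0 ? s.speedMph / s.fuelGalPerHour : Infinity;
}

// Average gas mileage over a period: total distance over total fuel,
// assuming equally spaced samples (here, one per second).
function averageMpg(samples: Sample[], intervalSeconds = 1): number {
  const hours = intervalSeconds / 3600;
  const miles = samples.reduce((d, s) => d + s.speedMph * hours, 0);
  const gallons = samples.reduce((g, s) => g + s.fuelGalPerHour * hours, 0);
  return gallons > 0 ? miles / gallons : Infinity;
}

// A gentle start from a stop followed by a steady cruise.
const trip: Sample[] = [
  { speedMph: 0, fuelGalPerHour: 0 },   // electric-only, no gas burned
  { speedMph: 25, fuelGalPerHour: 0.6 },
  { speedMph: 45, fuelGalPerHour: 0.9 },
];
console.log(instantMpg(trip[2]).toFixed(1)); // "50.0"
console.log(averageMpg(trip).toFixed(1));    // "46.7"
```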
Someone recently asked me to provide recollections of my earliest experiences with usability testing. This took me back to around 1997, when, as part of a research project, I analyzed the use of a then-new Web-based library catalogue system, conducted user interviews, and redesigned the system according to the resulting findings. While this sounds straightforward now, with Google Analytics and today’s online survey tools, back then it necessitated writing raw HTML and Perl to capture data and C code to parse and analyze log-file and survey data, then mocking up alternative designs in HTML. Today, 20 years on, our expectations of software have changed radically. Fortunately, so have the tools at our disposal for designing and testing software.
For me, being able to conduct usability testing remotely is one of the biggest developments of the last 20 years. Add the gig economy, fast networks, and screen recording, and we’ve set the stage for getting low-cost, high-volume feedback on our software in a way that complements our ability to rapidly prototype and do iterative, agile development.