An Overview of Expert Heuristic Evaluations

June 2, 2014

An expert heuristic evaluation is a form of discount usability evaluation. The essential idea is that such evaluations are, or should be, quicker and cheaper to perform than usability studies with a sample of participants who are representative of actual users.

Over the last 20 years, I’ve done a fair few expert heuristic evaluations. Recently, I did one for a client who wanted me to provide a good description of this approach to usability evaluation in my report—something I’d never been asked to do before. I thought I’d be able to just do a quick Google search and steal something suitable—providing an acknowledgment and reference, of course. My search yielded lots of good stuff. My own library of usability books and journals also included lots of good information on this topic. But I didn’t find quite what I was looking for, so I decided to write something new about expert heuristic evaluations. I hope that this information will be useful to the wider UX design and usability community.

Why the Term Expert Heuristic Evaluation?

As David Travis points out in his excellent Udemy lectures on expert heuristic evaluations, [1] these things go by many different names. You can combine any of the qualifiers in the first list with any of the nouns in the second to create your own name for them:

  • qualifiers: expert, heuristic, usability, user interface, consistency, user experience
  • nouns: evaluation, review, inspection, assessment, appraisal, critique

Thus, we’ve ended up with things like expert usability assessment or user experience critique. As Travis also points out, many UX agencies claim that their “user interface inspections” or “heuristic consistency critiques” are somehow different or special, but both his experience and mine suggest that they’re all basically the same thing!

So why have I settled on expert heuristic evaluation? Well, the term evaluation is just my personal preference. But I argue that the other two terms, expert and heuristic, capture everything important about this type of evaluation.

Expertise and Heuristics

The word expert conveys the idea that the opinion of someone who is demonstrably an expert in usability engineering is more valuable and should have more credibility than that of a non-expert. Of course, this raises the question of what constitutes an expert. In my experience, UX professionals who provide good expert heuristic evaluations have considerable gravitas in the field of usability engineering. That gravitas typically comes from having both extensive practical experience and advanced, specialist academic qualifications in the field.

I’ve often seen people who are not actually usability experts conducting so-called expert evaluations! For example, UX designers conduct many such evaluations and, while they may be very experienced in UX design and great at what they do, many are not sufficiently expert in usability engineering to carry out these evaluations well. As I point out in my UXmatters article “UX Defined,” designing and evaluating user interfaces are two separate areas of user experience—though there are, of course, many UX professionals who are highly skilled in both of these areas.

The general idea of any heuristics-based activity is that you aim to reach an excellent solution, while still recognizing that the solution may not be optimal. Jakob Nielsen and Rolf Molich first applied this idea to usability engineering, [2] assessing the usability of a user interface with reference to a well-established, or proven, set of general principles, guidelines, and criteria that tend to result in good user interface design. These are generally known as heuristics, but are sometimes referred to as rules of thumb. The key here is to move the evaluation away from opinion and toward measurement and, thus, gain greater objectivity.

While expertise and heuristics are distinct components, they are interrelated in practice because, in an actual evaluation, it is typically a usability expert who applies the heuristics.

Which Heuristics Should You Use?

There is no single right answer to this, but UX professionals commonly use the following heuristics when conducting expert heuristic evaluations because they originate from authorities in usability engineering and are in wide use to good effect:

  • Jakob Nielsen’s “10 Usability Heuristics for User Interface Design”—This article originated, or at least popularized, the concept of heuristic evaluations. [3]
  • Jakob Nielsen’s “Top 10 Mistakes in Web Design” [4]
  • Arnold Lund’s “Expert Ratings of Usability Maxims” [5]
  • Bruce Tognazzini’s “Principles of Interaction Design” [6]
  • Ben Shneiderman’s “Eight Golden Rules of Interface Design” [7]

However, these are by no means the only valuable heuristics. Also, it can be appropriate to use your own custom or new heuristics, as the context for an evaluation dictates.

How They Relate to Usability Studies

Expert heuristic evaluations should never be a substitute for usability studies! Human behavior is diverse, unpredictable, and variable. Despite our best efforts as usability experts, users often surprise us: They may fail when we think something will be easy for them—even when a user interface theoretically addresses all of the heuristics well. Likewise, they sometimes sail through tasks that we’ve predicted would be difficult for them.

Expert heuristic evaluations do not produce the definitive statistical data that you can gain from a usability study with a reasonable sample size, so inevitably, they have less credibility as evidence that a user interface design is likely to work well. Expert heuristic evaluations depend more on interpretation and aim only to approximate the findings that you would expect from a usability study with a large sample size. However, UX professionals often use expert heuristic evaluations, to good effect, to complement usability studies that have a relatively small sample size. While the two methods of evaluation are qualitatively different, you can identify areas of commonality in their results. In research, we refer to this as triangulation.

Another key difference between expert heuristic evaluations and usability studies is that, while the only aim of a usability study is to identify problems with a user interface, usually a good expert heuristic evaluation also recommends a range of potential solutions to any problems that an expert identifies.

Conducting Expert Heuristic Evaluations

When experts conduct heuristic evaluations—particularly outside academic contexts—they do not usually attempt to map user interface problems directly to specific heuristics because the mapping of problems to heuristics is typically many-to-many, convoluted, and multidimensional. Mappings of specific problems to heuristics would be very hard to structure in any report—particularly when presenting findings to audiences who are not specialists in usability engineering—and explanations of the mappings would likely be verbose and complex. Instead, experts conducting such evaluations typically just state what heuristics they’re using, then keep them in mind when identifying problems and recommending solutions.
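
To make the many-to-many point concrete, here is a minimal, hypothetical sketch in Python. The problems are invented, and the heuristic names are two of Nielsen’s ten usability heuristics: [3]

# Hypothetical illustration of why problem-to-heuristic mappings are
# many-to-many: one problem can violate several heuristics, and one
# heuristic can be violated by several problems.
problem_to_heuristics = {
    "Error message shows a raw database code": [
        "Help users recognize, diagnose, and recover from errors",
        "Match between system and the real world",
    ],
    "Field labels on the sign-up form use internal jargon": [
        "Match between system and the real world",
    ],
}

# Inverting the mapping attaches the same heuristic to several problems,
# which is why reporting problem by heuristic quickly becomes convoluted.
heuristic_to_problems = {}
for problem, heuristics in problem_to_heuristics.items():
    for heuristic in heuristics:
        heuristic_to_problems.setdefault(heuristic, []).append(problem)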

It is common for experts performing heuristic evaluations to do the following three key things in their reports:

  • Uniquely identify each issue and any associated recommendations. This lets you easily refer to them in any report or discussion.
  • Prioritize, or rank, each issue according to its severity, and codify the ranking in some way. Like many others, I use the following traffic-light taxonomy, which also appears in the code sketch after this list:
    • example of a best practice (Green)
    • minor problem (Yellow)
    • serious problem (Orange)
    • critical problem (Red)
  • Relate each issue and its recommended solutions to screenshots of the user interface. This enables people who read your report to see the problems to which you’re referring without their needing to have access to the system you’ve evaluated.
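
As a concrete illustration of these three practices, the following minimal Python sketch shows one way to record an issue. The field names, identifiers, and example content are hypothetical; only the severity labels come from the taxonomy above:

# One possible record structure for issues in an evaluation report.
# Requires Python 3.10+ for the list[str] and str | None annotations.
from dataclasses import dataclass, field
from enum import Enum

class Severity(Enum):
    GREEN = "example of a best practice"
    YELLOW = "minor problem"
    ORANGE = "serious problem"
    RED = "critical problem"

@dataclass
class Issue:
    issue_id: str                    # unique identifier, such as "CHECKOUT-07"
    severity: Severity
    description: str
    recommendations: list[str] = field(default_factory=list)
    screenshot: str | None = None    # path to the annotated screenshot

issue = Issue(
    issue_id="CHECKOUT-07",
    severity=Severity.RED,
    description="There is no visible way to edit the basket from the payment page.",
    recommendations=["Add an Edit basket link above the order summary."],
    screenshot="screens/checkout-payment.png",
)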

Other than these common practices, the approaches that usability experts use in performing expert heuristic evaluations tend to vary, but often include the following:

  • screen-by-screen or page-by-page analysis—Taking this approach makes it easier to structure your report.
  • using a goal-task-action model—Such a model is consistent with that for a cognitive walkthrough, which is a different method of evaluation.
  • following prescribed user journeys—These may be the same as those for a parallel usability study for a system.

How Many Experts Do You Need?

Nielsen [8, 9] and Landauer [9] have argued strongly that heuristic evaluations should be performed by three experts. They based their recommendation on extensive research into how many problems individual experts would be likely to find in a user interface when conducting this type of evaluation. They found that three experts identified about 60% of the problems and that, as the number of experts increased, the number of additional problems that they found tailed off exponentially. For example, it took ten experts to find 85% of the problems. This led them to the conclusion that three experts offer the optimal cost-benefit ratio for heuristic evaluations.
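
For readers who want the numbers behind this reasoning, Nielsen and Landauer [9] modeled the proportion of problems that i evaluators find as 1 − (1 − λ)^i, where λ is the probability that a single evaluator finds any given problem. The following Python sketch computes this curve; the λ value is illustrative only, because detection rates vary considerably across studies and interfaces:

# A minimal sketch of Nielsen and Landauer's problem-discovery curve.
# The detection rate lam is an assumption that you would fit from your
# own data; 0.26 is used here purely for illustration.

def proportion_found(evaluators: int, lam: float) -> float:
    """Expected proportion of all problems found by this many evaluators."""
    return 1 - (1 - lam) ** evaluators

for n in (1, 2, 3, 5, 10):
    print(f"{n:2d} evaluators -> {proportion_found(n, 0.26):.0%}")

With λ = 0.26, the curve rises steeply at first (roughly 26%, 45%, and 59% for one, two, and three evaluators), then flattens. This diminishing-returns shape is what underpins the cost-benefit argument.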

There has been much discussion about Nielsen and Landauer’s research and their associated recommendation since its publication. In the last decade, many usability engineers have questioned its validity—in part, because the cost of using three experts is often similar to or even greater than that of conducting a usability study with a decent sample size. Of course, this goes against the very idea of a discount method of usability evaluation! So, while the use of multiple experts remains common, many organizations now use just one or two experts—particularly if they’re conducting an expert heuristic evaluation in conjunction with a usability study.

Reporting the Results

Common ways of presenting the results of expert heuristic evaluations include printed reports and PowerPoint presentations. A less common method is to produce a video presentation, using a tool such as TechSmith’s Camtasia or Morae, which lets you synchronously capture the screen output, mouse movements, the expert’s audio narration, and, optionally, video of the expert talking. I like using this method because I can often produce a video more quickly than a formal report or PowerPoint deck and, at the same time, provide richer feedback.

Whichever format you use for reporting your findings, it’s important to do four things:

  • State clearly what heuristics you have used. This helps to establish the credibility of your evaluation because the heuristics come from authorities in usability engineering.
  • Provide the CV, or résumé, of each expert doing an evaluation. It’s critical to establish and maximize the experts’ credibility to ensure that your client does not discount the work as “just another person’s opinion.”
  • Provide an executive summary. You can’t expect executives to go through your whole evaluation! They just need to know your key findings and what they need to do next to fix any issues that need fixing—which usually means allocating more time and resources to a project.
  • Arrange some time to discuss your evaluation with your client. Preferably this discussion should occur face to face. Just as for usability studies, reports on the findings from expert heuristic evaluations often contain a lot of bad news. Plus, you may be criticizing someone’s pride and joy. Although, in an ideal world, people would take professional criticism as constructive criticism, it’s not uncommon for people—especially the team who designed the user interface that you’ve evaluated—to take it personally. So it’s important to meet with them in person to soften the blow, clarify your feedback, and let them know the good news, too—that everything is fixable, and you can help them to fix it!

Using Axure for Expert Heuristic Evaluations

Since I’m an Axure expert, I’ve recently started using this tool when doing expert heuristic evaluations. In brief, for those who are familiar with Axure, I use its image-editing tool to slice, crop, and manipulate screenshots. I use its customizable page and widget annotation features, in combination with a custom widget library that I’ve developed, to cross-reference and prioritize issues on each page. Then, I use its customizable specifications generator, along with a custom Microsoft Word template that I’ve developed, to automatically output a very slick Microsoft Word document. Please contact me if you’re interested in learning how I do this in greater detail.

Endnotes

[1] Travis, David. “How to Carry Out a Usability Expert Review.” Udemy, undated. Retrieved April 25, 2014.

[2] Nielsen, Jakob, and Rolf Molich. “Heuristic Evaluation of User Interfaces.” In CHI ’90 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. New York: ACM Press, 1990.

[3] Nielsen, Jakob. “10 Usability Heuristics for User Interface Design.” Nielsen Norman Group, January 1, 1995. Retrieved April 25, 2014.

[4] Nielsen, Jakob. “Top 10 Mistakes in Web Design.” Nielsen Norman Group, January 1, 2011. Retrieved April 25, 2014.

[5] Lund, Arnold. “Expert Ratings of Usability Maxims.” Ergonomics in Design, Volume 5, Issue 3, 1997.

[6] Tognazzini, Bruce. “First Principles of Interaction Design (Revised & Expanded).” Ask Tog, March 5, 2014. Retrieved April 25, 2014.

[7] Shneiderman, Ben. “The Eight Golden Rules of Interface Design.” University of Maryland, undated. Retrieved April 25, 2014.

[8] Nielsen, Jakob. “Finding Usability Problems Through Heuristic Evaluation.” In CHI ’92 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. New York: ACM Press, 1992.

[9] Nielsen, Jakob, and Thomas Landauer. “A Mathematical Model of the Finding of Usability Problems.” In Proceedings of the ACM/IFIP INTERCHI ’93 Conference. New York: ACM Press, 1993.

Ritch Macefield

CEO of Ax-Stream

London, England, UK

Ritch has worked in UX design since 1995. He has a BA with Honours in Creative Design, an MSc with Distinction in IT-Computing, and a PhD in Human-Computer Interaction (HCI) from Loughborough University’s HUSAT (HUman Sciences and Advanced Technology) Institute. Ritch has lectured at the Masters level in five countries—on user-centered design, UX design, usability engineering, IT strategy, business analysis, and IT development methodology. He also has numerous internationally recognized qualifications in IT-related training and education. He has published many HCI articles and was on the editorial team for the Encyclopedia of HCI (2007). Ritch has led major UX design and user-centered design projects at Bank of America, Vodafone, Thomson-Reuters, and Dell Computers. His international experience spans 15 countries. Ritch presently heads Ax-Stream, an approved Axure training partner. His work currently encompasses Axure training, advanced Axure prototyping, usability and acceptability testing of early conceptual prototypes, coaching and mentoring of senior UX designers, and strategic UX work—integrating UX design and user-centered design into agile development methods and moving organizations from second and third generation to fourth generation prototyping methods.
