Why the Term Expert Heuristic Evaluation?
As David Travis points out in his excellent Udemy lectures on expert heuristic evaluations, [1] these things go by many different names. You can combine any number of the terms in the list on the left with any one of the terms on the right to create your own name for them:
expert | evaluation
heuristic | review
usability | inspection
user interface | assessment
consistency | appraisal
user experience | critique
Thus, we’ve ended up with things like expert usability assessment or user experience critique. As Travis also points out, many UX agencies claim that their “user interface inspections” or “heuristic consistency critiques” are somehow different or special, but both his experience and mine suggest that they’re all talking about basically the same thing!
So why have I settled on expert heuristic evaluation? Well, the term evaluation is just my personal preference. But I argue that the other two terms, expert and heuristic, capture everything important about this type of evaluation.
Expertise and Heuristics
The word expert conveys the idea that the opinion of someone who is demonstrably an expert in usability engineering is more valuable and should have more credibility than that of a non-expert. Of course, this raises the question of what constitutes an expert. In my experience, UX professionals who provide good expert heuristic evaluations typically have considerable gravitas in the field of usability engineering. This gravitas usually comes from their having both extensive practical experience and advanced, specialist academic qualifications in the field.
I’ve often seen people who are not actually usability experts conducting so-called expert evaluations! For example, UX designers conduct many such evaluations and, while they may be very experienced in UX design and great at what they do, many are not sufficiently expert in usability engineering to carry out these evaluations well. As I point out in my UXmatters article “UX Defined,” designing and evaluating user interfaces are two separate areas of user experience, though there are, of course, many UX professionals who are highly skilled in both of these areas.
The general idea of any heuristics-based activity is that you aim to reach an excellent solution, while still recognizing that the solution may not be optimal. Jakob Nielsen and Rolf Molich first applied this idea to usability engineering by assessing the usability of a user interface with reference to a well-established, or proven, set of general principles, guidelines, and criteria that tend to result in good user interface design. These are generally known as heuristics, but are sometimes referred to as rules of thumb. The key here is to move the evaluation away from opinion and more toward measurement and, thus, gain greater objectivity.
While these components of expertise and heuristics are distinct, they are typically interrelated because, in actual evaluations, a usability expert typically applies the heuristics.
Which Heuristics Should You Use?
There is no single right answer to this, but UX professionals commonly use the following heuristics when conducting expert heuristic evaluations because they originate from authorities in usability engineering and are in wide use to good effect:
- Jakob Nielsen’s “Heuristics for User Interface Design”—This article originated the concept of heuristic evaluations or, at least, made it famous. [3]
- Jakob Nielsen’s “Top 10 Mistakes in Web Design” [4]
- Arnie Lund’s “Expert Ratings of Usability Maxims” [5]
- Bruce Tognazzini’s “Principles of Interaction Design” [6]
- Ben Shneiderman’s “Eight Golden Rules of Interface Design” [7]
However, these are by no means the only valuable heuristics. Also, it can be appropriate to use your own custom or new heuristics, as the context for an evaluation dictates.
How They Relate to Usability Studies
Expert heuristic evaluations should never be a substitute for usability studies! Human behavior is diverse, unpredictable, and variable. Despite our best efforts as usability experts, users often surprise us: They may fail when we think something will be easy for them—even when a user interface theoretically addresses all of the heuristics well. Likewise, they sometimes sail through tasks that we’ve predicted would be difficult for them.
Expert heuristic evaluations do not produce the definitive statistical data that you can gain from a usability study that you’ve conducted with a reasonable sample size, so inevitably, they have less credibility as evidence that a user interface design is likely to work well. Expert heuristic evaluations depend more on interpretation and aim only to provide an approximation of the findings that you would expect from a usability study with a large sample size. However, UX professionals often use expert heuristic evaluations to good effect to complement usability studies that have a relatively small sample size. While the two methods of evaluation are qualitatively different, you can identify areas of commonality in their results. In research, we refer to this as triangulation.
Another key difference between expert heuristic evaluations and usability studies is that, while the only aim of a usability study is to identify problems with a user interface, usually a good expert heuristic evaluation also recommends a range of potential solutions to any problems that an expert identifies.