The Basic Idea
Consider the simplicity of a well-designed website login page. You effortlessly navigate through the process of entering your credentials and accessing your account. Now think back to a time when you battled with a poorly designed website login page. Chances are you either finished the process frustrated or simply gave up. The success of a digital interface comes down to how thoroughly its usability has been tested during its design and development. One key aspect of that testing is heuristic evaluation.
Heuristic evaluation is an inspection method for identifying problems in a user interface. During the evaluation, a small group of usability specialists uses a set of predefined principles or guidelines, known as heuristics, to assess how user-friendly the interface is.
In the world of cognitive science, heuristics are mental shortcuts or problem-solving strategies that provide efficient and practical solutions to complex problems, especially in decision-making processes. In the context of usability and user experience (UX) design, heuristics are commonly used as guidelines to evaluate and improve the usability of interfaces or systems. Think of them as a detailed checklist for designers and developers to ensure user-friendliness.
Heuristic evaluation is usually done early in the user interface design process, as it allows designers to receive and act on feedback before conducting a full usability test with real users.
Theory, meet practice
TDL is an applied research consultancy. In our work, we leverage the insights of diverse fields—from psychology and economics to machine learning and behavioral data science—to sculpt targeted solutions to nuanced problems.
Heuristics: Mental shortcuts or rules of thumb that individuals use to make quick decisions or solve problems efficiently. In the context of usability and user experience, heuristics refer to established principles that guide the evaluation of interfaces, helping identify potential usability issues.
Usability: Refers to the ease with which users can interact with a product, system, or interface to achieve their goals effectively and satisfactorily. It encompasses factors such as accessibility, learnability, efficiency, memorability, and user satisfaction in the overall user experience.
User experience (UX): The overall interaction and satisfaction users have with a product, system, or service. It considers factors such as usability, accessibility, aesthetics, emotions, and the overall perception of the user throughout their entire journey, from initial interaction to completion of tasks.
The origins of heuristic evaluation can be traced back to 1990, when usability experts Jakob Nielsen and Rolf Molich published a seminal paper entitled ‘Improving a Human-Computer Dialogue’.1 In this article, the authors surveyed 77 industrial designers and programmers and found that identifying specific, potential problems in a human-computer dialogue design is difficult. In response, Nielsen and Molich proposed a short checklist of nine usability considerations for a good dialogue (they didn’t use the term ‘heuristics’ at this point) that industry specialists could use to assess and improve the user experience of interfaces. Nielsen later developed these early findings into his well-known ‘10 Usability Heuristics’.
Nielsen’s guidelines are the best-known and widely accepted set of principles, but other experts have also formulated guidelines for heuristic evaluation. Computer scientist Jill Gerhardt-Powals, for example, took a more holistic approach to user interface evaluation.2 While building a user interface for a submarine’s firing system, Gerhardt-Powals emphasized the importance of considering situational awareness in interface design. She advocated for taking empirical findings from cognitive sciences and applying them to interface design to create ‘cognitively friendly’ interfaces which are based on how humans process information.
While the fundamental principles of heuristic evaluation introduced by the early pioneers of usability engineering remain influential, there has been a growing emphasis on expanding and customizing heuristics to address specific contexts and emerging technologies. It would be pointless, and perhaps dangerous (!), to assess the user experience of both a mobile phone app and a missile interface system using the same set of principles. Depending on the context, intended use, and end-user, each type of digital interface has its own set of evaluation guidelines. There are heuristics for assessing user experience in virtual reality, gaming, and online shopping. For mobile devices, for example, the heuristics used include ergonomics and the judicious use of limited screen real estate.
Jakob Nielsen: Danish UX expert who created a set of design principles called the ‘10 Usability Heuristics’ for creating user-friendly digital interfaces.3 In addition to his pioneering research, Nielsen is also co-founder of the Nielsen Norman Group, a leading consulting firm in the field of user experience.
Rolf Molich: Danish usability engineering expert who co-created heuristic evaluation with Jakob Nielsen.
Heuristic evaluation is a cost-effective and efficient method for uncovering usability problems early on in the design process. Without this early intervention, the cost of fixing usability issues later on in the process would be much higher.
Because heuristic evaluation only requires a relatively small team of evaluators to quickly assess the interface against established principles, the costs are usually kept low. This efficiency contrasts with the more resource-intensive methods found later on in the process, such as large-scale user testing which can involve anywhere from 5 to 100+ people.
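In practice, each evaluator inspects the interface independently and logs the problems they find against the heuristics, often with a severity rating (Nielsen’s commonly used scale runs from 0, not a problem, to 4, usability catastrophe). The team then merges duplicate findings and ranks them by average severity to decide what to fix first. The sketch below shows one minimal way to do that aggregation; the evaluator names, heuristic labels, and findings are hypothetical.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical findings: (evaluator, heuristic, issue, severity).
# Severity follows Nielsen's 0 (not a problem) to 4 (catastrophe) scale.
findings = [
    ("eval_1", "Visibility of system status", "No spinner during login", 3),
    ("eval_2", "Visibility of system status", "No spinner during login", 2),
    ("eval_1", "Error prevention", "Password field allows trailing spaces", 4),
    ("eval_3", "Error prevention", "Password field allows trailing spaces", 3),
    ("eval_2", "Consistency and standards", "Two different button styles", 1),
]

# Group the same issue reported by different evaluators, then average severity.
by_issue = defaultdict(list)
for _, heuristic, issue, severity in findings:
    by_issue[(heuristic, issue)].append(severity)

# Rank issues from most to least severe so the team knows what to fix first.
ranked = sorted(
    ((mean(sevs), heuristic, issue) for (heuristic, issue), sevs in by_issue.items()),
    reverse=True,
)
for avg, heuristic, issue in ranked:
    print(f"{avg:.1f}  [{heuristic}] {issue}")
```

Averaging across evaluators is the point of using several of them: a problem that only one person flags at low severity can be deprioritized, while one flagged independently by multiple evaluators rises to the top of the list.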
However, it's important to note that heuristic evaluation is not a substitute for testing with actual users. The findings of the evaluation should be validated through user testing to ensure that the identified issues align with actual user behavior and preferences. Conducting a heuristic evaluation before usability testing can make the latter more focused and productive.
Another positive outcome of heuristic evaluation is that it can serve an educational purpose within design teams. The process helps team members gain a deeper understanding of a wide range of usability principles and best practices, thus fostering a user-centric mindset that can be taken forward into future projects.
Controversies surrounding heuristic evaluation primarily revolve around its subjectivity and potential limitations. Critics argue that the method heavily relies on the evaluators’ expertise, introducing a degree of subjectivity that may vary between individuals. Additionally, for a heuristic evaluation to be effective and provide valuable usability insights, evaluators need to ensure that they choose a set of heuristics appropriate to the context of the interface.
Over time, debates have built up around the continued use and applicability of Nielsen’s original ‘10 Usability Heuristics’. One criticism, for example, is that they lack scalability because they were shaped with only desktop applications in mind, and at a time when interfaces were far less complex.4 Moreover, according to Robert Bailey, a leading UX designer and consultant, Nielsen’s heuristics have never actually been validated and there is no evidence that their application in the design process actually improves the user interface.5
Heuristic evaluation also encounters challenges when navigating cultural differences, as the process often relies on a set of universal principles that might not fully account for diverse cultural contexts. Cultural variations in user expectations, communication styles, and aesthetic preferences can impact the applicability of heuristics across different user groups. For example, color symbolism or the perception of certain design elements may vary significantly among cultures. While in the Middle East the color red evokes danger and caution, in China it symbolizes luck and happiness. As such, evaluators need to be attuned to cultural nuances and consider context-specific factors when applying heuristics effectively.
A click can make all the difference
When it comes to assessing medical devices, clinical effectiveness is usually prioritized over user experience and usability. However, research suggests6 that the majority of medical device incidents are linked to inappropriate designs for user interactions rather than mechanical failures. In other words, a medical device isn’t going to be clinically effective if its intended users can’t use it properly.
In their article on the use of heuristics in evaluating and predicting patient medical device use, Zhang et al. recount the story of one physician’s encounter with a poorly designed user interface. The physician was treating an infant with an oxygen machine and set the flow knob between 1 and 2 liters per minute. After realizing that the child was not receiving any oxygen, the physician discovered that the device was designed to deliver oxygen only when the knob was set to a whole number on the dial, not between numbers. The authors note that to prevent this type of error, the knob should have clicked into place at each number, giving the user audible feedback on the rate of oxygen flow.
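The underlying design fix is simple: if only whole-number settings are valid, the interface should never let the knob rest between them. The sketch below illustrates the idea in software terms, snapping a continuous knob reading to the nearest valid detent; the function name and the 0-15 L/min range are illustrative assumptions, not details from the actual device.

```python
def snap_to_detent(raw_position: float, detents=range(0, 16)) -> int:
    """Snap a continuous knob reading to the nearest discrete detent.

    The device in the anecdote only delivered oxygen at whole-number
    settings; snapping (paired with a physical or audible click at each
    detent) makes the valid positions the only reachable ones, so a
    setting 'between 1 and 2' can no longer silently deliver nothing.
    """
    return min(detents, key=lambda d: abs(d - raw_position))

# A reading left between detents resolves to the nearest valid flow rate.
snap_to_detent(1.4)  # 1 L/min
snap_to_detent(1.6)  # 2 L/min
```

This is an example of error prevention by constraint: rather than documenting that in-between positions are invalid, the design makes the invalid state impossible to select.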
Had a thorough heuristic evaluation been conducted on this medical device, this small, but important, design quirk would likely have been revealed and changed.
For complex, new, or unfamiliar interfaces, new users need to be able to learn how to use the system quickly and sometimes without much guidance. A Cognitive Walkthrough is a form of usability inspection that focuses on the ‘learnability’ of an interface and identifies design problems that could derail new users. The idea was first presented by Clayton Lewis,7 a computer scientist from the University of Colorado Boulder, and his colleagues who observed that many users prefer to learn software through exploration.
Imagine a health clinic which requires patients to check in for their appointment using a tablet. To assess the user experience of the tablet using a cognitive walkthrough, the reviewers would evaluate the precise steps that the patients would go through to complete the check-in. During each stage of the process, the evaluators address specific questions relating to the patient’s experience and determine whether the patient is likely to ‘pass’ or ‘fail’ that step in the check-in process.
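The per-step questions used in a cognitive walkthrough are well established in the literature (the four-question form is associated with Wharton and colleagues); a step passes only if the evaluator can answer yes to all of them. The sketch below applies that pass/fail logic to the clinic check-in scenario; the step names and yes/no judgments are hypothetical.

```python
# The four screening questions from the cognitive walkthrough literature
# (the form associated with Wharton et al.); a step passes only if the
# evaluator answers "yes" to all four.
QUESTIONS = (
    "Will the user try to achieve the right effect?",
    "Will the user notice that the correct action is available?",
    "Will the user associate the correct action with the desired effect?",
    "If the correct action is performed, will the user see that progress is being made?",
)

# Hypothetical judgments for the clinic check-in tablet: one yes/no answer
# per question, in order, for each step of the flow.
steps = {
    "Tap 'Check in' on the home screen": (True, True, True, True),
    "Enter health-card number": (True, False, True, True),  # field label unclear
    "Confirm appointment details": (True, True, True, True),
}

# A step passes only when every question is answered "yes".
verdicts = {step: all(answers) for step, answers in steps.items()}
for step, answers in steps.items():
    print(("pass" if verdicts[step] else "FAIL") + f": {step}")
    for question, answer in zip(QUESTIONS, answers):
        if not answer:
            print(f"  issue -> {question}")
```

Recording which question failed, not just that the step failed, is what makes the walkthrough actionable: a "no" to the second question points at discoverability (labels, placement), while a "no" to the fourth points at missing feedback.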
A cognitive walkthrough differs slightly from heuristic evaluation because it is based on a specific set of questions; heuristic evaluation is much more general in nature. However, the two approaches are often combined as part of the design development process in order to gain a comprehensive understanding of the system from two different perspectives.
Related TDL Content
Knowledge of how people’s brains function, and how their decisions are influenced by external factors, is crucial for understanding how people interact with digital products and systems. TDL worked with leading Canadian fintech company, Moka, to understand the underlying cognitive biases and heuristics which may impact user engagement with their pioneering savings and investment app.
Each generation has different needs, behaviors, expectations, and ways of doing things. So, when it comes to designing digital services targeted at Gen Z, the user experience needs to be aligned with the way they think, feel, and learn. This article looks at how behavioral insights can be applied to designing a mental health app for Gen Z which resonates with the user experience they are looking for.
- Molich, R., & Nielsen, J. (1990). Improving a human-computer dialogue. Communications of the ACM, 33(3), 338-348.
- Gerhardt-Powals, J. (1996). Cognitive engineering principles for enhancing human-computer performance. International Journal of Human-Computer Interaction, 8(2), 189-211.
- Nielsen, J. (1994, April 24). 10 Usability Heuristics for User Interface Design. Nielsen Norman Group. https://www.nngroup.com/articles/ten-usability-heuristics/
- Ballav, A. (2017, September). Nielsen’s Heuristic Evaluation: Limitations in Principles and Practice. User Experience Magazine, 17(4). https://uxpamagazine.org/nielsens-heuristic-evaluation/
- Bailey, R. (1999, May). Heuristic Evaluations. Human Factors International. https://www.humanfactors.com/newsletters/heuristic_evaluations.asp
- Zhang, J., Johnson, T. R., Patel, V. L., Paige, D. L., & Kubose, T. (2003). Using usability heuristics to evaluate patient safety of medical devices. Journal of Biomedical Informatics, 36(1), 23-30. http://dx.doi.org/10.1016/S1532-0464(03)00060-1
- Lewis, C., Polson, P., Wharton, C., & Rieman, J. (1990). Testing a walkthrough methodology for theory-based design of walk-up-and-use interfaces. Proceedings of CHI, 1990 (Seattle, WA, April 1-5, 1990), ACM, New York. 235-242.