Cognitive Walkthrough

The Basic Idea

Taylor just signed up for a new online project management tool to boost productivity when working with her team. Eager to get started, she creates an account and is immediately met with an array of features, buttons, and menus. Almost instantly, a pop-up message suggests following a tutorial before navigating the tool alone. Thinking “How difficult can this be?”, Taylor closes the pop-up. She believes it’s better to play around for a while than to waste her time going through a tutorial and a pile of instructions.

A cognitive walkthrough is a usability evaluation method used in human-computer interaction and user experience (UX) design. It involves a systematic analysis of a user's thought process while interacting with a product or system. The goal is to identify potential usability issues and to understand how easily users can accomplish their tasks. It’s conducted with people like Taylor in mind—those first-time users who feel that they learn better by actually navigating through a system than by reading a manual. 

A user like Taylor expects to be able to move through the system or product by following cues that guide her through the available tasks. For example, if Taylor encounters a “Create New Project” button, she anticipates a structured process, typically involving a series of steps or fields. This expectation aligns with Jakob’s Law of Internet User Experience, which states that users bring with them expectations based on their previous interactions with other interfaces.

Cognitive walkthroughs aim to uncover the user's logic and ways of thinking so a business can support intuitive navigation. They also help teams create tailored experiences that remind users how to perform different tasks without having to memorize instructions.

Let's go through the steps of a cognitive walkthrough, using Taylor’s scenario as our guide.1,2

1. Identify a goal for the test: Begin by defining a clear goal for a key section or feature of your interface. For example, the objective might be for Taylor to successfully initiate a new project and add team members. Goals should be measurable (was the objective achieved? After how many tries? How long did it take?).

2. Assemble the team: You can use your own team of UX specialists, engineers, or domain experts as participants. Ideally, though, you should choose people who haven’t interacted with the product, to mimic Taylor’s experience and avoid bias. It’s beneficial to include real users or participants who closely resemble the target demographic to gain insight into a wider range of user interactions and experiences.

3. Assign tasks/actions: Outline realistic tasks that range in difficulty: some that a new user might attempt and others that require a bit more knowledge (to see how experts interact). As the researcher, you should already know every step each task requires so you can evaluate whether the participant has gone through all of them.

  • Iterative testing, where the product is evaluated, refined, and tested again, is crucial in UX design as it allows for continuous improvements and the ability to catch new issues as changes are made.

4. Perform the walkthrough: For each task, ask the following four questions:

  • Will the user try to achieve the right outcome?: Determine whether it’s clear to the user what to do. Typically, we give them goals rather than direct tasks. So, for example, ask: “How would you set up a new project with your team?” rather than: “Find the Create New Project button.”
  • Will the user notice that the correct action is available to them?: Analyze if the system provides adequate cues. Is there only one route to complete the task? Are they faced with specific challenges?
  • Will the user associate the correct action with the outcome they expect to achieve?: Consider if the system’s feedback and labeling are intuitive.
  • If the correct action is performed, will the user see that progress is being made towards their intended outcome?: Evaluate if the system provides appropriate feedback to indicate that the user is on the right path.

5. Document findings: Record whether participants pass or fail and what led to that result. Where did they struggle? What came more easily to them? (A simple record-keeping sketch follows below.)

  • Gathering detailed feedback from participants after the walkthrough can offer additional insights that the structured test might not reveal. This could include their subjective satisfaction, perceived ease of use, and suggestions for improvement.
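Teams often capture these observations in a spreadsheet or a lightweight script. The sketch below, in Python, is purely illustrative: the field names, the example task, and the answers are assumptions chosen to echo Taylor’s scenario, not part of the cognitive walkthrough method itself.

    # Minimal, illustrative structure for logging walkthrough findings per task.
    from dataclasses import dataclass, field

    QUESTIONS = (
        "Will the user try to achieve the right outcome?",
        "Will the user notice that the correct action is available?",
        "Will the user associate the action with the expected outcome?",
        "Will the user see that progress is being made?",
    )

    @dataclass
    class TaskFinding:
        task: str               # the goal given to the participant
        passed: bool            # did they reach the intended outcome?
        attempts: int           # how many tries it took
        time_seconds: float     # how long it took
        answers: dict = field(default_factory=dict)  # question -> observation
        notes: str = ""         # struggles, easy spots, post-walkthrough feedback

    # Hypothetical record for Taylor's scenario
    finding = TaskFinding(
        task="Set up a new project with your team",
        passed=True,
        attempts=2,
        time_seconds=95.0,
        answers={QUESTIONS[1]: "No at first: the 'Create New Project' button was overlooked."},
        notes="Expected a guided series of steps after clicking the button.",
    )

Keeping one such record per task and per participant makes it easy to see where failures cluster across the walkthrough.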

Once again, the whole point of cognitive walkthroughs is to evaluate learnability, especially for systems or products with new or unfamiliar workflows and functionalities.

After you’ve worked on a site for even a few weeks, you can’t see it freshly anymore. You know too much. The only way to find out if it really works is to test it.


— Steve Krug, usability and UX expert, author of Don’t Make Me Think3

About the Author

Mariana Ontañón

Mariana holds a BSc in Pharmaceutical Biological Chemistry and an MSc in Women’s Health. She’s passionate about understanding human behavior in a holistic way. Mariana combines her knowledge of health sciences with a keen interest in how societal factors influence individual behaviors. Her writing bridges the gap between intricate scientific information and everyday understanding, aiming to foster informed decisions.
