
Cognitive Walkthrough

The cognitive walkthrough is a usability evaluation method in which one or more evaluators work through a series of tasks and ask a set of questions from the perspective of the user.

The focus of the cognitive walkthrough is on understanding the system's learnability for new or infrequent users. The cognitive walkthrough was originally designed as a tool to evaluate walk-up-and-use systems like postal kiosks, automated teller machines (ATMs), and interactive exhibits in museums where users would have little or no training. However, the cognitive walkthrough has been employed successfully with more complex systems like CAD software and software development tools to understand the first experience of new users.

 


Benefits, Advantages and Disadvantages

Advantages

  • Can be done without firsthand access to users.
  • Takes explicit account of the user's task, unlike some other usability inspection methods.
  • Provides suggestions for improving the learnability of the system.
  • Can be applied during any phase of development.
  • Is quick and inexpensive to apply if done in a streamlined form.

Disadvantages

  • The value of the data is limited by the skills of the evaluators.
  • Tends to yield a relatively superficial and narrow analysis that focuses on the words and graphics used on the screen.
  • The method does not provide an estimate of the frequency or severity of identified problems.
  • Following the method exactly as outlined in the research is labor-intensive.

 

How To

Materials Needed

  • A representation of the user interface
  • A user profile or persona
  • A task list that includes all the tasks you will use in the walkthrough, as well as an action sequence that details the specific task flow from beginning to end (a sketch of these materials follows this list)
  • A problem reporting form and cards for listing design ideas for later use
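
The task list, action sequences, and persona can be kept in any convenient format. As a minimal sketch (the names, fields, and kiosk example below are illustrative assumptions, not part of the published method), the materials might be captured like this in Python:

    from dataclasses import dataclass, field

    @dataclass
    class Persona:
        """A short user profile that the evaluators adopt during the walkthrough."""
        name: str
        background: str                       # relevant knowledge and experience
        experience: str = "first-time user"   # the walkthrough targets learnability

    @dataclass
    class Task:
        """One walkthrough task with its correct action sequence."""
        name: str
        goal: str                                          # what the user is trying to achieve
        actions: list[str] = field(default_factory=list)   # the correct steps, in order

    # Hypothetical example: a walk-up-and-use postal kiosk
    persona = Persona("Casual mailer", "No training; has used ATMs before")
    task = Task(
        name="Mail a letter",
        goal="Buy postage for a standard letter",
        actions=[
            "Touch the screen to wake the kiosk",
            "Select 'Letters'",
            "Place the letter on the scale",
            "Insert a payment card",
            "Take the printed postage label",
        ],
    )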

Who Should Be Involved?

The cognitive walkthrough can be conducted by an individual or group. In a group evaluation, the important roles are:

  • Facilitator: The facilitator is generally the organizer and is responsible for making sure that the walkthrough team is prepared for the session and follows the ground rules for the walkthrough.
  • Evaluators: Representatives from the product team. These representatives could be usability practitioners, requirements engineers, business analysts, developers, writers, and trainers.
  • Notetaker: The notetaker records the output of the cognitive walkthrough.
  • Product expert: Since the cognitive walkthrough can be conducted early in design (after requirements and a functional specification, for example), a product expert is desirable to answer questions that members of the walkthrough team may have about the system's features or feedback.
  • Domain experts: A domain expert is often, but not always, a product expert. For example, if you were evaluating a complex engineering tool, you might include a domain expert in addition to product experts.

Procedure

  1. Define the users of the product and conduct a context of use analysis.
  2. Determine what tasks and task variants are most appropriate for the walkthrough.
  3. Assemble a group of evaluators (you can also perform an individual cognitive walkthrough).
  4. Develop the ground rules for the walkthrough. Some ground rules you might consider are:
    • No discussions about ways to redesign the interface during the walkthrough.
    • Designers and developers will not defend their designs.
    • Participants will not tweet, check email, or engage in other behaviors that would distract from the evaluation.
    • The facilitator will remind everyone of the ground rules and note infractions during the walkthrough.
  5. Conduct the actual walkthrough:
    1. Provide a representation of the interface to the evaluators.
    2. Walk through the action sequences for each task from the perspective of the "typical" users of the product. For each step in the sequence, see if you can tell a credible story based on the following questions (Wharton, Rieman, Lewis, & Polson, 1994, p. 106):
      1. Will the user try to achieve the right effect?
      2. Will the user notice that the correct action is available?
      3. Will the user associate the correct action with the effect that the user is trying to achieve?
      4. If the correct action is performed, will the user see that progress is being made toward the solution of the task?
    3. Record success stories, failure stories, design suggestions, problems that were not the direct output of the walkthrough, assumptions about users, comments about the tasks, and other information that may be useful in design. Use a standard form for this process (one possible recording structure is sketched after this list).
  6. Bring all the analysts together to develop a shared understanding of the identified strengths and weaknesses.
  7. Brainstorm on potential solutions to any problems identified.
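
To make step 5 concrete, here is a minimal sketch of one way to record the answers (a hypothetical scheme, not a form prescribed by the method): each action in the sequence is checked against the four questions, and the step becomes a failure story if any answer is "no".

    QUESTIONS = [
        "Will the user try to achieve the right effect?",
        "Will the user notice that the correct action is available?",
        "Will the user associate the correct action with the desired effect?",
        "If the correct action is performed, will the user see progress?",
    ]

    def walk_step(action: str, answers: list[bool], notes: str = "") -> dict:
        """Record one step; a step is a failure story if any answer is 'no'."""
        assert len(answers) == len(QUESTIONS)
        failed = [q for q, yes in zip(QUESTIONS, answers) if not yes]
        return {
            "action": action,
            "outcome": "failure story" if failed else "success story",
            "failed_questions": failed,
            "notes": notes,   # assumptions about users, design ideas, etc.
        }

    # Hypothetical step from a walk-up-and-use postal kiosk task
    record = walk_step(
        "Select 'Letters'",
        answers=[True, False, True, True],
        notes="The on-screen label may not match the user's vocabulary",
    )
    print(record["outcome"])   # -> failure story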

Common Problems

The cognitive walkthrough does not provide much guidance about choosing tasks that represent what real users will do (Jeffries, Miller, Wharton, & Uyeda, 1991). The 1994 practitioner guide suggests that tasks be chosen on the basis of market studies, needs analysis, and requirements, which are all secondhand sources of information. Wharton, Bradford, Jeffries, and Franzke (1992, p. 387) made some specific recommendations regarding tasks:

  • Start with a simple task and move to more complex tasks.
  • Consider how many tasks you can complete in a single walkthrough session. A common theme in the research and case study literature is that only a few tasks can be examined in any cognitive walkthrough session; a reasonable guideline is one to four tasks per session, depending on complexity.
  • Choose realistic tasks that include core features of the product. Core features are ones that are fundamental to the product and used across different tasks.
  • Consider tasks that involve multiple core features so you can get input on transitions among the core features.

Solutions from the cognitive walkthrough may be suboptimal. The cognitive walkthrough emphasizes solutions for specific problems encountered in the action sequence of a task, but does not deal with more general or higher-level solutions that might be applicable across different tasks.

Analyses tend to draw attention to superficial aspects of design (such as labels and verbiage) rather than deep aspects such as the appropriateness of the task structures and ease of error recovery.

Variations

The cognitive walkthrough has gone through several versions, each an attempt to simplify the method. The original version (Lewis, Polson, Wharton, & Rieman, 1990) was viewed as requiring substantial background in cognitive psychology (Wharton, Rieman, Lewis, & Polson, 1994) and as cumbersome to apply in real-world environments. A later variation incorporated detailed forms and instructions to simplify the method for practitioners who were not cognitive psychologists; however, these changes made the procedure too laborious (and nearly as complex as the original version) for most practitioners. The 1994 version (Wharton, Rieman, Lewis, & Polson, 1994) was written as “a practitioner’s guide” and is considered the primary reference for those who want to conduct cognitive walkthroughs.

Streamlined Approach

Spencer (2000) proposed an even more simplified version, the “streamlined cognitive walkthrough,” for fast-paced development efforts. Spencer reduced the number of questions that evaluators asked as they walked through the action sequences for each task. Instead of asking the four questions of the cognitive walkthrough, Spencer simply asked these two questions:

  1. Will the user know what to do at this step?
  2. If the user does the right thing, will she know that she did the right thing and is making progress toward the goal?
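
If the streamlined variation is paired with a recording scheme like the sketch under "Procedure", only the question list changes; the rest of the recording logic stays the same (again, a hypothetical illustration):

    STREAMLINED_QUESTIONS = [
        "Will the user know what to do at this step?",
        "If the user does the right thing, will she know that she did the "
        "right thing and is making progress toward the goal?",
    ]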

Spencer also recommended strict ground rules (for example, "no design discussions") to keep development teams from jumping into design debates about the most appropriate solutions.

Spencer provides only limited discussion of the effectiveness of the streamlined cognitive walkthrough, and despite widespread use by practitioners, this variation has not received much published validation.

Data Analysis Approach

This technique involves asking well-defined questions about every step of (analyst-defined) tasks, based on an agreed system description. The primary data from the cognitive walkthrough are success or failure stories for each correct action in an action sequence. A failure occurs when an analyst answers “no” to any of the questions asked about a correct action. For each failure, an explanation based on assumptions about the hypothetical user is recorded and used to generate design solutions.
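
As a self-contained sketch of this reduction (hypothetical field names and data, for illustration only), the analysis keeps only the steps where some question was answered "no", along with the recorded explanation:

    def failure_stories(records: list[dict]) -> list[dict]:
        """Keep only the steps where any question was answered 'no'."""
        return [r for r in records if not all(r["answers"])]

    # Hypothetical per-step records from one walkthrough session
    records = [
        {"action": "Touch the screen", "answers": [True, True, True, True]},
        {"action": "Select 'Letters'", "answers": [True, False, True, True],
         "explanation": "The label may not match the user's vocabulary"},
    ]
    for story in failure_stories(records):
        print(story["action"], "->", story.get("explanation", "no explanation recorded"))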

Special Considerations

The cognitive walkthrough assumes exploratory use but is not tailored to any particular sector; in addition to the generic approach, there are versions of the method tailored to different application areas.

  • Method application to lifecycle stage: Qualification testing. Like other methods, the cognitive walkthrough could be used at other stages of systems development, but it is most clearly tailored to evaluation.
  • Accessibility: The cognitive walkthrough doesn't normally address accessibility issues, but could easily be adapted to do so by agreeing upon appropriate user profiles (e.g. of users with limited vision) and using evaluators who have expertise in accessibility.
  • International Issues: The cognitive walkthrough doesn't normally address internationalization issues, but can easily be adapted to do so by agreeing upon appropriate user profiles (e.g. of users from different cultures).
  • Ethical Issues: Given that evaluators must self-report problems, this method is susceptible to investigator bias.
  • Validity: Nørgaard and Hornbæk (2006) conducted an exploratory study of think-aloud practices at seven Danish companies with in-house usability and/or consulting groups. The investigators examined audio recordings of think-aloud sessions in detail and found several trends:
    1. Evaluators (moderators) seemed to seek confirmation of known problems, which can result in study designs that miss previously unknown problems.
    2. Evaluators asked participants about hypothetical problems rather than experienced problems.
    3. Evaluators learned about the usability of a system, but sometimes not much about the utility (usefulness) of that system.
    4. Practical realities have a strong influence on think-aloud studies (from participants who do not quite match the user profile, to incomplete prototypes, to prototypes that changed during testing).
    5. Evaluators asked leading questions.
    6. Systematic analysis immediately after a session is rare; evaluators seldom check whether they all agree on the most important observations.

 

Facts

Lifecycle: Interaction design
Sources and contributors: 
Ann Blandford, Nigel Bevan, Chauncey Wilson, Ben Werner, Mary Mascari.
Released: 2011-06
© 2010 Usability Professionals Association