Sunday, November 28, 2010

User Interface Design: Hallway Testing / Human Action Cycle / Evaluation

As I dove into the world of user interface design, I found some interesting material. One piece in particular is here: http://www.scribd.com/doc/2409206/User-Interface-Design

I have no clue who wrote this paper, but it hits some good points to take into consideration while designing a UI.


Hallway testing
Hallway testing (or hallway usability testing) is a specific methodology of software usability testing. Rather than using an in-house, trained group of testers, just five to six random people, indicative of a cross-section of end users, are brought in to test the software (be it an application, web site, etc.); the name of the technique refers to the fact that the testers should be random people who pass by in the hallway. The theory, as adopted from Jakob Nielsen's research, is that 95% of usability problems can be discovered using this technique.

In the early 1990s, Jakob Nielsen, at that time a researcher at Sun Microsystems, popularized the concept of using numerous small usability tests -- typically with only five test subjects each -- at various stages of the development process. His argument is that, once it is found that two or three people are totally confused by the home page, little is gained by watching more people suffer through the same flawed design. "Elaborate usability tests are a waste of resources. The best results come from testing no more than 5 users and running as many small tests as you can afford." [2] Nielsen subsequently published his research and coined the term heuristic evaluation.

The claim of "Five users is enough" was later described by a mathematical model [2], which gives the proportion of problems uncovered, U, as

U = 1 − (1 − p)^n

where p is the probability of one subject identifying a specific problem and n is the number of subjects (or test sessions). Plotted against n, U rises steeply at first and then flattens, approaching the total number of real existing problems asymptotically.
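To make that asymptote concrete, here is a small Python sketch of the formula. The value p = 0.31 is the average detection probability Nielsen and Landauer reported across their projects; your own p will vary, so treat it as an illustrative assumption rather than a constant.

```python
# Sketch: proportion of usability problems found, U = 1 - (1 - p)^n.
# p = 0.31 is the average detection rate Nielsen and Landauer reported;
# it is an illustrative assumption, not a universal constant.

def problems_found(n: int, p: float = 0.31) -> float:
    """Expected proportion of problems found by n test subjects."""
    return 1 - (1 - p) ** n

for n in range(1, 11):
    print(f"{n:2d} users -> {problems_found(n):5.1%} of problems found")

# With p = 0.31, five users already find about 84% of the problems,
# and each additional user adds less than the one before -- the
# asymptote described in the text.
```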

In later research, Nielsen's claim has been questioned with both empirical evidence [3] and more advanced mathematical models (Caulton, D. A., "Relaxing the homogeneity assumption in usability testing," Behaviour & Information Technology, 2001, 20(1), pp. 1-7). Two of the key challenges to this assertion are: (1) since usability is related to the specific set of users, such a small sample size is unlikely to be representative of the total population, so the data from such a small sample is more likely to reflect the sample group than the population it represents; and (2) many usability problems encountered in testing are likely to prevent exposure of other usability problems, making it impossible to predict the percentage of problems that can be uncovered without knowing the relationship between existing problems. Most researchers today agree that, although five users can generate a significant amount of data at any given point in the development cycle, many applications require a sample size larger than five to detect a satisfying number of usability problems.
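Challenge (1) is easy to see in a quick simulation. The sketch below is my own illustration (not taken from the cited papers): it assumes two user subgroups, each of which can only encounter half of the problem set, and shows how a hallway sample skewed toward one subgroup depresses overall coverage even though the homogeneous formula predicts roughly 84% for five users.

```python
import random

random.seed(42)

N_PROBLEMS_PER_GROUP = 20   # problems visible only to group A / only to B
P_DETECT = 0.31             # detection rate when a user CAN hit a problem
N_TESTERS = 5
TRIALS = 10_000

def run_trial(prob_tester_is_a: float) -> float:
    """Fraction of all 40 problems found by one five-person hallway test."""
    found = set()
    for _ in range(N_TESTERS):
        group = "A" if random.random() < prob_tester_is_a else "B"
        for i in range(N_PROBLEMS_PER_GROUP):
            if random.random() < P_DETECT:
                found.add((group, i))
    return len(found) / (2 * N_PROBLEMS_PER_GROUP)

for skew in (0.5, 0.8, 1.0):
    avg = sum(run_trial(skew) for _ in range(TRIALS)) / TRIALS
    print(f"P(tester is from group A) = {skew:.1f} -> coverage {avg:.1%}")

# A balanced sample covers far more of the total problem set than one
# drawn entirely from group A, which can never exceed 50% coverage.
```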

Bruce Tognazzini advocates close-coupled testing: "Run a test subject through the product, figure out what's wrong, change it, and repeat until everything works. Using this technique, I've gone through seven design iterations in three-and-a-half days, testing in the morning, changing the prototype at noon, testing in the afternoon, and making more elaborate changes at night." [4] This kind of testing can be useful in research situations.

Human action cycle
The human action cycle is a psychological model which describes the steps humans take when they interact with computer systems. The model was proposed by Donald A. Norman, a scholar in the discipline of human-computer interaction. The model can be used to help evaluate the efficiency of a user interface (UI). Understanding the cycle requires an understanding of the user interface design principles of affordance, feedback, visibility and tolerance.

The human action cycle describes how humans may form goals and then develop a series of steps required to achieve those goals using the computer system. The user then executes the steps; thus the model includes both cognitive and physical activities.
This section describes the main features of the human action cycle. See Donald A. Norman's book The Design of Everyday Things for a deeper discussion.
The three stages of the human action cycle
The model is divided into three stages comprising seven steps in total, and runs (approximately) as follows; a short code sketch after the list lays the steps out in one place:

1) Goal formation stage
- Goal formation.

2) Execution stage
- Translation of the goal into a set of (unordered) tasks required to achieve it.
- Sequencing the tasks to create the action sequence.
- Executing the action sequence.

3) Evaluation stage
- Perceiving the results after having executed the action sequence.
- Interpreting the actual outcomes based on the expected outcomes.
- Comparing what happened with what the user wished to happen.
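To keep the seven steps straight, here is a small Python sketch that lays them out as data. The "send an email" walk-through is my own made-up example, not Norman's.

```python
from enum import Enum

class Stage(Enum):
    GOAL_FORMATION = 1
    EXECUTION = 2
    EVALUATION = 3

# Norman's seven steps, tagged with the stage each belongs to.
SEVEN_STEPS = [
    (Stage.GOAL_FORMATION, "Form the goal"),
    (Stage.EXECUTION,      "Translate the goal into tasks"),
    (Stage.EXECUTION,      "Sequence the tasks into an action sequence"),
    (Stage.EXECUTION,      "Execute the action sequence"),
    (Stage.EVALUATION,     "Perceive the results"),
    (Stage.EVALUATION,     "Interpret the outcome against expectations"),
    (Stage.EVALUATION,     "Compare what happened with what was wanted"),
]

# A hypothetical walk-through for the goal "send an email" -- the
# concrete actions are mine, for illustration only.
example_trace = [
    "Goal: let a colleague know the meeting moved",
    "Tasks: open mail client, write message, send it",
    "Sequence: open client -> compose -> type -> click Send",
    "Execute: perform those actions in the UI",
    "Perceive: a 'Message sent' notice appears",
    "Interpret: the notice means the mail left my outbox",
    "Compare: that matches what I wanted, so the goal is met",
]

for (stage, step), note in zip(SEVEN_STEPS, example_trace):
    print(f"[{stage.name:<14}] {step}: {note}")
```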

Use in evaluation of user interfaces
Typically, an evaluator of the user interface will pose a series of questions for each of the cycle's steps; evaluating the answers provides useful information about where the user interface may be inadequate or unsuitable. These questions might run as follows (a sketch after the list shows one way to encode them as a reusable checklist):
- Step 1, Forming a goal:
  - Do the users have sufficient domain and task knowledge and sufficient understanding of their work to form goals?
  - Does the UI help the users form these goals?
- Step 2, Translating the goal into a task or a set of tasks:
  - Do the users have sufficient domain and task knowledge and sufficient understanding of their work to formulate the tasks?
  - Does the UI help the users formulate these tasks?
- Step 3, Planning an action sequence:
  - Do the users have sufficient domain and task knowledge and sufficient understanding of their work to formulate the action sequence?
  - Does the UI help the users formulate the action sequence?
- Step 4, Executing the action sequence:
  - Can typical users easily learn and use the UI?
  - Do the actions provided by the system match those required by the users?
  - Are the affordance and visibility of the actions good?
  - Do the users have an accurate mental model of the system?
  - Does the system support the development of an accurate mental model?
- Step 5, Perceiving what happened:
  - Can the users perceive the system's state?
  - Does the UI provide the users with sufficient feedback about the effects of their actions?
- Step 6, Interpreting the outcome according to the users' expectations:
  - Are the users able to make sense of the feedback?
  - Does the UI provide enough feedback for this interpretation?
- Step 7, Evaluating what happened against what was intended:
  - Can the users compare what happened with what they were hoping to achieve?
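Since these questions get asked the same way in every evaluation, it can help to encode them once. Here is a minimal Python sketch of the list above as a reusable checklist; the pass/fail scoring is my own simplification (real evaluations record notes, not booleans), and the question wording is condensed from the list.

```python
# Sketch: the evaluation questions above as a reusable checklist.
# The step names follow the list; yes/no scoring is a simplification.

CHECKLIST = {
    "1. Forming a goal": [
        "Do users have the domain/task knowledge to form goals?",
        "Does the UI help users form these goals?",
    ],
    "2. Translating the goal into tasks": [
        "Do users have the knowledge to formulate the tasks?",
        "Does the UI help users formulate these tasks?",
    ],
    "3. Planning an action sequence": [
        "Do users have the knowledge to plan the action sequence?",
        "Does the UI help users plan the action sequence?",
    ],
    "4. Executing the action sequence": [
        "Can typical users easily learn and use the UI?",
        "Do the system's actions match those users need?",
        "Are affordance and visibility of the actions good?",
        "Do users have an accurate mental model of the system?",
        "Does the system support building that mental model?",
    ],
    "5. Perceiving what happened": [
        "Can users perceive the system's state?",
        "Does the UI give sufficient feedback on actions' effects?",
    ],
    "6. Interpreting the outcome": [
        "Can users make sense of the feedback?",
        "Is there enough feedback to support that interpretation?",
    ],
    "7. Evaluating the outcome against the goal": [
        "Can users compare what happened with what they intended?",
    ],
}

def report(answers: dict[str, list[bool]]) -> None:
    """Print which steps of the cycle look problematic."""
    for step, questions in CHECKLIST.items():
        results = answers.get(step, [False] * len(questions))
        for q, ok in zip(questions, results):
            mark = "OK " if ok else "FIX"
            print(f"[{mark}] {step}: {q}")

# Hypothetical usage: everything passes except feedback in step 5.
report({step: [True] * len(qs) for step, qs in CHECKLIST.items()}
       | {"5. Perceiving what happened": [True, False]})
```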
