Welcome to Mingxian's Home

Human-Computer Interaction Designer 人机交互设计师

interaction design, 交互设计

experience design, 体验设计

design theory, 设计理论

visual culture theory, 视觉文化理论

Wednesday, October 18, 2006

Discussion about Usability Testing and Heuristic Evaluation

Usability Testing (UT); Heuristic Evaluation (HE). This discussion took place on September 30, 2006, and most of the answers were provided by Dr. Lim (Youn-Kyung Lim).

Question: Do HE and UT link to a design's intrinsic features, and which of the two tends to connect us more directly with those intrinsic features?

The key to answering this question is the fundamental relationship between usability testing and the design process.

Once we have a design mockup or product, the process for improving it looks like this:

a design > run a usability test on the design > find usability problems > find the causes of those problems > work out solutions to those problems > redesign > implement the redesign > run a usability test again ...
Now it is time to clarify the kinds of work involved in usability testing: finding problems, analyzing problems to find their causes, and solving those problems as design solutions.
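Purely as an illustration (this code is not part of the original discussion), the separation of those three kinds of work can be sketched as a simple loop over a design: test to find problems, analyze to attach causes and recommendations, then redesign. The names UsabilityProblem, run_usability_test, analyze, and redesign are hypothetical placeholders, not a real tool or API.

```python
# Illustrative sketch of the iterate loop:
# design > test > find problems > find causes > propose solutions > redesign.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class UsabilityProblem:
    description: str                      # what the tester observed (the external symptom)
    cause: Optional[str] = None           # filled in by analysis against intrinsic features
    recommendation: Optional[str] = None  # proposed design solution


@dataclass
class Design:
    version: int
    notes: List[str] = field(default_factory=list)


def run_usability_test(design: Design) -> List[UsabilityProblem]:
    """Stand-in for a real test session: returns observed problems only."""
    return [UsabilityProblem("Users cannot log in easily on the login page")]


def analyze(problem: UsabilityProblem) -> None:
    """Stand-in for cause analysis: links the symptom to the design's intrinsic features."""
    problem.cause = "Required fields are not visible before the user submits the form"
    problem.recommendation = "Show all required fields and inline validation up front"


def redesign(design: Design, problems: List[UsabilityProblem]) -> Design:
    """Stand-in for the redesign step: produces the next design version."""
    notes = design.notes + [p.recommendation for p in problems if p.recommendation]
    return Design(version=design.version + 1, notes=notes)


design = Design(version=1)
for _ in range(2):                         # two turns of the loop, for illustration
    problems = run_usability_test(design)  # testers: find problems
    for p in problems:
        analyze(p)                         # tester or designer: find causes, propose solutions
    design = redesign(design, problems)    # designers: implement the redesign
print(design)
```

The point of the sketch is only that "finding problems", "finding causes", and "redesigning" are distinct steps that can be assigned to different people.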

The next question is which work should be done by usability testers and which by designers. Do the testers find out why the usability problems happen when they run a test, or do they only report the problems? Of course I believe that both the tester and the designer (who may be the same person) should know the causes and the solutions, but in a real-world process, who has the main responsibility for this part, the tester or the designer? Or is there an analyst role that connects the tester with the designer?


In terms of role distribution, there is no hard answer; it depends on the situation. In the real world, though, the roles are often divided quite strictly. For example, usability specialists may not be involved in design, so they may simply report what the problems are and what caused them--if they can find the causes from the think-aloud protocol, questionnaire, or debriefing interview--to designers, who can then use that information to refine their designs. This is often the common situation in practice, although it may be ideal for designers to run the usability tests themselves and improve the design based on what they observe first-hand. That can happen in the real world too, if the situation supports it.

So if the testers should provide not only problems but also causes, my conclusion is that both of these two methods link with the intrinsic features of the design.

In my opinion, the process of finding the causes connects the usability problems (which are external, such as "the login page is difficult to use, because users cannot log in easily") with the design's intrinsic features. In other words, to find the causes, the testers have to think through the design's structure, functions, the way it represents things, and so on; these are the intrinsic features.

In order to figure out the causes, we may need design knowledge about what kinds of design defects can cause those problems. The heuristics used in HE are actually examples of such design knowledge: HE directly helps you think about the causes of problems, which is right, so you got it correctly. Then I guess your question is how UT is related to supporting the finding of causes. It is true that UT does not directly point to what the causes of problems are. It is more of a process that helps testers collect the necessary data, which in turn helps them figure out the problems and their causes by looking carefully at what they gathered. For this reason, the expertise and experience of the UT testers definitely matter for their ability to identify appropriate causes of problems, and even to suggest design solutions. So UT does not directly guide you to the causes, but the data you gather from UT--think-aloud protocols, observed behaviors, debriefing sessions, and so on--will together help testers figure out what may have caused the problems you found. That is why it depends on the testers' expertise.
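As a rough sketch of that difference (again, hypothetical code, not from the original discussion): HE applies a fixed list of heuristics directly as candidate causes, while UT only yields raw session data--think-aloud notes, observed behavior, debriefing answers--that a tester must interpret before a cause can be named. The heuristic list, the sample data, and the functions below are all illustrative assumptions.

```python
# Hypothetical contrast between what HE and UT each produce; names are illustrative only.

# HE: heuristics (Nielsen-style, for example) act as design knowledge that points at causes directly.
HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "Error prevention",
]

def heuristic_evaluation(screen_description):
    """Return the heuristics an evaluator judged to be violated (more or less the causes)."""
    # In a real evaluation this judgement is made by the expert, not by code.
    return ["Visibility of system status", "Error prevention"]

# UT: the method itself only yields data; the cause is an interpretation made by the tester.
ut_session_data = {
    "think_aloud": ["'I don't know which field is wrong...'"],
    "observed_behavior": ["Submitted the login form three times without success"],
    "debriefing": ["Expected an error message next to the password field"],
}

def interpret_ut_data(data):
    """Stand-in for the tester's expertise: reading the data and naming a likely cause."""
    return "Error feedback is not shown where the user is looking (a visibility-of-status problem)"

print(heuristic_evaluation("login page"))
print(interpret_ut_data(ut_session_data))
```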

In terms of the distribution of roles again, it is always a benefit for any person in a large design team to have multidisciplinary knowledge. For example, even if someone's specialization is usability, it is often more useful if s/he also has knowledge and experience in design, since that person can then communicate better with designers, find the problems that are most critical for the design, extract more appropriate causes for those problems, and even suggest the right directions for redesign recommendations.

Another point to keep in mind is that finding causes may require design knowledge. However, other types of causes can also inform design critically even if they are not based on typical design knowledge--and actually "what is design knowledge" is itself a difficult question. For example, if in a usability test we found that a user had trouble figuring out the right sequence of actions needed to accomplish a task, the cause may include a mismatch between the user's typical model of approaching that kind of task and the system's model of the features that guide the user through the right sequence. This reasoning requires knowledge related to cognitive psychology: the usability specialist has to understand the user's cognitive model and how it should be interpreted into the system design. Of course, in a broader view this is definitely a kind of design knowledge, but it concerns structural, architectural, and task-model based problems, which require a different kind of expertise to identify the right causes. I mention this so that you get a sense that causes should be extracted from various points of view, using multidisciplinary expertise.
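To make the mismatch idea concrete, here is a toy sketch (not from the original discussion) comparing a hypothetical user's expected task sequence with the sequence a system actually requires; the two lists and the checkout example are made up purely for illustration. A gap between the two sequences is one candidate cause of the sequencing problem described above.

```python
# Illustrative only: comparing a user's expected task sequence with the system's required one.
user_model = ["enter address", "choose shipping", "enter payment", "confirm order"]
system_model = ["create account", "enter payment", "enter address", "choose shipping", "confirm order"]

# Steps the system requires that the user did not expect, and order mismatches in the shared
# steps, are candidate causes for "the user could not figure out the right sequence of actions".
unexpected_steps = [step for step in system_model if step not in user_model]
order_mismatch = ([s for s in user_model if s in system_model]
                  != [s for s in system_model if s in user_model])

print("Steps the user did not anticipate:", unexpected_steps)
print("Shared steps appear in a different order:", order_mismatch)
```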
