CPSC 481: Foundations of HCI
You work for the Ace Consulting Company (™), a consulting firm that specializes in evaluating interfaces.
You and your team have been contracted to do a usability study of the system described in the attached handout.
You are to prepare a usability report for the vice-president in charge of that system's use and redevelopment.
This person is extremely busy (so make things clear but concise).
Your report will describe how you went about looking for design problems, what problems you saw, and what changes you recommend.
The VP is already familiar with the three observation methods (silent observer, think aloud, constructive interaction), so there is no need to explain what they are. In one of the appendices, explain what new insights your group gained from trying these techniques out in the usability study (i.e., what you learned about the value or weaknesses of each technique from actually running test participants through each one, beyond the points covered in lecture).
Depending upon how convincing you are, the VP will use your report to authorize changes in the upcoming version.
Create a short pre-test questionnaire (~10 questions) to determine each participant's experience and beliefs as they specifically relate to the task and system. More than likely, each person will have to indicate their prior experience with the system being tested or with similar systems. They should also list any expectations or preconceptions they may have about the system. The pre-test questionnaire should be given to participants before they have used the system. Also, be sure to administer the usability instructions to participants, as indicated in the handout.
Example response options for gauging a user's experience level with the system to be tested include:
- Never used it.
- Used it once or twice over the last few years.
- Used it ~3-7 times this year.
- Use it on a daily basis.
Example statements about users' beliefs may include:
- Will need personal instruction to get started.
- Will learn it after a bit of playing around.
- Will be able to use the simple features of the system with no problems.
- Will be able to use the advanced features of the system.
Good post-test questions will give you information about how participants judge the system's usability, where they think they had the most problems, and so on. You may want to leave space after each question for comments, where you would encourage people to explain why they answered a question a certain way. For example, here is a question that uses a rating scale:

I found the system:
Easy to use   1   2   3   4   5   Hard to use
Reason for your rating: ______________________________________________

Questionnaires should include questions that ask about people's satisfaction with the system both in broad terms (e.g., "I would rather use the WestJet website than book a flight through a travel agent," or "I would never want to use the online WestJet system") and with more specific rating scales (Strongly agree, Agree, Neither agree nor disagree, Disagree, Strongly disagree).
For both questionnaires (pre- and post-test), there should be a reason for each question you ask: don't request information that has no bearing on the test and that you won't be using. Finally, make sure that you read my tips on designing questionnaires!
A usability study requires an observer to watch someone go through their paces on 'typical' tasks. It is your job as the experimenter to prepare, ahead of time, a set of example tasks that the participants will try to perform. These tasks should be realistic ones that typical users would try to do with the system! But how do you discover what those typical tasks are?
The first way is to let people use their own real tasks. To do this, you would have to solicit participants who already use the system to be evaluated, and ask them if you could watch them go about their daily business as they use the system. This is only an option if the system being studied is one that all your test participants have used before.
The second way is to ask a random sample of people who are using the system what they typically do with it, and then generalize those as tasks to give to test participants. More than likely this is the approach that you will have to take for this assignment.
The third way is for you to use the system, and contrive a few sample tasks through intuition. Although this will not produce a set of reliable tasks, you may not have any other choice. (By the way, jot down any problems with the system you see as you try it. You can compare these later with the problems you notice in the actual study).
To get you started, I have enclosed a few example tasks in this year's assignment description, but you must come up with your own as well.
See the handout on "User Observations" for a basic description of the method.
- Administer the post-test questionnaire, which asks participants what they thought about the system (subjective satisfaction, usability of the system, how easy or hard it was to complete each task, etc.).
- Interview participants after the test about what they thought of the system (which parts were strong, which were weak, etc.). Adapt the interview to your observations as you ran the test, and/or use the post-test questionnaire as a discussion tool rather than slavishly sticking to a fixed script. As before, the observer should take detailed notes.