Assignment: A Usability Study (12%)
 
How can we tell how easy or hard it will be for someone to use our system?  Most developers simply create the system, try it out themselves until they are 
satisfied with it, and then dump it onto the user audience. The result is 
usually an end product that many people have problems with. 
One of the easiest methods for getting to "know the user" and for evaluating 
the human computer interface is through usability studies. Although 
usability studies come in many flavors, all of them require an observer to watch 
a typical user try the system performing a real task. It is surprising how many 
design flaws can be detected this way! 
Usability studies are becoming increasingly popular in industry. Many modern 
software companies now have usability labs staffed by HCI professionals whose 
job it is to find usability problems in products as they are being developed. 
Most labs have standard equipment (e.g., computers) permanently in place, and 
are augmented with additional equipment such as high-end audio/video systems, 
screen-capture software, one-way mirrors, and so on. 
However, usability studies are meant to be extremely practical, and you can 
do them without these special usability labs (a la the "discount usability" 
approach).  The simplest studies just require you to pull up a chair next 
to a typical user, watch them do their work (and perhaps have them explain 
what they are doing as they do it), jot down any noteworthy events, and 
listen to the user's comments. 
Quick synopsis of the assignment
  - You work for Usability Inc., a consulting firm that specializes in 
  evaluating interfaces.
  - You and your team have been contracted to do a usability study of the 
  system described in the attached handout.
  - You are to prepare a usability report for the vice-president in charge of 
  that system's use and redevelopment.
  - This person is extremely busy (so make things clear but concise).
  - Your report will describe how you went about looking for design problems, 
  what problems you saw, and what changes you recommend.
  - She is already familiar with the 3 observation methods (silent observer, 
  think aloud, constructive interaction), so there is no need to explain what 
  they are.  In the appendix, you should explain what new insights your group 
  gained from trying these techniques out in the usability study.
  - Depending upon how convincing you are, the VP will use your report to 
  authorize changes in the upcoming version.
 
 
The major steps for this assignment are summarized below.
1) Things to prepare ahead of time
Administrative
  - Read the assignment description (please).
 
  - Try the system out yourselves (familiarize yourselves with it 
  but make sure not to bias participants during the test). 
 
  - Prepare some example tasks (try to create at least three 
  of your own; you can also use any of my examples). 
 
  - Decide who you will employ as participants in your study (remember that 
  the type of person you run can affect the data you get from observations), a 
  minimum of 2 to 4 people (you can use your own group members or members of 
  other groups, but try to get as large and diverse a group as possible). 
 
  - Decide which group members will do what job.  You 
  need at least two people: 
 
a) The test administrator: Makes 
all the introductions and describes the test to the participant, answers their 
questions and so on.  This is the person who runs the test and interacts 
with the participant.
b) The scribe: Takes down notes 
which are observations of what occurred during the session.
c) (Optional: The security 
person): Prevents other people from interrupting the test session, which is 
useful if you are conducting the test in a public place like the lab on 
the main floor of Math 
Sciences.   If you only have two people in your group then you can get 
a friend to do the third job.
  - Set up the system, e.g., for classes that are 
  evaluating a web page, set up the browser ahead of time (load the browser, 
  clear the cache and the history, and navigate away from the web site to be 
  evaluated). 
 
Write up the pre-test questionnaire. 
  Create a short pre-test questionnaire (~10 questions) that is used to 
  determine the person's experiences and beliefs about the system.  It is 
  extremely important that you ask relevant questions that help you understand 
  a participant's background and beliefs, as related to the task and system.  
  More than likely each person will have to indicate their prior experience with 
  computers, the windowing system, and the system being tested.  They 
  should also indicate if they have any prior expectations about the system and, 
  if so, what these expectations are. 
  Example questions about users' experience levels with regard to the system to 
  be tested include:
  
    - Never used it, 
 
    - Used it once or twice over the last few years 
 
    - Used it ~3-7 times this year, but not regularly 
 
    - Use it regularly (how often?) 
 
  
  Example questions about users' beliefs may include: 
  
    - Will need personal instruction to get started 
 
    - Will learn it after a bit of playing around 
 
    - Will be able to do simple tasks with no problems 
 
    - Will be able to do complex tasks 
 
  
Write up the post-test questionnaire. 
  Good post-test questions will give you information about how participants 
  judge the system's usability, where they think they had most problems, and so 
  on. You may want to leave space after each question for comments, where you 
  would encourage people to say why they answered a question a certain way.  For 
  example, here is such a question that uses a rating scale:
  
    I found the system:
    easy to use    1    2    3    4    5    hard to use
    Reason for your rating:____________________
  
There should be a reason 
for asking the questions that you ask – don’t ask for information that has no 
bearing on the test and which you won’t be using.  
Some of the typical questions that you 
might ask in the pre-test questionnaire if you were evaluating the usability of 
a web site may ask about the person’s computer experience, their Internet and 
browser experience, their experience with this particular web site, and whether 
this person actually uses this web site or the web sites of other airlines.
At the end of the test, 
administer the post-test questionnaire.  This second questionnaire should 
include questions that ask about people’s satisfaction with the system, both in 
broad terms (e.g., “I would rather use the WestJet online system than book a 
flight through a travel agent” or “I would never want to use the online system”) 
and with more specific ratings (strongly agree, agree, neither agree nor 
disagree, disagree, strongly disagree).  Also, be sure to administer the 
usability instructions
to participants, as indicated in the handout.  
  Finally make sure that you read my tips 
  on using questionnaires!
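Once the questionnaires come back, you will want to summarize the rating-scale
answers rather than report them one by one. As a minimal sketch (not part of the
assignment requirements), here is one way you might tabulate Likert-style
responses; all questions and ratings below are hypothetical:

```python
# Hypothetical post-test ratings on the 1 (easy) .. 5 (hard) scale,
# one rating per participant per question.
from statistics import mean, median
from collections import Counter

responses = {
    "I found the system": [2, 4, 3, 5],
    "I could complete the tasks": [1, 2, 2, 4],
}

for question, ratings in responses.items():
    counts = Counter(ratings)  # how many participants gave each rating
    print(f"{question}: mean={mean(ratings):.2f}, median={median(ratings)}, "
          f"distribution={dict(sorted(counts.items()))}")
```

With only 2-4 participants the mean is fragile, so reporting the full
distribution (and the written "reason for your rating" comments) alongside it
is usually more honest.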
Select a core set of typical tasks.
  A usability study requires an observer to watch someone go through the 
  paces with 'typical' tasks. It is your job as the experimenter to prepare a 
  set of example tasks ahead of time that the participants will try to perform. 
  These tasks should be realistic ones that typical users would try to do 
  with the system! But how do you discover what those typical tasks are? 
  The first way is to let people use their own real tasks. To do this, you 
  would have to solicit participants who have a real need, and ask them if you 
  could watch them do their tasks. This is only an option for you if the system 
  being studied is a popular one. 
  The second way is to ask a random sample of people who are using the system 
  what they typically do with it, and then generalize those as tasks to give to 
  test participants.  More than likely this is the approach that you will 
  have to take for this assignment.
  The third way is for you to use the system, and contrive a few sample tasks 
  through intuition. Although this will not produce a set of reliable tasks, you 
  may not have any other choice.  (By the way, jot down any problems with the 
  system you see as you try it. You can compare these later with the problems 
  you notice in the actual study). 
  To get you started, I have enclosed a few example tasks in
  this year's assignment description, but you 
  must come up with your own as well. 
   
2) The Usability Study  
See the
handout on "User Observations" for a basic description of the method.  
    Quick synopsis of the test process
  - Provide the introductions (who you are, what test is 
  about – again watch out that you don’t bias people here!) 
 
  - Administer the pre-test questionnaire (to get the 
  background experience of participants and perhaps to find out if they have any 
  expectations about the system, what they expect it to do etc.). 
 
  - Run the test (using one of the three approaches: 
  Silent Observer, Think Aloud or Constructive Interaction) – the test 
  participant gets the instructions for the tasks; these instructions must be 
  complete enough so that the person knows exactly what they are supposed to do.  Problems that arise should not come from an unclear/incomplete task 
  description but from the system itself.  Make sure that you run a pilot 
  study to iron out these types of problems beforehand!  In each of the three cases the person(s) should try to complete all of the tasks.  
 
  - Administer the post-test questionnaire and conduct any 
  necessary debriefing.
 
    Details of the test process
  - Throughout the exercise: Active intervention and 
  conceptual model formation. You can gain a sense of a person's initial 
  conceptual model of the system by having them explain each screen as it 
  appears, what each interface component does, and what they think they can do 
  with it. This conceptual model will be formed from prior experiences with 
  similar systems and their 
  interpretation of the visuals on the screen. You are looking for places where 
  the model is incorrect or undeveloped.  For example, people may not understand 
  the meaning of labels and icons, what they are supposed to do, and how they 
  are supposed to do it. Some of these problems are related (but not limited) to 
  the lack of meaningful visual affordances, constraints, mappings, and so on.
  Start doing this as soon as they get to the main window for the application. 
  - Have them explain the 
  meaning behind the different components of the introductory window. 
 
  - Have them do their task.
 
  - Ask them if they can now 
  explain how some of the components that they previously missed actually 
  work (do this only if the person couldn't explain how all the controls worked 
  in the previous step).
 
  - Redo 
  this exercise for all major 
  screens, but try to minimize interference with the task 
  e.g., for a form-based window you would likely do it after they fill out the needed information but just before they press a 
  button or other control that takes them to the next screen. 
 
  All your test participants should begin with this step. Using this 
  information as a baseline, you can see how a person's conceptual model 
  develops (correctly and incorrectly) during system use merely by asking them 
  to re-explain the display after major dialog steps (e.g., after reading 
  documentation or after completing a transaction) or at the end of their session. 
  Note that this means you are actively intervening in a person's session, for 
  you are disrupting them in the middle of their task, and their act of 
  explaining the screen to you may result in extra learning by them.  Thus you 
  should use this approach carefully, and at opportune moments.
  Run participants through the three main test cases:
  Do not run every person through every case.  The idea is to take your 
  group of participants and run approximately a quarter of them through the 
  Silent Observer case, a quarter through the Think Aloud case, and the 
  remainder (in pairs) through Constructive Interaction.
  a) Silent observer case
    - You need one participant to run through this at a time (run 
    participant #1 through this case although you can repeat the silent observer 
    case with different people). 
 
    - Give the person the instruction sheet with the first task 
    on it 
 
    - Have the person try to complete the task (remember 
    that you are not allowed to talk to the test participant)
 
    - Repeat the previous two steps for all the tasks
 
    - The scribe watches the person and takes notes. 
 
  
  b) Think aloud scenario
    - You need one participant to run through this at a time (run 
    participant #2 through this case although you can repeat the Think Aloud 
    case with different people).  
 
    - Show the person how to think aloud by example (do it 
    yourself) and run the person through the tasks
 
    - e.g., "I'm going to try to do this task ... OK, 
    this is probably the menu item I should select. Hmmm ... It's not doing 
    anything, what's wrong? Oh, I see, I have to double click it..." 
 
    -  You might have to remind the person to continue to 
    think aloud if they forget and stop doing it but make sure that you do not 
    otherwise interfere with the test procedure
 
    - Give the person the instruction sheet with the first 
    task on it
 
    - Have the person try to complete the first task
 
    - Repeat the previous two steps for all the tasks
 
    - The scribe watches the person and takes notes 
 
  
  c) Constructive interaction
  
    - You need two test participants to run through this 
    (run participants #3 & 4 through this case although you can re-run the test 
    again with the first two pairs of people if you can't get anyone else - 
    ideally you should try to get as many pairs to run through the constructive 
    interaction case as possible) 
 
    - Don’t explain the procedure.  The idea is that the 
    communication between the two people is a natural dialog, so 
    don’t prompt them as in the think aloud scenario; just tell them 
    that they should use the system together.  (They can set it up so that 
    one is the operator and the other gives directions, if they wish.) 
 
    - Give each person the instruction sheet with the tasks 
    on it 
 
    - Have the pair try to complete the tasks 
 
    - The scribe watches the people and takes notes 
 
  
  Caveat to not interfering with your test participants: what to do if your 
  test participants get stuck.  While the experimenter should not 
  help the person with the task, there are a few exceptions to this rule.
  
    - If the person has problems getting started, record the problems that 
    they are having and give them a hint to get going. This is OK, because if 
    they can't get started, they will not be able to do the tasks! 
 
  
  
    - If the person cannot complete a particular task after a reasonable 
    amount of time, tell them to stop and start them on the next task. Or, give 
    them a hint if they cannot overcome some conceptual problem necessary to 
    trying out other parts of the system. Again, be sure to record all problems!
    
 
  
  Remember that getting stuck is discouraging for test participants. Try to 
  give them an early success experience, and don't forget to remind them that they can quit at 
  any time for any reason if they wish. 
 
    Post-test and debriefing 
  - Administer the post-test questionnaire, which asks them what they thought 
  about the system (subjective satisfaction, usability of the system, how easy 
  or hard it was to complete each task, etc.) 
 
  - Interview participants after the test – what did they 
  think of the system (which parts were strong, which parts were weak, etc.)?  
  Adapt the interview to your observations as you ran the test and/or use the post-test 
  questionnaire as a discussion tool rather than slavishly sticking to a fixed 
  script.  As before, the observer should be 
  taking detailed notes.  
 
At this point, you are encouraged to repeat the experiment with your friends, 
classmates and so on. The more people you observe, the better!  Just make 
sure that they are all volunteers.  If 
appropriate you can also allow people to perform open-ended tasks where they set 
their own goals.
 
3) The write up. 
Recall that your write-up should be oriented towards a senior executive in your 
company who will make the major decisions on the software changes. Your TA will 
describe details and format of the write up to you. Here is a general template 
for you to follow. 
  - Section 1. Scenario 
 
  - Give a very brief reminder to the VP about what the system is, and then 
  explain the role of your product evaluation team. Make sure you tell her the 
  point of your work! 
 
  
  - Section 2. Methodology 
 
  - Explain what you did. Assume that the VP knows what the particular 
  usability methods are (as described in this sheet) and their purpose. Include 
  the number of participants, the pre-test evaluation, task description, etc. 
  You must provide a list of the tasks that you have developed, and why you 
  included them. You must also provide the pre- and post- test questionnaires, 
  and why you included each question. 
 
  
  - Section 3: Observations 
 
  - Summarize your observations. Use selected raw and collapsed data, 
  paraphrasing, comments, questionnaire and interview results, etc. It is 
  important to present as much information as possible with economy!   
  Your TA will not be impressed with your report if he or she has to read an 
  exact "click-by-click" narration of each participant's session.  
  Again:  It is your job to summarize and point out what were the important 
  parts.
 
  - Section 4: Interpretation: System strengths and weaknesses
  
 
  - Identify common and important problems and strengths of the system. This 
  should be more than a checklist of all the problems seen.  Try to 
  generalize all the minor "symptoms" into a higher-level description 
  of what the major problems are with this system.   You can then 
  provide some illustrations of the higher-level problems  with lists of specific examples.
 
  
  - Section 5: Suggested improvements
 
  - Describe the five most important changes that you would make to the 
  design of the system, with explanations of why you think they are so 
  important.  To do this you should refer back to your observations and the 
  discussion on design as covered in class. 
 
  - Section 6: Conclusion 
 
  - Summarize what you found and what your recommendations were. 
 
  - Appendix 1: Comparison of different techniques 
 
  - For future usability studies, you want to tell your product team what 
  worked well and what didn't in this usability study. Briefly summarize your 
  experiences with each method, contrasting them for ease of use, the richness 
  of the information obtained, their advantages, etc. Then recommend the methods 
  you wish your group to use in the future. Which was most useful? Which was 
  least useful? What would you keep? What would you throw away? 
 
  - Appendix 2: Raw data 
 
  - All original observations/recordings, etc. should be attached here.   
  Although your group is including the raw data for reference you should point 
  out where the "choice" bits can be found in the raw data so that your TA can 
  quickly find and view it.
 
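The "raw and collapsed data" step in Section 3 can be sketched in code as well.
The following is a hypothetical illustration of collapsing the scribe's notes
into a frequency table of problems; the participant IDs and problem codes are
invented, not from any actual study:

```python
# Each note is (participant, problem_code) as recorded by the scribe.
# All entries below are hypothetical examples.
from collections import Counter

raw_notes = [
    ("P1", "missed-search-button"),
    ("P2", "missed-search-button"),
    ("P2", "confusing-date-format"),
    ("P3", "missed-search-button"),
    ("P4", "confusing-date-format"),
]

# Count how often each problem was observed, across all participants.
problem_counts = Counter(code for _, code in raw_notes)

for code, count in problem_counts.most_common():
    participants = sorted({p for p, c in raw_notes if c == code})
    print(f"{code}: seen {count} times (participants: {', '.join(participants)})")
```

A table like this makes it easy to point the VP at the common, high-impact
problems first, with the full raw notes left in Appendix 2 for reference.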
 
Final points
  - Don’t forget lecture material that deals with test ethics!
  - Remember that the questionnaires, interviews and observations provide raw 
  data.  Your data drives your analysis (what is good or bad about the system) 
  and how to improve it (recommendations).  So, it is crucial that your data 
  provides sufficient material to build your case.
  - Usability studies are immensely practical: you can and should use them 
  every time you design (or wish to select!) a user interface. Good luck, and 
  have fun!