Midterm Review
Christine Robson
October 11, 2007
Midterm logistics
 Soda 405 and Soda 320
 Closed book, closed notes, no electronic devices
 You have the full class period
Design process
[Diagram: iterative design cycle of NEEDS, DESIGN, IMPLEMENT, and EVALUATE]
Modified from Preece, Rogers, and Sharp, Interaction Design
Needs and need-finding
 Identifying users' needs
– Latent needs, root causes
– Not wants, symptoms, bugs
 Techniques for eliciting and interpreting users' needs
– Interviews
– Direct observation
 Needs belong to people (not entities)
Tools for need-finding
 Market research / competitive analysis
– Identifying gaps in the current market
– Identifying gaps in competitors' offerings
– Business perspective; customer (rather than user) focus
– Team with business folks
 Interviewing prospective users
 Direct observation
Direct Observation
 Observe participants engaged in the desired activity
– In the typical context of that activity
– In a manner that allows you to partner with them to elicit richer details about the process
 You'll need
– Good participants
– Good setting
– Good timing
– Flexibility
Asking vs. Observing
 Some of the best designs are unnoticeable
– Norman examples
 Some work-arounds become invisible
 Observing helps you see what, but you often need to ask to understand why
 Asking and observing are complementary
 Immersion leads to direct observation and better interviews
Doing a Contextual Inquiry
 Direct observation + interviewing
 In the natural context of the activity
 Intentionally pick participants, location, time
 Follow where the participant leads (partner)
 Learn the user's vocabulary
 Gather artifacts, recordings
Outcome of a CI
 Experiencing it is perhaps the most important part!
 What are we going to do with all this data?
– Analysis
– Reporting
 Goal: gain understanding of the user
Goal of a CI
 Gain understanding of the user
– What resources are used to accomplish the task?
– What hindrances are encountered in accomplishing the task?
 Analyze data to get a better description and understanding of resources and hindrances
Resources used
 Tools, devices
– Cell phone, computer, shovel
 Information
– Web page, phone directory
 Other people
– Expert, peer, grad student
Hindrances encountered
 Not having the right tool
– Workarounds
 Unable to access needed resources
– Can't find information
– People unavailable
 Unaware of important information
– Confusion
Supplementary methods
 e.g. focus groups & user surveys
 Conduct after contextual inquiry
– Better understanding enables forming more focused questions
 Add more breadth
 More time-efficient
Diary study
 Ask people to keep a diary of their interactions with a computer system, any significant events or problems during their use of a system, or other aspects of their working life
– Record the date and time of an event
– Where they are
– Information about the event of significance
– Ratings of how they feel, etc.
Ethnography
 Natural settings
– A commitment to studying activities in the "natural" setting in which they occur
 Descriptive
– An interest in developing detailed descriptions of the lived experience
 Members' point of view
– Understanding the participant's activity from the participant's point of view
 Focuses on what people actually do
– Understanding the relationship between activities and environment
Other methods
 Participatory Design (PD)
– Besides partnering in the observation process, users can also actively participate in the design process
– Primarily reacting to prototype designs
 User-Centered Design (UCD)
– Focused on the user, not the technology (we've been presuming this)
Design process
[Diagram: iterative design cycle of NEEDS, DESIGN, IMPLEMENT, and EVALUATE]
Modified from Preece, Rogers, and Sharp, Interaction Design
Design
 Ideating: expressing ideas
 Brainstorming
– More ideas → more creative → better
– Group vs. individual creativity
 Representing
– Sketching
– Enacting
Storyboarding
 Series of frames depicting key steps in reaching a goal
– Mechanically, can use a pin board for easy rearrangement / editing
 Describe the interaction in context
– Show the user in at least the first frame (establishing shot)
– User and the environment
– User and the system
Fidelity
 Fidelity refers to the level of detail
 High fidelity
– Prototypes look like the final product
 Low fidelity
– Artists' renditions with many details missing
 Lo-fi prototypes
– Sketches act as the prototype
– Designer "plays computer"
– Other design team members observe & record
– Might sound silly, but is surprisingly effective
– Widely used in industry
Role-playing
 Enacting scenarios, storyboards
 Recording on video
– Presentations
– Publicity
 Video records (showing up on YouTube)
– Microsoft Surface (2007): http://www.youtube.com/watch?v=QigsOR9r36k
– Apple Knowledge Navigator (1987): http://youtube.com/watch?v=3WdS4TscWH8
Bodystorming
 Transformation of abstract ideas and concepts into physical experiences
 Imagining the product already exists
– Act as if it exists
– In the context of how you would use it
– Involving the entire body in enacting usage
 Oulasvirta et al., "Understanding contexts by being there: Case studies in bodystorming"
– http://dx.doi.org/10.1007/s00779-003-0238-7
Personas
 Archetypal users that represent the needs of larger groups of users, in terms of their goals and personal characteristics
– Representing user research
– Guide vision and design
 Include:
– Name
– Demographic info
– Picture
– Paragraph descriptions: user motivations, goals, expectations, personality
 Imaginary but precise, specific but stereotyped
Card Sorting
 Card sort when you have:
– Lots of content
– Content that could potentially be organized in multiple ways
– Problems with navigation / users finding the correct content
 Create cards that break down content to its constituent, generic parts
 Allow users to organize and name their own categories
Design process
[Diagram: iterative design cycle of NEEDS, DESIGN, IMPLEMENT, and EVALUATE]
Modified from Preece, Rogers, and Sharp, Interaction Design
Conceptual Model of a System
 Design Model
– The model the designer has of how the system works
 System Image
– How the system actually works
– The structure and behavior of the system
 User's Model
– How the user understands how the system works
 Some "repair" between the designer and user
– User manual
– FAQ
Affordances
 "...the perceived and actual properties of the thing, primarily those fundamental properties that determine just how the thing could possibly be used." [Norman]
 How a thing appears tells us how the thing can be used.
 (Whether the implied use matches the intended use is a question for design.)
Metaphors
 A means to bring the real world into your interface
 You're borrowing a conceptual model the user has experience with, e.g.
– A presentation tool is like a slide projector
– The painting metaphor in Photoshop
 Use one if you have it, but don't stretch for one if you don't!
 Beware of metaphors that are too limited, too powerful, too literal or cute, or mismatched
Affordances and metaphors
 Metaphors are meant to "jump start" the user's conceptual model for a system
 Affordances are meant to "jump start" the user's conceptual model for interacting with an artifact
 As with metaphors, if affordances are designed poorly, they thwart developing a correct conceptual model
Natural mapping
 Naturally connecting the user's model with the system model
– Taking advantage of physical analogies and cultural standards
– Physical properties (stove burner layout)
– Metaphorical/linguistic (on is up)
– Analogous function (playback control buttons)
 "Natural" is individual and culture-specific
Command Based Interactions
 The user must know exactly what to do and how to do it
 Can be faster than traditional graphical interfaces
Direct Manipulation
 User interacts with a visual representation of data objects
– Continuous visual representation
– Physical actions or labeled button presses
– Rapid, incremental, reversible, immediately visible effects
 Computer objects as visible, movable objects
– Icons to represent items
– Items can be "picked up" and moved
– Items can be "thrown out"
– Items can be "copied"
Direct Manipulation Design Principles
 Affordances: perceived and actual properties of a thing that determine how the thing could be used
 Natural mapping:
– Physical arrangement of controls should match the arrangement of functions
– The best mappings are direct, but natural mappings don't have to be direct
 Visibility: capabilities and relevant parts of your system should be visible
– Don't make the user guess whether you have functionality: show them!
 Feedback: always let the user know that you caught their action
How do you interact with your computer?
 Keyboards
– QWERTY vs. Dvorak
– Chorded, ergonomic, etc.
 Mouse
 Joystick
 Gesture interfaces
 Stylus
 Touchscreens
 Speech interfaces
Natural Language Processing (NLP)
 Automated generation and understanding of natural human languages
 Language generation systems
– Convert information from computer databases into normal-sounding human language
 Natural language understanding systems
– Convert samples of human language into more formal representations that are easier for computer programs to manipulate
User Interface Consistency
 The Principle of Least Surprise
– Similar things should look and act similar
– Different things should look different
 Other properties
– Size, location, color, wording, ordering
 Command/argument order
– Prefix vs. postfix
 Follow platform standards
Kinds of Consistency
 Internal consistency within your application
 External consistency with other applications on the same platform
 Metaphorical consistency with your interface metaphor or similar real-world objects
Soft vs. Hard Buttons
 Hard buttons
– "Real" buttons
– Generally have a single action
 Soft buttons
– Programmable buttons or options
– Can appear and disappear in different modes
Progressive vs. interruptive feedback
 Feedback can be progressive: a part of the sequence of actions themselves
 Feedback can also be interruptive: a break in the sequence of actions
 Smooth is usually preferred to interruptive
Designing for experts and novices
 Often difficult to design for both at once
 Experts require rapid control and feedback; the computer is interrupted more by them
 Novices require steady and comprehensible control and feedback; they are interrupted more by the computer
Stage Theory of Human Perception & Memory
[Diagram: Sensory Image Store → Working Memory → Long Term Memory, with maintenance rehearsal looping in Working Memory and elaboration moving items into Long Term Memory]
 Sensory image store ("short term")
– Visual information store
– Auditory information store
– Pre-attentive processing
– Loss through decay
 Working memory
– Small capacity: ~7 ± 2 chunks
– Loss through decay, displacement
 Long term memory
– Huge capacity
– Loss through decay? interference?
Recall vs. Recognition
 Who were the seven dwarves in Snow White?
 Now pick them from this list: Grouchy, Sneezy, Smiley, Sleepy, Pop, Grumpy, Cheerful, Dopey, Bashful, Wheezy, Doc, Lazy, Happy, Nifty
 Does that help?
Leveraging social behavior
 Others benefit from my work
 I benefit from others' work
 The more people who participate, the more interesting it is (viral)
 Social side effects from individual work
 Linking social benefits with individual benefits
 Leveraging social work (Tom Sawyer)
Communities
 Applications can foster online community
 UIs reflect the social characteristics of the community
 Examples:
– Tagging websites
– Virtual worlds
Design process
[Diagram: iterative design cycle of NEEDS, DESIGN, IMPLEMENT, and EVALUATE]
Modified from Preece, Rogers, and Sharp, Interaction Design
Gulfs of execution and evaluation
 Gulf of execution
– How do I do it?
 Gulf of evaluation
– What did it do?
The Gulfs: where thought is required
 Gulf of execution: the thinking required to figure out how to get something done, transforming a high-level intention into specific physical actions
 Gulf of evaluation: the thinking required to understand what is being perceived, transforming raw sensory data into an understanding of objects, properties, and events
Semantic & Articulatory Distance
[Diagram: User's Goals ↔ Meaning of Expression (semantic distance); Meaning of Expression ↔ Form of Expression (articulatory distance)]
 Semantic: is it possible to say what one wants to say?
– Can it be said concisely?
 Articulatory
– Make the form of expression similar to the meaning of the expression
– e.g. onomatopoeia: the "boom" of an explosion; the "cock-a-doodle-doo" of a rooster
Modes & Mode Errors



Modes: states which have different meanings
A mode error occurs when a user performs an
action that is appropriate to a different mode and
gets an unexpected and undesired response.
Avoiding mode errors
– Eliminate modes
– Visibility of mode

Show me I’m in CAPS LOCK
– Spring-loaded or temporary modes

Click to highlight
– Disjoint action sets in different modes

No overlapping commands
Other Types of Errors
 Description error
– The intended action is replaced by another action with many features in common
– Putting the wrong lid on a jar
 Capture error
– A sequence of actions is replaced by another sequence that starts the same way
– Leaving home and finding yourself walking to class instead of where you wanted to go
Evaluating Human Capabilities
Power Law of Practice
 Task time on the nth trial:
– Tn = T1 × n^(−a) + c, where a ≈ 0.4 and c is a limiting constant
 You get faster the more times you do it!
 Applies to skilled behavior
– e.g. sensory & motor
– Not to knowledge acquisition or improving quality
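
A minimal Python sketch of how the predicted time falls with practice; the T1 and c values below are made-up illustration numbers, not data from the slides:

def trial_time(n, t1=10.0, a=0.4, c=2.0):
    # Power Law of Practice: T_n = T_1 * n^(-a) + c
    # t1 = 10 s and c = 2 s are assumed values for illustration only
    return t1 * n ** (-a) + c

for n in (1, 5, 25, 100):
    print(f"trial {n:>3}: {trial_time(n):.2f} s")
# Times fall quickly at first, then level off toward the limit c.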
Hick's Law
 Models the time it takes for a user to make a decision
 Given n equally probable choices, the average reaction time T required to choose among them is:
– T = b log2(n + 1)
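
A minimal Python sketch; b is an empirically fitted constant, and the 0.2 s used here is an assumption for illustration:

import math

def reaction_time(n_choices, b=0.2):
    # Hick's Law: T = b * log2(n + 1); b = 0.2 s is an assumed constant
    return b * math.log2(n_choices + 1)

for n in (1, 3, 7, 15):
    print(f"{n:>2} choices: {reaction_time(n):.2f} s")
# Roughly doubling the number of choices adds a constant increment b.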
Fitts's Law
 Time T to move your hand to a target of size S at distance D away is:
– T = a + b log2(D/S + 1)
– Index of difficulty: ID = log2(D/S + 1)
– S is measured in the direction of motion ("length" is arbitrary)
– Note that the distance is between center points
Tool palette example
 A tool palette can float or be pegged to the edge of the screen. The cursor is at x = 475, y = 180; each tool target is 30 px wide.
 Floating palette (left edge at x = 70):
– T = a + b log2(D/S + 1)
– D = 475 − (70 + 30/2) = 390
– S = 30
– ID = log2(390/30 + 1) = log2(13 + 1) = log2(14)
 Pegged to the edge of the screen (assume pegging means 2× effective target size):
– D = 475 − 30/2 = 460
– S = 2 × 30 = 60
– ID = log2(460/60 + 1) = log2(7.6667 + 1) = log2(8.6667)
 The pegged palette has the lower index of difficulty, even though it is farther away.
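
A minimal Python sketch of this comparison; the regression constants a and b are device-dependent, and the values below (a = 0.1 s, b = 0.15 s/bit) are assumptions for illustration only:

import math

def fitts_time(d, s, a=0.1, b=0.15):
    # Fitts's Law: T = a + b * log2(D/S + 1)
    return a + b * math.log2(d / s + 1)

cursor_x = 475
floating = fitts_time(d=cursor_x - (70 + 30 / 2), s=30)      # ID = log2(14)
pegged = fitts_time(d=cursor_x - 30 / 2, s=2 * 30)           # ID = log2(8.6667)
print(f"floating: {floating:.3f} s, pegged: {pegged:.3f} s")
# The pegged palette wins despite the larger distance D.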
GOMS
 Describes user behavior in terms of:
– Goals
 e.g. edit manuscript, locate line
– Operators
 Elementary perceptual, motor, or cognitive acts
– Methods
 Procedures for using operators to accomplish goals
– Selection rules
 Used if several methods are available for a given goal
 Family of methods
– KLM, CMN-GOMS, NGOMSL, CPM-GOMS
GOMS How-To
 Generate a task description
– Pick a high-level user Goal
– Write Methods for reaching the Goal (may invoke sub-goals)
– Write Methods for the sub-goals
– Iterate recursively until Operators are reached
 Evaluate the description of the task
 Calculate execution time
– Add up times from operators
– Assume experts (who have mastered the tasks)
– Assume error-free behavior
– Very good rank ordering
– Absolute accuracy ~10%–20%
 Apply results to the UI
 Iterate
Keystroke Level Model
 Describe the task using the following operators:
– K: pressing a key, or pressing (or releasing) a button
– P: pointing
– H: homing (switching devices)
– D(n, L): drawing segmented lines (n segments of total length L)
– M: mentally preparing
– R(t): system response time
 KLM heuristic rules (don't memorize, just familiarize):
– 0: Insert M in front of all K's, and in front of all P's that select a command (not in front of P's that end a command)
– 1: Remove M's between fully anticipated operators, e.g. MPK → PK
– 2: If a string of MK's belongs to a cognitive unit, delete all M's except the first
– 3: If K is a redundant terminator, delete the M in front of it
– 4a: If K terminates a constant string (a command name), delete the M in front of it
– 4b: If K terminates a variable string (a parameter), keep the M in front of it
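
To make "add up times from operators" concrete, here is a minimal Python sketch of a KLM estimate. The operator times are the commonly cited Card, Moran & Newell averages, used here as assumptions; substitute measured values for your own users and devices:

KLM_TIMES = {
    "K": 0.28,  # press a key or button (average skilled typist)
    "P": 1.10,  # point with a mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mentally prepare
}

def klm_estimate(ops):
    # Sum operator times for a sequence like "MHPK"
    return sum(KLM_TIMES[op] for op in ops)

# Example: mentally prepare, home to mouse, point at a menu item, click.
print(f"{klm_estimate('MHPK'):.2f} s")  # 1.35 + 0.40 + 1.10 + 0.28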
What GOMS Can Model
 The task must be goal-directed
– Some activities are more goal-directed than others
– Creative activities may not be as goal-directed
 The task must be a routine cognitive skill
– As opposed to problem solving
– Good for machine operators
 Serial and parallel tasks (CPM-GOMS)
User Testing: General Method
 Determine the activity to observe
 Develop the method
 Get human subjects review approval
 Pilot test the method
 Recruit participants
 Collect the data
 Inspect & analyze the data
 Draw conclusions to resolve design problems; reflect on what you learned
 Redesign and implement the revised interface
Qualitative vs. Quantitative
 Qualitative
– Rich, subjective
– Exploring concepts
– More useful for earlier input
 Quantitative
– Precise, objective, repeatable
– Demonstrating claims
– More useful for documenting improvement
– Can be expensive
Cognitive Walkthrough
 Have the user imagine walking through the process of using the system
 Can use low-fidelity prototyping or a partially implemented prototype
 Can use target users rather than experts
– A pluralistic walkthrough uses experts, users, and developers
 Like a code walkthrough
C. Wharton et al., "The cognitive walkthrough method: a practitioner's guide", in J. Nielsen & R. Mack (eds.), Usability Inspection Methods, pp. 105-140.
Semi-structured interviews
 Interactively asking questions (face-to-face, telephone)
 Gives users a chance to explain "why" to complement "what" they did; the subjective user's viewpoint
 Can help with design questions
– "What improvements would you suggest?"
 Can be done individually or in groups
Questionnaires & surveys



User responses to specific questions
Preparation is expensive,
administration relatively cheap
Oral vs. written
– Oral provides interaction, followup, but
takes more time
– Written more efficient, can provide
quantitative data
Wizard of Oz
 Research experiment
– Subjects interact with a "computer system"
– Illusion that the system is autonomous
– It is actually being operated, or partially operated, by a human
 Depending on your level of illusion, this can be a person in another room manipulating the computer
 Or a person in front of you physically moving the paper prototype
 The name of the experiment comes from the "man behind the curtain" in The Wonderful Wizard of Oz
Thinking aloud
 Have the subject "think out loud" while performing the task
 Psychology technique to elicit cognition
 Requires a training task
 The facilitator actively prompts if the subject falls silent for more than 10 seconds
– "What are you thinking now?"
– "So, you are trying to…?"
– "And now you are…?"
Wizard of Oz: Conducting a Test
 Roles
– Greeter: puts users at ease & gets data
– Facilitator: gives instructions & encourages thoughts, opinions
– Computer: knows the application logic & controls it
– Observer(s): take notes & make recommendations
 Greet; Test; Debrief
Heuristic Evaluation
 A fancy way to describe expert review
– HCI expert
– Domain expert
 Expert review identifies usability issues before implementation
 Our grades on your homework are a form of heuristic evaluation
Experience Sampling Method (ESM)
 Participant is asked to carry a "beeper"
 When the beeper activates, fill out a short survey (using the device)
– User's context
– Reaction to stimulus
 Alerts are programmed at "random" times
– Several times per day over a week
– Scope time intervals appropriately (e.g., waking hours)
 Enough users and samples → a model
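
A minimal Python sketch of scheduling the "random" alerts, under stated assumptions (five prompts per day within 9:00-22:00 waking hours, over one week); real studies often also enforce minimum gaps between prompts:

import random

def schedule_alerts(days=7, n_per_day=5, start_hour=9, end_hour=22):
    # Pick n_per_day distinct minutes within waking hours for each day
    minutes = range(start_hour * 60, end_hour * 60)
    return [sorted(random.sample(minutes, n_per_day)) for _ in range(days)]

for day, alerts in enumerate(schedule_alerts(), start=1):
    times = ", ".join(f"{m // 60:02d}:{m % 60:02d}" for m in alerts)
    print(f"day {day}: {times}")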
Pilot test the observation method
 Pilot test the method with some target users
– Debug the questions and methods
– Also debug the logistics
– Don't count pilot data in the analysis
 Make changes now, before collecting data (you want the data-collection method to be consistent)
Quantitative methods
 Reliably measure some aspect of the interface
– Especially to compare measurably
 Approaches
– Controlled experiments
 Doing Psychology Experiments, David W. Martin, 7th edition, 2007
– Collect usage data
Designing an experiment
 State the hypothesis
 Identify variables
– Independent
– Dependent
 Design the experimental protocol
 Apply for human subjects review
 Select the user population
 Conduct the experiment
– Run a pilot test
– Collect data from running the experiment
– Perform statistical analysis
– Interpret the data, draw conclusions
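
As a sketch of the "perform statistical analysis" step: assuming a between-subjects design with time-to-completion (seconds) as the dependent variable, a t-test compares the two conditions. The numbers below are made up for illustration:

from scipy import stats

# Hypothetical time-to-completion data for two interface conditions
with_pictures = [42.1, 38.5, 45.0, 40.2, 37.8, 44.3]
without_pictures = [51.3, 47.9, 55.2, 49.6, 53.1, 48.8]

t, p = stats.ttest_ind(with_pictures, without_pictures)
print(f"t = {t:.2f}, p = {p:.4f}")
# A small p (e.g. < 0.05) lets you reject the null hypothesis that the
# two conditions have the same mean completion time.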
Experiment design
 Independent variables
– Attributes we manipulate / vary per condition
– Levels: values of the attribute
 Dependent variables
– Outcome of the experiment; measures used to evaluate
– Usually measure user performance
 Time to completion
 Errors
 Amount of production
 Measures of satisfaction
Experiment design (2)
 Control variables
– Attributes that remain the same across conditions
 Random variables
– Attributes that are randomly sampled
– Can be used to increase generalizability
 Avoiding confounds
– Confounds are attributes that changed but were not accounted for
– Confounds prevent drawing conclusions about the independent variables
Tasks
 Define tasks to test the hypothesis
– Pictures will lead to fewer errors
– Same time to pick users with and without pictures (H0)
– Pictures will lead to higher satisfaction
 How do you present the task?
 Create a scenario, a movie plot for the task
– Immerse the subject in a story that removes them from the "user testing" situation
– Focus the subject on the goal; the system becomes a tool (and more subject to critique)
Other Testing Options
 Remote usability testing
– Telephone or video communication
– Screen-sharing technology
– Greater flexibility in recruiting subjects and environments
 Usage logging
– Embed logging mechanisms into the code
– Study usage in actual deployment
– Some code can even "phone home"
– e.g. Facebook usage metrics
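
A minimal Python sketch of embedding a logging mechanism; the event names and fields are hypothetical, and a real deployment must answer the privacy questions on the next slide:

import json
import logging
import time

logging.basicConfig(filename="usage.log", level=logging.INFO,
                    format="%(message)s")

def log_event(event, **details):
    # Append one timestamped usage event as a JSON line
    logging.info(json.dumps({"t": time.time(), "event": event, **details}))

log_event("menu_open", menu="File")
log_event("command", name="Save", duration_ms=35)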
Controlling data for privacy
 What data is being collected?
 How will the data be used?
 How can I delete data?
 Who will have access to the data?
 How can I review data before public presentations?
 What if I have questions afterwards?
Ethics
 Testing can be arduous
 Each participant should consent to being in the experiment (informally or formally)
– Know what the experiment involves, what to expect, and what the potential risks are
 Participants must be able to stop without danger or penalty
 All participants are to be treated with respect
Design process: keep iterating!
[Diagram: iterative design cycle of NEEDS, DESIGN, IMPLEMENT, and EVALUATE]
Modified from Preece, Rogers, and Sharp, Interaction Design
Nuts & Bolts
Office Hour Switch-up (this week only)
 David will run an extra office hour between 4:00 and 5:00 pm in HMM 360
 Christine's office hours are moved to Monday, 4:00 to 5:00 pm, in Soda 421
 John's office hours on Tuesday, Oct. 16th and 23rd are being handled by Bryan
Midterm logistics
 Soda 405 and Soda 320
 Closed book, closed notes, no electronic devices
 You have the full class period