CS 430 / INFO 430
Information Retrieval
Lecture 23
Usability 2
1
Course Administration
Wednesday, November 16
No discussion class
Thursday, November 17
No lecture
No office hours
2
Course Administration
Assignment 3
Grades were returned yesterday
If you have any questions, send email to
cs430-l@lists.cs.cornell.edu
Replies may be delayed until next week
3
Usability Factors in Searching
Example: Design an interface for a simple fielded search.
Interface: Fill in boxes, text string, ... ?
Presentation of results ... ?
Manipulation of results ... ?
Functions: Specify field(s), content, operators, ... ?
Retain results for manipulation ... ?
Query options ... ?
Data:
Metadata formats ... ?
Data structures and file structures ... ?
Systems:
Performance ... ?
4
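One way to make these design questions concrete is to look at the function such an interface would ultimately call. The following is a minimal sketch, assuming a small in-memory document store; the record format, field names, and substring-matching rule are assumptions for illustration, not part of the lecture.

```python
# Hypothetical document store; the field names are assumptions for illustration.
documents = [
    {"title": "The Sun", "author": "Smith", "date": "1999",
     "abstract": "How hot is the sun?"},
    {"title": "Solar Physics", "author": "Jones", "date": "2004",
     "abstract": "The solar corona."},
]

def fielded_search(query):
    """Return documents whose named fields contain the given text strings.
    `query` maps a field name (one fill-in box per field) to a substring;
    multiple fields are combined with an implicit AND."""
    matches = []
    for doc in documents:
        if all(text.lower() in doc.get(field, "").lower()
               for field, text in query.items()):
            matches.append(doc)
    return matches

# Example: the user fills in the author and title boxes.
results = fielded_search({"author": "smith", "title": "sun"})
```

Even this small sketch raises the usability questions above: how the boxes map to fields, whether matching is substring or exact, and how the result list is presented and retained for further manipulation.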
The Design/Evaluate Process
[Diagram: an iterative cycle. Development starts with Requirements (the needs of users and other stakeholders), followed by Design (creative application of design principles), Implementation (which may be a prototype), and Evaluation; evaluation feeds back into requirements, and the cycle repeats until release.]
5
What is usability?
Usability comprises the following aspects:
• Effectiveness – the accuracy and completeness with which
users achieve certain goals
Measures: quality of solution, error rates
• Efficiency – the resources expended in relation to the
effectiveness with which users achieve their goals
Measures: task completion time, learning time, number of clicks
• Satisfaction – the users’ comfort with and positive attitudes
towards the use of the system
Measures: attitude rating scales
From ISO 9241-11
6
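These three aspects map naturally onto simple per-task measures. The sketch below is a hypothetical illustration of how they might be summarized from user-test data; the session record format, field names, and the 1-to-5 satisfaction scale are assumptions, not from the lecture or the ISO standard.

```python
from statistics import mean

# Hypothetical per-task session records from a user test (assumed format).
sessions = [
    {"completed": True,  "errors": 0, "seconds": 95,  "clicks": 7,  "satisfaction": 4},
    {"completed": True,  "errors": 2, "seconds": 140, "clicks": 12, "satisfaction": 3},
    {"completed": False, "errors": 3, "seconds": 180, "clicks": 15, "satisfaction": 2},
]

# Effectiveness: accuracy and completeness (completion rate, error rate).
completion_rate = mean(s["completed"] for s in sessions)
mean_errors = mean(s["errors"] for s in sessions)

# Efficiency: resources expended relative to effectiveness (time, clicks).
mean_time = mean(s["seconds"] for s in sessions)
mean_clicks = mean(s["clicks"] for s in sessions)

# Satisfaction: attitude rating scales.
mean_satisfaction = mean(s["satisfaction"] for s in sessions)

print(f"completion rate   {completion_rate:.2f}")
print(f"mean errors       {mean_errors:.1f}")
print(f"mean time (s)     {mean_time:.0f}")
print(f"mean clicks       {mean_clicks:.1f}")
print(f"mean satisfaction {mean_satisfaction:.1f} / 5")
```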
Evaluation
• The process of determining the worth of, or assigning a
value to, the usability on the basis of careful examination
and judgment.
• Making sure that a system is usable before launching it.
• Iterative improvements after launch.
• Categories of evaluation methods:
– Analytical evaluation: without users
– Empirical evaluation: with users
– Measurements of operational systems
7
Evaluation without Users
Assessing systems using established theories and methods
Evaluation techniques
• Heuristic Evaluation (Nielsen, 1994)
– Evaluate the design using “rules of thumb”
• Cognitive Walkthrough (Wharton et al, 1994)
– A formalized way of imagining people’s thoughts and
actions when they use the interface for the first time
• Claims Analysis – a scenario-based technique
– Generating positive and negative claims about the
effects of features on the user
8
Evaluation with Users
Testing the system, not the users!
Stages of evaluation with users:
Preparation
Conducting the sessions
Analysis of results
User testing is time-consuming and expensive.
9
Evaluation with Users
Preparation
• Determine goals of the usability testing
“The user can find the required information in no more
than 2 minutes”
• Write the user tasks
“Answer the question: how hot is the sun?”
• Recruit participants
Use the descriptions of users from the requirements phase
to identify potential participants
10
Usability Laboratory
Concept: monitor users while they use system
[Diagram: evaluators observe the user through a one-way mirror]
11
Evaluation with Users
Conducting the Sessions
• Conduct the session
– Usability Lab
– Simulated working
environment
• Observe the user
– Human observer(s)
– Video camera
– Audio recording
• Collect satisfaction data from the users
12
Evaluation with Users
Results Analysis
• If possible, use statistical summaries
• Pay close attention to areas where users
– were frustrated
– took a long time
– couldn't complete tasks
• Respect the data and users' responses, don't make excuses
for designs that failed
• Note designs that worked and make sure they're
incorporated in the final product
13
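As a minimal sketch of such a statistical summary, the code below groups hypothetical observations by task and flags tasks where users struggled; the observation format and the flagging thresholds are assumptions for illustration, not from the lecture.

```python
from statistics import mean, median

# Hypothetical observations: (task, completed?, seconds to finish or give up).
observations = [
    ("find opening hours", True,  45),
    ("find opening hours", True,  60),
    ("renew a book",       False, 210),
    ("renew a book",       True,  180),
]

by_task = {}
for task, completed, seconds in observations:
    by_task.setdefault(task, []).append((completed, seconds))

for task, results in by_task.items():
    completion = mean(c for c, _ in results)
    med_time = median(t for _, t in results)
    # Flag tasks with low completion rates or long median times for redesign.
    flag = "REVIEW" if completion < 0.8 or med_time > 120 else "ok"
    print(f"{task:22s} completion={completion:.0%} median_time={med_time:4.0f}s  {flag}")
```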
Measurements on operational systems
Analysis of system logs
• Which user interface options were used?
• When was the help system used?
• What errors occurred and how often?
• Which hyperlinks were followed (click through data)?
Human feedback
• Complaints and praise
• Bug reports
• Requests made to customer service
14
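For an operational system, these questions can often be answered directly from the server log. The following is a minimal sketch, assuming a simple tab-separated log of (timestamp, user, action, detail) records; the log format and action names are assumptions for illustration.

```python
import csv
from collections import Counter

def summarize_log(path):
    """Tally interface actions from a tab-separated log of
    (timestamp, user, action, detail) records (assumed format)."""
    actions = Counter()
    clicked_ranks = Counter()
    errors = Counter()
    with open(path, newline="") as f:
        for timestamp, user, action, detail in csv.reader(f, delimiter="\t"):
            actions[action] += 1
            if action == "click_result":    # click-through data: detail = rank of the hit
                clicked_ranks[int(detail)] += 1
            elif action == "error":         # what errors occurred, and how often
                errors[detail] += 1
    return actions, clicked_ranks, errors

# Example usage:
# actions, clicked_ranks, errors = summarize_log("search.log")
# print("help opened:", actions["open_help"])
# print("clicks by rank:", sorted(clicked_ranks.items()))
# print("most common errors:", errors.most_common(5))
```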
The Search Explorer Application:
Reconstruct a User Session
15
Refining the design based on
evaluation
Designers and evaluators need to work as a team
Designers are poor evaluators of their own work, but know the
requirements, constraints, and context of the design:
• Some user problems can be addressed with small changes
• Some user problems require major changes
• Some user requests (e.g., lots of options) are incompatible
with other requests (e.g., simplicity)
Do not allow evaluators to become designers
16
Usability experiment:
Ordering of results
The order in which the hits are presented to the user:
• Ranked by similarity of match (e.g., term weighting)
• Sorted by a specified field (e.g., date)
• Ranked by importance of document as calculated by some
algorithm (e.g., Google PageRank)
• Duplicates shown separately or merged into a single record
• Filters and other user options
What impact do these choices have on the usability?
17
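To make the options concrete, here is a minimal sketch of how the same hit list could be presented under several of these orderings; the result-record fields, the scores, and the crude near-duplicate rule are assumptions for illustration.

```python
# Hypothetical hit list; field names and values are assumptions for illustration.
hits = [
    {"url": "a.example/sun",  "similarity": 0.91, "date": "2005-10-02", "importance": 0.30},
    {"url": "b.example/star", "similarity": 0.78, "date": "2005-11-01", "importance": 0.95},
    {"url": "a.example/sun2", "similarity": 0.90, "date": "2004-06-15", "importance": 0.10},
]

# Ranked by similarity of match (e.g., term weighting).
by_similarity = sorted(hits, key=lambda h: h["similarity"], reverse=True)

# Sorted by a specified field (e.g., date, newest first).
by_date = sorted(hits, key=lambda h: h["date"], reverse=True)

# Ranked by importance of the document (e.g., a PageRank-like score).
by_importance = sorted(hits, key=lambda h: h["importance"], reverse=True)

# Duplicates merged into a single record (here: crude grouping by host,
# keeping only the best-matching hit per host).
merged = {}
for hit in by_similarity:
    host = hit["url"].split("/")[0]
    merged.setdefault(host, hit)
deduplicated = list(merged.values())
```

Each ordering presents the same hits, but the user sees a different list, which is exactly why the choice is a usability question rather than purely a retrieval one.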
Experiment on the Google Interface
Methodology
• 10 information seeking tasks in 2 categories
• Users randomized across tasks
• Click-through data to see what the user did
• Eye tracking data to see what the user viewed
• Google results presented with top ten ranks reversed
An example of interdisciplinary information science
research by Cornell's Human Computer Interaction
Group and Computer Science Department
18
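The reversed-rank condition is easy to state in code. The following is a minimal sketch, assuming the results arrive as an ordered list; only the presentation order changes, and the original ranks are kept so that clicks can be mapped back to the true ranking.

```python
def reverse_top_ten(results):
    """Present the top ten hits in reversed order (rank 10 shown first),
    keeping any remaining hits in their original order.
    Returns (original_rank, hit) pairs so click-through data can still be
    attributed to the true rank."""
    numbered = list(enumerate(results, start=1))
    return numbered[:10][::-1] + numbered[10:]

# Example: hits ranked 1..12 are shown as 10, 9, ..., 1, 11, 12.
presentation = reverse_top_ten([f"hit-{i}" for i in range(1, 13)])
print([rank for rank, _ in presentation])
```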
Evaluation Example: Eye Tracking
19
Evaluation Example: Eye Tracking
20
Google Evaluation
Click-Through Data
[Chart: number of users who clicked on a link, by rank of hit]
21
Google Evaluation
Eye Tracking Data
[Bar chart: number of users who viewed the short record before their first click, by rank of hit (ranks 1 to 10)]
22
Google Evaluation:
Eye Tracking Data
Part of short record viewed before first click (% of users)
Title: 17.4%
Snippet: 42.1%
Category: 1.9%
URL: 30.4%
Other: 8.2% (includes cached, similar pages, description)
23
Google Experiment: Click Through
Data with Ranks Reversed
[Chart: percentage of users who clicked on a link, by rank of hit (ranks 1 to 10), with the top ten results presented in reversed order]
24