Testing and Retrospective
Annika Silvervarg

Testing session
• Demo/Acceptance tests – Whole team and Customer
• (Discount) Usability testing – Interaction designer and
Users
– On LoFi prototype(s)
– On code
Demo and Acceptance tests
• Short presentation of the process of the sprint
(preconditions, constraints, impediments, etc.)
• Short ”walk-through” of the product, explaining
decisions made (focus on what, not how)
• Customers (stakeholders) sit down individually,
go through the user stories using the system, and
judge whether each acceptance test passes or fails
– Failed acceptance tests are collected for next iteration
Discount Usability testing
(Nielsen, 1989)
• Simplified user testing
– handful of participants
– think aloud methods
• Narrowed down prototypes
– usually paper prototypes
– a single path
• Heuristic evaluation
– Usability guidelines
• Scrum master (PO) has a closing discussion with
the customers and brings the most important bits
back to the team
Discount Usability testing
• Qualitative rather than quantitative
• Formative rather than summative
• Advantages
– Early
– Rapid iterations
• Goes well with agile!
Cognitive walkthrough
• An interaction designer steps through a flow in the
application and, at each step, asks three
questions:
– Do our target users know what to do on this screen?
– If our target users do the right thing, will they know it?
– Will our target users know how to move forward?
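The three questions above can be recorded per step of the flow. A minimal sketch in Python (the step names and data shape are hypothetical, not part of any standard walkthrough tooling):

```python
# The three cognitive-walkthrough questions, asked at every step of a flow.
QUESTIONS = (
    "Do our target users know what to do on this screen?",
    "If our target users do the right thing, will they know it?",
    "Will our target users know how to move forward?",
)

def walkthrough_problems(steps):
    """steps: list of (step_name, (bool, bool, bool)) — one yes/no answer
    per question. Returns the (step, question) pairs answered 'no',
    i.e. the candidate usability problems."""
    problems = []
    for name, answers in steps:
        for question, ok in zip(QUESTIONS, answers):
            if not ok:
                problems.append((name, question))
    return problems
```

For example, `walkthrough_problems([("checkout", (True, False, True))])` flags only the feedback question for the hypothetical "checkout" step.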
Heuristic evaluation
• Multiple evaluators (usability experts)
– inspect the interface/system/prototype individually
– try to identify usability problems using a set of
guidelines/heuristics
– can ask a domain/system expert for help to progress
faster
– report findings verbally to an observer, or write them down
• Results are aggregated
10 Usability guidelines/heuristics
(Nielsen)
• Visibility of system status
• Match between system and the real world
• User control and freedom
• Consistency and standards
• Error prevention
• Recognition rather than recall
• Flexibility and efficiency of use
• Aesthetic and minimalist design
• Help users recognize, diagnose, and recover from errors
• Help and documentation
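The aggregation step of a heuristic evaluation can be sketched in Python — merge the individual evaluators' findings and rank them by how many evaluators reported each one (the finding format here is an assumption for illustration):

```python
from collections import Counter

def aggregate_findings(evaluator_reports):
    """evaluator_reports: one list of findings per evaluator, where each
    finding is a (heuristic, description) pair. Returns findings ordered
    by how many evaluators reported them (most agreement first)."""
    counts = Counter()
    for report in evaluator_reports:
        # Count each distinct finding at most once per evaluator.
        for finding in set(report):
            counts[finding] += 1
    return counts.most_common()
```

A finding reported by two evaluators then sorts above one reported by a single evaluator, which is a common way to prioritize what to fix first.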
Think aloud
• Real users perform specified tasks
• Users say whatever they are looking at, thinking,
doing, and feeling
• Observers take notes of everything that users say,
without attempting to interpret their actions and words
• Test sessions are often audio- and video-recorded
• The purpose is to make explicit what is implicitly
present in users who can perform the tasks
Traditional Usability testing
• Users perform tasks using the system or a prototype
– Measure
– (Observe, video film)
– (Interview)
– (Questionnaire)
Traditional Usability testing
• Effectiveness – accuracy and completeness with
which users achieve specified goals
• Efficiency – resources expended (e.g. time) in
relation to the accuracy and completeness of goals
achieved
• Satisfaction – freedom from discomfort, and positive
attitudes towards the use of the system
• Learnability/memorability
Effectiveness
• Number of tasks performed
• Percentage of relevant functions used
• Percentage of tasks completed successfully on first attempt
• Number of persistent errors
• Number of errors per unit of time
• Percentage of users able to successfully complete the task
• Number of errors made performing specific tasks
• Number of requests for assistance accomplishing task
• Objective measure of quality of output
• Objective measure of quantity of output
• Percentage of users who can do tasks without the manual
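A few of the effectiveness measures above can be computed directly from per-task test logs. A minimal sketch, assuming a simple log format of one dict per attempted task:

```python
def effectiveness(task_results):
    """task_results: list of dicts like
    {"completed": bool, "first_attempt": bool, "errors": int}.
    Returns three of the effectiveness measures listed above."""
    n = len(task_results)
    return {
        # Percentage of tasks completed successfully
        "completion_rate": sum(t["completed"] for t in task_results) / n,
        # Percentage of tasks completed successfully on first attempt
        "first_attempt_rate": sum(t["completed"] and t["first_attempt"]
                                  for t in task_results) / n,
        # Number of errors made per task
        "errors_per_task": sum(t["errors"] for t in task_results) / n,
    }
```

The same pattern extends to the other measures (requests for assistance, manual use, and so on) by adding fields to the log entries.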
Efficiency
• Time to execute a particular set of instructions
• Time taken on first attempt
• Time to perform a particular task
• Time to perform a particular task after a specified period of time
away from the product
• Time to perform task compared to an expert
• Time to achieve expert performance
• Time spent on correcting errors
• Time to install a product
• Percentage of time spent using the manual
• Time spent relearning functions
System Usability Scale (SUS)
1. I think that I would like to use this system frequently.
2. I found the system unnecessarily complex.
3. I thought the system was easy to use.
4. I think that I would need the support of a technical person to be able
to use this system.
5. I found the various functions in this system were well integrated.
6. I thought there was too much inconsistency in this system.
7. I would imagine that most people would learn to use this system
very quickly.
8. I found the system very cumbersome to use.
9. I felt very confident using the system.
10. I needed to learn a lot of things before I could get going with this
system.
Satisfaction
• Ratio of positive to negative adjectives used to describe the
product
• Per cent of customers that rate the product as "more satisfying"
than a previous product
• Rate of voluntary use
• Per cent of customers who feel "in control" of the product
• Per cent of customers who would recommend it to a friend after
two hours’ use
• Per cent of customers that rate the product as "easier to use"
than a key competitor
SUS – Score
SUS – ”Grading”
• For odd-numbered items: subtract one from the user
response.
• For even-numbered items: subtract the user
response from 5.
• This scales all values from 0 to 4.
• Sum the values and multiply the total by 2.5 to get a
score from 0 to 100.
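The grading rule above is mechanical enough to sketch in a few lines of Python (responses are assumed to be on the questionnaire's 1–5 agreement scale, in item order):

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100).

    responses: list of 10 integers, 1-5, in questionnaire order
    (item 1 is odd-numbered, item 2 even-numbered, and so on).
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected 10 responses on a 1-5 scale")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items: response - 1; even items: 5 - response.
        # Either way each item contributes 0-4.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# A respondent who answers 5 to every positive (odd) item and
# 1 to every negative (even) item gets the maximum score:
sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1])  # 100.0
```

Answering 3 ("neutral") to every item yields 50.0, which is why SUS scores are read against benchmarks rather than as percentages.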
Guerilla usability
Quick-and-dirty techniques
Revolving door studies
• Schedule studies in advance to occur once during
every sprint, or every other sprint. This is akin to
having users enter through a revolving door, work
through some tasks for you and then leave again.
• It may seem strange to schedule studies when you
don’t know what you will be testing, but because you
already have tests planned, there is less effort
involved in adding a new task to the study in order to
get user feedback on specific areas of the product.
Guerilla usability
Quick-and-dirty techniques
Running usability studies on pre-alpha code
• The sooner you can move on from prototypes to
usability testing on working code, the sooner you can
show real impact on the project.
• Prototypes may still be necessary for testing interfaces
which have not yet been implemented or for trying out
alternative ideas
• By always testing against the core product tasks, you
can provide a (hopefully) upward trending picture of
user satisfaction to encourage the team.
RITE (Rapid Iterative Testing and Evaluation)
• The main principle of RITE is to fix issues as they are
found in a study.
• For RITE to work, developers must attend, agree on
the fix for any issue that is found, and then be
prepared to code fixes ”on the fly”
• The usability engineer must be experienced enough in
the domain of the interface being tested that they can
spot issues and quickly calculate the level of severity
in conjunction with developers.
Guerilla usability
Quick-and-dirty techniques
RITE (Rapid Iterative Testing and Evaluation)
Issues are categorized and resolved four ways:
1. Issues with obvious cause and solution, quick fix
- Fix and test with the next participant
2. Issues with obvious cause and solution, big fix
- Start fix now, test with the fixed prototype when stable
3. Issues with no obvious cause (or solution)
- Keep collecting data, upgrade issue to 1 or 2 once you discover the cause
4. Issues caused by other factors (test script, participant)
- Keep collecting data, learn from mistakes
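The four-way triage above can be sketched as a small decision function in Python (the flag names are illustrative assumptions, not RITE terminology):

```python
def rite_action(cause_known, big_fix, external):
    """Map an observed issue to the RITE follow-up for the four
    categories above, in priority order of the checks."""
    if external:           # 4: caused by test script / participant
        return "keep collecting data; learn from mistakes"
    if not cause_known:    # 3: no obvious cause (or solution)
        return "keep collecting data; upgrade to 1 or 2 once cause is found"
    if big_fix:            # 2: obvious cause and solution, big fix
        return "start fix now; test with the fixed prototype when stable"
    # 1: obvious cause and solution, quick fix
    return "fix and test with the next participant"
```

In practice the usability engineer and developers make this call together after each participant, which is what keeps the iteration rapid.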
Retrospective
• ”Inspect and adapt”
• Draw a timeline of the sprint on a whiteboard
• Sit down individually (5 min)
– Write positive things on green post-its
– Write negative things on pink post-its
• Place the post-its on the timeline (2 min)
• Look at the board and discuss what you wrote to
identify patterns or clusters (5 min)
• Choose max 3 things you want to improve (5 min)