Usability Context Analysis Guide


Usability Context Analysis: A Practical Guide

Version 4.04

Edited by

Cathy Thomas and Nigel Bevan

Serco Usability Services

© Crown copyright 1996

Reproduced by permission of the Controller of HMSO

National Physical Laboratory

Teddington, Middlesex, TW11 0LW, UK

No extracts from this document may be reproduced without the prior written consent of the Chief Executive, National Physical Laboratory; the source must be acknowledged.

Exception: Users of this guide may freely reproduce the Product Report and Context Report tables in Parts 3 and 4 so that they may be used for their intended purposes.

Contributors

Nigel Bevan, Rosemary Bowden, Richard Corcoran, Ian Curson, Miles Macleod,

Jonathan Maissel, Ralph Rengger and Cathy Thomas

National Physical Laboratory and

Andrew Dillon, Martin Maguire and Marian Sweeney

HUSAT Research Institute

Further Information

For further information regarding MUSiC and the Usability Context Analysis Guide, contact the address below.

Serco Usability Services

22 Hand Court

London, WC1V 6JF

Tel: +44 20 7421 6474

Fax: +44 20 7421 6499

Email: info@usability.serco.com

Contents

1 INTRODUCTION ................................................................................................................ 1

Overview ................................................................................................................................ 1

Why is context important? ..................................................................................................... 1

How is UCA used? ................................................................................................................. 2

When is UCA needed? ........................................................................................................... 2

Who should be involved in a Context Meeting? ..................................................................... 2

How is a Context Meeting carried out? ................................................................................. 3

Benefits of Usability Context Analysis ................................................................................... 4

Further information about context ........................................................................................ 5

2 THE PROCESS .................................................................................................................... 6

Context Study ...................................................................................................................... 6

The Usability Team .............................................................................................................. 7

Figure 2.1: People who can provide information necessary to carry out a Context Study. .................. 7

The Product ........................................................................................................................... 8

Who is involved ...................................................................................................................... 8

Describing the product .......................................................................................................... 8

Context of Use ..................................................................................................................... 8

Who is involved ...................................................................................................................... 9

How to collect the information .............................................................................................. 9

Critical Usability Factors ............................................................................................ 10

Who is involved .................................................................................................................... 10

How to identify the critical components .............................................................................. 10

Context of Evaluation ................................................................................................... 11

Who is involved .................................................................................................................... 11

Defining the Context of Evaluation ..................................................................................... 11

Evaluation Plan ................................................................................................................ 13

Who is involved .................................................................................................................... 13

Contents of the Plan ............................................................................................................. 13

Specifying actions ................................................................................................................ 13

Format of the Plan ............................................................................................................... 14

Usability Measures ........................................................................................................... 14

3 PRODUCT REPORT ........................................................................................................ 15

Product Report .................................................................................................................. 16

4 CONTEXT QUESTIONNAIRE AND REPORT ............................................................ 17

Structure of the questionnaire .............................................................................................. 17

Recording actual Context of Use ......................................................................................... 17

Flexibility of the questionnaire ............................................................................................ 17

Nature of responses - concise and accurate ........................................................................ 18

Processing responses ........................................................................................................... 18

Multiple users or tasks ......................................................................................................... 18

Figure 4.1 Multiple Users and Tasks .................................................................................................. 19

Electronic versions of the report .......................................................................................... 19

Context Questionnaire and Guidance ....................................................................... 20

Context Report .................................................................................................................. 36

5 CASE STUDIES ................................................................................................................. 50

1 Paint Package X ............................................................................................................ 50

Product Report .................................................................................................................. 50

Context Report .................................................................................................................. 52

Evaluation Plan ................................................................................................................ 57

Users .................................................................................................................................... 57

Task ..................................................................................................................................... 58

Organisational Environment ............................................................................................... 58

Technical Environment ........................................................................................................ 58

Physical Environment .......................................................................................................... 59

2 Automated Telling Machine (Cashpoint) ............................................................ 60

Product Report .................................................................................................................. 60

Context Report .................................................................................................................. 62

Evaluation Plan ................................................................................................................ 68

Users .................................................................................................................................... 68

Tasks .................................................................................................................................... 68

Organisational Environment ............................................................................................... 68

Technical Environment ........................................................................................................ 69

Physical Environment .......................................................................................................... 69

Controlled Conditions ......................................................................................................... 69

A GLOSSARY OF TERMS ................................................................................................. 70

B CONTEXT HIERARCHY ............................................................................................... 79

1. USERS ............................................................................................................................. 79

2. TASK CHARACTERISTICS ............................................................................................. 79

3. ORGANISATIONAL ENVIRONMENT ............................................................................ 80

4. TECHNICAL ENVIRONMENT ....................................................................................... 80

5. PHYSICAL ENVIRONMENT ........................................................................................... 80

C MUSiC METHODS .......................................................................................................... 81

The Metrics ......................................................................................................................... 81

The Importance of Context ........................................................................................... 82

MUSiC PRINCIPLES ................................................................................................................ 83

D THE DEVELOPMENT OF EVALUATION IN CONTEXT ........................................ 85

The science of measurement ................................................................................................ 85

Example study ...................................................................................................................... 85

Context awareness ............................................................................................................... 87

Example study ...................................................................................................................... 88

Environmental Factors ........................................................................................................ 88

Summary .............................................................................................................................. 89

REFERENCES ....................................................................................................................... 90

FURTHER READING .......................................................................................................... 91

1 Introduction

Know your users, know their goals, know the circumstances of system use.

The usability of a product is affected not only by the features of the product itself, but also by the characteristics of the users, the tasks they are carrying out, and the technical, organisational and physical environment in which the product is used. In this guide, we use the term 'context' to include all factors which affect the usability of the product, excluding the features of the product itself. We use the term 'product' to represent any interactive system or device designed to support the performance of users' tasks.

Overview

This handbook provides guidance and support for dealing with important issues which underlie the evaluation of usability. It presents a practical method for identifying and documenting contextual factors which affect usability, and for ensuring that these factors are represented when systems are evaluated.

Though designed principally for IT products and systems, the method is widely applicable to other devices with which humans interact. Usability Context Analysis helps evaluation to reflect real-world usability accurately.

The method implements the principles of ISO 9241-11 (Guidance on Usability), and has evolved through application in a variety of commercial organisations where usability and quality of system use are of concern. It can be applied from relatively early in the development of a system, before a working prototype is available. This can help raise shared awareness of usability issues among the development team.

Usability Context Analysis is designed to be used by a small team of people with a stake in the development of a product, including one or more with a background in usability testing or Human Factors. Basic training in the use of the method is available, and it is recommended that team members receive this training.

Why is context important?

Awareness of contextual factors is important throughout the development process.

To develop a product which is appropriate and usable for its intended users, the contexts in which that product will be used should be considered from the very early stages of product specification and design.

Once a product or prototype is available, evaluation can help to assess the usability of that product. For a usability evaluation to have meaningful results, it must be carried out in conditions representative of those in which the product will actually be used. For example, valid and reliable data about the usability of a system for administrative staff cannot be obtained by observing system designers using it. The system designers' knowledge, background, and approach to computer systems, and hence the nature of their interaction with the system under test, are likely to be very different from those of the administrative staff.


Equally, any evaluation must be carried out by asking people to perform realistic, representative tasks. Employing a method such as Usability Context Analysis (UCA) helps specify in a systematic way the characteristics of the users, the tasks they will carry out, and the circumstances of use. The method may then be used to define how these, or a subset of these, can be represented in an evaluation.

How is UCA used?

Application of UCA normally involves bringing together a range of people who have a stake in the development or procurement of an IT product or interactive system at a Context Meeting, and focusing their attention in a structured way on contextual factors relevant to usability. UCA enables the group to arrive collectively at a clearly documented statement of:

• the characteristics of the product (or system)
• who the system is for
• what tasks it is intended to support
• the anticipated circumstances of system use

When is UCA needed?

Analysis of context is an essential pre-requisite for any work on usability. During development there is a need for a continual cycle of user-based evaluation to:

• understand the Context of Use
• specify usability requirements
• construct a solution which can be evaluated
• evaluate the solution with typical end users

The first Context Study should be performed as early as possible during development or acquisition, and more detailed follow-up of the context may be required subsequently. A typical example of the use of UCA would be:

• an initial early Context Meeting to give a broad overview of how the product will be used, and to set overall usability goals
• a meeting to define the user types and tasks for informal evaluation of mock-ups to refine the requirements
• a meeting when a prototype is available to define the user types and tasks for a formal evaluation of whether usability goals have been met.

Who should be involved in a Context Meeting?

A Context Meeting should be led by an experienced facilitator with a background in usability. The other participants, whom we shall refer to as stakeholders, should whenever possible include:

• project managers (when procuring or developing systems)
• designers (when developing systems)
• user representatives

Other potentially appropriate people include:


• product managers
• quality managers
• user support managers
• documentation managers and technical writers
• training managers
• people with responsibility for certification and auditing
• people with responsibility for health and safety
• Human Factors professionals

These are busy people, and their time is valuable. It is not cost-effective to bring together all these people at a Context Meeting. The requirement is to bring together key people who have a stake in the product, and are willing to make the time to be involved. The team should be of a manageable size (typically 3 to 8) and should include people with sufficient seniority to make or influence decisions concerning the usability of the product, and the course of its development. The participants and conduct of the initial meeting are important in shaping subsequent usability evaluation, and in getting the results of evaluations acted upon. Providing adequate briefing material before the meeting is essential.

How is a Context Meeting carried out?

The Context Questionnaire provides the agenda for the Context Meeting. It is divided into four areas:

• product
• users
• tasks
• environment

These factors are described at a fairly broad level under a number of headings, which provide a pragmatic basis for more detailed work on key aspects of usability-related design. The Context Questionnaire includes guidance on what to consider in answering each question. A copy of this questionnaire should be provided to stakeholders as part of the pre-meeting briefing.

In this first step of UCA, stakeholders are explicitly instructed not to consider the constraints of system evaluation. The aim is to arrive at a fair summary of the circumstances of actual system use. Once agreed, these are recorded by the person leading the Context Meeting, and are entered in the Context Report.

The second step in UCA is to consider each documented factor and assess its relevance to usability. We recommend that this is done separately from recording the Context of Use. It should draw on the knowledge of a core of the involved stakeholders - the 'usability team' who have been given the job of working with the evaluation. We advise that at least one person with a background in Human Factors contributes to this step, but the key issue is that no single viewpoint should be allowed to dominate.

The Context of Use, and the relevance of each factor to usability, can be considered both in system design and in the planning of an evaluation. If the product is to be evaluated, the third step of UCA is to derive a specification of how the evaluation will be run - the Evaluation Plan. This involves the Usability Team making decisions regarding the choice of representative users, tasks and setting. These should be as near to the Context of Use as possible so valid inferences may be made from the evaluation findings.

The problem of generalising from evaluation findings can be minimised if these three steps are rigorously followed.

The remainder of the evaluation process follows naturally from this point.

Benefits of Usability Context Analysis

Specification of overall Context of Use of a product

Details about the characteristics of users, their goals and tasks, and the environments in which the tasks are carried out provide important information for use in the specification of overall product requirements, prior to development of particular usability requirements.

Specification of particular usability requirements for a product

Before a custom system is developed, a purchasing organisation can specify the usability requirements for the system, against which acceptance testing may be carried out. Specific contexts for usability measurement should be identified; measures of effectiveness, efficiency and satisfaction selected; and acceptance criteria based upon these measures established.

Product development

Agreement on the Context of Use can assist product development teams in establishing a common understanding of the concept of usability: it can help them address the breadth of issues associated with product usability.

A developer can use UCA to help specify usability targets for the product. At various stages during the development process the developer can measure the usability achieved against these targets. This enables objective decisions to be taken about the need for design changes to enhance usability, and about trade-offs which may be appropriate between usability and other requirements, as part of an iterative design process.

Focused approach

Clear specification of the Context of Use for users of a product allows the characteristics and conditions of use to be considered and documented in a more focused way than may otherwise be the case.

An interactive system is designed to support people (users) carrying out a range of work tasks. Tasks can be considered in terms of the goals people wish to achieve. A system may be intended for specific people with specific skills and capabilities, or a range of users, carrying out different tasks.


For example, a database may be used by entry clerks, whose job each day is to enter data: a repetitive job, where optimal efficiency of use is important. The same database may be used infrequently by managers who want to access summary reports. They do not want to spend hours looking through manuals each time because they have forgotten how to do it. The system may have quite different levels of usability for these two kinds of user and task.

Contextual validity of evaluation findings

In addition to enabling meaningful evaluation, the acts of specifying and documenting the contextual factors of an evaluation allow judgements to be made concerning the generalisability of that evaluation. For example, prospective purchasers of a product can see in detail the conditions under which the evaluation was carried out. They can then judge to what extent those conditions apply to their own intended use of the product.

A shared view

The activity of discussing a structured set of contextual issues helps the stakeholders work as a group. The significance of involving stakeholders as a group in the process of usability context analysis has become increasingly apparent to us, through applying UCA with a variety of organisations which develop or procure interactive systems and software. The Context Meeting can provide a forum for airing possibly differing views, and arriving at a shared understanding of issues which are relevant to quality of system use.

Agreement on issues raises awareness of usability factors. The group aspects of UCA also facilitate the subsequent process of acting upon the findings of evaluations. This helps organisations avoid the all-too-common occurrence of human factors evaluation being carried out in isolation, with the danger that findings are ignored, rejected or simply misunderstood by managers.

Further information about context

Appendices C and D provide further background to contextual issues. You are advised to read these if you wish to find out more about the MUSiC approach to context, and to learn how ideas concerning the importance of context have developed over recent years.


2 The process

Context Study

A Context Study consists of a number of steps, which involve gathering information about the product and its intended Context of Use, and, when required, using this information to plan an evaluation.

STEP 1 Describe the Product and its use
a) describe the Product to be tested – by completing the Product Report
b) describe the Context of Use – by completing the Context Report

STEP 2 Identify critical components that could affect usability

STEP 3 Plan the evaluation
a) specify the Context of Evaluation
b) produce a concise Evaluation Plan
c) specify any usability measures and criteria

A number of tools have been developed to facilitate the information gathering.

Product Report

The questions in the Product Report are designed to extract the essential information required to describe a product or system clearly and without reproducing detailed product specifications which may already be in existence. The report should document the product details as they are at the time of evaluation, noting any differences between this and the intended final product.

Context Questionnaire

The Context Questionnaire, with accompanying instructions, is designed to elicit information which forms a comprehensive description of the Context of Use of a product.

Context Report

The Context Report is used to document the answers to the Context Questionnaire. It is in the same format as the Context Questionnaire, but also includes columns for indicating whether or not the analyst considers that a characteristic of the Context of Use is likely to affect usability, and how each characteristic will be handled in the Context of Evaluation. It aims to provide a clear statement of the Context of Use of a product, to identify the limitations of the usability evaluation, and thus the generalisability of the results to other contexts.
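The relationship between these documents can be pictured in outline. The following Python sketch is purely illustrative – it is not part of the method, and all class and field names are assumptions chosen for this example:

    # Illustrative sketch only: the documents produced by a Context Study.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class ContextItem:
        question: str                    # item from the Context Questionnaire
        context_of_use: str              # answer recorded in the Context Report (Step 1b)
        affects_usability: str = ""      # "Yes", "No" or "Maybe" (Step 2)
        context_of_evaluation: str = ""  # how the item is handled in the evaluation (Step 3a)

    @dataclass
    class ContextStudy:
        product_report: Dict[str, str]                                    # answers to the Product Report (Step 1a)
        context_report: List[ContextItem] = field(default_factory=list)  # one entry per questionnaire item
        evaluation_plan: Dict[str, str] = field(default_factory=dict)     # concise plan (Step 3b)
        usability_measures: List[str] = field(default_factory=list)       # measures and criteria (Step 3c)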


The usability team

Before the Context Study is begun, a small "usability team" should be set up consisting of at least one usability analyst, and one person with a good knowledge of the product, its intended users, and any constraints that may occur during the evaluation. It is important to include someone of sufficient seniority to ensure that results of the study can be used to influence decision-making.

Note that parts of the procedure are based upon expert judgements that can only be made by people who have an understanding of Human-Computer Interaction (HCI), and an awareness of usability issues or practical experience of product evaluation.

These individuals, whom we refer to as 'usability analysts', should have some experience of conducting product evaluations, and will probably be either qualified in HCI, psychology or ergonomics; or trained specifically in usability evaluation.

Identify sources of information.

The Usability Team identify and contact the people who are able to provide the necessary information for the Product Report and Context Report.

Information is required for all the contextual components – users, tasks, and environments – and views should be requested from as many relevant departments as possible. Though no two companies are identical in terms of personnel and departmental titles, information is generally required from some of the individuals with jobs similar to those listed below. This is not an exhaustive list – there may be other people who could be involved – for example health and safety, or certification and audit representatives.

Job Title                      Role in Product Development
Customer                       Commissions the product and sets requirements
Project manager                Responsible for the current product development activities
Product manager                Ultimately responsible for product development
Systems analyst                Identifies requirements and makes specifications
Designer/programmer            Codes the system
Marketing executive            Plans the sales and advertising
Technical author               Produces user documentation
Technical/User support         Provides training, service and user support
Users                          Represent the views of users
Quality manager                Responsible for the implementation of quality systems
Training manager               Defines user training requirements
Human Factors professionals    Responsible for usability

Figure 2.1: People who can provide information necessary to carry out a Context Study.


The product

STEP 1a) Complete the Product Report to describe the product which is to be developed or evaluated

The report form can be found in Part 3, with two case studies in Part 5.

Who is involved

The Product Report should be completed by people with appropriate knowledge of the product. During development this would include product development managers, technical developers, sales and marketing staff, and documentation and training material authors. When the product is being evaluated by a user organisation, the individuals involved could be product installation managers and technical support staff.

Describing the product

It is essential that the Product Report be completed before any further steps in the Context Study, and that the description of the product contained in the report is available to everyone likely to participate in subsequent steps. The questions in the Product Report are designed to extract details of the product’s functions, and the hardware and software included in it.

The Product Report is intended to identify clearly what is being developed or tested, without reproducing any existing detailed product specifications. When a prototype is being evaluated, the Product Report should distinguish between the state of the prototype that will be evaluated, and the intended final version of the product. It should identify any functions or facilities not currently available.

The description may in fact be of a subset of a product. For example, if a word processor is amended to include a new spelling checker, and the spelling checker is to be evaluated, then the spelling checker itself – not the whole word processor – is the product under consideration.

Context of Use

STEP 1b) Collect the information required to answer the questions in the Context Questionnaire

The Context Questionnaire asks for details about the product’s intended Context of Use – who the users are, what tasks they perform, and what kind of technical, organisational and physical environment they work in. The Questionnaire, guidance and Report table can be found in Part 4, with two case studies in Part 5.

Users and tasks are first listed, then the key combinations of user groups and tasks (which might be evaluated subsequently) need to be described in more detail. The choice of users and tasks is typically based on combinations which are believed to be critical for usability or for the business objectives.

The evaluation objectives and overall usability goals will help to focus the collection of Context of Use information, and these should be reviewed and recorded at the start of the Context Study. These will also guide the usability team in formulating the Evaluation Plan in Step 3.

Knowledge of the Context of Use in itself will improve the design of a product. It encourages designers to tailor the design to the specified real-world usage, and also to specify usability criteria, so the product’s usability can be assessed by evaluation throughout the design process.

Who is involved

Information about the Context of Use of a product should be collected by the usability team from the people identified as having a good knowledge of the product, the users and their tasks. Collecting information about the Context of Use of a product will also encourage other participants in the design process to consider context-related issues, and to make explicit their views of the assumed Context of Use.

It is important that the people involved in this step specify the Context of Use of the product without concern for the practicalities of what can be achieved during a forthcoming evaluation. Decisions regarding how factors will be dealt with during usability evaluation are made at a later stage.

There are two main reasons for this. Firstly, it ensures that a description of the actual Context of Use of a product is recorded and documented for reference and for future use. Secondly, it facilitates the process of deciding how a factor will be handled during later steps – for example, if it is decided to ignore a factor, that decision will need to be based upon accurate information about the actual Context of Use.

How to collect the information

Context Meetings

One method of collecting information required in the Context Questionnaire is by holding a meeting of the usability team and the people who can supply the required information – the stakeholders (see 'Who is involved', above). This is a cost-effective way to elicit the information, but care must be taken to ensure that everyone has an opportunity to express their views, that views can be expressed freely without being affected by any power relationships that may exist between the participants, and that all views are accurately recorded.

For this method to be cost-effective it is essential that:

(a) the Product Report is completed prior to the Context Meeting and all participants at the Context Meeting have a copy of the Product Report,

(b) all participants at the Context Meeting have an opportunity to review the items in the Context Questionnaire before the meeting, so that when they are asked to state how the Context of Use of the product relates to the items, they have had a chance to give thought to some of the issues.

Adequate pre-briefing should reduce the time taken for the meeting.

The usability analyst may identify answers to some questions in advance of the meeting: these can then be summarised for agreement at the meeting, saving more time.


By interview

A potentially less cost-effective means of obtaining relevant information is for the usability analyst to perform individual interviews with the product’s stakeholders.

Interviewing takes considerable patience and skill. Interviewers must avoid putting words in respondents' mouths, and should confirm that they have understood the points and record the interviewees' views accurately.

By post

Alternatively, copies of the questionnaire and report can be sent to these individuals with a request that they return them when completed. This option is less successful than being present in person, and, in addition to time spent collating the responses, you will almost certainly require further time to ensure that questionnaires are returned, clarify responses, and reconcile conflicting responses from different stakeholders.

Critical usability factors

STEP 2 Consider each of the characteristics listed, decide whether or not they could affect the usability of the product, and enter the response in the Context Report.

Who is involved

This step should be carried out by, or in close consultation with, an experienced usability analyst.

How to identify the critical components

Consider each of the characteristics and decide whether they could affect the usability of the product. The possible responses are Yes, No or Maybe. If the answer to a question is Yes then it is considered a critical component of the context.

If the analyst is unsure whether a component will affect the usability of the product, the answer is Maybe, and the analyst will decide how it will be handled when defining the Context of Evaluation. If the answer is No, then the analyst will not need to consider this component any further.

Each decision has to be made based on the usability analyst's knowledge of HCI and ergonomics, and their experience of similar product evaluations.

• Look carefully at each question and the answer given in the Context Report.
• Could usability be affected if the Context of Evaluation was different from the stated Context of Use for this item? If not, insert No and go to the next question.
• If you are sure usability could be affected, insert Yes, or if unsure, insert Maybe in the Affects Usability? column.


As each decision will depend on the details of the context, it is not possible to give specific guidance. There is no way of checking whether the decision is correct or incorrect as the decision is a matter of judgement. However, recording the decisions made will help readers of the Evaluation Report interpret the results.

Critical components must be identified regardless of whether they can be represented in the Context of Evaluation. Other parties, such as readers or consumers, can then assess the validity and generalisability of the usability evaluation results. If it is not feasible to simulate any of the critical components, eg the availability of a HelpDesk, then they will be omitted from the Context of Evaluation. The implementation of any of the critical components of the Context of Use in the Context of Evaluation depends upon the scope of the usability evaluation and any financial and technical constraints.
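Purely as an illustration of this decision logic (not a prescribed tool), the pass over the Context Report items can be summarised as follows, using the hypothetical ContextItem sketch given earlier in this Part:

    # Illustrative only: identifying the critical components of the context.
    # The Yes/No/Maybe judgement itself is made by the usability analyst; here
    # it is assumed to be already recorded on each item.
    def critical_components(context_report):
        critical = []
        for item in context_report:
            if item.affects_usability in ("Yes", "Maybe"):
                critical.append(item)  # carried forward to the Context of Evaluation
            # items answered "No" need no further consideration
        return critical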

Context of Evaluation

STEP 3a) Consider the items which will or might affect the usability of the product, i.e. Yes or Maybe answers in the Context Report, and decide how these are to be represented in the Context of Evaluation.

Who is involved

Specifying the Context of Evaluation involves expert judgements that should be made by experienced usability analysts. These judgements are made in consultation with those members of the usability team who are aware of the resources available for the evaluation and any practical constraints that need to be considered.

Defining the Context of Evaluation

Although many different factors may have been defined as having a potential impact on the usability of a product, it may not be feasible to represent them all in the evaluation, and a representative subset will need to be defined. It may not be practical or cost effective to recreate some of the critical components – for example, there will inevitably be constraints on the amount of time that can be spent on the study, and perhaps on the number or type of users that can be obtained. It may be too costly or too difficult to simulate some of the conditions. The usability team must decide which factors affect the required validity, reliability and generalisability of the results.

The components can either be Controlled, Monitored, Real or Ignored.

Controlled components

A component may be Controlled by either fixing or varying the factors to which the component relates.

A factor is fixed when it is kept constant, or within a specified range, throughout the study. For example:

It has been identified in a Context of Use that users of a product may have attended a week-long training course. This factor can be kept constant by selecting only users who have attended the training course. Another example would be where it has been decided to fix background noise within certain limits.

A factor may be varied to compare usability under different circumstances. For example:

An outdoor product may have three controlled conditions based on three lighting levels: bright sunshine, dull day and night-time; or for a software product, two controlled task conditions based on the provision or denial of a help manual.

Comparison of varied conditions increases the complexity, and consequently the cost, of any study. It may be necessary to use a larger number of users in order to draw firm conclusions based on statistical tests, and this will further increase the cost of the evaluation. Careful consideration should be given to the costs before planning to make comparisons.

Monitored components

Could usability be affected by different values within the expected Context of Use (for example, younger or older users)? If so, these values should be Monitored in the Context of Evaluation, and reported with the results. It may be desirable to use a representative range of contexts (for example, a representative range of ages of users).

In some cases it will be necessary both to Control the Context of Evaluation (eg for ages between 18 and 65) and Monitor the context actually used (eg actual ages).

Components may be Monitored to identify extremes or account for outliers in the data. For example:

The extent of users' experience may be Monitored: users are not selected in order to guarantee equal amounts of experience using products with similar main functions. The range of experience will be reported with the evaluation results, allowing readers of the study to determine whether results from these particular users would be applicable to their own situation.

If the results for one user are much higher or lower than the rest of the group, by studying the Monitored components it may be possible to identify a reason for this, eg the user was a novice whereas the rest of the group had over six months' experience. The user may then be excluded from the overall results.

Real components

A component can be Real when evaluation takes place in the actual Context of Use.

For example:

If the Context of Use of a product states that the system is for use in a call centre where phone calls are received from the general public, then evaluation could be conducted in the Real context of a call centre.

As the Real context may be quite variable, details should be carefully Monitored.


Ignored components

The analyst may decide to ignore the effects of a component: no attempt is made to Control or Monitor the value of that component throughout the evaluation.

For example:

The analyst decides to ignore the temperature of the environment: it is neither varied, nor held constant, nor monitored in the evaluation.

The analyst may ignore a user’s distinctive ability: for example, ‘artistic flair’.

Components should only be ignored if it is not practical to monitor them, or if the analyst is confident that they will have minimal impact on usability.
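The four ways of handling a component can be summarised in a short sketch. The following is illustrative only; the example decisions simply echo the training-course, experience and temperature examples above:

    # Illustrative sketch of the four handling categories for context components.
    from enum import Enum

    class Handling(Enum):
        CONTROLLED = "Controlled"  # fixed to a value or range, or deliberately varied to compare conditions
        MONITORED = "Monitored"    # recorded during the evaluation and reported alongside the results
        REAL = "Real"              # the evaluation takes place in the actual Context of Use
        IGNORED = "Ignored"        # neither controlled nor monitored; assumed to have minimal impact

    # Hypothetical decisions echoing the examples in this section:
    decisions = {
        "Attendance at week-long training course": Handling.CONTROLLED,
        "Experience with similar products": Handling.MONITORED,
        "Ambient temperature": Handling.IGNORED,
    }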

Evaluation Plan

STEP 3b) Derive a specification of how the evaluation will be run.

Who is involved

This step must be carried out in consultation with the usability analyst, because decisions concerning experimental design need to be made.

Contents of the Plan

The Evaluation Plan contains all relevant information from the Context Report giving specific details of how the Context of Evaluation will be operationalised.

It includes information about: how many users will be needed, how they will be obtained, how candidates will be assessed to see if they qualify as users, and how information is to be gathered before, during, and after each evaluation session. It describes the tasks and environment in a similar manner, and details any controlled conditions.
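As a purely hypothetical example of the level of detail involved (the values below are invented for illustration, not taken from any case study), an Evaluation Plan might record information such as:

    # Hypothetical Evaluation Plan entries; field names and values are invented.
    evaluation_plan = {
        "users": {
            "number_required": 10,  # e.g. a minimum for statistically reliable measures
            "recruitment": "volunteers from the administrative department",
            "screening": "ask candidates their age and prior training; confirm job role",
        },
        "tasks": ["enter a batch of ledger transactions", "produce a monthly summary report"],
        "organisational_environment": {"interruptions": "one telephone interruption per session"},
        "technical_environment": {"hardware": "standard office PC", "network": "none"},
        "physical_environment": {"location": "laboratory furnished as an open-plan office"},
        "controlled_conditions": ["with printed manual", "without printed manual"],
        "data_collection": "questionnaires before and after each session; observation during sessions",
    }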

Specifying actions

When the analyst has decided whether a component should be Controlled, Real or Monitored, they must then specify exactly how this is to be done.

For example:

If the analyst has decided to control the age range of the users to between 30 and 50 years, the analyst must also decide how to obtain users within this range and ensure the ages are correct. In this case, the analyst would probably just ask the users their age.

For Monitored components, the analyst may simply observe the user during the sessions in order to ascertain, for example, typing skill.


In other cases, the analyst may have to make use of measurement instruments such as thermometers or light meters. All of these details are included in the Evaluation Plan.

Format of the Plan

The plan may be laid out under the same headings as the Context Report. Extra sections detailing any controlled conditions and which measures are to be taken may be included.

It may also be helpful to include procedural information such as the allocation of evaluators' roles and a timetable of events.

USERS

Detail here the number of users who will take part in the evaluation, what characteristics they should have (those which are to be Controlled) and those which are to be ascertained as part of the evaluation (those which are to be Monitored).

TASKS

List the tasks that the users will carry out as part of the evaluation.

ORGANISATIONAL ENVIRONMENT

Specify the organisational conditions under which the users will work. For example, details here could include the number and nature of any interruptions identified in the Context of Use as affecting usability.

TECHNICAL ENVIRONMENT

Provide details of the hardware, software and any network environment that will be provided during the evaluation.

PHYSICAL ENVIRONMENT

Describe the physical location and characteristics of the workplace. This section will be more relevant where non-office environments have been detailed in the Context of Use (for example the ATM in the Case Study, Part 5).

Usability measures

STEP 3c) Specify any usability measures and criteria.

Usability measures and criteria should normally be specified prior to evaluation.

This can take place early in design to form part of the product requirements.

During detailed design, the main objective may be to obtain design feedback from informal evaluation of mock-ups and partial prototypes, in which case measures may not be required.


3 Product Report

1 BASIC DESCRIPTION

1.1 Product name and version

1.2 Product description and purpose

Your description should be based on the product as it will exist at the time of evaluation. If the product will exist only as a prototype, you should consider the ways in which it will differ from the product in its completed form. If any of these differences seem likely to affect the outcome of your usability evaluation, they should be listed here.

For example, a completed product can have many features which in combination users may find confusing. A prototype with only a few representative features may be simpler to use, thereby giving a false impression of usability. Alternatively, a prototype may respond slowly (and unreliably). Any such differences should be listed.

1.3 What are the main application areas of the product?

1.4 What are the major functions? (major productive goals achievable using the product)

List in approximate order of importance the product's major functions (perhaps between 1 and 6 functions; do not be too detailed; each should be related to a goal which the user can achieve).

Consider this question carefully, and try to ensure that your answer represents what the manufacturer intends as the main functions of the product.

For example, an accounts package might offer the following major functions:

1. recording ledger transactions
2. recording stock movements
3. recording payroll information
4. producing management reports.

2 SPECIFICATION

2.1 Hardware (If no hardware is included in this product, go to 2.2)

a) general description (of any hardware provided as part of the product)

List only the hardware - if any - which comes as part of the product, or is to be evaluated as part of the product, rather than hardware which is used in conjunction with it (which is considered in the Context Questionnaire Section 4.1.1 - Technical Environment).

b) input devices (supplied with the product)

List devices for input from the user (e.g. keyboard, mouse, touch screen, microphone) which are provided as part of the product.

c) output devices (supplied with the product)

List devices for output to the user (e.g. VDU, loudspeaker, printer) which are provided as part of the product.

2.2 Software – is any other software always provided with the product?

List only any software which comes with the product, rather than software which is used in conjunction with it (which is considered in the Context Questionnaire section 4.1.2 - Technical Environment).

2.3 Other items

Provide details of any other items that are included as part of the product, and which have not been listed above.


Product Report

Completed by: (Name and Organisation) ...................................................................................

Date: ....................................................................................................................................

1 BASIC DESCRIPTION

1.1 Product name and version

1.2 Product description and purpose

1.3 What are the main application areas of the product?

1.4 What are the major functions? (major productive goals achievable using the product)

2 SPECIFICATION

2.1 Hardware (If no hardware is included in this product, go to 2.2)
a) general description (of any hardware provided as part of the product)
b) input devices (supplied with the product)
c) output devices (supplied with the product)

2.2 Software – is any other software always provided with the product?

2.3 Other items


4 Context Questionnaire and Report

Structure of the questionnaire

The questionnaire is based upon the Context Hierarchy given in Appendix B. The five major sections of the questionnaire deal in turn with the characteristics of the user, task, and organisational, technical and physical environments. Each section has a brief introduction, and most are divided into subsections, which also have their own introductions. Although the sections of the questionnaire appear in a set sequence, it is possible to complete them in any order as information becomes available.
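For orientation, the top-level structure can be summarised as a simple outline; this sketch merely restates the Context Hierarchy headings of Appendix B:

    # The five major sections of the Context Questionnaire (see Appendix B).
    QUESTIONNAIRE_SECTIONS = {
        1: "Users",
        2: "Task characteristics",
        3: "Organisational environment",
        4: "Technical environment",
        5: "Physical environment",
    }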

For ease of use, the questionnaire has been written in the present tense, as if the product already exists. However, it is also intended to be used with new products, which have intended users, tasks and environments. Users of the questionnaire should not allow the choice of tense of the questions to affect or limit their answers.

Recording actual Context of Use

When completing the Context Report, it is important to consider actual or prospective ‘real-world’ use of the product. "Desirable" characteristics, which may not actually feature in the context, should not be listed, and the practical constraints of evaluation should not be considered. Decisions concerning the factors which affect usability and those which will be represented in the Context of Evaluation, and hence implemented in an evaluation, should be left until the Context of Use has been completely specified. This ensures that an accurate representation of the Context of Use of the product is derived.

There are two important exceptions to this principle. To save time when filling in the Context Report, sections only need to be completed in detail for each of the user types who will actually be involved in the evaluation, and for each of the tasks they will be performing. Decisions concerning who those user types will be, and which tasks they will carry out, must therefore be made whilst filling in the Context of Use. The questionnaire gives guidance on how to arrive at these decisions.

Flexibility of the questionnaire

It should be noted that no questionnaire could ever hope to anticipate all the context issues of every product. If the questionnaire does not cover a component of the Context of Use which may affect the usability of the product, then a question or section should be added as necessary. Equally, if sections are not relevant, then they should be left blank.


Nature of responses - concise and accurate

The questions are supplemented with guidance on how to respond to the questions, and examples of the appropriate level of detail. Many questions can be answered with a single phrase or sentence, whilst others require more elaboration. Responses should be kept as brief as possible without neglecting any relevant details of characteristics which may affect the use of the product. They should list the most common or average figures for the listed components, and also, where appropriate, give details of the range or distribution of values or include any other relevant information. For example:

When referring to number of hours users spend using the product, the response could be 'average is about 6 hours per day, but with no single session lasting longer than 4 hours'.

When answering questions, it is better to avoid the words 'normal', 'standard' or 'common'. More specific details avoid ambiguity and potential misunderstandings. However, if it is certain that readers will know what is meant, then these terse descriptions may be used to save time.

Processing responses

In some cases, answers from the different respondents will be similar, in others they will vary. When there is variation, the compiler will have to decide whether this is simply due to use of specialist terminology (for example, marketing personnel tend to have set classification systems for customers which are unlikely to be used by systems analysts), or reflects genuine differences in opinion. Such differences in opinion must be resolved, so it is important to flag the issues for the Context of Use to be clarified explicitly amongst all parties concerned. This would be necessary when, for example, the marketing manager and systems analyst disagreed about the characteristics of the intended user population.

Multiple users or tasks

If more than one type of user or task is to be evaluated, parts of the report must be completed for each of these user types or tasks.

The report has been designed to accommodate answers for two types of user performing four tasks each. If more users are to be evaluated, or each user is to undertake more than four tasks, then these sections must be duplicated and completed for each additional user or task.

Figure 4.1 illustrates how different user types can use the same product, and perform different tasks which need to be described separately.


Figure 4.1 Multiple Users and Tasks (diagram: a single product used by several user types, each user type performing its own set of tasks, e.g. User type 1 performing Tasks 1.1 to 1.3).

If user types are distinctly different, Sections 3, 4 and 5 (Organisational, Technical and Physical Environments) may also need to be filled in separately. Otherwise, any important differences for the different user types should be noted.

Example:

An accounts software product has two significantly different types of user who are to be evaluated:

• accounts clerks, who enter data, recording the day-to-day details of sales and expenditure, and who work in a communal office, and
• management staff, who occasionally need to refer to summaries of data, from remote terminals in their own offices.

Because these two user groups have different abilities, frequencies of product use, and perform different tasks, they should be considered as separate user types. In this case, each user type must be described in detail, as must the separate tasks performed by each user type, and any significant differences in their working conditions when using the product.
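A minimal sketch of how such a description might be organised, using the accounts example above (task names and notes are invented for illustration):

    # Illustrative only: one entry per user type, each with its own task list.
    # Sections 1.2-1.6 and 2 of the report are completed once per user type,
    # and the task sections once per task (cf. Figure 4.1).
    context_of_use = {
        "Accounts clerk": {
            "tasks": ["enter daily sales", "enter expenditure", "correct entry errors"],
            "notes": "communal office; frequent, repetitive use",
        },
        "Manager": {
            "tasks": ["retrieve monthly summary", "print summary report"],
            "notes": "own office, remote terminal; occasional use",
        },
    }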

Electronic versions of the report

As responses are bound to vary according to the amount of detail required, and the number of sections which need to be completed, it has not been practical to design a paper response form which would be suitable for every context. We therefore recommend use of an electronic version of the report which can be tailored to suit any particular context. Electronic versions have the advantage that sections can be expanded or contracted to suit the length of the responses, and also sections which need to be completed a number of times can be easily copied and pasted electronically before answers are inserted.

The report is a Microsoft Word 6 document, suitable for both Macintosh and PC.

Copies of the report in these formats (and any other formats currently available) are included on the PC and Mac discs supplied with the Guide.


Context Questionnaire and guidance

Name and version of product

This should be the version of the product at the time of the evaluation, or the version of the product being developed.

Report completed by:

Date:

Organisation:

Objectives:


If the Context Study is being carried out prior to an evaluation, the objectives of the evaluation and the overall usability criteria should be entered here.


1.1 USER TYPES

1.1.1 User types being considered a) user types identified.

Here we ask you to decide whether intended users need to be considered as distinctly different types.

Dividing users into types can incur significantly more work, and is therefore only to be carried out where important differences exist.

Decide whether different user types need to be identified.

If the user population contains groups of people who use the system to perform different sets of tasks, or who have considerable differences in ability or experience, then divide them into separate user types.

For example, disabled users may have different requirements and abilities compared to able-bodied users; accounts clerks who regularly use a system may have different levels of ability and requirements compared with managers who need to access the same system occasionally.

When listing user types, you should use the formal job title of the user type, for example Systems Analyst, Trainee Programmer, or a description of their most salient features, e.g. expert frequent user, occasional user.

List the user types you have identified .


b) user types for usability evaluation

List each user type from a) above that you consider should be evaluated separately .

This is the first occasion when filling in the Context of Use form that you will need to consider the Context of Evaluation.

When listing which of the user types identified in a) above are going to be involved in the evaluation, you will need to take several factors into account:

1) You will need to consider resource constraints. How many users will you have access to? (If you require statistical reliability of measures and metrics, a minimum of 10 of each type is usually recommended.) Will the cost of their time and expenses be included in the overall budget? Does the budget cover the time needed to analyse several groups of users? (A rough sketch of this arithmetic follows the list below.)

2) What is the relative importance of the different user groups?

3) Will any future evaluations which are to be used for metrics comparison have access to users of the same type?

4) Do you have access to the same type of users for any systems which are being used for comparison with this product?
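The sketch below is a rough, hypothetical illustration of the resource arithmetic in point 1; apart from the 10-users-per-type rule of thumb quoted above, the figures are examples only and not recommendations from this Guide.

    # Illustrative sketch only: rough session arithmetic for point 1 above.
    # All figures are hypothetical; adjust them to your own study.
    USERS_PER_TYPE = 10            # minimum usually recommended for reliable metrics
    SESSION_HOURS_PER_TASK = 0.75  # assumed time per task, including briefing

    def study_effort(n_user_types, n_tasks_per_user):
        sessions = n_user_types * USERS_PER_TYPE
        user_hours = sessions * n_tasks_per_user * SESSION_HOURS_PER_TASK
        return sessions, user_hours

    sessions, hours = study_effort(n_user_types=2, n_tasks_per_user=3)
    print(f"{sessions} participants, roughly {hours:.0f} participant-hours")
    # 20 participants, roughly 45 participant-hours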

NOTE: If you have selected more than one user type, you will need to complete Subsections 1.2 - 1.6, Section 2, and possibly Sections 3, 4 and 5 for each of these types.

1.1.2 Secondary or indirect users who: a) interact with the product b) are affected by its output

List any individuals who interact with the product, but not for its primary purpose.

An example is an engineer who is required to service and maintain the product. In some cases, where you realise that these users do make significant use of the product, it will be appropriate to 'promote' these other users to be one of the identified user types; however, in most cases it will just be necessary to list them as other users.

List any individuals who do not interact with the product, but rely upon output produced by the users .

This may include individuals who do not interact with the product, but use the results produced to carry out their own tasks. An example is shoppers who use the receipts produced by electronic tills to check their shopping bills without directly interacting with the till.


1.2 SKILLS & KNOWLEDGE

This subsection asks you to provide some details about the formally and informally acquired skills and knowledge of each of the user types listed for Question 1.1.1.

REMINDER: If you have identified more than one user type, then the following sections should be completed separately for each user type you selected in Section 1.1.1.b.

1.2.1 Training and experience in the business processes and methods which the product supports

How much practical experience does this group of users have in performing, either manually or with any automated system, the tasks that this product supports?

For example, for a financial package, how much experience do these users have of the accounting procedures performed using each of the product's main functions? Without experience of accounting procedures, it may be difficult to use some functions. Or for an ATM (cashpoint machine), when a task is withdrawing money from a bank account, users who are experienced in withdrawing money over the counter, but not in using the cashpoint, will have low experience with the product but considerably higher task experience.

1.2.2 Experience in: a) using the product b) using other products with similar main functions c) using products with the same interface style or operating system

a) How much practical experience have users had in using the product for its main functions?

IMPORTANT: you should refer at this point to your completed Product Report.

List the practical experience users have of using the product for its main functions, as listed for Question 1.1.4 in the Product Report. e.g. Function 1: Daily use; Function 2: No experience; Function 3: Used less than once a month.

b) How much practical experience have these users had in using other products performing similar functions? List for each main function.

c) For computer-based products only: state how much practical experience users have in using the operating system or environment on which the product is based. For example, for a UNIX™-based product, state experience with other UNIX™-based applications; for a WINDOWS™-based product, state experience with other WINDOWS™-based applications.

1.2.3 Training in: a) tasks supported by the product's main functions b) using the product's main functions c) using other products with similar main functions d) using products with the same interface style or operating system

This includes formal training as well as less formal methods such as open learning packages, video instruction or training manuals. State the amount of training users have received in each of the following areas:

a) In performing tasks supported by the product's specific functions (as listed for Product Question 1.1.4), manually or with any automated system.

IMPORTANT: you should refer here to your completed Product Report.

b) In using the product itself to perform the specific functions, as listed in the Product Report.

c) In using other products to perform similar functions.

d) For computer-based products only: in using the same operating system or environment, or other products based on it. For example, a one day course of instruction in using WINDOWS™.

1.2.4 Qualifications

What range and distribution of qualifications might members of this user group typically have?

Include formal and informal qualifications; e.g. degrees, apprenticeships.

1.2.5 Relevant input skills

What input device skills do they possess?

For example: regular user of a mouse; touch typing (60 to 90 wpm), fast two-finger typing or slow 'hunt and peck'; familiarity with a touch screen, etc.

1.2.6 Linguistic ability

State any deficiencies users may have in the language in which the product and its documentation have been written.

1.2.7 Background knowledge

Is there any general background knowledge which is indirectly relevant to the users' performance of tasks with the product?

Background knowledge is knowledge which is not directly connected to the product, the task, or IT, but which users may have due to membership of a social, cultural, organisational, regional, national or religious group. An example of background knowledge could be that company telephone operators are not on duty after 6.00pm.


1.3 PHYSICAL ATTRIBUTES

This subsection is concerned with the physical characteristics of the user type.

1.3.1 a) Age range b) Typical age

What is the age range of the user type?

For example: Age ranges between 16-70 years.

If appropriate, state the typical age of this user group.

1.3.2 Gender

What is the male/female distribution of the user type?

For example: 10% male, 90% female.

1.3.3 Physical limitations and disabilities

Describe any physical limitations or disabilities of the user type.

This includes general physical limitations - such as reach distances - as well as physical disabilities. Examples of such disabilities are short-sightedness, colour blindness, loss of hearing, loss of limbs, reduced psychomotor capabilities.

1.4 MENTAL ATTRIBUTES

This section asks about the mental characteristics of this type of intended user, including their intellectual abilities and motivations.

1.4.1 Intellectual abilities a) distinctive abilities b) specific mental disabilities

a) Do the users possess any distinctive intellectual abilities?

b) Do the users have any specific relevant mental disabilities?

1.4.2 Motivations a) attitude to job & task b) attitude to the product c) attitude to information technology d) employees' attitude to the employing organisation

How positive or negative are the attitudes which the users display? (Give reasons where helpful.)

For example: highly satisfying work despite low rates of pay; proud of products produced; suspicious that the introduction of IT will lead to loss of jobs; lack of trust in higher management.


1.5 JOB CHARACTERISTICS

This section is concerned with details about the jobs carried out by users, i.e. collections of tasks.

If the product is not being used in a work environment, then this subsection will not be relevant. If this is the case, go straight to Subsection 1.6.

1.5.1 Job function

What is the purpose of the user's work?

List the main objectives and responsibilities of the job, as carried out by the user.

1.5.2 Job history a) how long employed b) how long in current job

a) Typically, how long have users been employed by the organisation?

b) How long have users been doing their current job in this organisation?

1.5.3 Hours of work / operation a) hours of work b) hours using product

a) What hours do users work?

Provide details about the hours of work of the user, including shift work, irregular hours, home working hours, etc.

b) What hours do users spend using the product?

Provide details about when the product will be used; for example, the product is used throughout the shift, which can be either early, i.e. 0500-1300 hrs, or late, i.e. 1300-2200 hrs. Workers alternate between weeks on early and late shifts.

1.5.4 Job flexibility

Can users decide how to approach the job, organise their time and carry out tasks?


1.6 LIST OF TASKS a) tasks identified b) tasks for usability evaluation

Here we wish to define the tasks for each user type.

What constitutes a task is determined by the major functions of the product (i.e. the major productive goals which a user can achieve using the product), as listed for Question 1.4 (Major Functions) in the Product Report.

In almost all cases, there will be more than one task. You are asked to list all tasks users carry out with the product, and then to select those you think should be evaluated.

List all tasks that users perform using the product

List each task from a) that you consider should be carried out in the evaluation .

As with question 1.1.1.b., you have to make a decision concerning the Context of Evaluation of the product. When selecting which of the above tasks are going to be included in a usability evaluation, you will need to consider:

1) Resource constraints. How long will the tasks take? Will the budget cover the users' and analysts' time?

2) Frequency and criticality of the tasks for the intended users. It will make sense to evaluate tasks that are more critical than others, or that will be performed more frequently.

3) Future evaluations. If it is possible that a future version of the product will be evaluated, and the metrics from this study used as a baseline, will the functions supporting the task be included in the future version?

4) If a comparison of different systems is to be made, do the other systems allow these tasks to be carried out? Only equivalent tasks can be used for any comparative study.

It could be that the system under evaluation allows certain functions to be performed that the product being used for comparison does not. If you are interested in these functions, you will have to include tasks which employ them. You could perhaps perform comparative tasks on the other product using the best means available (eg pencil and paper - as may have to be the case if a person using the other product wanted to carry out that task). Alternatively, you could ask users to carry out the advanced task and use the measures for comparison with future evaluation results.

As with all other tasks, observation of users carrying out these tasks will be useful for providing valuable information about how improvements can be made to these functions of the product.

NOTE: If you have selected more than one task you will need to complete Section 2 for each of these tasks.


2 TASK CHARACTERISTICS

Here we wish to find out the features of each task.

It is important to avoid describing the task in terms of the products that may be used to carry it out - the answers to this section should be product-independent.

2.1 Task goal

What is the main objective of performing the task?

For example: to obtain money from a bank account as quickly and easily as possible; to type a letter with no mistakes in the minimum amount of time.

2.2 Choice

Can users choose whether or not to use the product to achieve their goals?

For example, users can obtain money from the bank using the ATM, but during bank opening hours are also able to withdraw money over the counter.

2.3 Task output

What are the outputs from the task?

State the contents and medium of the output. For example, a complete letter with no mistakes, printed on paper, folded and sealed in a correctly addressed envelope.

[If this questionnaire is being answered for the purpose of a usability evaluation, Questions 2.1 (Task goal) and 2.3 (Task output) are essential, and you should spend time completing them. Measures of usability will be made against users' success in completing task goals, which can only be measured by task output.]

2.4 Side effects

Are there any adverse side effects that may occur as a result of carrying out this task?

For example: the user may save a file and accidentally overwrite another existing file.

2.5 Task frequency

How frequently is the task normally carried out?

For example: continuously throughout the day, three or four times a day, once a week, etc.

2.6 Task duration

How long does the task generally take the user?

For example: duration ranges between 20 and 35 minutes. In 90% of cases it takes between 25 and 30 minutes.

2.7 Task flexibility

Do users have to follow a pre-defined order when carrying out the task?

For example: users are not obliged to follow a pre-defined order, although they normally will due to force of habit.

2.8 Physical and mental demands a) Factors which make the task demanding b) How demanding in comparison with others

a) Describe any factors that may make the task physically or mentally demanding.

For example: the task requires complex split-second decisions to be made.

b) How demanding is this task compared to the other tasks in the evaluation?

For example: setting up a spreadsheet will be more mentally demanding than entering data onto the same spreadsheet.

2.9 Task dependencies

What information or resources are required by the users in order to perform the task?

For example: an audio tape of dictation, a supply of paper and envelopes, etc.

If there are any potential problems in the dependencies being satisfied, these should be noted here.

2.10 Linked tasks

Does the user normally carry out the task as part of a set procedure?

If so, list the tasks that would normally precede or follow this task.

For example: bank staff processing a loan request must always carry out a credit check before processing the loan.

2.11 Safety

To what extent is this task hazardous to the health or lives of the user or other individuals?

For example: commissioning a gas burner which may explode if set incorrectly.

2.12 Criticality of the task output

How critical is the output of the task?

Note here if the task output is critical in terms of safety, security or financial integrity. For example: writing software that is to be used to control aircraft in flight, or setting up a spreadsheet controlling the flow of large amounts of money.


3 ORGANISATIONAL ENVIRONMENT

The social or organisational environment in which the work is carried out will affect the way a job is done, the way a product is used, and consequently the usability of the product. This section is concerned with the structure, attitudes and culture of the user's organisation.

If the product is being used by an individual for his or her own purposes, parts of this section will not be relevant and can be ignored.

If two or more user types have been identified for separate evaluation, then it may be necessary to fill in this section for each of those types.

3.1 STRUCTURE

Here we ask questions about the nature of working relationships, and the flow of information between individuals in the organisation.

3.1.1 Group working

Does the user do the task alone, or in collaboration with other individuals or groups of individuals?

If the user collaborates with other individuals, specify their roles and their relationship with the user.

3.1.2 Assistance

Can assistance be obtained if the user has a problem?

Assistance includes the immediate assistance from colleagues in the workplace, as well as assistance via an internal or external telephone 'help line'.

3.1.3 Interruptions

How frequently is the user generally interrupted while carrying out the task?

Describe the frequency and nature of the interruptions. For example, an average of three telephone interruptions per hour.

3.1.4 Management structure

Who has direct influence on the user's work in the organisation?

Describe the responsibilities of these individuals, and their relationship with the user.

If the product is being used by an individual for his or her own purposes, this question will not be relevant.

3.1.5 Communications structure

How does information which is related to the user's task flow between individuals inside and outside the organisation?

Describe the main means of communication between colleagues and/or customers, and the relationships between these individuals.

If the product is being used by an individual for his or her own purposes, this question will not be relevant.


3.2 ATTITUDES & CULTURE

This subsection explores the enduring aims, objectives, opinions and common practices demonstrated or espoused by the members of the organisation within which the product is used.

If the product is being used by an individual for his or her own purposes, this section is not relevant

3.2.1 IT Policy

What is the organisation's policy on the introduction, acquisition and usage of Information Technology?

For example: the organisation is committed to computerising all of its procedures over the next ten years.

This question will not be relevant for non-IT products.

3.2.2 Organisational aims

What are the roles, objectives and goals of the user's organisation?

These may be addressed in an organisation's 'mission statement'.

3.2.3 Industrial relations

What is the status of industrial relations within the company?

3.3 WORKER/USER CONTROL

This subsection is concerned with the factors which affect productivity and quality. If the product is being used by an individual for his or her own purposes, this subsection may not be relevant

3.3.1 Performance monitoring

How is the quality and speed of the user's work monitored and assessed?

For example: operators are continuously monitored for speed by computer link.

3.3.2 Performance feedback

How do users receive feedback about the quality and speed of their work?

For example: each week all workers are publicly informed of their productivity; staff have a six-monthly review where their work is discussed with line managers.

3.3.3 Pacing

How is the rate at which users carry out work controlled?

For example: for banking staff, there is customer queue pressure at busy periods; for factory staff, work is paced by the speed of the conveyor belt.

4 TECHNICAL ENVIRONMENT

This section is concerned with the technical environment in which the product is used.

If two or more user types have been identified for separate evaluation, then it may be necessary to fill in this section for each of those types.

4.1 Hardware a) required to run the product

What hardware is needed to run the product?

Examples of hardware are items like the processor, storage devices, input and output devices, networks, gateways, other user equipment


b) likely to be encountered when using the product

What hardware is likely to be encountered when using the product?

List other hardware usually associated with the product and its user interface environment. For example, when using a personal computer, users will often need to produce output on a printer


4.2 Software a) required to run the product (eg. operating system) b) likely to be encountered when using the product

a) What software is needed to run the product?

This may include the operating system or user interface environment. For example, WINDOWS™ may be required to run a particular application.

b) What software is likely to be encountered when using the product?

List other applications usually associated with the product and its user interface environment.

4.3 Reference materials

What reference materials are provided to help the user learn about the technical environment?

For example, manuals on how to operate Windows 3.0 or Apple Macintosh System 7.0.

Please note, this does not refer to the instructional materials for the product. These will be listed in the product description.

5 PHYSICAL ENVIRONMENT

This section is concerned with the physical environment of the user and product.

In many cases a product will be intended for use in a physical environment similar to the standard office working conditions found in Europe (for example, conforming to ISO 9241). In this case, you need put only 'SO' as your answer.

Where a feature of the physical environment is non-standard you will need to provide as accurate a description as possible.

If two or more user types have been identified for separate evaluation, then it may be necessary to fill in this section for each of those types.

5.1 ENVIRONMENTAL CONDITIONS

(If the product is for use in standard European office conditions, then answer "SO".)

Here we attempt to identify the physical conditions of the workplace, or the place where the product will be used.

If the environment in which the product is used is a Standard Office, enter 'SO' as your answer to 5.1, and go to Section 5.2. Otherwise, go on to fill in 5.1.1 and the rest of this subsection.

5.1.1 Atmospheric conditions

What are the atmospheric conditions of the workplace?

If the product is used outdoors then this refers to the weather conditions; otherwise it refers to the condition of the atmosphere which exists inside buildings, such as air quality, speed, humidity, etc.

5.1.2 Auditory environment

What are the auditory conditions of the workplace?

List all types of noise or sound, in particular sounds which would limit interpersonal communication, cause stress or annoyance to the user, or affect the user's perception of sounds relevant to the task.


5.1.3 Thermal environment

What are the thermal conditions of the workplace?

Describe the temperature of the workplace and the heating and air conditioning facilities.

5.1.4 Visual environment

What are the visual conditions of the workplace?

Describe the strength and locations of light sources, including natural light. Describe the degree of control the user would have over light conditions, including use of blinds etc.

5.1.5 Environmental instability

Is the workplace physically unstable in any way?

e.g. as a result of vibration or any other motion of the workplace.

5.2 WORKPLACE DESIGN

Here we are concerned with the location and design of the workplace, the layout of furniture, and the posture the user adopts whilst using the product.

5.2.1 Space and furniture

What are the size, layout and furnishings of the workplace?

Include items such as desks, screens, cabling, printers etc.

5.2.2 User posture

What posture does the user generally adopt when using the product?

For example: standing looking down at a display (height 1.5m).

5.2.3 Location a) of the product b) of the workplace

a) Where is the product located in relation to the workplace?

How is the product located in relation to the furniture of the workplace and the usual working position of the user?

b) Where is the workplace located?

How close is this location to the target area of influence, resources, fellow work colleagues, customers, and the user's home?

5.3 HEALTH & SAFETY

This section inquires about the conditions of the workplace or surrounding environment which may affect the user's health and safety, and require the use of protective clothing or equipment.

5.3.1 Health hazards

Are there any conditions of the workplace, or surrounding environment, which may affect the user's physical well-being?

Include conditions which may affect the user's physical well-being in the short term (e.g. by accidents) as well as in the long term (e.g. gradual hearing loss).

5.3.2 Protective clothing and equipment

Describe any protective clothing or safety equipment the user is required to wear when in the workplace.

This includes such things as clothes or equipment which protect the user from the effects of high or low temperatures.

For example: gloves, steel toe-capped boots, face mask.


Context Report

Name and version of product

Report completed by

Organisation

Objectives


Date


1.1 USER TYPES

1.1.1 User types being considered a) user types identified b) user types for usability evaluation

1.1.2 Secondary or indirect users who: a) interact with the product b) are affected by its output

Affects

Usability


User Type

1.2 SKILLS & KNOWLEDGE

1.2.1 Training and experience in the business processes and methods which the product supports

1.2.2 Experience in a) using the product b) using other products with similar main functions c) using products with the same interface style or operating system

1.2.3 Training in a) tasks supported by the products main functions b) using the products main functions c) using other products with similar main functions d) using products with the same interface style or operating system

Affects

Usability

Affects

Usability


User Type

1.2.4 Qualifications

1.2.5 Relevant input skills

1.2.6 Linguistic ability

1.2.7 Background knowledge

1.3 PHYSICAL ATTRIBUTES

1.3.1 a) Age range b) Typical age

1.3.2 Gender

1.3.3 Physical limitations and disabilities

Affects

Usability

Affects

Usability


User Type

1.4 MENTAL ATTRIBUTES

1.4.1 Intellectual abilities a) distinctive abilities b) specific mental disabilities

1.4.2 Motivations a) attitude to job & task b) attitude to the product c) attitude to information technology d) employees attitude to the employing organisation

Affects

Usability


Affects

Usability


User Type

1.5 JOB CHARACTERISTICS

1.5.1 Job function

1.5.2 Job history a) how long employed b) how long in current job

1.5.3 Hours of work / operation a) hours of work b) hours using product

1.5.4 Job flexibility

Affects

Usability


Affects

Usability

User Type

1.6 LIST OF TASKS a) tasks identified b) tasks for usability evaluation

Affects

Usability


Affects

Usability


User type 1

Task name

2 TASK CHARACTERISTICS

2.1 Task goal

2.2 Choice

2.3 Task output

2.4 Side effects

2.5 Task frequency

2.6 Task duration

2.7 Task flexibility

Affects

Usability

Affects

Usability

Affects

Usability


Affects

Usability


User type 1

Task name

2.8 Physical and mental demands a) Factors which make task demanding b) How demanding in comparison with others

2.9 Task dependencies

2.10 Linked tasks

2.11 Safety

2.12 Criticality of the task output

Affects

Usability

Affects

Usability

Affects

Usability

Affects

Usability


User type 2

Task name

2 T ASK C HARACTERISTICS

2.1 Task goal

2.2 Choice

2.3 Task output

2.4 Side effects

2.5 Task frequency

2.6 Task duration

2.7 Task flexibility

Affects

Usability

Affects

Usability

Affects

Usability


Affects

Usability


User type 2

Task name

2.8 Physical and mental demands a) Factors which make task demanding b) How demanding in comparison with others

2.9 Task dependencies

2.10 Linked tasks

2.11 Safety

2.12 Criticality of the task output

Affects

Usability

Affects

Usability

Affects

Usability

Affects

Usability


User Type

3 ORGANISATIONAL ENVIRONMENT

3.1 STRUCTURE

3.1.1 Group working

3.1.2 Assistance

3.1.3 Interruptions

3.1.4 Management structure

3.1.5 Communications structure

3.2 ATTITUDES & CULTURE

3.2.1 IT Policy

3.2.2 Organisational aims

3.2.3 Industrial relations

Affects

Usability

Affects

Usability


User Type

3.3 WORKER/USER CONTROL

3.3.1 Performance monitoring

3.3.2 Performance feedback

3.3.3 Pacing

4 TECHNICAL ENVIRONMENT

4.1 Hardware a) required to run the product b) likely to be encountered when using the product

4.2 Software a) required to run the product

(eg. operating system) b) likely to be encountered when using the product

4.3 Reference materials

Affects

Usability

Affects

Usability


5 PHYSICAL ENVIRONMENT

5.1 ENVIRONMENTAL CONDITIONS

(If product is for use in standard European office conditions, then answer "SO")

5.1.1 Atmospheric conditions

Affects

Usability

5.1.2 Auditory environment

5.1.3 Thermal environment

5.1.4 Visual environment

5.1.5 Environmental instability

5.2 WORKPLACE DESIGN

5.2.1 Space and furniture

5.2.2 User posture

5.2.3 Location a) of the product b) of the workplace

5.3 HEALTH & SAFETY

5.3.1 Health hazards

5.3.2 Protective clothing and equipment


5 Case studies

1 - Paint Package X

This case study shows how we used UCA to derive an Evaluation Plan, to evaluate the usability of a drawing and painting software product which we will call Paint Package X, version 1.0.

Product Report

Questionnaire completed by:

(Name and Organisation)

John Smith of 20th Century Design

Name of Product: Paint Package X, version 1.0

(Date)

28th July 1993

1 BASIC DESCRIPTION

1.1 Product name and version

Paint Package X, version 1.0

1.2 Product description and purpose

A graphical tool integrating the features of an object-oriented drawing package and a bitmap-oriented painting package.

1.3 What are the main application areas of the product?

Creating and maintaining computer artwork and graphic design.

1.4 What are the major functions? (major productive goals achievable using the product)

Manipulating graphics files.

Manipulating graphic objects.

Producing output to printer or a variety of file formats.

Painting images.

Drawing objects.

Adding textual information to graphics.

Organising picture structure and views.

Transforming images.

Setting user preferences.


2 SPECIFICATION

2.1 Hardware (If no hardware is included in this product, go to 2.2) a) general description (of any hardware provided as part of the product)

None b) input devices (supplied with the product)

None c) output devices (supplied with the product)

None

2.2 Software – is any other software always provided with the product?

Paint Package X program

Sample pictures

2.3 Other items

Registration card and license agreement


Context Report

Name and version of product

Paint Package X, version 1.0

Report completed by

John Smith

Date

30 July, 1993

Organisation

20th Century Design

Objectives

To assess the suitability of the product for use by staff in the company graphics office for ‘pasting up’


1.1 USER TYPES

1.1.1 User types being considered

a) user types identified:
Graphic designers
Design assistants (paste-up artists)
Design office managers
Contracted professional artists
Software applications purchasers
Other members of staff who use graphic packages occasionally

b) user types for usability evaluation:
Design assistants (paste-up artists)

1.1.2 Secondary or indirect users who: a) interact with the product b) are affected by its output

In-house technical support staff

Contracted software support staff

Printers

Clients of the organisation

Affects

Usability

Y;C

Y;C

N

M;C

User Type

Design Assistants

1.2 SKILLS & KNOWLEDGE

1.2.1 Training and experience in the business processes and methods which the product supports:
Between 1 and 15 years

1.2.2 Experience in
a) using the product: None
b) using other products with similar main functions: Between 1 and 8 years on all functions of object-oriented graphics packages
c) using products with the same interface style or operating system: Between 1 and 8 years experience of Mac windowing system and applications

1.2.3 Training in
a) tasks supported by the product's main functions: Likely to have completed a Graphic Design course at a college. Attended one week course provided by the organisation
b) using the product's main functions: None
c) using other products with similar main functions: Likely to have attended courses run by other product manufacturers
d) using products with the same interface style or operating system: Likely to have completed Mac tutorial supplied with computer, or have attended a demonstration

1.2.4 Qualifications
Full range from degrees and Graphical Design qualifications to no formal qualifications

1.2.5 Relevant input skills
Skilled mouse user
Typing ability ranging from touch-typist to 'hunt and peck'

1.2.6 Linguistic ability
Fluent English speaker

1.2.7 Background knowledge
Knowledge of quality management system of organisation
Printing facilities available

Affects

Usability

Y;C

Y;C

Y;C

Y;C

Y;C

Y;C

M;M

Y;C

M;M

Y;M

Y;C

M;I


User Type

1.3 PHYSICAL ATTRIBUTES

1.3.1 a) Age range b) Typical age

1.3.2 Gender

1.3.3 Physical limitations and disabilities

1.4 MENTAL ATTRIBUTES

1.4.1 Intellectual abilities a) distinctive abilities b) specific mental disabilities

1.4.2 Motivations a) attitude to job & task b) attitude to the product c) attitude to information technology d) employees' attitude to the employing organisation

Design Assistants

18 to 60 years

18 to 30 years

Normal male/female population distribution

Proportion likely to use glasses for normal work

Small proportion of user type could have lower-body disability necessitating working from a wheelchair

Likely to have 'artistic flair' (eg. good visuo-spatial awareness, ability to sketch)

None

Affects

Usability

M;M

M;M

N

Y;C

Y;I

Y;C

a) attitude to job & task: Highly positive. Find results of their work rewarding (Y;M)

b) attitude to the product: Likely to be sceptical due to familiarity with existing software (Y;M)

c) attitude to information technology: Positive (Y;C)

d) employees' attitude to the employing organisation: Mixed

1.5 JOB CHARACTERISTICS

1.5.1 Job function
Take original designs and artwork by graphic designers and produce camera-ready copy for printing. Some limited original design work. Liaise with printers and publishers.

1.5.2 Job history
a) how long employed: Various. Currently 1 to 6 years
b) how long in current job: Various. Currently 1 to 6 years

1.5.3 Hours of work / operation
a) hours of work: Flexible. Typically 37 hours per week, but can include overtime during busy periods
b) hours using product: About 70% of working time spent using product. No session lasting longer than 4 hours

1.5.4 Job flexibility
Ability to schedule and prioritise their own work towards deadlines set by others (eg. managers, clients, printers).

1.6 LIST OF TASKS

a) tasks identified:
Retrieving images from an image library.
Creating original images and adding them to an image library.
Laying out existing graphic objects to a design sketched by the graphic designer.
Colouring outlines.
Typing information.
Producing camera-ready copy for the printer.

b) tasks for usability evaluation:
Laying out existing graphic objects to a design sketched by the graphic designer

M;M

Y;C

M;M

M;M

M;M

Y;C

M;I

Y;C

Y;C


User type 1: Design Assistants

Task name: Laying out existing graphic objects to a design sketched by the graphic designer

2 TASK CHARACTERISTICS

2.1 Task goal

2.2 Choice

2.3 Task output

2.4 Side effects

2.5 Task frequency

2.6 Task duration

2.7 Task flexibility

2.8 Physical and mental

Produce a design matching a sketch provided, using the graphical objects available

None

The design produced on the screen and stored as a computer file

None

Variable. Average: two or three times a day

Variable. Between 5 minutes and 4 hours, depending upon design complexity

No defined sequence of operations demands a) Factors which make task

None

Affects

Usability

Y;C

N

Y;C

Y;C

Y;M

M;I demanding b) How demanding in

No more or less demanding than other tasks M;I comparison with others

2.9 Task dependencies

2.10 Linked tasks

2.11 Safety

2.12 Criticality of the task output

Existence and location of graphic images.

Designer's sketch of intended output.

Location for final design

None

None

None

Y;C

N

N

N

User Type: Design Assistants

Affects Usability

3 ORGANISATIONAL ENVIRONMENT

3.1 STRUCTURE

3.1.1 Group working

3.1.2 Assistance

3.1.3 Interruptions

3.1.4 Management structure

Work using the product is done alone

Office is 'open plan' design, so assistance limited to the Mac operating system can be obtained from colleagues

Usually few or no interruptions while using the product. Maximum of about 2 short-duration interruptions per hour. These include telephone calls, new instructions from the Graphic Designer, and queries from colleagues

The user is immediately responsible to the Graphic

Designer who provides the work

Y;C

Y;C

M;C

N

3.1.5 Communications structure

Work is received from the Graphic Designer by hand with any additional instruction provided verbally or in writing.

Work is handed back to the Graphic Designer for approval and sent to the printers.

The Graphic Designer is not normally available to answer queries.

3.2 ATTITUDES & CULTURE

3.2.1 IT Policy

3.2.2 Organisational aims

The organisation is strongly committed to IT for all aspects of its work

To provide an efficient and high quality design and production service for all business graphic needs

3.2.3 Industrial relations Generally good

Y;C

M;I

M;I

M;I


User Type: Design Assistants

3.3 WORKER/USER CONTROL

3.3.1 Performance monitoring

All finished work is performed to a deadline and quality is constantly monitored by the Graphic Designer and the printers.

3.3.2 Performance feedback

If work is late or substandard, the user is informed immediately and directly by the individual who reviews it. The user's performance is assessed every six months and discussed as part of a formal progress interview.

3.3.3 Pacing

All work has an associated deadline. The user may pace the work to meet the deadline as he or she wishes

4 TECHNICAL ENVIRONMENT

4.1 Hardware

a) required to run the product:
The user computer is an Apple Macintosh II with 8Mb RAM and 24" colour monitor, connected to a local area network

b) likely to be encountered when using the product:
Printer

4.2 Software

a) required to run the product (eg. operating system):
Apple Macintosh System 6.0.5 (or later) operating system.
32-bit QuickDraw operating system extension for colour documents

b) likely to be encountered when using the product:
Apple Macintosh operating system

4.3 Reference materials

Getting Started - installation guide and tutorial
User manual
Quick reference card
On-line help
Apple Macintosh System 7.0 manual, likely to be available but not local to product

5 PHYSICAL ENVIRONMENT

5.1 ENVIRONMENTAL CONDITIONS

5.1.1 Atmospheric conditions

5.1.2 Auditory environment

5.1.3 Thermal environment

5.1.4 Visual environment

5.1.5 Environmental instability

5.2 WORKPLACE DESIGN

5.2.1 Space and furniture

5.2.2 User posture

5.2.3 Location a) of the product b) of the workplace

5.3 HEALTH & SAFETY

5.3.1 Health hazards

5.3.2 Protective clothing and equipment

Affects

Usability

Y;M

M;I

Y;M

Y;C

N

Y;C

Y;C

Y;C

Affects

Usability

If product is for use in standard European office conditions, then answer “SO”

SO

SO

SO

SO

SO

Open-plan office with fully adjustable furniture. The user computer is mounted on a desk

Sitting at a desk

Paint Package X and its on-line help is installed on the user computer, with its icon on the Macintosh desktop, ready to run.

The Paint Package X user guide and quick reference card are available locally to the user computer

The office is likely to be within half an hour's drive of the user's home

None

None

Y;C

Y;C

N

N

N

Y;R

Y;R

Y;R

Y;R

Y;R

Y;C


Evaluation Plan

This is a concise summary of the Context of Evaluation for

Paint Package X, version 1.0.

The numbers next to each statement refer to the items in the Context Report shown above.

Users

The evaluation is based upon the use of the product within a graphical design company. Only one user type has been selected for evaluation: design assistants

(1.1.1b).

The evaluation will be based on a selection (n=10) of representative members of the user type, the sample being obtained by advertising for volunteers within the organisation and, if necessary, through contacts in similar organisations. (1.5.1)

During the recruitment process, the volunteers will be asked to specify the following details:

 experience and training in graphic design (1.2.1 & 1.2.3a)

 experience and training with Paint Package X (1.2.2a & 1.2.3b)

 experience and training with object-oriented graphics packages (1.2.2b &

1.2.3c)

 experience and training using the Apple Macintosh windowing system

(1.2.2c & 1.2.3d)

 educational and vocational qualifications (1.2.4)

 skill with a mouse (1.2.5)

 fluency in the English language (1.2.6)

 age group (1.3.1)

 whether they normally wear glasses for visual display work (1.3.3)

 whether a wheelchair user (1.3.3)

 attitude to task, product, IT and the employer (1.4.2)

To be selected, volunteers must:

 have at least 1 year's experience of the tasks undertaken by Design Assistants (1.2.1 & 1.5.1)

 have no experience with the product (1.2.2a)

 have at least 1 year's experience with object-oriented graphics packages (1.2.2b)

 have at least 1 year's experience of the Macintosh windowing system (1.2.2c)

 have attended the organisation's one week graphical course (1.2.3a)

 not have been trained to use Paint Package X (1.2.3b)

 be skilled mouse users (1.2.5)


 be fluent in the English language (1.2.6)

 agree to wear their glasses if they require them for visual display screen work (1.3.3)

 not be wheelchair users (1.3.3)

 have a positive attitude towards the task and IT (1.4.2a & c)

After the session the volunteers' attitude to the product will be assessed using SUMI (1.4.2b).
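Purely as an illustration (this is not part of the original Evaluation Plan), the selection criteria above could be checked mechanically during recruitment; the record fields below are hypothetical and cover only a subset of the criteria:

    # Illustrative sketch only: screening a volunteer against some of the
    # selection criteria listed above. Field names and the record format are
    # hypothetical; thresholds follow the plan.

    def meets_criteria(v):
        return (v["years_design_assistant_tasks"] >= 1       # 1.2.1 & 1.5.1
                and not v["used_paint_package_x"]            # 1.2.2a
                and v["years_object_oriented_graphics"] >= 1 # 1.2.2b
                and v["years_mac_windowing"] >= 1            # 1.2.2c
                and v["attended_one_week_course"]            # 1.2.3a
                and not v["trained_on_paint_package_x"]      # 1.2.3b
                and v["skilled_mouse_user"]                  # 1.2.5
                and v["fluent_english"]                      # 1.2.6
                and v["positive_attitude_task_and_it"])      # 1.4.2a & c

    volunteer = {
        "years_design_assistant_tasks": 3, "used_paint_package_x": False,
        "years_object_oriented_graphics": 2, "years_mac_windowing": 4,
        "attended_one_week_course": True, "trained_on_paint_package_x": False,
        "skilled_mouse_user": True, "fluent_english": True,
        "positive_attitude_task_and_it": True,
    }
    print(meets_criteria(volunteer))  # True -> eligible for the evaluation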

Task

The evaluation will consist of a single task: laying out existing graphic objects to a given design. The task will

 be designed so that it can be completed in 10 minutes (2.6) and users will be stopped if they have not completed the task after 1 hour (1.5.3)

 be confined to the product (2.2)

 be performed once only (2.5)

The task instructions will be written and will:

 specify the goal of the task without specifying the steps required to achieve that goal (2.1)

 provide information about the location of the existing graphic images and the location for the output (2.9)

Other materials provided for the task will include:

 graphic images stored in the computer (2.9)

 example of required output on paper (2.9)

Organisational Environment

 the users will work alone (3.1.1)

 assistance will be provided by the evaluators about the Mac operating system if requested, although no assistance will be given about the product (3.1.2) or the design (3.1.5)

 the users will not be interrupted (3.1.3)

Technical Environment

The following items will be provided:

 an Apple Macintosh II computer with 8Mb RAM and 24" colour monitor, connected to a local area network (4.1) with System 7 operating system software installed (4.2)

 a System 7 manual will be available on a shelf in the test room (4.3)


Physical Environment

 The test room will be configured as a European Standard Office (5.1), with open-plan arrangement and furniture meeting the latest European Directive

(5.2.1).

 The user will be seated in front of the desk on which the computer is mounted (5.2.2)

 Paint Package X will be installed on the computer, with its icon on the

Desktop, ready to run (5.2.3a)

 The Paint Package X documentation will be available on the desk next to the computer (5.2.3a)


2 - Automated Telling Machine (Cashpoint)

This case study shows how we identified the Context of Use, specified the Context of Evaluation and composed an Evaluation Plan for an Automated Telling Machine.

An ATM is commonly referred to as a 'hole-in-the-wall banking machine' or

'cashpoint machine'.

In this case the product consists of the software and hardware that a customer sees when using an ATM.

Product Report

Questionnaire completed by: (Name and Organisation)

Name of Product: Major High Street Bank ATM

Cathy Thomas, NPL

(Date)

5 November, 1993

1 BASIC DESCRIPTION

1.1 Product name and version

Automated Telling Machine, November 1993 version installed on external bank walls

1.2 Product description and purpose

Hole-in-the-wall banking machine (cashpoint machine).

1.3 What are the main application areas of the product?

Allows customers to carry out a limited range of banking transactions outside the bank 24 hours a day.

1.4 What are the major functions? (major productive goals achievable using the product)

Cash and travellers cheque withdrawal

Requesting account balances

Ordering cheque books and statements

Paying bills

Transferring funds

Depositing cash and cheques

Changing access code (PIN number).


2 SPECIFICATION

2.1 Hardware (If no hardware is included in this product, go to 2.2) a) general description (of any hardware provided as part of the product)

Wall-mounted interactive terminal b) input devices (supplied with the product)

Keypad c) output devices (supplied with the product)

Screen, printer for transaction receipts

2.2 Software – is any other software always provided with the product?

Standard ATM software

2.3 Other items

Access to the product is carried out by using a plastic card with a magnetic strip. Each customer is issued with his or her own card(s).


Context Report

Name and version of product

Automated Telling Machine

Report completed by

Members of the MUSiC consortium

Date

September, 1993

Organisation

NPL and HUSAT

Objectives

1 The product should be as efficient as the existing ATM for withdrawing cash

2 95% of users will be able to deposit a cheque, and change their PIN with 100% effectiveness

3 95% of users will be 100% effective in carrying out the above tasks in both daytime and night time lighting, and in a noisy location as well as a quiet location.


1.1 USER TYPES

1.1.1 User types being considered

a) user types identified: Authorised ATM card holders

b) user types for usability evaluation: Authorised ATM card holders

1.1.2 Secondary or indirect users who: a) interact with the product b) are affected by its output

Bank Staff - refilling machines

Engineers - maintenance and repair

None

User Type

Authorised ATM card holders

1.2 SKILLS & KNOWLEDGE

1.2.1 Training and experience in the business processes and methods which the product supports:
Nearly all experienced in withdrawing cash over the counter or using card. Task essentially the same but characteristics very different

1.2.2 Experience in
a) using the product: Varies from none to regular daily use
b) using other products with similar main functions: Majority none, rest varying amounts
c) using products with the same interface style or operating system: Majority familiar, rest varying amounts

1.2.3 Training in
a) tasks supported by the product's main functions: None
b) using the product's main functions: None
c) using other products with similar main functions: Majority none, rest varying amounts
d) using products with the same interface style or operating system: Majority none, rest varying amounts

1.2.4 Qualifications
Majority none, rest varying amounts

1.2.5 Relevant input skills
None

1.2.6 Linguistic ability
Majority fluent in speaking and reading English

1.2.7 Background knowledge
Location of machine and hours of operation (normally 24 hours)

Affects

Usability

Y;M

Y;M

M;M

M;M

N

N

M;M

M;M

M;M

N

Y;C

N

N

Affects

Usability

Y;C

Y;C

N


User Type

1.3 PHYSICAL ATTRIBUTES

1.3.1 a) Age range: 14 upwards b) Typical age: predominantly 16 - 65

1.3.2 Gender

1.3.3 Physical limitations and disabilities

1.4 MENTAL ATTRIBUTES

1.4.1 Intellectual abilities a) distinctive abilities b) specific mental disabilities

1.4.2 Motivations a) attitude to job & task b) attitude to the product c) attitude to information technology d) employees' attitude to the employing organisation

Authorised ATM card holders

Likely to be 50% male, 50% female

Significant minority with visual, hearing, speech, motor or mental impairments

No distinctive abilities

Minority with mild mental disabilities

Affects

Usability

Y;M

Y;M

M;R

Y;C

N

M;I

a) attitude to job & task: Highly motivated to complete task (Y;I)

b) attitude to the product: Variable (Y;I)

c) attitude to information technology: Variable; range from very negative to very positive (Y;I)

d) employees' attitude to the employing organisation: Not relevant

1.5 JOB CHARACTERISTICS

Do not complete this section if it is not relevant to the use of the product

1.5.1 Job function: Not relevant

1.5.2 Job history a) how long employed: Not relevant b) how long in current job: Not relevant

1.5.3 Hours of work / operation a) hours of work: Not relevant b) hours using product: Not relevant

1.5.4 Job flexibility: Not relevant

1.6 LIST OF TASKS

a) tasks identified:
withdrawing cash
requesting current balance
ordering statements and chequebooks
changing the PIN number
depositing cash or cheques
paying bills
transferring funds
withdrawing travellers cheques

b) tasks for usability evaluation:
withdrawing cash
depositing a cheque
changing the PIN

Y;C

Y;C


User type 1

Task name

Authorised ATM card holders

Withdrawing Cash

2 TASK CHARACTERISTICS

2.1 Task goal

To withdraw cash and possibly obtain transaction slip

2.2 Choice

2.3 Task output

2.4 Side effects

2.5 Task frequency

2.6 Task duration

2.7 Task flexibility

2.8 Physical and mental

ATM or counter service available during bank opening hours. No choice when bank is closed

Affects

Usability

Y;C

Deposit a cheque

To successfully deposit cheque

ATM or counter service during bank opening hours. No choice when bank is closed

Bank notes and transaction slip

None

Variable

2 - 3 minutes

Account credited with correct amount

N None

Y;M Variable

N 2 - 3 minutes

None N None demands a) Factors which make task

Low level N Low level demanding b) How demanding in

Equally demanding N Equally demanding comparison with others

2.9 Task dependencies

2.10 Linked tasks

2.11 Safety

2.12 Criticality of the task output

Bank account containing sufficient funds

Bank card and/or cheque book

For ATM withdrawal, knowledge of PIN and withdrawal limit

None

Generally not hazardous. Possible robbery of people withdrawing

None

Y;C

N

N

N None

Affects

Usability

Y;C

N

Y;M

N

N

N

N

Current bank account to receive cheque

Cheque in favour of above account

For ATM deposit, bank card and knowledge of PIN

None

Generally not hazardous

Y;C

N

N

N

Changing PIN

To successfully change PIN

None

New PIN associated with card

None

Normally very infrequent

2 - 3 minutes

None

Low level

Varies

Bank card

Knowledge of old and new PIN

None

Generally not hazardous

None

Affects

Usability

Y;C

N

N

N

M;M

Y;C

Y;M

Y;C

N

N

N


User Type: Authorised ATM card holders

Affects Usability

3 ORGANISATIONAL ENVIRONMENT

3.1 STRUCTURE

3.1.1 Group working

3.1.2 Assistance

3.1.3 Interruptions

3.1.4 Management structure

3.1.5 Communications structure

3.2 ATTITUDES & CULTURE

Not relevant

Not relevant

3.2.1 IT Policy

3.2.2 Organisational aims

3.2.3 Industrial relations

User normally alone, sometimes with partner or friends

M;C

Possibly available from bank staff or other in queue Y;C

Usually none, occasional closing of ATM when run out of cash

Y;C

Not relevant

Not relevant

Not relevant

3.3 WORKER/USER CONTROL

3.3.1 Performance monitoring

PINS monitored for banking purposes, together with response speeds and number of transactions per day

N

3.3.2 Performance feedback

3.3.3 Pacing

4 TECHNICAL ENVIRONMENT

None

Queue pressure during busy periods

4.1 Hardware a) required to run the product b) likely to be encountered

ATM linked to bank's computer

None

N

Y;C

Y;C

N when using the product

4.2 Software a) required to run the product

Bespoke transaction software Y;C

(eg. operating system) b) likely to be encountered

None N when using the product

4.3 Reference materials

Basic instructions received through post when card received

Y;C


User Type

5 PHYSICAL ENVIRONMENT

5.1 ENVIRONMENTAL CONDITIONS

5.1.1 Atmospheric conditions

5.1.2 Auditory environment

5.1.3 Thermal environment

5.1.4 Visual environment

5.1.5 Environmental instability

5.2 WORKPLACE DESIGN

5.2.1 Space and furniture

5.2.2 User posture

5.2.3 Location a) of the product b) of the workplace

5.3 HEALTH & SAFETY

5.3.1 Health hazards

5.3.2 Protective clothing and equipment

Authorised ATM card holders Affects

Usability

If product is for use in standard European office conditions, then answer “SO”

UK outdoor weather conditions M;C

Moisture: 5gm/m3 (dry) to 15gm/m3 (raining)

Humidity: 55% (dry July pm) to 90% (damp October am)

UK urban street - 40 dB(A) (quiet side street) to 80 dB(A) (noisy high street)

UK outdoor weather -10c (cold January) to +30c

(hot august)

Range bright sunlight - dark night (100,000 lux to

100 lux)

None

ATM mounted 1m. above ground

Standing, wheelchair users sitting

Inset in wall

Street, public thoroughfares

None

Winter clothing would include gloves, muffs etc.

Y;C

Y;C

Y;C

N

Y;C

Y;C

Y;C

Y;C

N

M;C


Evaluation Plan

This is a concise summary of the Context of Evaluation for an ATM.

The numbers next to each statement refer to the items in the Context Report shown above.

Users

The evaluation will be based on a selection (n=40) of the general public who are authorised ATM users, the sample being obtained via newspaper advertising (1.1.1)

The advertisement will add that people of all ages and physical abilities will be welcome to take part in the evaluation.

During the recruitment process, the member of the evaluation staff concerned will assess or ask the volunteers to specify the following details:

 male or female (1.3.2)

 age group (1.3.1)

 if regular bank user (1.2.1)

 length of time card held (1.2.2) and approximate frequency of use for cash withdrawal, cheque deposit and PIN change (2.5)

 experience or training with products similar to an ATM (1.2.2/3)

 qualifications (1.2.4)

 whether a wheelchair user (1.3.3/5.2.2)

 (corrected) eyesight strength (below average/average/good) (1.3.3)

 English linguistic ability (below average/average/good) (1.2.6)

During the selection process, if it appears that the user characteristics of the sample obtained are too narrow, (e.g. all users between 20-30, no users in wheelchairs), then specific action will be taken to obtain a more even balance.

Tasks

The evaluation will be composed of three tasks, all to be carried out using an ATM, which users will perform in balanced orders:

•  withdraw £30 in cash and obtain transaction slip.

•  deposit a cheque.

•  change the PIN.

All dependencies will be satisfied (2.9)

Organisational Environment

The evaluation will be based on users working alone (3.1.1)


No assistance will be given (3.1.2)

There will be no interruptions (3.1.3)

No queue will be simulated (3.3.3)

Technical Environment

Standard hardware (4.1) and software (4.2) will be used, but instructional materials will not be available (4.3)

Physical Environment

Ideally the ATM evaluation will be performed out of doors (5.2.3) controlling, where possible, for:

•  mid-range moisture and humidity (5.1.1)

•  noise at 40 dB(A) and 80 dB(A) (5.1.2) (see Controlled Conditions)

•  temperature 10-20 °C (5.1.3)

•  lighting - testing will be carried out during the day and night (5.1.4) (see Controlled Conditions)

•  clothing - no gloves or bulky clothing to be worn (5.3.2)

The ATM will be set at 1 metre above the ground.

If an outside evaluation is not possible, then the evaluation will be run indoors in a laboratory, with thermal, noise and lighting conditions simulated as closely as possible.

Controlled Conditions

The user sample should be split into four equal groups of 10 users, carrying out each of the specified tasks under the following conditions:

1. Daytime lighting, 40 dB(A) (quiet side street)

2. Daytime lighting, 80 dB(A) (noisy high street)

3. Night-time lighting, 40 dB(A) (quiet side street)

4. Night-time lighting, 80 dB(A) (noisy high street)

The users will be provided with PIN numbers, a dummy bank account number, and a bank card.
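Allocating users to conditions and balancing task order can be scripted. The sketch below is an illustration only, not part of the guide; it assumes a simple rotation of task order rather than full permutation, and assigns the 40 users round-robin to the four lighting/noise conditions so that each condition receives 10 users:

    # Illustrative sketch: 40 users, four lighting/noise conditions (10 each),
    # task order rotated so that the starting task is balanced across users.
    from itertools import product

    TASKS = ["withdraw cash", "deposit cheque", "change PIN"]
    CONDITIONS = list(product(["daytime", "night-time"], ["40 dB(A)", "80 dB(A)"]))

    def task_order(i):
        """Rotate the task list so successive users start with a different task."""
        shift = i % len(TASKS)
        return TASKS[shift:] + TASKS[:shift]

    def allocate(n_users=40):
        plan = []
        for i in range(n_users):
            lighting, noise = CONDITIONS[i % len(CONDITIONS)]
            plan.append({"user": i + 1, "lighting": lighting,
                         "noise": noise, "tasks": task_order(i)})
        return plan

    for row in allocate()[:4]:   # show the first four allocations
        print(row)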


A

Glossary of terms

Age

The chronological age of the user.

Assistance

The availability of assistance for individuals who are experiencing problems with the product or technical system. This includes the availability of immediate assistance from colleagues in the workplace, as well as assistance via an internal or external 'help line'.

Atmospheric Conditions

The atmospheric conditions which affect performance of user, product and technical system. This generally refers to the outdoor weather conditions, but may also include the condition of the atmosphere which exists inside buildings such as air quality, speed, humidity, etc.

Attitude

The user's relatively stable emotional opinions of their job, the task, the product, information technology, their organisation, etc. These attitudes may be closely linked to membership of a professional, cultural, religious, national, regional or political grouping.

Audience

The individuals who depend on the output or results of the task.

Auditory Environment

The auditory conditions of the workplace which may distract the user, limit interpersonal communication, cause stress or annoyance to the user, or affect the user's perception of sounds relevant to the task.

Choice

The user's ability to choose whether or not to use a particular product to carry out their tasks. This is closely related to job flexibility, which is the worker's ability to decide how to approach the job, organise their time and carry out tasks.


Cognitive Style

The user's common mode of thought as determined by a recognised test of cognitive style. This includes the way the user transforms information by perceptual processes, memory and cognitive activity, and also includes the problem solving strategies adopted by the user. There may be significant inter-cultural differences in cognitive style.

Communication Structure

The flow of information between individuals and groups inside and outside the organisation.

Context

The factors which affect the usability of the product, excluding the features of the product being evaluated. These factors are the characteristics of the users, tasks, and the physical, organisational and technical environments.

Context of Evaluation

The context in which the usability of a product is evaluated. The Context of Evaluation should be specified to match the Context of Use of the product. It should always be stated when presenting the results of the usability evaluation.

Context of Use

The context in which the product is used or intended to be used.

Context Questionnaire

Questionnaire asking for information about a product's Context of Use.

Context Report

The table used to record the Context of Use (the answers to the questions posed in the Context Questionnaire) to record those factors which are considered to affect usability, and to define the Context of Evaluation.

Context Study

The process of carrying out the steps defined by the Usability Context Analysis Method.

Environmental Instability

The physical instability of the workplace which may affect performance of user and product. This would include the effects of vibration and motion of the workplace.


Gender

The sex of the user.

General Knowledge

The knowledge possessed by the user which is not directly connected to the product, the task, or IT, but which results from living and working in a particular organisation or culture. This type of knowledge is not generally taught or learned in a formal manner, nor does it exist in written form, but is passed between individuals informally, and is absorbed over a period of time. This knowledge may be attributable to membership of a cultural, religious, national, regional, social or organisational group.

Group Working

The organisation of groups of individuals to carry out work, and the relationships between different groups and individuals within those groups.

Hardware

The physical parts of a system. The ergonomic characteristics of the user interface hardware (display screen, keyboard, mouse, etc.) are important factors in determining the usability of a product.

Hardware Environment

The hardware environment in which the product functions. This may include items like the processor, storage devices, input and output devices, networks, gateways, other users' equipment.

HCI

Human-Computer Interaction. The study of the way in which humans use computers, and vice-versa.

Health Hazards

The conditions of the workplace, or surrounding environment, which may affect the user's physical well being either in the short term (e.g. by accidents) or long term (e.g. gradual hearing loss).

Hours of Work

The duration, start and end times of the user's working day. This item is intended to identify shift work, irregular hours, home working hours, etc.

Incentives

The rewards (or punishments) which are used to motivate users to complete their tasks to a desired standard of quality or within a particular time period.


Industrial Relations

The nature of relationship which exists between different individuals or groups of individuals in an organisation. This is most typically the nature of relationship between workers and management.

Intellectual Ability

The user's specific and general intelligence as measured by intelligence tests.

IT Policy

The organisation's formal position on the introduction, acquisition and usage of Information Technology.

Job

The occupation of the user, or more generally, a collection of tasks that is carried out by the user.

Job Breakdown

A decomposition of the job into its constituent tasks.

Job Characteristics

A description of the employment details of the user.

Job Function

The purpose, responsibilities and activities of the user's work.

Job History

The length of time the user has been employed by the organisation, and the number of years experience in his or her job.

Job Title

The formal job title of the users, for example Bus Conductors, Town Planners.

Keyboard and Input Skills

The user's experience and skill at entering data using various input devices. Examples are keyboard skills, mouse skills, etc.

Linguistic Ability

The linguistic abilities of the user in the language used by the product and its documentation.


Location

The location of the workplace in relation to fellow work colleagues, customers, resources, target area of influence, and the user's home.

Management Structure

The relationships and responsibilities of individuals in the organisation.

Mental Attributes

The mental characteristics of the user, including their intellectual abilities, and attitudes.

Motivating Factors

The factors (or incentives) which motivate the user to work at a job, or to act as a user for usability evaluation sessions. Examples of such factors are money, course credits, management directives, interest in IT, commitment to advancement of science, etc.

Organisational Environment

The social environment in which the work is carried out.

Organisational Structure

The nature of working relationships, and the flow of information between individuals in the workplace.

Pacing

The process by which the rate at which workers perform their tasks is controlled.

Performance Monitoring

The process by which data is collected to monitor and assess the quality and speed of the user's work.

Personality

The user's personality type as measured by recognised personality tests.

Physical Attributes

The physical characteristics of the user, such as age, gender, physical limitations and disabilities.


Physical and Mental Demands

The extent to which the task makes physical and mental demands on the user relative to other tasks carried out by the user as part of the job.

Physical Environment

The physical environment of the user and product.

Physical Limitations and Disabilities

Limitations of user capabilities resulting from the anthropometric or biomechanical characteristics of the user, (e.g. physical size, bodily proportions, strength of the user), or the impairment of his or her physical or mental faculties. Examples of physical or mental impairment include short sightedness, ‘colour-blindness’, loss of hearing, loss of limbs, reduced psychomotor capabilities.

Product Experience

The user's practical experience of the product or products with similar interfaces.

Product

The system or component of a system which is the subject of usability assessment. The product may be part of a wider technical environment, the characteristics of which may affect the usability of the product but which are part of the context. For the purpose of assessment, the product might be, for example, the cut and paste function in a word processor, the word processing application, or the complete computer system on which the application runs, including peripheral devices.

Protective Clothing and Equipment

Clothing and equipment which the user is required to use to protect him or her from injurious effects of hazards in the workplace. This includes clothes which protect the user from the effects of high or low temperatures.

Qualifications

The educational achievements of the user, including both formal and informal qualifications. This broad category includes non-vocational and vocational achievements, for example doctorates and apprenticeships.

Reference Materials

The instructional materials which are provided to help the user learn about the technical system and environment.

Safety Criticality

The extent to which the way a task is carried out may be hazardous to the health or lives of the user or other individuals.


Skills and Knowledge

The formally and informally acquired knowledge and skills of the user.

Software Environment

The software environment in which the product functions. This may include the operating system and other applications which provide information.

Space and Furniture

The size, layout and furnishings and other items in the workplace, such as desks, screens, cabling, printers, etc.

Task

A job or part of a job carried out by the user, whilst using the product.

Task Characteristics

The features of the task including the task breakdown, name, goal, frequency, duration, frequency of events (connected to the task), flexibility, physical and mental demands, dependencies, output and safety criticality.

Task Dependencies

The information and other resources required by the user in order to perform the task.

Task Duration

The time required for the user and product to carry out a task.

Task Experience

The user's practical experience of performing the task.

Task Flexibility

The extent to which the user may deviate from a predefined method when carrying out the task.

Task Frequency

The frequency with which the user carries out the task.

Task Goal

The objective of performing the task.


Task Output

The contents and medium of the results of carrying out the task.

Task Name

The term used to refer to the task.

Technical Environment

The technical system and technical infrastructure (e.g. power supply), in which a product functions.

Technical System

The remainder of the system of which the product is a part. This would include descriptions of the hardware environment and the software environment in which the product functions, for example an IBM personal computer with 4 Mb RAM and an MS DOS operating system, as well as a description of the reference materials provided to explain these items.

Thermal Environment

The thermal conditions of the workplace which affect performance of user and product.

Training

The general IT, or product specific training that users have received. This is intended to include formal training as well as less formal methods such as open learning packages, video instruction or training manuals.

Usability

The extent to which a product can be used with effectiveness, efficiency and satisfaction by specific users to achieve specific goals in specific environments.

Usability Analyst

Person who has an understanding of the nature of Human-Computer Interaction (HCI), an awareness of usability issues or practical experience of product evaluation. Will probably either have qualifications in psychology or ergonomics, have received on-the-job training and education about usability in the design process, or have some experience of conducting product evaluations.

Usability Metric

A number expressing the degree or strength of a usability characteristic, possessing metric properties, obtained by objective counting rules, and with known reliability and validity. Usability metrics have known maxima and minima, their scale of measurement is known, they possess scale metric properties, they are gathered by objectively verifiable rules of counting, and they have demonstrated reliability and validity. They are said to instantiate or operationalise a characteristic of usability. They must be interpreted according to the context in which they were measured.

Usability Team

The team set up to drive a product's usability evaluation process. Will usually consist of at least one usability analyst, and at least one person with a good knowledge of the product and its intended users, and of any constraints that will come into play during the evaluation.

User Posture

The physical position adopted and the freedom of movement of the users while carrying out the task.

User Types

Groups of individuals who have at least one attribute in common (e.g. job title), who have cause to use a particular product.

Users

The population of (intended) users or equivalent representative group of individuals who use the product.

Visual Environment

The visual conditions of the workplace which affect performance of user and product, and would affect the user's ability to see items relevant to the task.

Worker Autonomy

The worker's ability to decide how to approach the job, organise their time and carry out tasks. This includes the ability to choose whether or not to use a particular product to carry out their tasks.

Worker Control

The methods which are used to ensure that desired levels of productivity and quality are maintained.

Workplace Conditions

The physical conditions of the user's workplace.

Workplace Design

The location and design of the workplace, the layout of furniture, and the posture which user will adopt when carrying out the task.


B

Context hierarchy

It is difficult to produce a fully comprehensive list of all the components of the context that might affect the usability of a product. However, by carrying out a wide-ranging literature survey and eliciting knowledge from experienced evaluators, we have produced a hierarchical list of the most common factors that are likely to affect the usability of IT products. This hierarchy of context factors is shown here; explanations of the terms used can be found in the Glossary of terms (Appendix A).

The context hierarchy is composed of five major sections dealing with the most important characteristics of a product's users, the tasks they perform, and the organisational, technical and physical environments in which a product may be used.

Each of the components listed in the context hierarchy is represented by a question in the Context Questionnaire.
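Each answer, together with the judgement of whether the factor affects usability, becomes one row of the Context Report. As a purely illustrative sketch (the field names and the example row are assumptions, not defined by this guide), such a row might be recorded as follows:

    # Illustrative sketch: one Context Report row per context hierarchy item,
    # holding the Context of Use description and the raw "affects usability"
    # code agreed at the Context Meeting (e.g. "Y;C", "M;C", "N").
    from dataclasses import dataclass

    @dataclass
    class ContextReportRow:
        item_id: str            # e.g. "5.1.2"
        item_name: str          # e.g. "Auditory environment"
        description: str        # answer to the Context Questionnaire item
        affects_usability: str  # code recorded against the item
        evaluation_note: str = ""  # how the factor is treated in the Context of Evaluation

    example = ContextReportRow(
        item_id="5.1.2",
        item_name="Auditory environment",
        description="UK urban street: 40 dB(A) to 80 dB(A)",
        affects_usability="Y;C",
        evaluation_note="noise level controlled at 40 and 80 dB(A)")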

1. USERS
   1.1. USER TYPES
        1.1.1. User types being considered
        1.1.2. Secondary or indirect users
   1.2. SKILLS & KNOWLEDGE
        1.2.1. User Type
        1.2.2. Training and experience in the business processes
        1.2.3. Product Experience
        1.2.4. Training
        1.2.5. Qualifications
        1.2.6. Input Skills
        1.2.7. Linguistic Ability
        1.2.8. Background Knowledge
   1.3. PHYSICAL ATTRIBUTES
        1.3.1. Age
        1.3.2. Gender
        1.3.3. Physical Limitations & Disabilities
   1.4. MENTAL ATTRIBUTES
        1.4.1. Intellectual Ability
        1.4.2. Motivations
   1.5. JOB CHARACTERISTICS
        1.5.1. Job Function
        1.5.2. Job History
        1.5.3. Hours of Work or Operation
        1.5.4. Job Flexibility
   1.6. LIST OF TASKS

2. TASK CHARACTERISTICS
   2.1. Task goal
   2.2. Choice
   2.3. Task output
   2.4. Task side effects
   2.5. Task frequency
   2.6. Task duration
   2.7. Task flexibility
   2.8. Physical & mental demands
   2.9. Task dependencies
   2.10. Linked tasks
   2.11. Safety
   2.12. Criticality of the task output

3. ORGANISATIONAL ENVIRONMENT
   3.1. STRUCTURE
        3.1.1. Group Working
        3.1.2. Assistance
        3.1.3. Interruptions
        3.1.4. Management Structure
        3.1.5. Communications Structure
   3.2. ATTITUDES & CULTURE
        3.2.1. IT Policy
        3.2.2. Organisational Aims
        3.2.3. Industrial Relations
   3.3. WORKER CONTROL
        3.3.1. Performance Monitoring
        3.3.2. Performance Feedback
        3.3.3. Pacing

4. TECHNICAL ENVIRONMENT
   4.1.1. Hardware
   4.1.2. Software
   4.1.3. Reference Materials

5. PHYSICAL ENVIRONMENT
   5.1. ENVIRONMENTAL CONDITIONS
        5.1.1. Atmospheric Conditions
        5.1.2. Auditory Environment
        5.1.3. Thermal Environment
        5.1.4. Visual Environment
        5.1.5. Environmental Instability
   5.2. WORKPLACE DESIGN
        5.2.1. Space and Furniture
        5.2.2. User Posture
        5.2.3. Location
   5.3. HEALTH & SAFETY
        5.3.1. Health Hazards
        5.3.2. Protective Clothing & Equipment


C

MUSiC methods

The original Usability Context Analysis Guide was a product of MUSiC (ESPRIT II project 5429 - Measuring Usability of Systems in Context), which developed a number of methods and tools to measure the usability of interactive systems. MUSiC was concerned with usability metrics of four broad types – performance, user satisfaction, cognitive workload and analytic metrics – applicable at various stages of the design cycle.

The Metrics

Performance Metrics

Performance metrics can be used where an operational simulation or prototype of the system is available. Users are observed as they carry out tasks representative of the work for which the system under test is intended, under conditions reproducing the important features of the normal working context. Performance is measured through objective (rule-based) observations, and metrics are derived from these performance measures. The MUSiC Performance Measurement Method can be applied flexibly to give different levels of measures and diagnostic data, and used iteratively to refine the product design.
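As a purely hypothetical illustration (not the MUSiC Performance Measurement Method itself, whose measures and counting rules are defined in the method's own documentation), rule-based observation data can be reduced to simple performance figures along these lines:

    # Hypothetical illustration: reducing observed sessions to two simple
    # performance figures; the MUSiC method defines its own measures.
    def completion_rate(sessions):
        """Proportion of observed task attempts completed successfully."""
        return sum(1 for s in sessions if s["completed"]) / len(sessions)

    def mean_task_time(sessions):
        """Mean time in seconds over successfully completed attempts."""
        times = [s["seconds"] for s in sessions if s["completed"]]
        return sum(times) / len(times) if times else float("nan")

    sessions = [{"completed": True, "seconds": 95},
                {"completed": True, "seconds": 120},
                {"completed": False, "seconds": 300}]
    print(completion_rate(sessions), mean_task_time(sessions))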

User Satisfaction Metrics

User satisfaction metrics are applicable wherever a working version of a software product is available (e.g. previous version, prototype, or alpha test version). Users carry out tasks using the system before completing a standardised questionnaire (SUMI) which provides measures of their attitude toward the system. This gives a global measure of perceived usability, and measures of five subscales: control, helpfulness, affect (likeability), learnability and efficiency.

Measures of Cognitive Workload

These measures may be applied iteratively throughout the design cycle, once a prototype of the system is available. Users are monitored as they perform tasks using the system. Measures of cognitive workload assess the mental effort required by users. The ratio of performance to effort is used as an indicator of usability, with a high value indicating a high level of usability. For more information, contact NPL Usability Services.
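The performance-to-effort ratio mentioned above can be illustrated with a small, hypothetical calculation (the particular performance and effort measures, and the numbers, are assumptions made for the sketch, not values from the method):

    # Hypothetical illustration of the performance/effort ratio described above.
    def usability_indicator(performance, effort):
        """Higher values indicate more output achieved for less mental effort."""
        return performance / effort

    # e.g. tasks completed per hour vs. a subjective effort rating
    prototype_a = usability_indicator(performance=12.0, effort=6.0)   # 2.0
    prototype_b = usability_indicator(performance=12.0, effort=4.0)   # 3.0
    # prototype_b demands less effort for the same output, so it scores higher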

Analytic Metrics

Applicable early in the design cycle where formal representations of system components exist, these metrics require, as a minimum, only a documented product specification in order to indicate aspects of software usability. They can also be integrated with tools which use or generate formal specifications (CASE and prototyping tools). The objective is to test specifications early and to reduce the need for design changes later in development, when alterations are more difficult and expensive to make. The metrics complement rather than replace empirical evaluation of user performance and satisfaction.

For more information on the SANE software tool used for deriving analytic metrics, contact: Elke-Maria Melchior, Gesellschaftsführer, ACit GmbH, Spiekerhof 6-8, D-48143 Münster, Germany; fax: +49 251 37223.

The importance of context

The definition of usability

MUSiC uses the ISO 9241 definition of usability:

The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified Context of Use.

This international standard definition explicitly defines usability as being dependent upon the context - the users, their goals, and their environment - in which the product is used.

An example

Let us take the example of a watch that a manufacturer claimed was the most ergonomic in the world. We can see that it is in fact only usable in a small range of contexts. This is probably because it has been designed - with an implicit, unstated, and probably inadequately considered, Context of Use - for the mass market, and some design decisions have been dictated by aesthetics rather than usability. If we consider the users, the size of the strap shows that it may not be suitable for children, and the visual nature of the display almost certainly precludes blind users. The design of the watch face may indicate that it is not suitable for all types of tasks. It may lack numbers and minute markings, so it would be unsuitable for tasks that required accurate timings such as those carried out by the military or at sports meetings. When we consider the other features of the watch we can see that it may only be suitable for use in a limited range of environments. The lack of luminous or illuminating dial markings shows that it would not be suitable for use in dimly-lit environments, whereas the use of reflective glass would impede viewing from some angles in bright conditions.

Context dictates usability

This simple example is intended to show the pitfalls of classifying a watch or any other product as usable without referring to the context for which it is intended.

Also, contrary to many manufacturers' beliefs, adding extra features does not necessarily make a product more usable. Again, the context in which the product will be used needs to be considered, as some of the added features might actually make the product less usable for other users. For example, adding split-second timing to the watch mentioned above may make it more usable for timing sports events, but may make it over-complicated, and hence less usable, for some other users.


Applicable to all products

These same principles apply to all products, including hardware or software computer products that are only usable in specific contexts, (i.e. by specific users, for specific tasks in specific environments). For example, adding extra features to a word processor may make it more usable for the professional secretary who has sophisticated requirements, (e.g. producing long documents with complex page layouts - multiple columns, headers, footers, etc.), who also has time to learn how to use the new features. However, the new features may reduce the usability of the product for executives who have very limited requirements, (e.g. production of short memos), who have no time to invest in relearning how to use the product after long periods without use.

Specifying Context

It follows from the above that for a product to be usable by its intended users, and to be evaluated with meaningful results, the contexts in which that product is used need to be very carefully considered, and well documented.

MUSiC principles

In an attempt to ensure that proper attention is paid to context issues, MUSiC advocates the following principles, which encapsulate the project's commitment to context.

1. The usability of a product depends on its Context of Use.

The usability of a product is affected not only by the features of the product itself, but also by the characteristics of the users, the tasks they carry out, and the technical, organisational and physical environment in which the product is used. As a result of this, you must invest time and effort in order to identify and carry out evaluations in an appropriate context if the usability measurement results are to be at all meaningful and useful.

2. Products should be designed for specific contexts.

Knowledge of the context in which a product is to be used will improve the design of a product, both by allowing the designers to tailor the design to the specified Context of Use, and also to help specify the product's usability goals, so that both formal and informal evaluation of the product can occur throughout the design process.

3. Measurement of usability must always be carried out in an appropriate context.

Usability measurement must take place in a context which matches the context in which the product will be used. A systematic method for describing the Context of Use and specifying the Context of Evaluation was therefore developed by the MUSiC project.

4. Usability measurements should always be accompanied by a detailed description of the Context of Evaluation.


Usability measurements are only meaningful if presented with a detailed description of the Context of Evaluation, listing the types of users, the tasks performed and the environment in which the measurement took place. This information allows other people to assess the validity or fairness of the measurement, and gives them the opportunity to generalise the results of the measurement to their own context if they see fit. For example, suppose the product was a database system which was tested using secretaries between the ages of 16 and 60, who have received one hour's training, and who were asked to input client details into the database and retrieve information for incorporation in reports and letters. The usability measurement was carried out in an open-plan office with background conversation and occasional interruptions. These and other details will allow potential purchasers of the product to decide whether this was a valid and fair test of the product, and whether the results are generalisable to another specific context. For example, whether the database would be as usable for operators who have to look up information to answer customers' telephone enquiries, or for scientists who would use the database to store data resulting from their experiments.


D

The development of evaluation in context

The science of measurement

In the field of natural science, the procedure of specifying, controlling and reporting the context in which measurement takes place has been routine for decades - if not centuries. This procedure ensures that measurements are both meaningful and reproducible. Likewise, in the field of Human-Computer Interaction, it has been recognised for many years that the users and the tasks they carry out are likely to have a strong effect on the results of any system evaluation (Miller 1971).

Realistic users and representative tasks

Many authors have emphasised the importance of selecting representative users and realistic tasks when carrying out user testing or evaluation of IT products (Neal and Simon 1984, Bury 1984, Rosenbaum 1989). Yet, if the literature is explored, one continually finds evaluation studies have either used unrepresentative users to carry out unrealistic tasks, or more commonly, have failed even to report the nature of the users and the tasks they carried out. Often it is only after the study has been completed that effects of badly chosen users and tasks will be used to explain the 'odd' nature of the results. A recent example of such a study, which aimed to evaluate three interfaces for on-line public access catalogues, used users who were “not preselected in any way (the majority were actually final year students)”, and later presented results that had to be explained by referring to the “artificiality of the exercises” (Akeroyd 1990).

Quality of industrial evaluations

One might imagine that product evaluations carried out by industry would more accurately reflect the product's Context of Use, as they may be closer to the customer base and real work situations. Unfortunately this is not always the case, and industrial evaluations have been known to use members of the product design team as users, rather than obtain individuals who represent the future users of the product.

The use of members of the product design team as users may actually invalidate an evaluation, if the study aims to evaluate the ease of use of the product for typical end-users. Members of the design team will not only have higher levels of knowledge of, and experience with the product, but they may also be highly motivated to show their product in a good light.

Example study

Bailey et al (1988) carried out a study to compare user performance on their company's old, physically controlled and new, computer-controlled oscilloscopes. They took many measures, including video recording, verbal protocols, behavioural encoding, task time and accuracy, which all showed the new oscilloscope to be far better than the old model. The most impressive result showed a 77% productivity gain in favour of the new oscilloscope.

Inappropriate context

These results, however, must be questioned if one considers the context in which the measurements were taken. In discussing the results, the authors noted that the users were possibly more qualified than the normal end-users; and when they later carried out a number of interviews with actual users of their oscilloscopes, they developed a more realistic picture of the tasks performed with the product (in Good, 1989).

When taken together, their comments show that this study had actually been carried out in an inappropriate context. Some of the comments are represented below.

Unrepresentative users

Firstly, the users for this study were all expert users of both types of oscilloscope, and were also members of the new oscilloscope design team. This meant that they had a ready recall of all the new oscilloscope's automatic functions, and were probably more computer literate than most of the current population of oscilloscope users. It is also quite likely that they would want to show the new product in a good light given that they had invested time in its development.

Artificial tasks

Secondly, the four “typical tasks” that they were asked to perform were quite simply unrealistic, as the users were asked to obtain very precise measures rather than just note the existence of a signal or the shape of a wave form.

Unnatural work environment

Thirdly, although the users had been trained to give verbal protocols during the experiment, (i.e. to say what they were doing and why they were doing it), they were also reminded by a beeper that the experimenter pressed if they remained silent for more than three seconds. By using this type of procedure, the experimenters dramatically changed the conditions under which an oscilloscope would normally be used. This is because an oscilloscope operator would not normally be interrupted so often while working, and also because verbal protocols require the user's attention and so may affect their ability to carry out the tasks.

Meaningless and Misleading Results

As the evaluation was carried out using unrepresentative users, carrying out unrealistic tasks in an unnatural environment, we can say very little about the real-world usability of the product. Regrettably, it is studies such as these, carried out in unrealistic Contexts of Evaluation, which produce results that are not simply meaningless, but positively misleading.

Unrealistic Tasks

There are many other examples of evaluations in which users were asked to carry out unrealistic tasks. For example, Roberts and Moran (1983) proposed a benchmark task for evaluating text editors, which consisted of a set of simple editing changes to be carried out on a 6 page document. Holtzblatt (1987) observed real users in their offices using a text editor that had been previously evaluated using Roberts’ method, and found that users typically edited 30 to 50 page documents. This later study showed that the product had been evaluated on an unrealistic task, and alerted developers to the need for tools to support “long range navigation and search”, which was not revealed during the previous evaluation. There are also evaluations where the users' ability to carry out the task is unnaturally distorted by the methods of measuring performance. For instance, in one study users were “instructed to measure and record their own time and errors” (Banks and Pihiman 1989).

Context awareness

There are a number of reasons why evaluations are often carried out in unsuitable Contexts of Evaluation. The major reason is probably the cost, in time and money, of (a) identifying the context in which the system will actually be used, and (b) obtaining suitable users and specifying realistic (i.e. possibly longer) tasks. Other reasons are the lack of appreciation of the impact that the Context of Evaluation can have on the evaluation results; and the lack of any systematic method for identifying the Context of Use and specifying the Context of Evaluation.

In the mid-eighties there was an increase in awareness of context issues which was mainly brought about by the work of Whiteside and his colleagues (Whiteside et al, 1988; Wolf, 1989). Basically, they found that although many products performed well in their laboratory experiments, they did not work when they were transferred to the real world. They put this down to the fact that the research often overlooked something crucial to the context in which the product would be used. The classical research methodology they applied told them a lot about how to control variables, but little about how to select the important variables in the first place. As a result, the conditions in the laboratory were “so remote from the conditions of actual use that the relation of the data to life was at best irrelevant and at worst distorting”.

As a result of this they developed contextual research, where they would work with people carrying out real work in real situations rather than “artificially contrived” ones. There was also an emphasis on collaboration with the participants in order to produce a result rather than “to discover the supposed objective truth”. As a result, the number of experimental sessions (user hours) carried out by their group dropped from 450 (in 1984) to perhaps 50 less formal sessions (in 1988). They claim their approach is a breakthrough in producing systems which are “genuinely usable and enjoyable” for their customers.

In adopting this approach, Whiteside and his colleagues not only stimulated discussion on the relative merits of laboratory versus field studies, but they also highlighted the importance of context issues more effectively than the impotent pronouncement of experts of the previous two decades. The response to contextual research was two-fold. Firstly, there was a backlash of support for laboratory based studies, and secondly, there was a serious determination to address context issues.

Laboratory studies

Strong support for laboratory based studies was expressed at a CHI '89 conference panel session on this issue (Wolf 1989), where Thomas Landauer stated:

“I know of very few instances of dramatic differences in the relative overall usability of computer systems, or of the effect of feature variations or principles between lab tests and the real world. And I know of many instances in which devices... greatly shaped by lab testing have been resoundingly successful in highly demanding and variable contexts”.

Other arguments in support of laboratory based evaluations are that they are more rapid, less expensive, more controlled, less confounded and can be carried out on products earlier in the design process.

Complementary Approaches

In fact, laboratory tests and field observations are both valuable methods of product evaluation, which complement each other in the design process. The high degree of control and enhanced observation and video recording facilities associated with laboratory studies are particularly suited to summative evaluation, where one wants to test whether a product meets certain predefined usability criteria. Field studies may then be used to “identify special problems associated with the integration of the product into the actual working environment” (Neal and Simon, 1984). Furthermore, field studies can tell you about the acceptability of a product (i.e. whether the product will actually be used in real life), whereas this is often not possible in laboratory tests, where users generally have no option but to use the product.

Example study

Karat (1989) demonstrated the complementary nature of the two approaches by applying both laboratory and field studies in order to help iteratively design a security application. Interestingly, participants completed the tasks in 25% less time in the field than when users completed similar tasks in laboratory conditions. Karat comments that “there are possible problems in comparing the results of the different tests; however the benefits of having both types of test data outweigh the negative factors” (Karat 1989).

Environmental Factors

The renewed interest in contextual research or field studies of IT products has shown that the environment can have a significant effect on the usability of a product. In other words, even if one carries out a usability evaluation of a product using representative users carrying out realistic tasks, one may still obtain different results in the laboratory rather than in the field. This is because in the real work situation the physical, social and technical environments can have a dramatic effect on the usability of a product.

The physical environment can have a profound effect on the usability of a product. Bad lighting or loud noise in the workplace may actually prevent the users from receiving vital feedback from the product. Likewise, even the location of the product in relation to the user's workplace can magnify the effect of 'trivial' usability problems like having to reinsert cassettes frequently when a tape backup machine is located down the corridor (Brooke, 1986).

Similarly, the organisational environment will also affect the usability of a product. On a macro level, the attitudes of the organisation and its employees towards the introduction of an IT system, and the way work is monitored, are bound to affect whether a system is accepted and used to carry out the work. On a micro level, the structure of the organisation, the way people work (individually and in groups), the availability of assistance and the frequency of interruptions are also likely to affect the usability of a product. Unfortunately, items on both levels are often overlooked in product evaluations.

The technical environment, i.e. the software and hardware which is used in conjunction with the product, is also commonly left unspecified. Since the characteristics of the technical environment (such as the speed of the processor, or the layout of keys on the keyboard) may have an effect on the usability of the product, these items should be specified alongside the results of any formal product evaluation. Regrettably, this type of technical information is too frequently left out of evaluation reports.

Summary

This appendix has traced the development of evaluation in context, and highlighted some of the important context issues that have been overlooked in past studies.

Appendix C of this Guide gives a list of principles which encapsulate the MUSiC project's response to some of these issues, and the Usability Context Analysis Guide was developed to provide a means of dealing practically with them.


References

Akeroyd, J. 1990, Information Seeking in On-line Catalogues, Journal of Documentation, Vol. 46 No. 1, 33-52.

Bailey, W.A., Knox, S.T. and Lynch, E.F. 1988, Effects of interface design upon user productivity, In Proceedings of CHI '88 (Washington, May 15-19), ACM, New York, 1988, 207-212.

Banks, W. and Pihiman, M. 1989, MIPS and BIPS are Megaflops: Limits of Unidimensional Assessments, Proceedings of the Human Factors Society 33rd Annual Meeting, 597-601.

Brooke, J.B. 1986, Usability Engineering in Office Product Development, Proc. of HCI '86, People and Computers: Designing for Usability (Cambridge University Press, UK), 249-259.

Bury, K.F. 1984, The Iterative Development of Usable Computer Interfaces, In INTERACT '84, 343-348.

Good, M. 1989, Seven Experiences with Contextual Field Research, SIGCHI Bulletin 20, 4 (April), 25-32.

Holtzblatt, K.A. 1987, Key concepts of phenomenological research methods for studying human-computer interaction, Digital Equipment Corporation Technical Report 520, Nashua, NH, September 1987.

Karat, C-M. 1989, Iterative usability testing of a security application, Proceedings of the Human Factors Society 33rd Annual Meeting.

Miller, R.B. 1971, Human Ease of Use Criteria and Their Tradeoffs, IBM Technical Report No. TR 00.2185.

Neal, A.S. and Simon, R.M. 1984, Playback: A Method for Evaluating the Usability of Software and its Documentation, IBM Systems Journal, Vol. 23 No. 1, 82-96.

Roberts, T.L. and Moran, T.P. 1983, The Evaluation of Text Editors: Methodology and Empirical Results, Communications of the ACM, 26, 4, 265-283.

Rosenbaum, S. 1989, Selecting Appropriate Users for Documentation Usability Testing, In Work with Computers: Organisational, Management, Stress and Health Aspects, Edited by Smith, M.J. and Salvendy, G. (Elsevier Science Publishers, Amsterdam), 620-627.

Wasserman, A.S. 1991, Debate on "Research that Reinvents the Corporation", John Seely Brown, Harvard Business Review, March-April 1991.

Whiteside, J., Bennett, J. and Holtzblatt, K. 1988, Usability Engineering: Our Experience and Evolution, In Helander, M. (Ed.) Handbook of Human-Computer Interaction (Elsevier, Amsterdam), 791-817.

Wolf, C.G. (Organiser) 1989, The Role of Laboratory Experiments in HCI: Help, Hindrance or Ho-hum? (Panel), In Proceedings of CHI '89 (Austin, Texas, April 30-May 4, 1989), ACM, New York, 1989, 265-268.


Further reading

Bias G and Mayhew D (eds) (1994). Cost-Justifying Usability. Academic Press.

CEC (1990). Minimum Safety and Health Requirements for Work With Display Screen Equipment Directive (90/270/EEC). Official Journal of the European Communities No L 156, 21/6/90.

Hix D and Hartson HR (1993). Developing User Interfaces: Ensuring Usability Through Product and Process. New York, USA, Wiley; ISBN 0 471 57813 4.

Jordan (1995). In: Usability Evaluation in Industry, Jordan et al (eds), Taylor and Francis.

Nielsen J (1993). Usability Engineering. San Diego, USA, Academic Press; ISBN 0 12 518405 0.

Nielsen J and Mack RL (Eds.) (1994). Usability Inspection Methods. John Wiley and Sons, New York, April 1994.

Norman DA (1988). The Psychology of Everyday Things. New York, Basic Books; ISBN 0465 06709 3.

Preece J et al (1994). Human Computer Interaction. Addison Wesley.

Wiklund ME (Ed.) (1994). Usability in Practice. Boston, USA, AP Professional; ISBN 0 12 751250 0.

Papers

Bevan N (in press). Measuring usability as quality of use. Journal of Software Quality.

Bevan N (1995). Usability is quality of use. In: Anzai & Ogawa (eds) Proc. 6th International Conference on Human Computer Interaction, July 1995. Elsevier.

Bevan N (1995). Human-Computer Interaction standards. In: Anzai & Ogawa (eds) Proc. 6th International Conference on Human Computer Interaction, July 1995. Elsevier.

Bevan N, Macleod M (1994). Usability measurement in context. Behaviour and Information Technology, 13, 132-145.

Bevan N (1991). Enforcement of HCI? Computer Bulletin, May 1991.

Kirakowski J and Corbett M (1993). SUMI: the Software Usability Measurement Inventory. British Journal of Educational Technology, 24, 210-212.

Macleod M, Bowden R and Bevan N (in press). The MUSiC Performance Measurement Method. In: Usability Measurement - the MUSiC Approach, Bösser T (ed).

Macleod M (in press). Performance measurement and ecological validity. In: Usability Evaluation in Industry, P Jordan (ed.), London: Taylor and Francis.

Macleod M (1994). Usability in Context: Improving Quality of Use. In: G Bradley and HW Hendricks (eds.) Human Factors in Organisational Design and Management - IV. Proceedings of the International Ergonomics Association 4th International Symposium on Human Factors in Organisational Design and Management (Stockholm, Sweden, May 29 - June 1 1994). Amsterdam, Elsevier / North Holland.

Macleod M and Rengger R (1993). The Development of DRUM: A Software Tool for Video-assisted Usability Evaluation. In: People and Computers VIII, Proceedings of the HCI'93 Conference (Loughborough, UK, Sept 1993), CUP.
