RAP/DOULS Usability and Accessibility Testing Plan and Guidance
22 December 2010
1 Why Test?
The best products/items are developed by testing often.
Jakob Nielsen says: "Some people think that usability is very costly and complex and that user tests should be reserved for the rare web design project with a huge budget and a lavish time schedule. Not true. Elaborate usability tests are a waste of resources. The best results come from testing no more than 5 users* and running as many small tests as you can afford." (Source: http://www.useit.com/alertbox/20000319.html)
(* N.B. the 5-user limit is not applicable to accessibility testing)
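The '5 users' figure comes from Nielsen and Landauer's (1993) model, in which the proportion of usability problems found by n test users is 1 - (1 - L)^n, where L is the average probability that a single user uncovers any given problem (about 0.31 in their studies). The short sketch below works through that arithmetic; it is illustrative only, and the 0.31 is their published average rather than anything measured at the OU.

    # Nielsen & Landauer's problem-discovery model:
    # proportion of problems found by n users = 1 - (1 - L) ** n
    L = 0.31  # average per-user discovery rate reported by Nielsen & Landauer

    for n in range(1, 11):
        found = 1 - (1 - L) ** n
        print(n, "users ->", round(found * 100), "% of problems found")

    # With L = 0.31, five users already surface roughly 84% of the problems,
    # which is why several small tests beat one elaborate one.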
He also notes: "On the Web, usability is a necessary condition for survival. If a website is difficult to use, people leave." (Source: http://www.useit.com/alertbox/20000319.html)
There are alternatives to testing with end users, and these include expert review, or heuristic, testing. Heuristic evaluation is conducted by studying a user interface and then forming a judgement about what is good and bad about it. The evaluation is conducted according to a certain set of rules, such as those listed in typical guidelines documents (Nielsen 1993). Heuristic evaluation and user testing should be alternated, because these two usability practices have been shown to find fairly distinct sets of usability problems (Nielsen 1993).
It is the aim of this testing plan to adhere to the basic principles of testing often and conducting expert testing. The outputs after each testing cycle are likely to concern the items themselves, their design, and other related developments such as changes to the VLE. They are also likely to include changes to other design and direction documents, such as the personas and the Roadmap, as we learn more about the facilities that students want and how they want to use them.
The OU has an obligation to ensure usability and accessibility within products. Visit
http://www8.open.ac.uk/about/main/admin-and-governance/policies-and-statements/website-accessibilitythe-open-university
2 What to Test?
The tests will cover all parts of the student study journey associated with the item tested, including:
• How the student is signposted to the asset, e.g. from the study materials, StudentHome etc.
• Installation (if necessary)
• Working with items/using the data, in whatever environment they have been designed for (e.g. VLE, Google, Facebook, mobile)
• Uninstalling (if necessary).
This means that the test materials are always likely to include mock-ups, prototypes, test versions or actual live versions of all of these:
• Parts of the OU environment, such as StudentHome and the module website
• The test items themselves
• Parts of the wider environment, e.g. the student's email or Google account.
To inform the Usability/Accessibility staff prior to the testing sessions, think about:
• Where would the student experience start for this item?
• How does the student engage with the test item to get started?
• What can the student do once the item is working?
• How does the student end or remove the item?
We need to test that we have a common approach to tasks/actions across the strands of the RAP VLE, and to coordinate this, especially when considering new strands.
We then need to produce a summary statement of how the VLE meets usability and accessibility guidelines
for Moodle 2 (see ‘Outputs from testing’ section below).
3 Tests
Process for Expert testing
At any point in a development project, project managers (PMs) may request an IET expert review (expert testing) of functionality within their strand of the RAP. These requests are passed through IET, who will either give a quick response covering the issue, for example checking the accessibility of a service, or suggest that the tool needs further testing within a testing cycle in the labs. It is important that all strands request expert testing before asking for any lab-based testing, to iron out any obvious issues with the interface or design; this will also familiarise the tester(s) with the item ahead of testing. Caroline Jarrett can also be asked to 'expert test against the Personas', which will give other perspectives and will also help with the review process that keeps the Personas current.
You can expert test the following at an early stage:
• Visuals on paper
• Screenshots/eVisuals
• Linked screenshots/eVisuals
• Links to websites or documents
• Links to websites/ideas that do something similar that you might like to adopt as a similar concept
We advise that you always do some expert testing before any idea is developed very far.
To book your expert test, liaise with IET directly, but please inform Kathy McNeely/Nicola Hicks (LTS).
Process for End User testing (8 cycles per year)
The lead time for End User testing is longer: at least a month. The labs have therefore been booked for 8 slots over the year, and the appropriate testers will be invited in (Anna Page in IET books the labs).
Ahead of a regular testing cycle, PMs identify functionality to test within that cycle, which they pass to IET, where the work is checked and resourced. Pass the functionality to IET as early as possible, and at least one month prior to the start date for a cycle; otherwise it will have to be tested in the next cycle (approximately six weeks later). If in doubt, ask IET to check whether testing is required.
Each test opportunity will have a minimum of three participants (but preferably 5 standard students and 5 disabled students), and each participant attends alone.
You can user test:
• Visuals on paper
• Screenshots/eVisuals
• Linked screenshots/eVisuals
• Links to websites or documents
• Prototypes at any stage
• The final product
To book your lab sessions and confirm material for testing, please contact Kathy McNeely/Nicola Hicks
(LTS).
Appendix 1 diagrammatically outlines a typical testing cycle. Some of the milestones mentioned on the
diagram will be explained in more detail later on in this document.
Frequency of testing and testing ‘windows’
There are 8 End User testing cycles during a twelve-month period, roughly every six weeks.
Each End User Test cycle takes place in the Jennie Lee Labs.
Prior to any lab based testing, expert testing needs to have been undertaken to iron out and resolve any
obvious usability/accessibility issues (see ‘Process for Expert testing’ above).
Any student may apply to take part, but we expect to focus on students as noted due to start times/exam
times.
December 9th: First testing window.
January 24th-28th: Students on modules starting in October.
Feb 28th – Mar 4th: Any students.
April 11th-15th: Students on modules starting in February (exam revision time for October starts).
May 23rd – 27th: Any students.
July 4th-8th: Any students.
September 12th – 16th: Students on modules starting in February (brought forward slightly to avoid their exam revision time, but can't avoid exam time for October starts).
Extra dates on the same schedule, if required:
Oct 31st – Nov 4th: Any students (aim particularly for students who are continuing from Level 1 modules with ETAs, as they will have had a quiet time in October).
December 5th – 9th: Students on modules starting in October; continuing students who are waiting for exam/assessment results.
LTS RAP PMs to confirm with their strand lead technical developer (LTD) which testing window informs
which VLE release.
Testing format
The key points are:
• The test opportunity focuses on product/item(s) that are currently being developed, and where changes will be made.
• The people who decide on what changes to make may attend the test opportunity as observers.
We envisage that each test item will proceed through three stages of development:
Concept: the test material and its associated context exist as a vision, plus some mock-ups, wireframes, or a presentation.
Prototype: this can include paper prototypes, but more often there is some amount of interactivity available, leading to functional testing.
Development: the user can use the test item and its associated context approximately as in 'real life'. Pedagogic evaluation is important at this stage to identify alternative learning and teaching environments/techniques.
Accessibility testing will usually require items to be at the prototype or development stage, as the underlying code has an impact on accessibility. At the development stage, it is assumed that the WCAG and LTS guidelines have been followed and incorporated prior to handing over for accessibility testing. Accessibility testing involves testing with assistive technologies to evaluate 'real-life' accessibility.
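As a concrete illustration of the kind of obvious issue that should be ironed out before lab sessions, the sketch below checks a page for two common WCAG failures: images without alternative text and links with no discernible text. This is a minimal, hypothetical example rather than part of the IET toolset; it assumes Python with the third-party BeautifulSoup (bs4) library installed, and automated checks like this complement, but never replace, testing with assistive technologies.

    from bs4 import BeautifulSoup

    def spot_check(html):
        """Return a list of basic WCAG problems found in the given HTML."""
        issues = []
        soup = BeautifulSoup(html, "html.parser")
        for img in soup.find_all("img"):
            if not img.get("alt"):  # alt attribute missing or empty
                issues.append("image without alt text: %s" % img.get("src", "?"))
        for link in soup.find_all("a"):
            if not link.get_text(strip=True):  # nothing for a screen reader to announce
                issues.append("link with no text: %s" % link.get("href", "?"))
        return issues

    # Example: two failures a screen-reader user would hit immediately.
    print(spot_check('<img src="logo.png"><a href="/next"></a>'))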
Usability testing can happen at all three stages, but is most valuable at the concept and prototype stages. In
the first test, all items will be at the ‘concept’ stage. Later tests may have a mixture of items at different
stages.
The testing time can be allocated to a single student journey (e.g. from StudentHome through to deleting the
widget) or divided into smaller tasks/areas to consider. We may decide to bring in elements of other project
strands if this will create a more coherent experience for the student.
The examples below indicate how testing might happen over a full-day or half-day lab session. These vary greatly depending on what is being tested, the types of participants (e.g. disabled or non-disabled) and the number of issues that need to be addressed, and are therefore shown for guidance purposes only.
Example 1: Full-day testing schedule
9:00 Participant arrives for session: 10 minutes preliminary discussion, testing (with a mid-point break), 5 minutes closing
10:30 Second participant's session
12:00 Lunch break, half an hour
12:30 Third participant's session
14:00 Fourth participant's session
15:30 Fifth participant's session
17:00 Debrief with observers
Example 2: Minimal half-day testing schedule
9:30 Participant arrives for session: 10 minutes preliminary discussion, 45 minutes testing, 5 minutes closing
10:30 15-minute break to reset equipment
10:45 Second participant's session
11:45 15-minute break to reset equipment
12:00 Third participant's session
13:00 Lunch break, half an hour
13:30 Debrief meeting
Each testing cycle will be completed ahead of a release, with enough time to feed back and make changes prior to the release. Kathy McNeely/Nicola Hicks (LTS) will liaise with Anna Page (IET) to set up these sessions.
Venue and support for testing
We will use two of the IET labs in the Jennie Lee building:
• The HCI room
• The observation room.
We will use the labs’ facilities to ensure that observers can watch the sessions and hear the participants’
remarks.
As the recording equipment is available, we will record the sessions but we do not plan to make any video
report or highlights because observers will be invited to the sessions. All recordings will be held by IET
according to their usual retention schedule (see ‘Outputs from the testing’ below).
Each testing opportunity will require support:
• Technical assistance from lab personnel, to ensure that the test materials can be loaded onto the lab computers and the sessions can be observed
• Participant recruitment and organisation (see the 'How Participants are organised for testing' section below)
• Preparation of test materials, discussed further below.
'Usability developmental testing' (sometimes called remote testing) involves students testing materials at home, and focuses on the educational aspects (see http://kn.open.ac.uk/sitewide/getfile.cfm?documentfileid=15226). At least a month is also needed to prepare for this. Typically this might involve 10 students, who would be paid around £20.
Remote testing is useful when you need longitudinal studies of a product over a period, when you need more quantitative data, or when you want to make sure that the participant is working naturalistically. It is therefore most useful when dealing with, for example, social technologies or tools that require specific use cases, or when lab-based testing is not possible (e.g. when the labs are snowbound). For more information on when you might use remote testing, contact Anne Jelfs. Remote testing is booked once this technique is identified as being required.
Duties of the Evaluation expert (facilitator)
1. The evaluation experts will organise the appropriate type of testing needed at each stage.
2. Each test will have a facilitator, who is responsible for:
• Collecting test materials ahead of the test (but not creating them: that is the responsibility of the developers)
• Preparing a test script
• Ensuring that the test materials are loaded onto the lab computers and that the sessions can be observed
• Inviting observers to the test
• Drafting the advertisement for students and the specification of which students are appropriate candidates for the test
• Running the test itself, including obtaining the appropriate consent from the participants
• Facilitating the debrief meeting.
The test facilitator will be Anne Jelfs or a member of the Learning and Teaching Development team within IET, Chetz Colwell, or Caroline Jarrett.
Anne Jelfs (A.E.Jelfs@open.ac.uk) is the liaison person for Usability and Pedagogic Evaluation within IET and co-ordinates lab-based testing.
Chetz Colwell (C.Colwell@open.ac.uk) is the liaison person for Accessibility within IET.
Caroline Jarrett is the usability consultant expert.
The choice of test facilitator will depend upon whether the test is aimed primarily at exploring usability, and in which RAP strand (Anne or Caroline), or at accessibility (Chetz), and on workload/availability. This will be coordinated by IET.
For accessibility testing Chetz will test materials through a wide range of assistive technologies.
For the purposes of testing, DOULS will be considered as a RAP strand.
The evaluation experts may be able to respond at fairly short notice to queries which require small amounts of testing or research into accessibility issues (expert testing), but this should not be relied on for full testing.
Personas are used to establish use cases for testing materials and products developed by the University (see http://intranet6.open.ac.uk/teaching/learning-systems/files/learnsys/file/ecms/web-content/2010-07-02Roadmap-Personas.pdf). Caroline Jarrett has provided a set of personas for the OU and will periodically review and add to these personas based on the participants coming through the labs, liaising with Anne Jelfs and Chetz Colwell to ensure that any new user types are identified and included.
Duties of the Project Manager (strand) and/or LTD
Each strand will need to come up with a set of appropriate questions for testing, plus anything else you need to know, and send this information at least a week before testing, along with any materials you wish to test: paper, links or prototypes.
It will not be possible to conduct separate rounds of user testing for each strand, so strands should ideally coordinate which slots they are going to use. Slots should be booked as soon as possible with IET.
PMs of strands may want to observe the user testing.
The PM for Design and Usability would like to keep an overview, so would like to understand what is being tested and, where possible, the results.
If the testing cycles are appropriately spaced then there will almost always be something worth testing. To check whether functionality or design is significant enough to warrant testing, you can simply ask for an expert opinion, i.e. expert testing (see above).
You need to feed back the actions taken against issues raised in testing. You should do this before the next testing cycle by sending a bullet-pointed response to the issues raised during testing, providing a copy to the PM for Design and Usability/Accessibility and a copy to IET (see the 'How to contact IET about testing' section below for details). This feedback will be passed to SeGA and to other bodies to ensure that the OU is complying with its obligations around accessibility and usability.
How Participants are organised for testing
IET takes responsibility for organising the lab-based End User testing and for gathering students to test. Each test opportunity will have a minimum of three participants (but preferably 5 students selected on demographics, module/course level, faculty etc., and 5 disabled students selected on disability).
The administration of this is via the support office in IET. Recruitment is done via Camel or StudentHome well in advance of the testing cycle.
IET has a blanket arrangement with SRPP for all of the RAP testing, which means that the students who participate are added to the list of students who have agreed to test. Visit http://ietintranet.open.ac.uk/research/index.cfm?id=7082 for further information.
Sara Crowe in IET is responsible for administrative aspects of testing support. Sara, with Anne Jelfs, confirms participants for user testing for each testing window; Sara and Chetz Colwell do the same for accessibility testing.
Incentive for Participants
Tasks would typically take up to 1.5 hours in total (plus a break), and students are normally paid £125 (increased recently due to tax) for participating in one of these sessions. If there is a lot to test within a particular cycle then additional testing sessions can be added; IET will organise this depending on the testing requirements.
Participants will have their reasonable travel expenses reimbursed through the usual OU claim process. Sara Crowe (IET) organises payment and travel expenses for participants. Nicola Hicks/Kathy McNeely (LTS) and Sara Crowe/Karen Byrne (IET) will organise the transfer of RAP/DOULS funds to IET at appropriate times for payments to students to be processed. DOULS has limited funds for testing, so RAP will subsidise where required; Nicola Hicks/Kathy McNeely will work this out with Judith Pickering (DOULS PM).
Outputs from the testing
The main record of the test will be a written bullet-point report (plus video footage) with recommendations for improvements; the evaluation experts will also be available to give a verbal briefing after the testing. This report will be entered into a feedback loop.
The video data may be reviewed in MKV format straight after the event; however, an edited replay of the session and a written bullet-point report of the testing will normally be made available one week after testing. The video of the testing session will be available for three months. The report provides the guidance on how to deal with issues arising from testing. These materials are for the RAP programme team members only and are not available to share more widely. The original videos are retained for archival purposes only (see the 'Confidentiality' section below).
Anne Jelfs, Chetz Colwell and Caroline Jarrett provide bullet-pointed output from their testing to the LTS RAP strand PM and LTD, copied to the RAP PM for Design and Usability/Accessibility. LTS will then respond as to how issues are being resolved in the light of testing feedback.
Towards the end of the Roadmap Acceleration Programme, an overall summary of accessibility and usability within the Learning Systems developed within RAP will be provided, as a record of how the VLE is meeting usability and accessibility objectives and to identify where further work is required. Anne Jelfs and Chetz Colwell (IET) will produce this report of the overall findings.
What should be done about issues raised in testing?
Issues that are of immediate concern and which may affect testing will be raised by the evaluation expert straight away; all other issues will go into the testing report. The LTDs should put these into the issue tracker system for the developers to deal with. Feed back the actions taken to IET to ensure that the loop is complete and the outputs of the testing have been incorporated into the development.
Our advice is that LTDs should check progress on issues ahead of the next full testing cycle (i.e. within three months of receiving the report) to ensure that issues are being dealt with, and where appropriate arrange re-testing on specific issues where significant problems have been encountered. PMs should endeavour to provide feedback to the evaluation experts, detailing how issues are being resolved, on a quarterly basis ahead of the quarterly VLE release.
If an issue arises that affects all strands, or significantly impacts on other strands, then the PM should flag it immediately to Nicola Hicks/Kathy McNeely (LTS).
Confidentiality
We do not expect there to be any risks to participants, or any matters discussed that are particularly personal or invasive. To help safeguard participants, we will follow these good practices:
• All participation will be voluntary
• Participants can end the session at any time. The facilitator will also watch out for any sign of participant fatigue or distress and offer to end the session if appropriate
• Participants will be given their token payment at the start of the session, so that they do not feel they must continue with the session if they are uncomfortable
• Participants will be informed that there are observers watching the session. If they do not wish to be observed or recorded then observation will be turned off; the facilitator will take paper notes and later relay the session to the observers
• Participants' involvement will be confidential, unless they specifically request that their names and other details are associated with the project.
How to contact IET about testing?
Kathy McNeely and Nicola Hicks will keep a spreadsheet in LTS of testing dates and what is being tested on each date. This will be available for all to see via Documentum. They can advise on the dates and can book your session for you. They will keep an overview of what is being tested and advise PMs by email when a date is coming up.
For any general enquiries, you may email jlb-lab-enquiries@open.ac.uk with the subject heading "VLE RAP strand <name>", where <name> corresponds to the strand within RAP. The same process should be followed for DOULS testing enquiries; however, you may contact the evaluation experts directly for testing advice or guidance.
4 Future testing beyond RAP/DOULS
Appendix 2 highlights the initial thinking behind trialling a new testing model for RAP/DOULS. At the end of
RAP, the above model will be reviewed with a view to streamlining a similar approach for future VLE usability
and accessibility testing.
There is likely to be further evaluation work conducted around student engagement with the technologies within RAP (either as part of LEAP 2 or contributing to the planning of LEAP 3) to assess how these technologies are being used in practice.
Appendix 1 – Typical testing cycle
The original diagram shows a typical testing cycle as a timeline running from 01/12/2010 to 01/05/2011, with each milestone marked as a diamond (LTS responsibility) or a square (IET responsibility):
10/12/2010 Request expert testing
13/12/2010 Expert testing completed
23/12/2010 Prepare questions and identify the artefacts to test
03/01/2011 Participants recruited for testing
25/01/2011 Material submitted for testing
01/02/2011 User Testing cycle (lab based)
07/02/2011 Edited video and bullet-pointed report available to view
09/03/2011 Feedback response to testing recommendations to Anne Jelfs
16/03/2011 Artefact launch
29/04/2011 Video data archived
Appendix 2: Usability and Accessibility testing and research for the
OU's Learning Systems: using DOULS and the Accelerated Roadmap
projects
Prepared by Liz Burton-Pye (LIO) with Paul Beeby (LTS), Caroline Jarrett (consultant from
Effortmark Ltd.), Anne Jelfs (IET), Jason Platts (LTS) and Will Woods (IET); updated 4 August 2010.
Background
The usability and accessibility testing model required for the DOULS JISC project is one based on a user-centred development process. The model currently in operation for the VLE is based on a quarterly release cycle and the testing of new or changed functionality prior to release (where fixes occur both prior to the release and within future releases).
There is no easy way to locate documentation, guidance or policy on usability and accessibility testing and
research for the OU's Learning Systems. There have however been statements made by experts in the OU
that the current practices are not ideal:
“For example, the VLE has generated access problems for disabled students. We would have liked access to
be higher up the agenda of those setting up the VLE.” Robin Stenham, Student Services’ Curriculum Access
Manager in OpenHouse (Issue number 428, June-July 2010)
Current practices
The Jennie Lee Research labs enable innovative research into how ambient technologies can be used for teaching and learning, and support core services of usability, accessibility and developmental testing, e.g. Tobii eye-tracking services and assistive technologies to support disabled participants.
There are mainstream projects commissioned by Online Services (Communications) and conducted by IET or external usability experts to explore areas of the OU online experience, for example the Study at the OU website. Student Services' Communication Strategy has been, and continues to be, informed by consultation with student focus groups.
For StudentHome, regular user research and testing is undertaken prior to implementing significant changes
and/or new features.
For the mainstreamed VLE, a quarterly usability/accessibility testing cycle is in place at the component level, post-development. Expert testing proves especially useful for accessibility testing. Bugs detected are entered directly into the LTS Bugzilla system for prioritising and actioning in available development windows. For commercial services that are integrated into the VLE, recommendations from OU usability/accessibility testing are fed through the appropriate routes for resolution, e.g. Google Apps for Education. Recommendations of usability and accessibility improvements to Open Source products or modules, for example standard Moodle modules, are fed back through the Moodle forums to ensure that, where possible, all services are as open and accessible as possible.
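To make the 'entered directly into the LTS Bugzilla system' step concrete, the sketch below files a finding programmatically. It is a hypothetical illustration only: the host, product, component and API key are placeholder values, it assumes a Bugzilla instance new enough (5.0+) to expose the REST API plus the third-party requests library, and on older installations the XML-RPC interface would be the equivalent route.

    import requests

    BUGZILLA = "https://bugzilla.example.open.ac.uk"  # hypothetical host

    def file_finding(summary, description, api_key):
        """File a usability/accessibility finding as a bug; return the new bug id."""
        payload = {
            "product": "VLE",          # placeholder product name
            "component": "Usability",  # placeholder component name
            "version": "unspecified",
            "summary": summary,
            "description": description,
        }
        resp = requests.post(BUGZILLA + "/rest/bug",
                             json=payload,
                             params={"api_key": api_key},
                             timeout=30)
        resp.raise_for_status()
        return resp.json()["id"]  # Bugzilla returns the id of the new bug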
There is a range of guidance/information relating to accessibility and usability in the Curriculum Management Guide, the LTS intranet, IET's intranet (including EPD), the Knowledge Network and Cloudworks, to name but a few of the sources available on the intranet. This provides the guidance to follow for testing new learning components/services (e.g. developmental, technical, usability and accessibility testing), and for using those components/services that have already been developed and tested (e.g. VLE sites, structured content).
User feedback is collected via a range of elicited and non-elicited channels:
• Courses survey (students)
• eLearning preparedness survey (staff)
• Computing and Library helpdesks (students and staff)
• VOICE (students)
• VLE feedback form (students)
• Forums (students and staff)
• Study at the OU (students)
...via a range of controlled methods (e.g. SRPP for recruiting students; units recruiting and paying a student panel to be drawn upon for feedback for a specified period) and informal methods (e.g. notices on StudentHome, TutorHome and Platform; consultation with OUSA and the AL communities)
...with varying frequency: monthly, quarterly, ad hoc.
Then there are one-off IET-commissioned research projects by central units around the online student experience, e.g. LEAP 2 and the Computing Guide, and IET-commissioned work by academic units, often tightly connected to modules/courses, programmes and qualifications.
Enhancing current practices
Work is already under way in IET with the Triangulation project, which aims to bring together a number of data sources (e.g. module design and Moodle usage data; staff eLearning preparedness data and student surveys) to provide analysis to drive policy and practice, including around technology-enhanced learning. General learning systems-related feedback could be turned into a set of recommended actions, progress against which could then be tracked and audited.
IET's 'Investigating techniques for remote evaluation studies' project has been approved and is now in the planning stage. DOULS and/or the Accelerated Roadmap could provide a test bed for evaluating some remote techniques. Remote techniques are particularly useful for disabled students, who can then use their own set-up, as well as enabling testing for international users, for instance.
A programme of change management across the university called Securing Greater Accessibility (SeGA) has been commissioned by the PVC-LTQ. One of its key aims is to establish a common web accessibility standard for the OU. The outputs of this work could provide a core set of guidelines to follow to ensure that accessibility and usability are integrated into all learning activity and systems, reducing the need for retrofit solutions after the event.
The proposition to make progress
1. Use the DOULS JISC project to develop and test an enhanced model that can be rolled out to the Accelerated Roadmap project and beyond, including:
• recruiting students and choosing the appropriate methods for testing
• developing and validating personas and user types (students, tutors, administrators) for testing
• demonstrating how testing can be carried out more across the board/holistically and in context, rather than at the component level
• suggesting processes for tracking and progressing recommendations from usability/accessibility testing (and ensuring actions are not based on one person's experience)
• ensuring that testing takes place throughout the development life-cycle, from the early prototyping stage onwards. Initially try the "morning a month" model evangelised by Steve Krug (a model which may only be possible and affordable for tightly managed projects like DOULS and the Accelerated Roadmap).
2. Develop a DOULS usability/accessibility plan which could inform all future learning systems developments at the OU.
3. Out of implementing the DOULS usability/accessibility plan, document:
• the holistic approach and guidance on usability and accessibility testing and research for the OU's Learning Systems (which the Accelerated Roadmap could further test), in the style of a clear and easy-to-follow practical checklist providing the framework, the steps to take, when, by whom, how and how often. This can then be put forward as a set of institutional guidelines and processes after consultation with relevant stakeholders. IET colleagues are happy to lead on the documentation while LTS/LIO colleagues lead on the more practical aspects.
• a transparent process for reporting, processing, tracking and auditing usability/accessibility recommendations and bugs (including ownership for seeing the process through with mainstream learning systems).
4. Explore practical means of deployment, and consider the risks (such as testing not being done at all because the process becomes too heavyweight) and benefits of moving to an iterative testing model and user-centred design approach for all OU developments. Report findings and recommendations to be taken forward outside the project.
5. Approach the Triangulation and Course Business Models projects, the Quality Office and the Student Survey and Statistics Office to flag the need for a university-wide model for receiving, synthesising, analysing and, in some cases, responding to user feedback. Consultation over processes and implementation could go via the Director, Curriculum and Qualifications, to Senior Faculty Administrators and Senior Curriculum Managers.