PHEA ETI Summative Evaluation Report
Patrick Spaven
December, 2013
Contents
Executive Summary
Introduction
    The programme
    The evaluation
    Baselines
    Inputs
The Programme Results
    Part A
    Part B
        ET strategies
        Improved teaching and learning practices
        Productivity improvements
        A community of practice
        New transferable knowledge
        Other outcomes
Success Factors and Constraints
    Programme design, management, and facilitation
    Institutional leadership
    Institutional structures, processes, and culture
    ICT infrastructure and electricity
    External disruption
    ETI governance
    Management and support within the institutional programmes
    Change of scope and direction
    Availability of key personnel
    Synergy with other programmes
Conclusions
    Improved teaching and learning practices and productivity
    ET strategies
    Community of practice
    Research
    Wider awareness and appreciation of ET
    Sustainability
Annexes
    Annex A: Acknowledgements
    Annex B: Abbreviations and acronyms
    Annex C: Key documentary sources
    Annex D: ETI projects
    Annex E: Summative evaluation guidance to ETI institutions
Executive Summary
The programme and projects
1.
The Partnership for Higher Education in Africa (PHEA) Educational Technology
Initiative (ETI) was a four-year programme funded by the PHEA, a consortium of
foundations.1 The ETI was facilitated by a team (‘the support team’) drawn from
or contracted by the South African Institute for Distance Education (Saide), in
association with the University of Cape Town (UCT) Centre for Educational
Technology (CET).2
2.
The ETI began field operations in October, 2008, and ended in December, 2013,
after an 18-month no-cost extension. It had seven participating institutions:
 Kenyatta University (KU), Kenya;
 Makerere University (MAK), Uganda;
 Universidade Católica de Moçambique (UCM)/Catholic University of
Mozambique;
 University of Dar es Salaam (UDSM), Tanzania;
 University of Education, Winneba (UEW), Ghana;
 University of Ibadan (UI), Nigeria; and
 University of Jos (UJ), Nigeria.
3.
The ETI had five expected outcomes:
1. Educational technology (ET) strategies in place and operational in all
institutions;
2. Improved teaching and learning practices in some areas of every institution
through the application of ET; with replication (or momentum towards it) in
other areas;
3. At least three institutions achieve, or are on course for, significant
improvements in productivity through the application of ET for teaching and
learning;
4. A community of practice established around ET for teaching and learning, with
active participation from people in all the institutions, and some from other
African higher education institutions; and
5. New transferable knowledge of how to develop capacity in the use of ET for
teaching and learning in African higher education. Dissemination of that new
knowledge beyond the ETI players.
4.
There was a conscious ‘emergent’ approach, in which detailed programme-level
results were not defined in advance, reflecting the devolved and innovative
nature of the programme.
5.
There were two distinct parts to the programme.
 Part A was primarily concerned with developing ET strategies at the
institutions and scoping and preparing for the projects; and
 Part B was intended to focus on:
o Implementation of the projects;
o A programme of inter-institutional activities; and
o A coordinated research programme.
1 At the start of the ETI, the PHEA consisted of the Carnegie Corporation of New York, the Ford Foundation, the John D. and Catherine T. MacArthur Foundation, the Rockefeller Foundation, the William and Flora Hewlett Foundation, the Andrew W. Mellon Foundation, and the Kresge Foundation.
2 Renamed Centre for Innovation in Learning and Teaching from January, 2014.
6.
There were 26 projects in total in the ETI (between two and six at each
institution).
Twenty-two were referred to as ‘implementation projects’ – projects with tangible
outputs:
 Twenty projects whose outputs were mainly in the form of teaching and
learning products, particularly online courses;
 One project aiming to create an ET policy and strategy; and
 One project aiming to produce the specification for an electronic executive management information system.
The other four projects were termed ‘independent research projects’, with
research as their principal output.
The evaluation
7.
The external evaluation was conducted from July, 2009, to June, 2013, leading to
three principal reports, including this summative one. The methodology was
predominantly qualitative.
8.
The main limitation of this summative evaluation is a lack of quantitative data,
which was intended to be captured by the institutions’ own evaluation activity.
Baselines
9.
When they joined the ETI in 2008/09, none of the participating institutions was
advanced in the use of ET for teaching and learning. Existing human capacity in
ET to a large extent mirrored the duration and depth of institutions’ ET
experience. At the beginning of the ETI, only UDSM, KU, and MAK had units
dedicated to supporting ET. Another important baseline for the programme was the level of research experience and expertise among the ETI participants, which was generally very low.
Inputs
10.
The financial resources provided by the outside funders were as follows:
 Grants to the institutions, including funds for additional research activity,
totalled US$3.2 million. The institutions received these in roughly equal
measure; and
 Other costs – including programme management, support, evaluation, and
the inter-institutional workshops – totalled approximately US$2.45 million.
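Taken together, external funding for the programme therefore came to roughly US$5.65 million, with each institution’s grant averaging approximately US$460,000 (US$3.2 million shared among seven institutions).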
11.
The institutions contributed substantial amounts of staff time and the use of
existing information and communication technology (ICT) and other
infrastructure and consumables.
Results
ET strategies
12.
Draft ET strategies were produced by each institution under Part A of the ETI.
They were based on a template provided by the support team. They addressed
institution-wide issues but confined their focus of implementation almost
exclusively to ETI-funded projects. Most institutions had other, parallel ET
initiatives that were not included in these draft strategies.
13.
Very little progress had been made with the ET strategies by June, 2013. None
had been formally adopted. An intensive effort was made by the support team to
promote strategy revision in the second half of 2013, with some success, notably
at UCM where senior management was involved.
Teaching and learning products
14.
By the end of the ETI, including its no-cost extension, the principal tangible
outputs from the implementation projects were the following:
 Learning management systems (LMS) installed and/or upgraded at all
institutions;
 Approximately 150 new online course modules developed;
 Approximately 100 existing online course modules improved;
 Advanced multimedia produced for 13 courses;
 System established to digitize past examination papers, and 26,520 papers
digitized;
 System established to digitize theses – 4,500 digitized theses (abstract and author details) can be viewed online;
 Substantial amount of distance learning material improved and digitized;
 Lectures for two course modules recorded;
 100 radio lessons scripted, recorded and broadcast, and then packaged for
DVD and intranet;
 Mobile learning platform developed;
 Four courses developed and piloted for smart phones;
 Four courses developed and piloted for two disciplines;
 e-Portfolio models produced for two disciplines; and
 30,000 local history artefacts scanned and uploaded to an institutional
repository.
15.
Of the 20 ETI implementation projects with teaching and learning products as
their tangible outputs, 16 fully achieved or exceeded their original targets.
16.
These targets were expressed as numbers and did not have an explicit quality
dimension. For the online courses, evidence available from external reviews
suggested that quality overall was average to good, given the stage of
development the institutions had reached. Improvements were needed in course
redesign for the online medium, and in opportunities and incentives for student
interactivity and collaboration.
17.
All projects took longer to produce these outputs than was planned. The majority
needed the no-cost extension to complete their targets.
18.
Capacity building was delivered in several different aspects of ET and in different
ways. From the data available, it is clear that:
 Over 120 academic staff received intensive training3 in the production of
online courses or other digital teaching and learning products;
 Over 1,300 academic staff received training in the use of the Moodle LMS
and/or other digital teaching and learning products; and
 Over 3,520 students (mainly at UDSM) were trained in the use of Moodle.
19.
These numbers substantially exceeded those originally planned. This was partly due to additional capacity building by or through the ETI support team, and partly to an unanticipated level of replicated training within the institutions.
3 ‘Intensive’ is defined as having at least two formal training experiences complemented by some element of guided practical experience; or a prolonged series of coaching experiences.
20.
The quality of capacity building delivered by the ETI support team was seen as
consistently high. There is not enough data on the quality of other training to allow any firm conclusions to be reached about it.
21.
There is little data about student learning outcomes from the ETI. From the
feedback that is available, the indications are promising. Students demonstrated
a strong appetite for engaging with online courses and other digital products. The
weakest aspect of the students’ experience was participation in forums and other
forms of interactivity. This was partly due to lack of opportunities – reinforcing
the conclusions of the external reviewers – and partly to weak take-up of
opportunities that were provided.
22.
By the time of the final evaluation, KU and UCM were already experiencing
significant productivity gains from digitization projects for, respectively, past
examination papers (KU1) and distance education materials (UCM3). A second
digitization project at KU – for theses (KU6) – has similar potential.
23.
The other implementation projects focused principally on teaching and learning
quality and staff capacity development. These aspects have potential productivity
benefits, which were not yet being fully realized because most had not
progressed beyond the pilot stage.
Community of practice
24.
The principal examples of professional networking during the ETI were the
following:
 Intensive networking among institutions, which took place at the three inter-institutional workshops;
 The multi-site research initiative (conducted in terms of the third strand of the programme’s research dimension), which was led by a member of the ETI support team and coordinated research activities by participants at four institutions engaged in full research activities (KU, MAK, UI, and UJ) and three that were only partially involved (UCM, UDSM, and UEW). The completed report is titled ‘Factors Influencing the Uptake of Technology for Teaching, Learning and Assessment at Five African Universities’;
 Collaboration by KU, MAK, UDSM, UI, and UJ in the design of the summative evaluation;
 Participation by several ETI team members in e-Learning Africa conferences
during the life of the programme; and
 The e/merge Africa initiative, led by UCT CET with Saide involvement, which by the end of the ETI had experienced a short period of pilot activity and which continued virtual networking through a Facebook group, several members of which were ETI participants.
25.
Other attempts were made by the ETI support team to stimulate activity for a community of practice, but these did not lead to sustained momentum.
Research
26.
Four of the ETI projects – at MAK, UCM, and UEW (two) – centred on the design,
conduct, reporting and dissemination of original research (these four projects
were contrasted with the 22 ‘implementation projects’ and were termed
‘independent research projects’). By June, 2013, one of these pieces of research
– at MAK – had been completed and presented at a conference.4 The two pieces
of research at UEW were at the draft report stage. The research at UCM had not
yet been completed. By the end of the project, in December, 2013, one of the
UEW research projects (UEW3) was approaching finalization. In the other – UEW1
– the research report had been submitted, although the research was undertaken
too late to serve as a baseline for UEW2. The UCM project remained incomplete.
27.
Research was also conducted outside the boundaries of the ETI main projects:
 Nine evaluative case studies – one from MAK, two from UDSM, one from
UEW, two from UI, a joint case study from KU and UDSM, and two by
members of the ETI support team – had been completed and submitted for
publication by Saide in the form of a compilation titled Unlocking the Potential
of ICT in Higher Education: Case studies of the Educational Technology
Initiatives at African Universities;
 Articles and presentations about the UJ e-learning fellowships programme had
been produced by that institution and delivered in publications, face-to-face
(e.g. at e/merge 2012) and via online forums;
 UCM completed a case study ‘Taking Education to the People (TETTP) –
Models and practices used by Catholic University of Mozambique in its
distance education program’ and presented it at the Distance Education and
Teachers’ Training in Africa (DETA) 2013 conference. At the time of the final
evaluation the case study was being prepared for formal publication. The
university was also working on a case study focusing on the use of technology
within its CED (Centre for Distance Education);
 Numerous papers had been authored and delivered to a range of conferences (including the 5th, 6th, 7th and 8th e-Learning Africa conferences; 11th International Educational Technology Conference; Euro-Africa ICT Research Forum; the online conference e/merge 2012: ‘Open to Change’; First African School on Internet Governance; 18th Biennial ICSD Symposium; 2nd e-Learning Update Conference; Distance Education and Teachers’ Training in Africa (DETA) 2013 Conference; and 8th International Conference on e-Learning);
 A multi-site research initiative (conducted in terms of the third strand of the
programme’s research dimension), led by a member of the ETI support team,
was completed: ‘Factors Influencing the Uptake of Technology for Teaching,
Learning and Assessment at Five African Universities’; and
 Two PhDs had been supported by the ETI.
28.
Apart from the case studies and other evaluative research listed in the paragraph
above, very little quality evaluation had been completed by the institutions by
December, 2013, despite the inclusion of evaluations in most project plans and
the requirement that all institutions complete a programme evaluation.
29.
These research outputs are potentially useful to the host institutions. Their
development provided capacity building for the academic staff and research
assistants involved.
Other outcomes
30.
Wider awareness and appreciation of the potential benefits of using ET to support
teaching and learning was an implicit objective of the programme and an explicit
objective of several of the projects. Results of advocacy activity have not been
measured, but there is enough qualitative evidence to show that wider awareness
and appreciation were generated on a significant scale.
4 Ruth Nsibirano and Consolata Kabonesa: ‘Time to Change: Perceptions on Use of Educational Technology for Teaching in Makerere University’ – 18th Biennial ICSD Symposium, July, 2013, Kampala, Uganda.
31.
There is anecdotal evidence that the ETI contributed to enhanced reputation at
two or three participating institutions.
32.
The online course quality improvement process led to the development of an evaluation instrument that may have wider application.5
33.
Core members of the ETI teams have developed their competence in project and
programme design and management, principally through a combination of
experience and informal coaching by the ETI support team.
Success factors and constraints
Programme design, management, and facilitation
34.
A support team, consisting mainly of ET specialists, was assigned to give
comprehensive support to the seven institutions in the main part of the
programme. The team provided workshops, monitored progress and responded
with tailored advice, mentoring where possible, and liaised with other sources of
support where necessary. The nature and performance of the support team was
the most critical factor in the success of the programme.
35.
The shaping of the projects was actively facilitated by the support team, leading
to a degree of harmonization; but there was no blueprint. Every institutional
coordinating team felt that the ETI struck a good balance between devolved
project identification and management – local empowerment – and direction from
the support team.
36.
The ETI as a whole had high-level objectives but did not pre-define how these
objectives should be pursued. Project planning followed an emergent path. The
project teams were not committed to inflexible logical frameworks. Changes in
deliverables were agreed if they could be justified.
37.
The funders had close involvement in Part A of the ETI (i.e. the strategy-development/scoping/planning stage). However, after the start of Part B (i.e.
project implementation/inter-institutional activities/coordinated research), the
funders played a much lower-key role. Agreement to changes at project level was
devolved to the support team. This enabled quick and informed decisions.
38.
There was universal satisfaction with the relevance and quality of the online
course development workshops. The early workshops in particular were valuable
– they were essential launching pads for the projects.
39.
The workshops covered a very broad agenda, and it is not surprising that some
projects did not develop rapid momentum after the delivery of the workshops.
The reasons for this early variation in performance do not seem to be related to
the nature of the support, which was similar in design across the programme. It
is more plausible that it related to local factors – for example, incentivization or
availability of key personnel.
40.
The models for online course structure and for quality assurance were of critical
importance. Local customization of these models was encouraged, leading to
increased ownership.
5 The online course review instrument has since been revised by the Saide project team and is to be released on the Saide website under an open licence.
41.
Support for research was less comprehensive, particularly in the first year of Part
B. This may have been a contributory factor in the chronic research challenges
faced by two institutions. Support for the multi-site research project since 2011
has been more consistent.
42.
The support team’s administrative services were appreciated. They were
described by those in the participating institutions as responsive, appropriately
persistent, friendly and personal.
43.
The three annual inter-institutional workshops contributed to progress in varying
ways. There is consensus among the stakeholders that they were worth their
considerable investment.
 The first workshop served its primary purpose of contextualizing the individual
programmes and projects. It did not make a noticeable difference to the
proposed overarching research project and inter-institutional networking and
collaboration;
 The second workshop added greater value. Participants brought to the
workshop at least a year of experience of working with ETI projects and were
able to share their experiences to date, and to engage more meaningfully in
discussions about what was needed in the final phase; and
 The final workshop, in March, 2012, generated detailed planning for the final
phase of the programme.
44.
Other aspects of programme design, management and facilitation made a
difference.
 There was a relatively long time frame, which allowed for overruns and
changes of scope;
 The ETI support team had a large and flexible enough central budget to
provide additional capacity development and support visits where necessary;
 The hands-on, real-time approach to evaluation was found to be helpful, with
the evaluator regarded as someone who understood the institutions, and who
acted as a formative resource and not an ‘inspector’;
 The seven institutions worked to the same high-level agenda and time frame,
providing an element of competition that acted as an incentive, particularly
towards the end of the programme; and
 After an initial upfront payment to seed project activities, funds were released
against planned deliverables and not in advance. This was effective as an
accountability mechanism, with minimal disadvantages.
Institutional factors
45.
The involvement of apex and other senior management was a major factor in the
programme’s success. In one institution, it was critical. The objective of apex
management involvement at the start of the programme was achieved. The
memoranda of agreement were signed by the vice-chancellors (VCs) (or
equivalent), and accountability was pursued through this channel when needed.
Continuing apex management involvement varied across the institutions and over
time as the ETI rolled out, but it was clear that the ETI was on the radar of at
least one member of apex management at every stage of the programme. This is
an indicator of apex management’s buy-in and commitment to ET.
46.
Universities traditionally have oriented their reward and recognition systems
towards published research. This was the case at all the ETI participating
institutions. None had an institution-wide mechanism in place for rewarding
either course development or effective teaching. All experienced difficulties in
motivating people, particularly in the early part of the programme when the
learning curve was at its steepest. Later in the programme, the institutions
reached the point where the learning curve began to flatten and the benefits of
ET – including time saved – became more tangible, providing a degree of auto-incentivization.
47.
Slow Internet speeds, network and power outages, and shortage of equipment
for staff and students alike were a discouraging factor in the early stages of the
programme, in particular for people trying to develop and deploy online courses.
ICT infrastructure improved substantially, however, as the programme
progressed. The main problems in the later stages were electricity and network
instability, particularly at the Nigerian institutions.
48.
The ETI work at several institutions was disrupted by factors other than power
and ICT outages. The most extreme example of this was at UJ, which was
affected by repeated bouts of insecurity in the city and its surrounding areas.
Strikes were another disruptive factor. This problem particularly affected MAK
where, for example, the piloting of the e-portfolios was seriously delayed due to
strike action.
49.
Governance should ensure accountability but also effectiveness, through
monitoring, advice and, where necessary, sanctions. Only one institution had a
consistently effective, formal governance process with regard to the institution’s
ETI project involvement. At the other institutions, in the absence of such a formal
governance setup, contact with senior management took place as needed, usually
initiated by the programme coordinator. For getting permissions, this worked
well; but it was not a satisfactory long-term arrangement. No matter how effective and engaged the programme coordinator may be, it is important to have an external perspective, especially on difficult decisions about performance issues.
50.
The building blocks of ETI organization were the project teams. These teams
were not meant to be autonomous. They were intended to work as part of an
overall programme, facilitated by, and interacting through, a programme team
led by a coordinator. In this way, a supporting environment would be created,
synergies between projects picked up, and relations managed with the wider
institutional community. The programme coordinating team was intended to deal with issues such as governance, sensitization, advocacy, and development of strategy for the future.
51.
There was no single model structure for coordination in the programme. Each
institution found its own way of doing it. Styles of management and teamwork
also differed. Most institutions developed effective coordination processes and
good levels of teamwork. In one institution, this did not extend to the entire ETI
programme, with negative consequences. The projects at another institution
operated to a large extent independently of one another. This resulted in missed
synergies and the failure to map out a clear process for sustainability.
52.
There are examples of projects that would have benefited from more systematic
project planning and implementation. The research projects in particular required
meticulous planning and management. There were challenges in this regard at
two institutions. Other projects, however, benefited from a lack of rigidity in
project planning.
53.
For most project participants, it was their first substantial experience of
developing online courses or other ET applications or of producing major pieces of
research. Whatever the quality and quantity of formal training, they needed
regular technical support as they progressed. The ETI support team – with all
members based in South Africa – offered periodic support but was not on hand all
the time. A local support framework was needed as well.
54.
It was the programme coordinators’ responsibility to build the framework, by
identifying support needs, locating the support, and brokering the relationships
that would activate it. It took time for some of the coordinators to appreciate the importance of such a framework.
55.
Three institutions (KU, MAK, and UDSM) had units with responsibility, in principle
at least, for the mix of support needed for ET development. Two of these
performed well throughout the programme, although the fact that one of the two
has limited resources is likely to make sustainability more challenging. The third
unit was not at first mobilized to provide the necessary support, which led to
delays and missed opportunities. At other institutions, support was provided on
an ad hoc basis. At times this was effective enough, but it is probably not a sustainable option in the long run.
56.
At three institutions, slow progress in developing online courses was transformed
by taking team members away on e-content development retreats – residential
workshops of three days or more. Not only did the retreats break the back of the work deficit, but they also reinforced relationships, not least with the technical support personnel.
57.
The ETI was dependent chiefly on the programme’s intellectual and social
resources at each institution. No amount of financial resources or external
support would have been able to make up for deficits in those areas. Although
there were considerable variations in ET experience and expertise, the level of
commitment and professionalism among the institutional teams was generally
high. In the long run, this was the more important factor.
58.
All institutional programmes suffered from personnel availability issues of one
sort or another. No participant was dedicated full-time to the ETI; in fact most
had positions that involved substantial commitment to other work. Even with the
most motivated people, there were occasions when they could not find time to
take ETI work forward, especially when it meant working with others with the
same constraints but different timetables. These were not optimal conditions, and
led to delays.
59.
Even if it had been possible to create and fund full-time project posts, however,
this would not have been welcomed. It was important to have the participants
working within the normal fabric of the university and not in bolt-on projects.
This was important for credibility during the programme and for sustainable
mainstreaming in the future. Some degree of formal workload alleviation might
have offered a solution to this problem. If not, it was even more important that
the institutions adequately recognize this type of work in their reward systems.
60.
Most institutions had other ET programmes running parallel to the ETI. Ideally
these programmes should have been managed either by the same people or in
close liaison with one another. In that way, they could have ensured that they
shared resources, applied learning from one another, and avoided duplication of
effort. Most institutions captured synergies from parallel projects and
programmes with ET dimensions.
Conclusions
Improved teaching and learning practices and productivity
61.
By June, 2013, significant progress had been made in improving teaching and
learning practices. The numerical targets set for ET products and capacity
building in the original project plans were exceeded overall. Three institutions
substantially exceeded their targets for at least one project. Although there was
variation in overall progress among the institutions – eight projects had not fully
achieved their targets by June, 2013 – every institution completed at least one
project with objectives fully met or exceeded. By the end of the project, in
December, 2013, of the 20 implementation projects with teaching and learning
products as their tangible outputs, five had substantially exceeded their targets,
eleven had fully achieved their targets, and four had partially achieved their
targets.
62.
These findings represent a successful overall result for improved teaching and
learning practices. The progress was made against very low baselines for the use
of ET in all the institutions. The ETI can be credited as the main vehicle for ET
product development at each of the seven institutions in this period.
63.
The ETI’s model for support to the component projects, and the quality of
delivery of this support, made a major contribution to the success of the projects.
It was universally appreciated.
 Project identification and planning during what was known as Part A of the
programme was substantially devolved to the institutions. This approach
generated a strong sense of ownership: the institutions saw the projects as
‘theirs’;
 The support in Part B was largely delivered through two ET specialists, each assigned to three or four institutions; and
 Capacity building in this model was incremental and cumulative, designed to
be delivered in ‘absorbable’ packages.
64.
Several other features of the programme design and execution also promoted
successful results for improved teaching and learning practices.
 The time allocated to the programme, including the no-cost extension, was
generous. It allowed for overruns and changes in scope and direction;
 The funding model was also effective. Funding was sufficient for the work,
both planned and unplanned, and allowed for additional support. Given limited
absorptive capacity, there would probably have been diminishing or even
negative returns on substantially more funding. The processes for presenting
business cases and accountability for funding were disciplined but not
burdensome;
 Apex management from every institution was involved in Part A to an
acceptable extent. As Part B progressed, involvement varied, but at every
institution the ETI coordinating team had regular access to at least one
member of apex management;
 Although the intended community of practice of researchers and practitioners
did not take off, networking at the annual inter-institutional workshops helped
people to feel part of a bigger enterprise. The fact that institutions were
required to report on progress at the workshops injected elements of
competition and accountability; and
 The role of the institutional programme coordinators was important in dealing
with issues that defined the ETI as a programme and not just a set of
disconnected projects. Coordinators mostly performed their roles well.
65.
The greatest areas of challenge were the following:
 The project overruns generally left insufficient time for evaluation and
knowledge sharing. This will need to be addressed by the institutions
themselves;
 Quality improvement of online courses may require more capacity building. This could be partly addressed by more uniformity of course structure, with built-in requirements for interactivity and collaboration;
 Quality assurance in the programme mainly came from outside the institutions. This is not a sustainable option. The institutions need to integrate ET for teaching and learning into their wider systems for quality assurance;
 The application of rich multimedia to online courses was a big challenge. Two of the three projects with multimedia as their central focus made faltering progress. A more comprehensive package of support for multimedia than the ETI was able to provide is needed;
 The depth and universality of the capacity gaps in some areas was a factor in the late-running, and in a few cases underperformance, of projects. A more systematic skills audit in Part A might have been worthwhile;
 The incremental model of capacity building did not always work optimally. There was sometimes little or no progress between increments, and some capacity building had to be repeated. The reasons for these deficits were usually grounded in institutional challenges such as the lack of incentives and absences of key personnel; and
 Although self-motivation was high among some project participants, this was not always sufficient to maintain momentum, particularly in the middle period of the programme. No institutional incentive systems recognized the development of capacity and products in ET for teaching and learning. Even if participants were not seeking monetary and/or status rewards for their commitment to the programme, the lack of recognition was not helpful. In addition, no formal alleviation of workload was offered.
ET strategies
66.
Although, at the beginning of the ETI, some institutions had ICT and overall
institutional strategies that referred to ET for teaching and learning, none had an
agreed road map for its development.
67.
The lack of progress made with the ET strategies after Part A did not in general
impede the ETI teams in effectively pursuing ETI projects. However, there are
complex institutional issues surrounding the development of ET in the longer
term that require strategic approaches. The chances of sustainability of the
progress made through the ETI would be increased by the effective design,
adoption and implementation of ET strategies.
68.
The best opportunity for revisiting the draft strategies would have been in the
final months of the ETI. This opportunity, however, was not seized.
Community of practice
69.
The expected community of practice did not take shape in any
significant sense. There were several reasons for this, which probably worked in
combination against the investment of the necessary time and resources.
 There appears to have been a lack of appreciation of the benefits of
collaboration;
 At successive inter-institutional workshops, intensive efforts were made to
kick-start a community of practice, but these efforts did not generally extend
to the periods between the workshops. There was also no tailor-made virtual
platform on which the community’s online interaction could have been built;
 Competition was identified by some informants as a factor that worked
against collaboration; and
 There was limited scope for face-to-face collaboration between the
institutions. ETI participants had to find time and other resources to organize
and conduct visits to other institutions for this type of interaction. In their
eyes, there were more pressing demands, particularly on their time.
70.
The e/merge Africa initiative may provide the platform for the community of
practice envisaged by the ETI. In 2012, at the fourth e/merge Africa online
conference, nine of the 40 presentations emanated from PHEA ETI involvement.
Later in 2012 there was also active participation of several members of the UJ e-learning fellowships group in an e/merge Africa online seminar. Virtual
networking continued through a Facebook group,6 which by the end of December,
2013 had 304 members (about 14 of whom were participating members of the
ETI programme support or project teams). The e/merge Africa initiative is a
useful vehicle for exchange of information, although there is no evidence yet of
collaborative spin-offs.
Research
71.
The agenda that was originally set for the research dimension of the ETI was
elaborate and ambitious. It overestimated the interest and, particularly, the
capacity in research among the university personnel attracted to the ETI. The
pan-institutional research project in particular was too elaborate a model. There
was also insufficient consensus within the ETI support team as to its rationale.
Even if there had been greater research capacity and interest, in retrospect it
seems it may have been conceptually flawed to combine research and
implementation from the outset. The more logical timing for the type of research
envisaged in strand two of the research component7 was later in the programme,
or even on conclusion of the programme, once the teams had more experience
with the ET domain.
72.
The devolved evaluation agenda too, particularly the scale of the summative
evaluations,8 was over-ambitious. To work well, it would have needed prior
capacity development generally in the teams, more hands-on support from the
external evaluator and/or the ETI support team, and probably a skilled evaluation
focal point in the institutions. It also needed its own breathing space. This space
was restricted by the project overruns.
73.
Nevertheless, several pieces of research have been produced in terms of the first
strand of the programme’s research dimension: by the end of the project
(December, 2013), one of the independent research projects had exceeded its
targets, while the remaining three were partially completed. They will primarily
be of use to the institutions concerned. They offer limited scope for the transfer
of knowledge. In addition, nine evaluative case studies for publication as a
compilation had been produced. Furthermore, a UCM case study, ‘Taking
Education to the People (TETTP) – Models and practices used by Catholic
University of Mozambique in its distance education program’, had been presented
at the Distance Education and Teachers’ Training in Africa (DETA) 2013
conference. The university was also working on a case study focusing on the use
of technology within its CED. Finally, numerous papers had been authored and
delivered to a range of conferences.
6 The Facebook group can be found at www.facebook.com/groups/137699809709364.
7 This ‘second strand’ was envisaged as a three-year research strategy intended to complement and inform research work at institutional level, and to ensure that local-level research activities were conceptualized and developed under the auspices of a common, overarching framework. This framework would be created prior to commencement of the ETI projects, and would include the mapping of research questions.
8 The evaluative research component to provide summative assessment (both process and results) of individual projects was one of the types of research included in the first research strand.
74.
Two further research activities, however, may address the ‘transferable’ element
in the outcome: the multi-site research initiative (conducted in terms of the third
strand of the programme’s research dimension), culminating in the completed
report ‘Factors Influencing the Uptake of Technology for Teaching, Learning and
Assessment at Five African Universities’, and the current external summative
evaluation.
Sustainability of outcomes
75.
The progress that these institutions have made in ET product development is
likely to be broadly sustainable owing to the demand-side momentum for these
products and approaches.
76.
The best prospects for sustainability and continued forward momentum are where
improvements have become institutionalized in one way or other, for example:
 By being linked to economic viability;
 By being driven by strong teamwork and a robust technical support unit;
 Where apex leadership has identified itself strongly with the progress; and
 Where there has been a broad and systematic process of awareness raising
and sensitization.
77.
Critical mass is likely to play a part. At three institutions, at least 40 online course modules each were produced, generating broad-based buy-in to the use of ET.
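Set against the approximately 150 new and 100 improved online course modules produced across the programme as a whole, these three institutions appear to account for a substantial share of the total output.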
78.
The prospects for sustainability of the research outcomes supported by the ETI
are not as good.
Introduction
The programme
The Partnership for Higher Education in Africa (PHEA) Educational Technology Initiative
(ETI) was a four-year programme9 funded by the PHEA, a consortium of foundations10
originally established in 2000 to coordinate their support for higher education in Africa.
The ETI was facilitated by a team (‘the support team’) drawn from or contracted by the
South African Institute for Distance Education (Saide), in association with the University
of Cape Town (UCT) Centre for Educational Technology (CET).11
The ETI began field operations in October, 2008, and had seven participating
institutions:
 Kenyatta University (KU), Kenya;
 Makerere University (MAK), Uganda;
 Universidade Católica de Moçambique (UCM)/Catholic University of Mozambique;
 University of Dar es Salaam (UDSM), Tanzania;
 University of Education, Winneba (UEW), Ghana;
 University of Ibadan (UI), Nigeria; and
 University of Jos (UJ), Nigeria.
The programme proposal document12 (‘the ETI proposal’) contained the following vision
for the initiative: ‘To support interventions in partner universities to make increasingly
effective use of educational technology to address some of the underlying educational
challenges facing the higher educational sector in Africa.’
The ETI’s strategic objectives were to:
 Support teaching and learning initiatives that integrate educational technology (ET);
 Promote collaborative knowledge creation and dissemination;
 Get core institutional systems to work so that they support teaching and learning
more directly; and
 Research and report on educational technology activity in African universities by
means of a long-term project.
The following set of five expected outcomes was agreed by the ETI support team in
2009:
1. ET strategies in place and operational in all institutions;
2. Improved teaching and learning practices in some areas of every institution through
the application of ET; with replication (or momentum towards it) in other areas;
3. At least three institutions achieve, or are on course for, significant improvements in
productivity through the application of ET for teaching and learning;
4. A community of practice established around ET for teaching and learning, with active
participation from people in all the institutions, and some from other African higher
education institutions; and
9 The ETI is often referred to as a project. In this report it is called a programme to distinguish it from the 26 ‘projects’ that it encompassed.
10 At the start of the Initiative, the PHEA consisted of the Carnegie Corporation of New York, the Ford Foundation, the John D. and Catherine T. MacArthur Foundation, the Rockefeller Foundation, the William and Flora Hewlett Foundation, the Andrew W. Mellon Foundation, and the Kresge Foundation.
11 To be renamed Centre for Innovation in Learning and Teaching from January, 2014.
12 Effective Technology Use in African Higher Education Institutions: A proposal for Phase Two of the PHEA Educational Technology Initiative. Saide and CET at UCT, April 2008.
5. New transferable knowledge of how to develop capacity in the use of ET for teaching
and learning in African higher education. Dissemination of that new knowledge
beyond the ETI players.
There were two distinct parts to the programme:
 Part A of the ETI was primarily concerned with scoping and preparing for the projects
and other activities at the institutions. Each institution prepared an ET strategy in
which the projects and other activities were located. Part A was completed in
February, 2010, when the last of the institutional plans for Part B was approved.
 Part B was intended to focus on the following:
o Managing successful implementation of the strategies and projects;
o Developing a programme of inter-institutional activities (including inter-institutional workshops); and
o Implementing aspects of the coordinated research programme (which comprised
three strands).
Part B commenced early in 2010 and was scheduled to end in June, 2012. There was, however, a no-cost extension of 18 months, of which all the institutions took advantage, to a greater or lesser extent, to complete late-running projects, complementary activities, and reporting.
There were 26 projects13 in the ETI, with between two and six at each institution. (See
Annex D for the full list of the projects.)
Of the 26 projects, 22 were referred to as ‘implementation’ projects – projects with
tangible outputs (in the form mainly of teaching and learning products). This included
the e-learning fellowships project at UJ, where the main focus was capacity building but
with online courses developed in the process.
Fifteen of the implementation projects – at least one at every institution – had the
creation of online courses as a principal output. The other seven implementation projects
had the following as their original principal tangible outputs:
 Digitized exam papers and theses (KU1 and KU6);
 Digitization of distance learning material (UCM3);
 A tele-classroom facility – with multimedia material available online and via DVD
(UI3);14
 Course content produced for radio broadcast (and also available online) and the use
of mobile phones – both for distance education (UI4);
 e-Portfolio frameworks for students and academic staff (MAK3);
 The functional specification for an electronic executive management information
system (KU5); and
 An institutional information and communication technology (ICT) policy and strategy
(UCM1).
In addition to the 22 implementation projects, there were four projects with research as
their principal output:
 Research into the context of ET at their institutions (MAK2, UEW1) and, in one case (UCM5), also a wider context; and
 Evaluative research on the introduction of Moodle and online courses (UEW3).
13 Apart from these 26 projects, there were other activities and outputs such as case studies. These are not referred to as projects.
14 The tele-classroom aspect was dropped mid-project, but the other outputs were retained.
The evaluation
External evaluation followed a real-time model. The evaluator maintained a watch on the
programme as it unfolded from September, 2009, to June, 2013, and, when appropriate, offered suggestions for improvement, both to the support team and the institutions.
There were also three phases of more intensive engagement.
1. The first, at the end of Part A, concluded with an evaluation report in February, 2010,
that was summative of Part A, appraising of Part B, and formative of the programme
as a whole;
2. The second phase was an interim evaluation of Part B in early 2011, roughly a year
after it had commenced. The interim evaluation had two principal functions, as
follows:
o To report to stakeholders on progress in the programme – to provide evidence of
what was being achieved for the resources, including people’s time, that were
being devoted to it; and
o To identify strengths, weaknesses, opportunities and threats that could help the
programme partners to build on success and manage challenges in the time
remaining of the programme.
3. The third and final phase of intensive engagement opened in October, 2012, and
ended in June, 2013,15 leading to the current summative evaluation report. It has the
principal aim of providing practicable findings and recommendations on how to
promote the effective use of ET for teaching and learning in higher education
institutions in sub-Saharan Africa and beyond.
Evaluation methodology
There were four principal elements in the evaluation methodology over the three-and-a-half years of periodic engagement:
1. Three visits to each of the seven institutions,16 which included interviews and
facilitated group discussions with institutional stakeholders;
2. Participation in the three inter-institutional workshops in Johannesburg (2010, 2011,
and 2012);
3. Examination of reports from the institutions and the ETI support team, as well as from the external quality review of the online courses; and
4. One-to-one interviews with members of the ETI support team.
The most important element of the methodology was the visits to the participating
institutions. The focus in the first two rounds of visits was mainly on the projects. In the
summative evaluation round of visits, the focus also included wider factors and
outcomes.
In addition to the expected outcomes identified by the ETI support team, further
potential outcomes were identified through collaborative activity involving
representatives of the seven institutions between April and July, 2011. The collaboration
helped to inform the summative evaluation guidance (see Annex E), particularly the
evaluation framework.
15 Active evaluative work ended in June, 2013. However, further summative evaluation reports were subsequently received from ETI participating institutions; and updates on key developments between July and November were provided by the support team. Relevant findings from this material were incorporated into the final report.
16 UJ was visited once – at the end of Part A of the programme. In 2011 and 2012, the UJ team met with the evaluator in Abuja because of security concerns in the Jos area.
One of the benefits of the regular engagement in this evaluation model is that it enabled deep familiarity with the programme, which facilitated the use of process tracing – a methodology that retraces, stepwise, the activities that led to the outputs. For example, a completed online course can be traced back through quality review, development workshops, and initial training to establish how the programme contributed to it.
Limitations
This pan-institutional summative evaluation was intended to build on summative evaluations that were to be completed by each institution before the evaluator began his
site visits in October, 2012. The summative evaluation guidance, including model survey
formats, was provided to the institutions in May, 2012. Online surveys were created for
each institution to use if they chose.
The visits – three or four days at each institution – were designed to complement the
institutions’ own evaluations and not to conduct surveys. Unfortunately, no summative
evaluations were completed by the time of the visits. Individual project reports were
provided in some cases, but these were principally inventories of activities and outputs.
By far the most useful evaluative material available at the time of the visits was in the
form of case studies produced by UDSM and UI. However, these covered only four of the
26 projects.
The dearth of institution-level evaluation material – particularly survey results – meant
that the external evaluator had to rely heavily on qualitative data he gathered during the
visits. On the whole, these encounters were highly productive. The evaluator’s real-time
engagement with the programme for nearly four years supported the analysis and
contextualization of this qualitative material.
(For a wider discussion of evaluative research in the ETI, see the section New
transferable knowledge, where a sub-section Evaluative research focuses on evaluative
research as part of the first strand of the research dimension.)
The evaluator is not an expert in ET and was therefore not qualified to personally assess
the quality of products such as online courses. Instead, he looked for evidence of
appropriate quality assurance processes and was able to draw on the findings of the
most extensive of these processes: an external review of online courses that had been
conducted across the ETI programme.
The current summative evaluation report covers the standard programme value chain of
inputs, processes/activities, outputs, and outcomes. It locates them, though, within a
wider context, where the programme is acknowledged to be affected by the institutional
environment, which it in turn shapes to some degree. The evaluation looks closely at this
wider context for evidence of factors that supported and hindered progress in the ETI.
Baselines
ET usage
When they joined the ETI in 2008/09, none of the participating institutions was
advanced in the use of ET for teaching and learning. Across this baseline, there was
some variation:
 UEW, UI, and UCM had very little experience of ET for teaching and learning – at best
one or two isolated applications;
 UJ had experienced several, but somewhat fragmented, initiatives; and
 KU, UDSM, and MAK had the most experience. KU’s ET activity was well established, mostly channelled into distance education rather than blended learning. UDSM and MAK had had mixed results with ET and significant sustainability challenges.
All the institutions except UEW and UI had a learning management system (LMS) installed and in active use. UEW had trialled the use of Moodle, but had not progressed further.
 Some of the UCM sites were using Moodle but were not networked;
 UJ, MAK, and UDSM had previously used KEWL (Knowledge Environment for Web-based Learning) or Blackboard – or both – and were considering switching to Moodle; and
 KU already used Moodle, although other systems co-existed there in the context of external partnerships.
ET capacity
Prior to the inception of the programme, existing human capacity in ET in the institutions
mirrored to a large extent the duration and depth of institutions’ ET experience. UEW
bucked this trend somewhat in that it had several people with professional education in
ET (and in one case considerable experience of it), but all obtained outside of UEW. Of
the participating institutions, UDSM and KU had the most capacity in ET at the start of
the ETI. UDSM’s capacity was concentrated in its Centre for Virtual Learning but
available to support blended teaching and learning throughout the university. KU’s ET
capacity was mainly focused on distance education.
By the time Part B of the programme was under way, the ETI programmes at UJ and MAK were led by personnel who had graduated from, or were undertaking, the MEd (ICT) at UCT. This was an important factor in maintaining momentum and innovation in the ETI at these two institutions, in both cases against considerable odds.
Only UDSM, KU, and MAK had centralized support units dedicated to the development of
ET.
Attitudes
Attitudes towards ET – among teaching staff, other university personnel, and students –
are an important dimension of a baseline. Expectations, interest, and demand are
variables that can help or hinder a programme like the ETI.
We do not have comprehensive baseline data on attitudes in the institutions. However,
we know that UDSM, UEW, UJ, and MAK had experienced ET programmes that had not
led to lasting positive outcomes and had in varying degree left a legacy of scepticism.
This posed both challenges and opportunities for the ETI at these institutions.
UDSM, for example, had experienced difficulties developing capacity in ET and had had
unsatisfactory experiences with other LMSs, as a result of which ET’s reputation had
been tarnished. The ETI coordinating team reported a proprietorial attitude among staff
to their materials: a reluctance to put them online. This was manifest in Part A of the
programme implementation, when attracting people to the programme was difficult.
Once Part B was under way, however, the ETI gained and maintained momentum at
UDSM.
UJ had benefited from externally funded ICT infrastructure, but had not developed
capacity in a sustainable manner under these programmes. An ETI coordinating team
member said: ‘They were just offering workshops and not nurturing people.’ However,
the first round of e-learning fellowships – funded by the Carnegie Corporation – had
begun to promote more positive attitudes about ET at UJ.
ICT and electricity
Because ET relies heavily on the scale and quality of ICT infrastructure and equipment,
the institutions’ baselines for this factor are relevant.
At the start of the ETI, Internet bandwidth, local networks, and the availability of
university computer terminals and/or Wi-Fi for teachers and students alike were far from
adequate at every institution. UCM had the weakest infrastructure, with very low
bandwidth and fragmented networks.
There were regular power outages in most of the programme locations. At UI these were
daily and often for several hours. The situation was not much better at UJ. At the other
institutions, outages were less frequent – typically one to three times a week.
Although these challenges clearly did not deter the institutions from committing to the
ETI programme, they had some impact on the teams’ advocacy efforts among wider
groups. The teams reported scepticism among some colleagues about the viability of ET
in this environment. This was most pronounced at UCM.
Research
Another important baseline for the programme is that of research experience and
expertise among the participants in the ETI. It was generally very low.
 There had been no significant published research in ET at any of the participating
institutions;
 Only one member of the ETI coordinating teams – at UI – had previously published in
an international, peer-reviewed journal;
 UCM and UEW had very weak research cultures generally – yet between them they
opted for three out of the four full research projects in the ETI; and
 Even the more experienced researchers lacked confidence in analysing qualitative
data.
Inputs
The financial resources provided by the PHEA ETI funders were as follows:
 Grants to the institutions, including funds for additional research activity, totalled
US$3.2 million. The institutions received these in roughly equal measure; and
 Other costs – including programme management, support, evaluation, and the inter-institutional workshops – totalled approximately US$2.45 million.
The institutions contributed substantial amounts of staff time and the use of existing ICT
and other infrastructure and consumables, but these contributions were not formally
recorded.
The programme did not fund the salaries of project teams in the institutions. Some
programme funding was used as compensation for participants’ time, but no institution
funded a position out of the ETI programme funds. Several institutions paid graduate
students to provide ICT support, and UI had a postgraduate student available for specific
project administration. These participants were paid for work done, however; these were
not full-time positions.
Intangible inputs such as knowledge, skills, and experience are referenced in the section
Success Factors and Constraints.
The Programme Results
This section sets out and analyses evidence for the results of the ETI at output and
outcome levels. It takes the ETI’s planned results as the main frame of reference.
Results frameworks
The five expected outcomes for the ETI as a whole have already been listed. These
programme-level results were broadly drawn and not accompanied by an overall
performance framework with indicators and means of verification (MOVs). Only the first
expected outcome had indicators. This was intentional. There was conscious adoption of
an ‘emergent’ approach in which detailed, programme-level results were not defined in
advance, and this approach reflected the devolved and innovative nature of the
programme.
Each institution produced a logical framework as part of its submissions for Part B
around the end of 2009. Each contained a ‘vision’ and a set of ‘long-term outcomes’.
With one exception (UCM), the outcomes were not accompanied by indicators or MOVs.
Below the level of long-term outcomes were outputs – generally one per project –
representing the principal deliverable, such as: ‘a selection of UDSM courses are
available on Moodle’; or ‘distance learning tutorials delivered through radio broadcast
and mobile phone’ (UI).
Output targets were set for quantity and timing. The quality of the outputs was not predefined through indicators or any other means. This is not unusual; quality standards are
often difficult to define in advance. This does, however, pose challenges for evaluation.
Quality was addressed in the ETI – but mainly retrospectively through peer or external
expert review, or in some cases through self-assessment against a quality framework.
Part A
Part A was the scoping phase of the programme and appropriately had a very simple
results framework, a set of four intended outputs, as follows:
1. A model for engaging institutions in the development of effective, integrated ET
strategies;
2. Seven comprehensive ET strategies and, within them, a number of projects designed
to put the strategies into practice;
3. A network of researchers and practitioners17 established across the participating
institutions; and
4. A coordinated research programme, comprising local-level research activities
conceptualized and developed under the auspices of an overarching set of questions.
The evaluation of Part A came to the following conclusions about these outputs:
A model for engaging institutions in the development of effective, integrated ET strategies
 There was very good engagement between the ETI programme support team and the institutions. The evidence from key informants, and direct and indirect observation of the interaction between the two groups, points unequivocally to a very constructive relationship, one based on respect and trust;
 Part A of the programme took longer than expected. Some institutions had difficulty creating ET strategies and project proposals. The approval phase was also lengthy, particularly where project proposals were farmed out by the funders for external assessment. This generated some frustration; and
 Most of the resources provided by the support team were very helpful. The main exception was the Research Toolkit, which was not much used.

17 The original formulation referred only to researchers, but this was later amended to include practitioners.
Seven comprehensive ET strategies, including Part B project proposals
 The project proposals were sound, although some might have been too ambitious;
 There was a need to convert the proposals into full project plans with explicit expected outcomes and more detailed monitoring and evaluation; and
 The enveloping ET strategy documents were still work in progress at most institutions. They needed to be revisited through a more inclusive process of engagement to achieve full institutional buy-in.
A coordinated research programme, comprising local-level research activities conceptualized and developed under the auspices of an overarching set of questions
 The research component did not develop as expected;
 Few pure research projects were framed by the institutions;
 Some of the important research questions would be covered by the evaluation process; and
 The main reasons for the initial lack of momentum in the envisaged research component were as follows:
o There was insufficient interest and/or capacity in research to galvanize participation in a coordinated programme;
o The hunger for the practical implementation projects tended to eclipse the less tangible benefits offered by research; and
o The support team lacked a shared vision for research.
A network of researchers established across the participating institutions
 This had not begun to take form by the end of Part A; and
 A network – or networks – needed to be open to all players in the ETI, not just researchers.18
There were no explicit expected outcomes for Part A. Nevertheless, the participating
institutions reported benefits stemming from the Part A processes. The principal ones
were as follows:
 Transversal teamwork;
 Better understanding of ET’s potential; and
 More experience in project planning and proposal writing.
Representatives of the institutional teams met at the first of three annual inter-institutional workshops in Johannesburg in February, 2010, optimistic about the work
they were about to undertake. This was further evidence that Part A had provided an
effective, supportive environment for project identification and scoping, and that the
institutions were prepared to undertake Part B through these projects.
18 Hence the decision to reword this output to include practitioners.
Part B
Part B results are analysed principally through the lens of the five expected outcomes of
the programme.
1. ET strategies in place and operational in all institutions;
2. Improved teaching and learning practices in some areas of every institution through
the application of ET; with replication (or momentum towards it) in other areas;
3. At least three institutions achieve, or are on course for, significant improvements in
productivity through the application of ET for teaching and learning;
4. A community of practice established around ET for teaching and learning, with active
participation from people in all the institutions, and some from other African higher
education institutions; and
5. New transferable knowledge of how to develop capacity in the use of ET for teaching
and learning in African higher education. Dissemination of that new knowledge
beyond the ETI players.
ET strategies
First expected outcome: ET strategies in place and operational in all institutions.
An assumption of the programme was that, for its benefits to be maximized and
sustained, they would need to be institutionalized. This called for institutional ET
strategies within which the projects would be clearly located. If ET is well positioned in
an institutional strategy, it sends a message about its importance to members of the
institution. If the strategy is backed by resources and is implemented, progress is more
likely to continue after the projects are completed (i.e. sustainability is built in).
The first expected outcome envisaged the adoption and operationalization of strategies
by the institutions. Three indicators were chosen as evidence of this outcome:
 Budgets for investments in ET for teaching and learning are significantly increased in
the majority of the institutions;
 ICT infrastructure strategies clearly linked to the ET strategies in at least three
institutions; and
 Institutional management information systems include data relating to ET for
teaching and learning; evidence that this data is being used in decision making.
The road to this outcome began in Part A. The central Part A planned output was: Seven
comprehensive ET strategies, and, within them, a number of projects designed to put
the strategies into practice.
According to the ETI proposal, these strategies should present ‘a coordinated vision and
programme of action for future educational technology activities and investments at the
institution’ whether funded by the PHEA or not. The strategies were clearly intended to
be strategic instruments owned by the institution, independent of and transcending the
ETI.
At the same time, the ETI proposal stated that the ET strategies would ‘lay out a
programme of investment by the PHEA in identified educational technology activities’ up
to 2012. As such, the strategies were intended to include detailed planning for ETI
projects.
These roles were potentially complementary. However, the Part A evaluation concluded
that institutions had found it difficult to focus on both at the same time. Only two
institutions – KU and UCM – included non-ETI projects in their ET strategies, even
though all the institutions had other ET initiatives running. Given the time constraints of
Part A, the ETI orientation took precedence over the wider role of the strategies.
Development of a long-term, institution-wide strategy requires an iterative, multi-stakeholder process, for which there were insufficient time and resources in Part A.
The Part A evaluation concluded that the ET strategies were still work in progress at
most institutions. This was not a major concern at the time. It was better that the ET
strategies stayed fluid at that early stage of ET development. There would be
opportunities later for lessons derived from the ETI to be incorporated into the
strategies. Delay in finalizing strategies would also allow new developments in
technology to be embraced. For example, since the draft ET strategies were completed in
late 2009, mobile phones have emerged more strongly than many expected as potential
delivery and interaction devices.
Both the Part A and interim evaluations recommended that the ET strategies be revisited
at some point before the end of the programme through an inclusive process of
engagement to achieve full institutional coverage and buy-in. This had not taken place
by the end of June, 2013. In subsequent months, a concerted effort was made by the
support team to engage the institutions in ET strategy revision. This effort seems to
have been most effective at UCM, where the revision process involved senior
management. Revision activity took place with the support team at UI, and
independently at UJ, but neither involved senior management. Revision activity was
scheduled to take place at KU and UDSM in the near future.
There are two ways that strategy for ET can be presented and contextualized. There can
either be a standalone ET strategy, as was advocated in the ETI, or ET can feature in an
institution-wide strategy or a ‘sector’ strategy for ICT or teaching and learning. Most
institutions had strategies or plans – either institution-wide or sectoral – that included
references to ET. These, however, did not go much beyond broad statements of intent,
and were not backed up by specific implementation plans or resources. Most of these
strategies and plans pre-dated the ETI.
None of the institutions at the end of 2012 had formally adopted the ET strategy
developed in Part A of the ETI or any other stand-alone ET strategy. Even UCM, which
had the creation and adoption of both an ET policy and strategy as the objectives of an
ETI project (UCM1), did not secure their ratification. The UCM policy and strategy were
however used by the ETI team to guide their own progress. There is a sense at UCM that
the ET goalposts were constantly moving, and that ratification of a formal strategy might
impede rather than support progress.
Several programme coordinators at the other institutions said that they often referred to
the strategies and occasionally discussed them with senior managers. There seems,
though, to have been a tacit if not explicit understanding among most members of the
coordinating teams that further work on the strategies was not an urgent priority.
Despite the absence of comprehensive strategies, there are examples of objectives being
set and decisions made for ET by senior university management. The most significant
example is probably UCM, where centralized funding for ICT – including ET – was agreed
for the first time by apex management in 2011, and accompanied by explicit
encouragement of online course development. The new spending on ICT and ET is
funded from a student levy. The UCM rectorate has also committed itself to funding
production of regular updates of digitized distance learning materials. This seems to
have been a result of the intense interest in ET created by the ETI in parts of UCM’s
Beira campus. The process of developing an institutional ET policy (UCM1) is credited by
the programme coordinator with being the crucible for enhanced interest in ET among
apex management at UCM. The contribution of ET to an ongoing transformation of the
UCM Centre for Distance Education (CED), with clear economic benefits to this private
institution, no doubt also played a part.
In the final year of the ETI at UI, the coordinator credited the programme with
influencing the development of IT policy and the proposed merger of the IT and Media
Units at the university.
There are examples of operational decisions taken through the influence of the ETI,
below the level of apex management, sometimes with its agreement. An important one
is the adoption of templates and quality assurance frameworks for online courses.
One or two faculties and departments – such as the CED at UCM and the Law faculty at
UJ – are also developing informal strategies for ET. However, without the legitimacy and coverage provided by an institutional strategy, these gains may be at risk.
There are examples of progress being made with ET policies – as distinct from strategies
– with support from the ETI. UCM was the only institution to develop an ET policy as a
full ETI project. This policy has not yet been formally adopted by the university’s highest
court but, as reported above, is already operational in parts of this highly dispersed
institution.
ETI support team members also facilitated the development of ET/e-learning policies at
UDSM and UJ. In both cases, these policies had been initiated prior to the ETI under
Carnegie Corporation programmes. Progress through the approval committees at both
institutions has been slow, and neither has yet been formally adopted.
Improved teaching and learning practices
Second expected outcome: Improved teaching and learning practices in some areas of
every institution through the application of ET; with replication (or momentum towards
it) in other areas.
Improved teaching and learning was the central objective of the ETI and was to be
achieved through a range of interventions. This, the second of the five expected
outcomes, refers to the most direct route to improved teaching and learning: the
application of ET.
An assumption of the ETI programme was that the deployment of new ET-enabled
teaching and learning products of sufficient quality would represent an improvement in
teaching and learning practice. Without a full impact assessment, which is beyond the
scope of this evaluation, this assumption cannot be tested. However, as the assumption
is accepted by the stakeholders, the deployment of new ET-enabled teaching and
learning products of sufficient quality will be taken as a proxy for the outcome.
The definition of ‘sufficient quality’ is of course important, and difficult to pin down. We must also be aware of the possibility that ET-enabled teaching and learning
products of poor quality could represent a worsening of teaching and learning practice.
In this section, we look at the outputs of the implementation projects – those that
sought to develop and deploy online courses and other technology-enabled teaching and
learning products. All these projects had the complementary objective of building
capacity among key groups – teaching staff, technicians and students – to produce
and/or use these products. Progress in both the product (the ‘tangible outputs’) and
capacity dimensions is mapped in this section.
Teaching and learning products
The 22 implementation projects were summarized earlier in the report (see the subsection The programme, in the Introduction). Table 1 sets out findings for the 20
implementation projects that had teaching and learning products as their tangible
outputs.
Table 1: Summary of implementation project outputs

For each project, the tangible outputs specified in the Part B submissions are listed first (‘Outputs’), followed by the status of those outputs as at December, 2013 (‘Status’).

KU1: Digitization of Past Examination Papers
Outputs: A database of past examination papers. Examination papers Web portal, integrated into the institutional LMS. Past examination papers digitized, with metadata.
Status: Achieved. The target of 20,000 exam papers digitized was exceeded (26,520). Note: it was deemed more appropriate to integrate the exam papers portal with the KU electronic library system rather than the LMS.

KU2: Postgraduate Research Methodology Course
Outputs: 35-hour ‘University Postgraduate Research Methods’ online course produced and deployed in at least 50% of postgraduate programmes.
Status: Achieved.

KU3: Online Executive Masters in Business Administration (MBA) Programme
Outputs: Curriculum and course modules created – including case studies – and launched with 25 students. Implementation and marketing plan.
Status: Achieved. All 13 modules are online and have been through a quality improvement process.

KU4: Chemistry and Communication Skills e-Learning Modules
Outputs: Five course modules in digitized format, available both via CD and on the institutional LMS.
Status: Achieved.

KU6: Digitization of Theses and Dissertations
Outputs: Pilot digitization of five theses. Phase 2 proposal for further digitization based on the above outputs. Policy on copyright implications.
Status: Achieved. Note: the pilot was reduced to the thesis title, author, supervisors, and abstract, due to the relevant policy not yet having been approved by KU.

MAK1: e-Content
Outputs: 160 notional hours of e-courseware across five faculties.
Status: Achieved. A total of ten courses and an online Library Guide were externally reviewed as part of the quality improvement process.

MAK3: e-Portfolios
Outputs: A portfolio structure/template created. Software installed for the construction of e-portfolios by students. Staff and student training. Portfolio piloted in two colleges.
Status: Achieved.

UCM2: e-Learning
Outputs: Moodle deployment on the server in Beira. Masters e-content developed across three faculties (Tourism, IT, and Economics).
Status: Achieved in IT; partially achieved in Economics; limited progress in other areas. 12 full courses – a fraction of the courses developed and deployed on the LMS – were submitted to Saide for external evaluation, totalling 150 notional hours (well in excess of the target of 30 notional hours).

UCM3: Centre for Distance Education – Learning Materials Digitization
Outputs: Digitization and improvement of distance learning materials.
Status: Original objectives substantially exceeded. Scope expanded to include the development of online distance courses (ongoing). More than four courses are now being offered online using Moodle, and more than 40 video lessons have been designed to support various programmes.

UCM4: OER Health Sciences
Outputs: OER policy. OER online repository.
Status: Original objectives not achieved; limited revised objectives achieved.

UDSM1: Online Course Migration and Improvement
Outputs: Up to 200 courses migrated to Moodle.19 Five courses improved before migration.
Status: Original objective for course improvement substantially exceeded. 103 courses were uploaded to Moodle. Of these, 40 online courses were externally reviewed as part of the quality improvement process, with five undergoing a second round of review. The quality of 96 courses (out of 103) was improved. Three e-learning newsletters were produced.

UDSM2: Computer Science Interactive Courses
Outputs: Seven courses with interactive materials developed – five with advanced simulation/animation. Computer Science curriculum revised.
Status: Achieved (eight courses were completed, reviewed, revised, and improved). Note: the Computer Science curriculum revision is 80% complete, but was put on hold due to institutional realignment of departments with different colleges.

UEW2: Enhancing the Quality of Teaching and Learning through the Use of a Learning Management System
Outputs: Moodle established. Twelve courses deployed on Moodle.
Status: Objectives substantially exceeded. 42 full courses built on the UEW Moodle server were submitted for external evaluation, although many more (69) were developed by staff and are being used. The project was extended to include the deployment of ten units of open courseware (completed), released onto the institutional OER repository, which was developed and populated as part of the project.

UI1: Capacity Building and Digital Content Development
Outputs: Twelve digital learning packages.
UI2: Open Courseware Development for Science and Technology
Outputs: Ten units of open courseware.
Status (UI1 and UI2 combined): Objectives substantially exceeded. 40 courses produced; a second batch of 75 courses is being developed.20 For UI1, more than 13 courses were developed and submitted for review by an external evaluator, who provided feedback. For UI2, the ten course packages were successfully completed.

UI3: Tele-classroom Teaching in the General Studies Programme
Outputs: Tele-classroom infrastructure developed (with UI funds). Lectures for two courses delivered through tele-teaching. Audio-visual materials produced on DVD and mounted on the LMS.
Status: Ten lectures were devised, captured, and distributed via DVD. All the videos exceed the target of five minutes (average duration: approximately 16 minutes), although some are 50 minutes long.

UI4: Use of Educational Radio and the Mobile Phone for Tutorials in Distance Learning
Outputs: Scripts and voicing for 50 courses. Platform for mobile learning developed. Piloting of mobile phone tutorials in two faculties.
Status: Achieved, including an expanded scope for the mobile learning concept: 100 radio lectures were broadcast (via Diamond FM) and distributed via the UI intranet – well in excess of the target of 50. Four courses/lectures were developed and distributed using a third-party specialized interface app for smart phones.

UJ1: The Departmental Educational Technology Initiative
Outputs: 30 hours of electronic material across four departments or faculties (Law, Computer Science, History, and Arts).
Status: Courses in Law achieved; courses in Arts and Computer Science partially achieved; revised History objective (the creation of a digital archive) achieved. In excess of 80 hours of material are available in the courses ‘Law of Equity’, ‘International Humanitarian Law’, ‘Criminal Law II’, ‘Agency and Hire Purchase II’, ‘Introduction to Computer Science’, and ‘Academic Research Writing’. 30,000 local history artefacts were scanned and uploaded to the institutional repository.

UJ2: Educational Multimedia and Simulations Project
Outputs: 30 hours of e-course content containing simulation in four areas: Electronic Microbiology Laboratory (approx. ten hours); Electronic Pharmaceutical Analyst (approx. six hours); Digital Herbarium of Medicinal Plants (approx. four hours); and Theatre of Development Virtual Classroom and Community (approx. ten hours).
Status: Electronic Microbiology Lab and Digital Herbarium achieved; others partially achieved. The 80 hours of material produced exceeded the planned 30 hours. Note: some of the materials were developed outside the institution when it was found that the in-house developers lacked the necessary expertise.

UJ3: e-Learning Fellowships
Outputs: Development of 24 courses (one per e-learning fellow) to be mounted on Moodle.
Status: Achieved. Most of the e-learning fellows developed full academic courses for the Moodle platform, while the remainder developed electronic tools that supported specific teaching requirements of their discipline.

19 It was agreed that the original target (200 courses to be migrated) be reduced: following the LMS audit, only 75 courses were found to be fit for purpose, i.e. worth migrating. In total, during the life of the project, 103 courses were uploaded to Moodle, of which 96 were improved.
20 UI fed back consolidated course numbers for UI1 and UI2, as these two projects often worked closely together.
Table 1 tells us that, of these 20 implementation projects with teaching and learning
products as their tangible outputs:
 Five substantially exceeded their targets;
 Eleven fully achieved their original targets; and
 Four partially achieved their targets.
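For readers who want to re-derive this tally, the short Python sketch below condenses Table 1 into one status label per project and counts the labels. The labels, and the assignment of individual projects to them, are interpretive shorthand rather than wording from the project reports; in particular, the sketch treats UCM3, UDSM1, UEW2, UI1, and UI2 as the five projects that substantially exceeded their targets, consistent with the Replication effects discussion below.

from collections import Counter

# One plausible status label per implementation project, condensed from
# Table 1: "exceeded" = substantially exceeded targets; "achieved" = fully
# achieved original targets; "partial" = partially achieved targets.
statuses = {
    "KU1": "achieved",  "KU2": "achieved",   "KU3": "achieved",
    "KU4": "achieved",  "KU6": "achieved",   "MAK1": "achieved",
    "MAK3": "achieved", "UCM2": "partial",   "UCM3": "exceeded",
    "UCM4": "partial",  "UDSM1": "exceeded", "UDSM2": "achieved",
    "UEW2": "exceeded", "UI1": "exceeded",   "UI2": "exceeded",
    "UI3": "achieved",  "UI4": "achieved",   "UJ1": "partial",
    "UJ2": "partial",   "UJ3": "achieved",
}

counts = Counter(statuses.values())
print(counts["exceeded"], counts["achieved"], counts["partial"])  # 5 11 4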
Replication effects
The expected outcome envisaged replication effects stemming from the areas of ETI
focus. Replication included the expansion of the scope of the projects beyond the areas
of focus envisaged in the original plans. This resulted in the exceeding of project targets
for course modules, as in UCM3, UDSM1, UEW2, UI1, and UI2. This replication by the
project teams was substantial at four of the seven institutions.
Replication by members of the institutional project teams also took place informally at
several institutions. The following are the main examples discovered by the evaluation
(these are not included in Table 1):
 The UEW2 project team helped the university’s distance education personnel convert
courses for use on Moodle;
 Members of the KU3 and UJ1 project teams helped others, in the faculties of
Business and Law respectively, to develop online courses;
 Members of the UCM2 team coached colleagues on the university’s Nampula campus;
and
 UDSM assisted two associate teaching colleges to develop online courses.
Although these examples are few in number, they are significant because they point to
the leveraging of awareness and appreciation beyond the ETI’s boundaries.
Replication of capacity building for online course production was a notable feature of the
programme. Institutions increasingly assumed responsibility for capacity development
themselves, either using local consultants or more often their own internal resource
people.
Beyond the numbers
These findings about the scale of outputs are an important dimension to the
programme’s results. There is a consensus among the stakeholders that this is a good
set of results. It is easy to find examples of multi-project programmes where a
substantial proportion of projects fail to deliver their principal outputs.
Clearly, however, the picture is more complex than the one presented by Table 1. There
are other dimensions that need to be considered in fully assessing these results,
principally the following:
 How intrinsically challenging, in complexity and scope, the outputs were;
 The time taken to achieve the outputs; and
 The quality of the outputs.
Challenge
The ETI projects were not intended to be, and were not, business as usual. Every project
to some extent broke new ground for the personnel involved and in most cases for the
institutions. Some of this new ground was incremental – the application of learning from
previous ET activity at the institution – as, for example, a new approach to e-course
development at MAK and UDSM, or the new-look e-learning fellowships at UJ. Other
projects, such as MAK’s e-portfolios or the mobile learning pilot at UI, took a leap into
the unknown.
All the institutions had projects for the development of online courses. Their experience in this field varied: some had almost none, while others had experience but lacked a working model for online course development and deployment. In both circumstances, developing and deploying online courses was challenging. It required project
management capabilities, advocacy, team building and incentivization, and extensive
capacity building in ET among academic and technical staff. The design and mobilization
of online courses involves pedagogical skills that are often taken for granted but
underdeveloped among academic personnel.
At UDSM, KU, and UJ, a core objective of one of their implementation projects was the
application of multimedia in the form of simulation and animation to illustrate key
processes. The main focus was on science – including computer science – courses, where
difficult concepts can be presented simply with multimedia, or it is possible to simulate
experiments and processes that could not be conducted in situ in the absence of
equipment or the right experimental environment. The teams involved in these projects
found that the technical capacity requirements were particularly exacting – well beyond
their original assumptions.
In terms of scale, UDSM had the most courses to develop or improve, a challenge
somewhat mitigated by the fact that the institution had only two projects. UEW’s and
UI’s online course projects became much more ambitious in the second half of Part B.
UEW, UI, and UCM had to establish, for the first time, an LMS, which was not always
straightforward. In the case of UEW, it took time – mainly because of server
procurement challenges – which diverted attention from online course development.
UDSM and MAK had the task, not of establishing an LMS for the first time, but of
weaning the institution off other, less sustainable, LMSs.
One of UI’s projects – the tele-classroom (UI3) – was technically very complex from the
outset. This project proved to be too challenging for the project timeline, mainly because
of the lengthy procurement process for the infrastructure.21 Another (UI4), which
included the application of mobile telephony for delivering tutorials to students, took on
greater complexity as the project progressed.
For three institutions, the central objective of one of their projects was the design,
execution, and reporting of a major piece of empirical research. This was particularly
challenging for UEW and UCM, institutions that had very little experience of this area of
work. MAK, as an institution, had longstanding experience of research; the challenge
here was mainly for the personnel involved, most of whom had little experience,
particularly in qualitative research. However, because they were supported by people at
the institution who had such experience, the project moved forward at a reasonable pace
to a satisfactory conclusion.
MAK’s e-portfolio project (MAK3) was innovative in the sense that there were few models
and no published material on their particular application of the concept. The major
intrinsic challenge was getting clarity about how the instrument would be applied in
several different contexts. In one of the contexts – MAK’s human resource function – the
project work did not go forward.
Relatively speaking, the remaining three projects held fewer intrinsic challenges:
 The e-learning fellowships (UJ3) had been tried before under the Carnegie
Corporation funding. They were substantially adapted and expanded under the ETI,
but this was an organic not a disruptive process; and
 The two digitization projects at KU (KU1 and KU6) were relatively straightforward in
concept.
All the projects faced issues in project management – but many of the factors that
challenged them were extrinsic and not intrinsic to the project design and scope itself.
21 The infrastructure, costed at around US$300,000, was to be funded by the university. This required a lengthy procurement process.
Time
Almost all the projects took longer than planned. Without the no-cost extension, about
half would have failed to complete within the parameters of the ETI programme.
There were several factors behind these overruns. Some related to the project design
and management, and some were extrinsic. The most common factor was the difficulty
encountered by the project personnel in finding time to focus on online course
development.
These factors are explored more fully below. The questions to address at this point are to what extent the overruns actually mattered, and what lessons can be drawn for future project management in this area.
The overall ETI timeline was relatively generous. Most of the projects were originally
scheduled to be completed well before the June, 2012 programme end date. This would
have allowed for self-evaluation and knowledge sharing within the institutions and even
dissemination outside – activities that were explicit both in the programme objectives
and in the individual project plans. These elements of the programme have been the
main casualty of the overrun. There has been very little effective self-evaluation, and
consequently little knowledge sharing and dissemination except through the medium of
case studies at a few institutions.
Another effect of the prolonged activity timelines, in some institutions, was a loss of
momentum and a knock-on effect on levels of commitment and morale. This is not
possible to measure, but it clearly was an issue in several of the projects, according to
programme coordinators and the ETI support team alike.
However, the approach of the end of the programme appears in most cases to have
galvanized teams to regain momentum and make up time. The use of retreats to focus
intensively on project outputs is a device that was successfully employed by three
institutions.
There are no universal solutions to the underlying causes of the overruns. However,
there are lessons to be learned for the planning of projects. One is that the scale and scheduling of projects need to take into account the difficulties likely to be faced by team members who are working in the margins of their regular jobs. This should not automatically lead to an expanded timetable. Yet the difficulties faced by team members in these circumstances need to be acknowledged and monitored.
There also needs to be a thorough assessment of external risks, leading to the design
and implementation of a risk strategy with built-in monitoring. Risks and assumptions
were included in the logical frameworks but were not followed through with risk
strategies.
Quality
The quality of outputs like teaching and learning products is an important but often
elusive dimension. It is difficult to accurately pre-define quality, particularly in areas of
innovation. In many programme evaluations, outcomes are taken as a proxy for quality
of outputs. There is a common assumption that, if the outcomes are satisfactory, it does
not matter if output quality is not assessed. In reality, it does matter, because
satisfactory outcomes can stem from favourable extrinsic factors, masking poor quality
outputs from an intervention.
In the ETI implementation projects, the quality of the outputs was a central issue. ET-enabled teaching and learning, either for distance or blended education, has to prove
itself in a world where there is still substantial scepticism about distance and blended
learning. The quality of these outputs was not pre-defined, but it was clear to the
stakeholders that ways had to be found both to pursue and validate it.
The issue rose up the agenda as the programme progressed and the first products began
to emerge. The most common method of quality assessment and assurance in the early
stages of the programme was ad hoc review by peers or external experts. The internal
reviewers were mostly people with experience in developing conventional teaching
products, but not in the application of ET. There were simply too few – sometimes no –
people with this type of experience in these institutions.
In 2011, some institutions began to develop quality frameworks with ETI support team
facilitation. This was an eclectic, iterative process. The support teams did not introduce a
single preferred model, but encouraged the institutions to develop their own, based on a
small number of third-party models. UDSM was the first to develop and apply a quality
framework systematically, mainly because it had pre-existing online courses to evaluate
as part of the migration process. Most other institutions began to work with frameworks
at a later stage.
In the first half of 2012, the support team designed a quality improvement process for
online courses and offered it to the institutions. They were invited to present for external
review online courses that were at an advanced stage of development. In total, 136
courses were presented for a first round of review. Table 2 breaks down these numbers
per institution.
Table 2: Number of courses submitted for review under the quality improvement process

Institution:         KU   MAK   UCM   UDSM   UEW   UI   UJ
Courses submitted:   19    11     6     40    42   14    4
The review process was managed by the support team but mostly conducted by external
specialists. There was no cost to the institutions. The process provided feedback to the
institutions on four areas:
 Course design;
 Activities;
 Student assessment; and
 Technology.
Course content was not reviewed through the quality improvement process. This was left to the institutions.
Each of the five external experts was given courses from two to three institutions to
review. In addition to giving feedback on specific courses, the reviewers produced
summaries of their findings. An analysis of these summaries suggests that the following
generalizations can be made:
 Most of the reviewers said or implied that the quality of the courses overall was good. One described the quality as average for ‘first effort deliverables’;
 The courses universally produced access advantages, for students and teachers alike, through the provision of text-based content in electronic form in the LMSs;
 More use could be made of the opportunity ET offers to transform teaching and learning practice;
 Insufficient student engagement was provided through forums and course evaluation;
 Reviewers were divided on whether course navigation was satisfactory. Broken links were a common problem (a simple automated check is sketched after this list). Two of the reviewers suggested that more uniformity of structure would be an advantage;
 Visual aids were used well, but good quality multimedia was inconsistent; and
 There were insufficient reflective pauses and time indicators.
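Broken links of the kind the reviewers flagged lend themselves to automated detection. The following is a minimal, illustrative Python sketch of such a check – not a tool that was used in the ETI – and the course URLs shown are hypothetical:

import urllib.error
import urllib.request

def check_links(urls, timeout=10):
    """Return (url, problem) pairs for links that fail to load."""
    broken = []
    for url in urls:
        try:
            # Open each URL and inspect the HTTP status code.
            with urllib.request.urlopen(url, timeout=timeout) as response:
                if response.status >= 400:
                    broken.append((url, "HTTP %d" % response.status))
        except (urllib.error.URLError, ValueError) as exc:
            # HTTPError (e.g. a 404) is a subclass of URLError and lands here.
            broken.append((url, str(exc)))
    return broken

# Hypothetical links exported from a Moodle course page.
course_links = [
    "https://elearning.example.ac.tz/mod/resource/view.php?id=101",
    "https://elearning.example.ac.tz/mod/forum/view.php?id=102",
]
for url, problem in check_links(course_links):
    print("BROKEN:", url, "-", problem)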
Following the review, the institutional programme coordinators were expected to re-engage with the course developers to improve the products. Smaller samples of revised courses were then further reviewed to assess the degree of improvement. The general impression was that the courses had improved, although further refinement was recommended.
The online course evaluation instrument used for the quality improvement process was
being reviewed in the final months of the programme. It is intended that it will have
wider application.22
Three institutions – KU, UDSM, and UJ – had projects with the central objective of including multimedia in online courses. Mastery of this type of multimedia turned out to be much more challenging than the teams had anticipated. Overall, the institutions themselves – particularly KU and UJ – were less satisfied with the quality of these products than with their other outputs, a view endorsed by the specialist multimedia consultant deployed under the ETI.
With the exception of ten courses and an online Library Guide at MAK (a spin-off from
the MAK1 e-content project), the other products of the implementation projects were not
submitted for external review under the quality improvement process.
The KU digitization projects are simple in concept and had fewer quality issues than the
others. The principal ones were in the quality of the pre-existing products being
digitized: the examination papers and theses. KU was aware of this and was intending to
apply quality control to the examination papers that are accessible through KU1.
Quality is always to some extent context- and even user-specific. Assessments of quality
should wherever possible include user perspectives. With the ETI, little reliable user
feedback was available at the time of the evaluation, partly because some of the
products had not yet been deployed for use by students; but also because, where they
had been, evaluative research among users had not been conducted systematically by
most institutions.23 The only systematic user feedback that was available to the
evaluator came from UI, UDSM, and UEW. This generally conveyed positive messages
about the online courses.
The support team and the institutions engaged seriously with quality assurance and
improvement, particularly in the last 12 months of the programme. However, there has
been great reliance on external expert review – not a sustainable option, particularly if
externally funded. There is a consensus that it is important to institutionalize quality
assessment. Some institutions had set up, or were intending to set up, quality assurance units.
As part of its quality initiative, the support team initiated a dialogue with some of these
units in 2011. The direct dialogue between the support team and the quality units was
not sustained, but follow-up in June, 2012, pointed to progress in at least three
institutions in the integration of quality assurance for online courses with the work of the
units, although none could be said to have come close to institutionalizing it.
22 The review instrument is to be made available under an open licence on the Saide website.
23 The summative evaluation guidance advocated a survey of a sample of students who had engaged with ETI courses.
Action research and systematic user monitoring are complementary processes for quality
improvement. Most institutions have engaged in action research to some degree. The
institutional teams readily acknowledge that online courses and some of the other
outputs are work in progress – evolving products that will be improved and augmented
as the personnel and institutions gain more experience. They need to invest in
systematic user monitoring to optimize the improvement process.
Capacity building for teaching and learning
It would have been impossible to improve teaching and learning practices through the
application of ET without human capacity building. At the start of the programme, none
of the institutions had a critical mass of people with competence in ET for teaching and
learning – either in the development of products or in their use. All but one of the
implementation projects had substantial capacity building among their planned outputs.
Despite the attention paid to the tangible outputs, by the end of 2012 all the
coordinating teams had reached a consensus that increased capacity in the application of
ET to teaching and learning was the most important result of the ETI.
Much of the planned capacity building was delivered through workshops. They were held
at intervals, allowing the acquired skills and knowledge to be applied in product development before another workshop offered more advanced learning or, where necessary, reinforcement of previous learning. Support, mostly by e-mail, was provided
between workshops. A good example (see Table 3) is the e-content project at MAK,
which recorded a total of 585 person-training days.
Table 3: Capacity building in the e-content project at Makerere University

Type of training (people attending; length of training):
 Planning activities – training of project team members and unit representatives on the vision, objectives and planned activities of the project (Wednesday 21 April, 2010): 17 people; 1 day
 LMS workshop 1, on e-content project design and development (June, 2010): 28 people; 4 days
 Graphics design and interface for the Web (July, 2010) – practical sessions: 15 people; 5 days
 LMS workshop 2 – supporting effective courseware development for Moodle (August, 2010): 24 people; 2 days
 Video, image, voice design for e-content – practical sessions: 14 people; 5 days
 Quality improvement workshop (for the university): 51 people; 1 day
 e-Content quality improvement engagement workshop (October, 2011): 20 people; 4 days
 Layout design for Web developers – practical sessions: 18 people; 2 days
 Evaluation of e-content framework workshop (February, 2012): 17 people; 3 days
 Open educational resources (OER) deployment workshop (June, 2012): 12 people; 1.5 days
 Follow-up workshop for deployed courses and online Library Guide final development (September, 2012): 9 people; 3 days
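The figure of 585 person-training days cited above is simply the sum, across workshops, of attendees multiplied by duration. A minimal Python sketch reproducing that arithmetic from Table 3 (workshop names abbreviated in the comments):

# (attendees, days) per workshop, taken from Table 3.
mak1_training = [
    (17, 1),    # Planning activities (April, 2010)
    (28, 4),    # LMS workshop 1 (June, 2010)
    (15, 5),    # Graphics design and interface for the Web (July, 2010)
    (24, 2),    # LMS workshop 2 (August, 2010)
    (14, 5),    # Video, image, voice design for e-content
    (51, 1),    # Quality improvement workshop
    (20, 4),    # e-Content quality improvement engagement (October, 2011)
    (18, 2),    # Layout design for Web developers
    (17, 3),    # Evaluation of e-content framework (February, 2012)
    (12, 1.5),  # OER deployment workshop (June, 2012)
    (9, 3),     # Follow-up workshop (September, 2012)
]

person_training_days = sum(people * days for people, days in mak1_training)
print(person_training_days)  # 585.0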
This process of incremental capacity development in ET was central to the ETI model and was strongly endorsed by the institutional teams as one of its most valuable features.
Where it worked well, staff improved progressively, and few dropped out. A contrast was drawn with previous attempts at these institutions – in all cases externally funded – to develop staff capacity in ET collectively. In those cases, there had been training but no follow-up, which, according to the ETI coordinating personnel, had generally not worked well.
The ETI model did not always work optimally, however. The intervening product development did not always take place, mainly because of difficulties in finding time and a lack of incentives.
In the early stages of the programme, most capacity development was facilitated by the
ETI support team. Increasingly, institutions took over capacity development themselves,
using either local consultants or more often their own internal resource people. They
used workshops, but also small-group and one-to-one coaching and mentoring by the
institutions’ own personnel. This form of capacity development is difficult to disentangle
from online course development itself.
Within the ETI, UJ had a unique approach to capacity development. Its e-learning
fellowships project (UJ3) had the programmed capacity building of teaching staff in ET at
its heart. In each of the three years of the ETI, eight teaching staff were trained,
coached and mentored, and given the opportunity to apply the learning on a specific
project that was central to their teaching work. Inter-disciplinary collaboration was
encouraged, as was the revitalizing of a community of practice.
The majority of the institutions were able to reach many more staff than they had
originally planned. There were different models for this expansion. Mostly, the transfer of
skills and knowledge happened organically, such as at UCM where one-to-one coaching
took place between interested partners. The two most structured approaches were as
follows:
 UDSM used cascading to reach around 400 staff; and
 At UJ the ‘graduate’ e-learning fellows helped the next batch, both on a one-to-one
basis and through an online community of practice.
Within every institutional team there was a consensus that the development of online
courses was harder than they had expected. A frequent comment on the experience was
that the ETI had ‘opened their eyes’ to the complexity of developing these products.
Some project leaders delayed the organization of workshops because they were not
convinced they were strictly necessary, but discovered for themselves that online
courses were a bigger challenge than they had thought. A UJ representative said staff
were previously ‘just dumping material’ online; after the first Part B workshop, they
realized they had to ‘go back to school’ to learn to teach.
One of the reasons many participants found online course development difficult was that they had never been trained in learner-centred education. That is not to say
they were ineffective teachers, but there are some aspects of effectively developing and
delivering teaching materials, whether for children or higher education-level students,
that need to be learned and practised. This is a hurdle that many of the ETI projects
faced. Fortunately, the ETI support team had anticipated this and built it into the
capacity building that was advocated and offered from the outset.
As with the tangible outputs of the implementation projects, the quality of capacity
building was not pre-defined and, in the absence of systematic feedback, has been
difficult to assess. However, there are two positive indicators of capacity building:
 Very few people dropped out of the projects in Part B, unless they left their
institution altogether; and
 The staff encountered in facilitated group discussions24 during the evaluation were
generally very confident and committed.
24 One facilitated group discussion with course developers was held in each of five institutions.
Although effective capacity building is probably not the only factor behind these
indicators, these results would be unlikely if capacity building had been weak.
In the eyes of the ETI coordinating teams, the group that had benefited most from
capacity development was teaching staff. Technical staff were not mentioned so
frequently, yet they did receive significant exposure to training, often by attending the
same workshops as their academic colleagues. This was encouraged by the Saide
support team as a means of promoting sustainability.
Multimedia
One area where substantial technical training was needed was in the development of
multimedia and its integration into online courses. The application of multimedia was the
central objective of projects at three institutions. These project participants received at
least two rounds of technical training, mostly from a consultant contracted through the
ETI.
The technical learning curve for multimedia is generally much steeper and longer than
for text-based e-content and the use of plug-in applications like news forums and chat.
In the case of animation, if appropriate OER are not available, it requires people with
programming, instructional and design skills. At the beginning of the ETI, none of the
institutions had people in-house with the full set of skills and experience to produce
quality animated learning objects.
At first, this was not fully understood by the institutions. At least one team was under
the impression that it would be possible to train significant numbers of teaching staff,
relatively quickly, to create animated learning objects. One of the workshops provided by
the ETI multimedia consultant was attended by over 50 people – teaching staff and
technicians alike. This was useful for awareness-raising purposes, but impracticable for
skills training.
Capacity for multimedia is still a significant challenge, especially at UJ and KU. For a
period, UJ relied on external expertise for the application of multimedia, partly funded by a
parallel Carnegie Corporation programme. An ETI-funded consultant was brought in later
to develop internal capacity but found that it was difficult to get traction, with a lack of
consistency in the people attending what was intended to be a progressive series of
workshops.
KU also looked at outsourcing the production of multimedia, but this proved too
expensive. Although several of the KU in-house technicians then received training in
multimedia, all but one subsequently left the institution.
Of the three institutions with multimedia projects, UDSM is currently best equipped to
continue the work. It has a centralized team of staff in the Centre for Virtual Learning,
with a good combination of technical and pedagogical skills. However, few have the
technical capacity needed for advanced multimedia. A gap would open up if any left.
Multimedia is not just about programming and design skills. Teaching staff need to know
how to integrate multimedia with learning objectives. This is not a type of training that
technicians could be expected to provide.
A solution to the capacity gaps for multimedia is likely to be elusive for these
institutions. Some people with multimedia skills are always likely to be attracted by the
greater financial rewards outside universities. Moreover, multimedia is constantly
evolving and skills need regular updating.
One possible solution is a shared resource, which might be feasible in cities like Nairobi
with several higher education institutions, or even between two to three institutions in
East Africa. Some support could also be provided online.
Other capacity building for teaching and learning
Specialized training was provided for radio scripting at UI, database design for resource
digitization at KU, and engagement with e-portfolios (both for staff and students) at
MAK. Depending on the assessment of the e-portfolio pilots, this may need to be
extended.
An area of need among teaching staff that has not yet been fully addressed is online
facilitation. A small number benefited from a course provided by UCT CET, and the ETI
support team was looking at the possibility of expanding access to this type of training.
Systematic, large-scale training for students in Moodle and other aspects of e-learning
was not built into the ETI projects. UDSM alone managed to achieve this. It reached far
beyond its target of 200 students and delivered training to most of the first batch of
students – more than 3,520 – registering for courses or modules on Moodle.
The consensus among the teams is that student competence in engaging with these
products cannot be taken for granted. This is a gap that may need to be filled, although
it is far less of a challenge than building the capacity of online course developers.
Student learning outcomes
Very little data on student learning outcomes of the ETI was available at the time of the
evaluation. The only quantitative survey data was from UI and UDSM.
In the UI survey,25 43% of respondents said that the quality of the learning experience
was ‘very much better’ or ‘much better’ as a result of the application of ICT, and a
further 40% said it was ‘slightly better’. In addition, 28% said ICT-facilitated teaching
and learning had been ‘a great help’ – and a further 59% ‘some help’ – in reaching
higher levels of achievement.
In the UDSM survey,26 74% of the students said there were learning benefits in using
Moodle. The majority said Moodle was ‘very useful’ for announcements, resources,
assignments and quizzes; but fewer than 25% said that interactive elements – chat and
discussion boards – were ‘very useful’.
At UEW, evaluative research on the e-course development process was qualitative; in the
case of students, evidence was gathered through facilitated group discussions. The draft
research report concluded that, although there were many shortcomings in the quality of
the ETI outputs, instructional practices and the general pedagogy of academics were
undergoing a paradigm shift in favour of learner-centred learning and authentic
assessment, thanks to the blend of learning theory, technology, and the nature of online
courseware development.
The other evidence of student outcomes stems from facilitated group discussions
conducted by the external evaluator, and the observations of members of the ETI
coordinating teams reported to the evaluator during interviews. A facilitated group
discussion was held by the evaluator with a group of students at each of four
institutions. The consensus in these groups was that the experience of online courses
25 Contained 385 returned questionnaires.
26 In all, 230 students were surveyed.
developed under the ETI was positive. No participant expressed serious disappointment
or wanted the staff involved to revert to the previous mode of teaching.
Reservations among the students in the facilitated group discussions tended to be in two
areas: disruption to access because of power and network outages; and weak
participation by fellow students. The latter reflects a challenge of critical
mass. The facilitated group discussion participants reported that there was usually a
vanguard of students who engaged with chat and discussion boards; but typically they
would get discouraged and reduce or stop their engagement when few others
participated.
Not all the online courses developed under the ETI had been deployed by the time of the
evaluation; but, where they had, there was a consensus among the developers that
student engagement – particularly with the more interactive elements – had been at a
lower level than anticipated. In the piloting of the courses in KU4, for example, it was
reported by the team that ‘very few students participated’ in voluntary activities such as
forums and quizzes. Most, though not all, the MAK1 developers concluded that it was
difficult to get students to engage in voluntary activities.
For some, student training was the answer. Others emphasized the importance of
creating incentives or sanctions around engagement. One course at MAK made
participation in a forum assessable and thereby generated a very high level of
engagement.
The examination paper digitization project at KU (KU1) generated a high level of student
engagement: the examinations database site had recorded more than 283,499 visits by
the end of December, 2013. Students in a facilitated group discussion were
unreservedly positive about being able to access the papers online, thereby avoiding
bottlenecks in the library. Library staff members were also very positive. Student
complaints were reduced, space was freed up, and there was increased equity in access,
especially for distance education students.
Productivity improvements
Third expected outcome: At least three institutions achieve, or are on course for,
significant improvements in productivity through the application of ET for teaching and
learning.
Improved productivity implies the more effective and efficient use of resources. It can
mean: greater volume or improved quality of output for a given set of resources; the
same output for fewer resources; or some combination of these variables.
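Expressed schematically – as an illustration only, not a measure applied in this evaluation – the definition amounts to a simple ratio:

\[
\text{productivity} = \frac{\text{volume and quality of output}}{\text{resources used}}
\]

so that productivity improves when the numerator rises for a fixed denominator, the denominator falls for a fixed numerator, or both.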
Productivity improvements were not explicit objectives for most of the ETI projects. The
principal exceptions were the examination and thesis digitization projects at KU (KU1
and KU6), and the digitization of distance education materials at UCM (UCM3). The tele-classroom project (UI3) was designed to achieve substantial student access benefits; but
the project did not develop as originally planned.
At UCM, the digitization of distance education materials (UCM3) has produced substantial
savings in transaction costs. It has enabled the university to plan further expansion of
distance education student numbers, which will produce unit cost savings.
KU is already experiencing productivity gains through savings in staff time in retrieving
exam papers, and a significant increase in the number of students who can access
papers at any given time.
The other implementation projects focused principally on teaching and learning quality,
and staff capacity development. These aspects have potential productivity benefits. At
the time of the evaluation, however, it was too early to assess the full productivity
benefits through improved capacity and quality, as staff deployment on the online
courses was intense because the courses were mostly still in their pre-pilot or pilot
stages.
A community of practice
Fourth expected outcome: A community of practice established around ET for teaching
and learning, with active participation from people in all the institutions, and some from
other African higher education institutions.
A community of practice has been defined as a ‘group of people who share a concern or
a passion for something they do and learn how to do it better as they interact
regularly’.27
The ETI aimed to generate a network and community of practice activity, and to
leverage it to enhance, extend and sustain the benefits of the individual projects. The
envisaged network or community of practice28 would operate within, between and
beyond the seven institutions.
Communities of practice rarely establish themselves spontaneously. They usually require
facilitation and resourcing, and busy people need strong incentives for regular
participation. The three inter-institutional workshops in Johannesburg were the stage for
increasingly vigorous networking between the institutions. In most cases, though, this
networking did not continue in a meaningful sense between workshops, despite attempts
by the ETI support team to facilitate this. For example, an online forum focusing on
Moodle was established after the first inter-institutional workshop, but failed to generate
any significant interaction.
It is not surprising that networking did not gain momentum immediately after the 2010
workshop. Many of the projects did not get under way until a month or longer after that,
so there was not much of substance to discuss in the immediate aftermath of the
workshop.
The instances of face-to-face and significant virtual exchange outside the inter-institutional workshops were as follows:
 Participation in an ET advocacy workshop at UI by two academic staff from UJ;
 Participation of the MAK ETI programme coordinator in the Conference on the
Application of Information and Communication Technologies to Teaching, Research
and Administration at UJ in September, 2010. The coordinator presented a paper on
the importance of systematic approaches to the planning of ET.29 There was a
proposal for a return visit to MAK, which did not materialize;
27 Communities of Practice: Learning, meaning, and identity. By Etienne Wenger, Cambridge University Press, 1998.
28 This evaluation does not distinguish between a network and a community of practice. Both are different from networking, which can take place without a network or community platform.
29 Tito Okumu: ‘The Importance of Systematic Approaches to the Planning of Educational Technology’ – Fifth International Conference on Application of Information Communication Technologies to Teaching, Research, and Administration, 26–29 September, 2010, Jos, Nigeria (AICTTRA 2010). Proceedings, Volume V / Edited by Prof. E. R. Adagunodo, Prof. J. O. Adeyanju, Prof. C. T. Akinmade, Dr. G. A. Aderounmu, Dr. A. O. Oluwatope; organized by Obafemi Awolowo University, Ile-Ife, Nigeria, in collaboration with University of Jos, Nigeria, and Carnegie Corporation of New York, USA.
 Informal visits by MAK ETI personnel to KU and to UCM, neither involving ETI funds;
 Online teaching, by one of the ETI participants from KU, on a UCM course, using UCM’s Moodle platform and WebEx for video conferencing; and
 Collaboration by UI in mobile learning (project UI4) with the University of Ife (not an ETI participant).
New, more intensive efforts were made by the ETI support team at the March, 2011
inter-institutional workshop. By then, there was a much clearer rationale for networking
because the teams had made progress and had more information and experience to
exchange. There was explicit encouragement of exchange visits where a ‘business case’
could be made.
Planning for collaborations was set in motion at the inter-institutional workshop, with
proposals grouped into a number of thematic clusters:
1. Research on factors impacting on the use of technology;
2. Research on the impact of the ETI on the institutions;
3. Themed case studies;
4. Sharing of best practice and learning from one another on mobile technology and
other media; and
5. Quality improvement.
However, the only one of these clusters that produced sustained inter-institutional
networking and some collaboration was the first. This was the multi-site research
initiative (described under the fifth outcome, below). The second cluster generated a
series of ideas and responses from about half the institutions. These clusters were
strongly facilitated by a member of the support team and the external evaluator,
respectively.
Cluster 3 went forward but without any inter-institutional networking or collaboration. A
necessary condition for collaboration around case studies would have been a
combination of common experience and shared interest. Every institution was developing
online courses, but there was little interest in a case study around the mainstream
products. The greatest interest was around the exceptional projects – e-learning
fellowships, e-portfolios, m-learning – where there was no common experience.
Cluster 4 initially produced a proposal for a workshop from participants interested in m-learning; but enthusiasm waned rapidly, facilitation was not provided, and the workshop
never moved beyond initial conceptualization.
Cluster 5 – quality improvement – was taken forward by the support team, as reported
above, but again without inter-institutional networking or collaboration.
Several ETI coordinating team members said they had discussed visiting another
institution to compare notes and learn, but had not followed through. When asked why
not, the reasons most frequently given were lack of time or funds. The former is
plausible; the latter less so, as the ETI support team made it clear in 2011 that additional
funds would be available if a good business case were made. Of course, developing
business cases takes time, but the ETI had relatively simple approvals procedures.
Some of the programme coordinating team members referred to two other factors that
they said inhibited people from collaborating: a desire to protect their intellectual capital,
and a lack of confidence in exposing it – and sometimes a combination of the two.
There was a general feeling among the programme coordinating team members that
these factors were not insurmountable. They did not however see collaboration as an
urgent priority – there was always something more urgent – and they had the
impression that the ETI support team shared this perspective. Some said that, if they
had been explicitly encouraged to factor collaboration into the project deliverables, it
would have happened.
Networking tends to be short-lived unless it is sustained by facilitation, usually in the
context of an organized network or community of practice. Where face-to-face
interaction is infrequent, networking needs an online presence, which must compete with
many others offering similar benefits. The ETI did not place much emphasis on this.
Whether this outcome would have been fully achieved if more emphasis had been placed
on, and more resources directed towards, creating and sustaining an online platform is
impossible to say. It is not likely though that the creation of a community space alone
would have seen the ETI team members participating in large numbers. The benefits
were not clear, and there were significant inhibitions and competing claims on their time
to be overcome.
In 2011/12, UCT CET and Saide, with Carnegie Corporation funding, developed the
concept of an online network for ET practitioners and researchers in Africa. In
September, 2012, an online seminar that centred on UJ’s experience with e-learning
fellowships was piloted. This strongly facilitated event attracted about 25 active
participants, including nine UJ e-learning fellows, past and present. The online network
currently operates through a Facebook page under the banner of e/merge Africa.30 It
had 220 members at the end of June, 2013, with four of those being members of ETI
programme coordinating or project teams. By the end of December, 2013, membership
had grown to 304, about 14 of whom were participating members of the ETI programme
support or project teams.
The ETI did succeed in extending the boundaries of collaboration within some of the
institutions. All the institutions had projects that spanned more than one department or
faculty. Most of these institutions reported that this generated cross-departmental
collaboration, which was very unusual for them and was a positive experience, bringing
new perspectives to their work.
UJ went further and revived a – mostly online – ET practitioners’ forum, originally
established under Carnegie Corporation funding in 2006. This was said to be very active,
with ordinary members and moderators alike offering solutions among themselves. This
would have been a particularly useful instrument at UJ, where face-to-face meetings can
be very difficult to set up because of security concerns.
New transferable knowledge
Fifth expected outcome: New transferable knowledge of how to develop capacity in the
use of ET for teaching and learning in African higher education. Dissemination of that
new knowledge beyond the ETI players.
The ETI was intended to leave a legacy not just for the participating institutions but also
other parties interested in the application of ET for teaching and learning in higher
education in Africa and beyond. This wider legacy would primarily be knowledge of how
to develop capacity in the use of ET for teaching and learning; but also of the reality on
the ground: what use is being made of ET in African higher education institutions, and
what factors influence that.
30 The Facebook group can be found at www.facebook.com/groups/137699809709364.
This research and knowledge agenda was intended to operate both within the projects
and parallel to them. It was intended that the knowledge would be generated and shared
in two overlapping ways:
 Informally and reciprocally, through networks and communities; and
 More formally through the conduct and dissemination of research, including
evaluation.
The previous sub-section looked at networks and communities. This sub-section assesses
progress in research.
While much of the activity in the ETI – particularly the implementation projects – had
objectives that remained constant throughout, the research landscape in the programme
changed considerably. When the ETI was mooted, there was general awareness –
confirmed by desktop research – that relatively little was known about the development,
application or use of ET in sub-Saharan Africa, particularly outside South Africa. The ETI
proposal pointed out that the predominant research literature in ET stemmed from
Western countries and that ‘the particular contextual and cultural issues that impact
upon ICT-supported teaching and learning initiatives in sub-Saharan Africa remain
relatively unexplored and undocumented’.
The creation and dissemination of new knowledge about the application of ET in African
higher education institutions was an intrinsic part of the rationale for the ETI at its
inception. The intention was to comprehensively embed a research dimension in the
programme. This was captured in one of the original ETI objectives: to ‘Research and
report on educational technology activity in African universities by means of a long-term
project’.
Three strands to the proposed research dimension can be identified from the Part A
programme documents.
1. The first strand would be a set of research activities at the level of the institution –
some as a facet of implementation projects, others as independent projects, but
linked conceptually. Within this strand, there would be three types of research:
o Preliminary research to inform the development of a project and the strategies
adopted in its implementation;
o Action research during a project, which would document the cycles of planning
and implementation in order to provide direction for the next cycle, to capitalize
on successes and avoid errors or less desirable outcomes; and
o Evaluative research to provide a summative assessment of a project – both
process and results.
2. The second strand was to be a three-year research strategy designed to complement
and inform research work at institutional level, and to ensure that local-level research
activities were conceptualized and developed under the auspices of a common,
overarching framework. This framework would be created prior to commencement of
the projects, and would include the mapping of research questions. However, the
proposals for research also emphasized that the strategy should not impose or direct
research at the local level, and that the research questions – apart from an
overarching question31 – should emerge from engagement with the institutions in
Part A and be shaped by their priorities; and
3. The third strand was to be a collaborative one. The intention was to ‘encourage
research partnerships between the seven institutions or other relevant groups, and
promote collaborative projects that [would] benefit from such a joint endeavour’. The
aim was to ‘move away from the “individual champions” models to one that supports communities of practice, both within and across universities’.
31 The proposed question was: ‘How do higher education institutions in the PHEA ETI shape, enable, or constrain the uptake and implementation of educational technologies to address their teaching and learning challenges?’
This links to another of
the ETI’s objectives: ‘To promote collaborative knowledge creation and
dissemination’.
First strand
Most parts of the first strand were pursued in the ETI, although not consistently across
the institutions.
Independent research
As explained earlier in the report, of the 26 projects, four were independent research
projects (see Table 4). The two at UEW were firmly linked to the institution’s single
implementation project (UEW2). The ones at MAK and UCM were much less so, although
at MAK steps had been taken to have the research influence future ET policy and
strategy.
Table 4: The four independent research projects in the ETI

MAK2: Research on the Influence of Gender on Perceptions of Staff and Students on the Use of Educational Technology at Makerere University
Output in the Part B submission: Research report with recommendations for change and improvement in both policy and practice.
Status of output (as at December, 2013): Achieved. One paper has been presented at an international conference32 and two further publications are reported to be in the pipeline. Recommendations are before the Change Management Committee.

UCM5: Research into Adoption of e-Learning in Central and Northern Mozambique
Output in the Part B submission: Research report and dissemination.
Status of output (as at December, 2013): Data collected. A draft research paper has been submitted but not yet reviewed.

UEW1: Baseline Study on the Current State of Educational Technology at UEW
Output in the Part B submission: Research report.
Status of output (as at December, 2013): Final report received. Research undertaken too late to be used as baseline data for UEW2.

UEW3: Investigating How Academics/Students Use Web-based Approaches to Enhance Teaching and Learning
Output in the Part B submission: Research report.
Status of output (as at December, 2013): Final report received. Projects approaching finalization.
Only the project at MAK was completed at the time of the evaluation. The interim
evaluation in March, 2011, found that the other three independent research projects
were substantially behind schedule and facing significant challenges. The challenges had
largely been overcome at UEW by the end of 2012, and the projects were approaching
finalization. At UCM, the data collection had been completed, but there was less certainty
about finalization.
In sum, Table 5 tells us that, of the four independent research projects:
32 Ruth Nsibirano and Consolata Kabonesa: ‘Time to Change: Perceptions on Use of Educational Technology for Teaching in Makerere University’ – 18th Biennial ICSD Symposium, July, 2013, Kampala, Uganda.
 One substantially exceeded its targets; and
 Three partially achieved their targets.
The research projects present a mixed picture as far as quality is concerned. MAK’s
research outputs have been through several iterations involving internal and external
review. The quality of the final product is reflected partly in the fact that a paper had
been presented at an international conference (18th Biennial ICSD Symposium, July,
2013, Kampala, Uganda).
The research at UEW and UCM posed greater quality challenges, and their research
pathways were troubled. At UEW, the quantitative research for UEW1 was seen to be
flawed, prompting an increased reliance on qualitative data, an area where the
institution had even less experience. Intervention by senior management at a late stage
provided external expert support, which led to the completion of a report.
UCM5 also experienced difficulties in both execution and analysis. Behind these
difficulties lay an extreme reliance on the research project leader, who left UCM midway
through the ETI period. UCM did well to capacitate others to continue with the work, but
the report had not been drafted at the time of the evaluation. Report writing seems to be
the weakest aspect of research capacity in these institutions.
With research projects like these, use is more fluid than the development and
deployment of online courses. Reaching the right people through dissemination is the
first hurdle, and even if that is achieved, there remains the issue of whether people will
engage with and apply the knowledge generated. It is too early to answer the second
question, particularly with audiences outside the author institutions. However, at MAK
and UEW at least, some of the conditions seem to be in place for the research to be
taken up in policy and strategy. MAK’s research has been presented to the Change
Management Committee and Planning Department. The ETI programme coordinator is a
member of the Change Management Committee and is making efforts to promote the
findings of the research. At UEW, the VC is closely involved and is likely to take this
forward.
Preliminary research
It was hoped that the institutions would engage in preliminary research to provide them
with baseline data for the implementation projects and the ET strategies. UEW was the
only institution to conduct preliminary research (UEW1), and there the data collection
and analysis were completed 18 months behind schedule – too late to have any influence
on the implementation project (UEW2).
Action research
Action research is a process for continuous improvement involving the monitoring of,
and reflection on, the implementation of change. It was encouraged within the ETI
programme, for example, through the six-monthly reports. Workshops in action research
were offered, and in one case (UCM) taken up. The action research approach was
adopted in varying degrees by all the institutions. It was practised most visibly by the
programme coordinating teams at UCM, UDSM, UJ, and UI. UCM’s and UDSM’s highly
focused coordinating groups applied action research to the management of their projects
through everyday interactions. UJ and UI practised it through their coordinating groups,
which met less frequently.
Action research in the ETI has produced greater awareness of weaknesses and capacity
needs. Many people in the ETI teams became aware – and shared that awareness – that
learner-centred education, the effective application of ET, and some other key activities
such as research, project management, and monitoring and evaluation, are more difficult
than they had anticipated and that they had unmet capacity needs in most of these
areas. There is now much more realism about capacity among the ETI personnel, and
action research has contributed to that.
The UCM programme coordinators believe that the adoption of action research
contributed to a change of culture in parts of that institution. It identified strengths, as
well as weaknesses, thereby helping to build confidence.
UJ applied action research in particular to the e-learning fellowships, with each batch
benefiting from learning harvested from the previous one.
At UEW, the team transferred its learning from the ETI to the parallel Carnegie
Corporation programme, giving it a new direction with emphasis on capacity
development for e-learning.
Action research also took place within the ETI support team. There was an explicit
recognition that this programme could lead to a step-change in understanding about how
to support capacity development in ET. The team members carried their developing
knowledge from one institution to another and also to work outside.
Overall, the application of action research – structured or unstructured, conscious or
unconscious – has been a positive factor in the promotion of learning, innovation and
improvement in the ETI.
Evaluative research
Monitoring and evaluation are integral processes in any programme. If well designed and
implemented, they can lead to improvement in the course of a project and to wider
learning, as well as to accountability when the project draws to a close.
At the level of the programme as a whole, there has been substantial investment in
external evaluation. The main products have been two formative evaluations – in 2010
and 2011 – and the current summative report.
The devolved part of the monitoring and evaluation system contained three main
elements:
 Six-monthly reviews and reports at each institution, building on more frequent,
project-level reviews. The reports were to chart progress against activity and output
milestones; and also to highlight challenges experienced in the reporting period;
 Summative evaluations of each project; and
 A summative evaluation of the programme at each institution.
A detailed guide was provided for the summative evaluations, including model
questionnaires – two of which were made available as online surveys – and discussion
topic guides for use in facilitated groups (see Annex E).
The six-monthly reporting was consistent. It focused chiefly on deliverables; the sections
on challenges were often well constructed although short on detail. The interim
evaluation (March, 2011) signalled that ‘the summative exercise in 2012 will need more
systematic and penetrating evaluative research by the institutional teams’.
All the institutions included evaluations in their project plans. However, most of these
project evaluations had not been completed six months before the end of the ETI
extension, or focused predominantly on the quantity of deliverables. None had made use
of the model surveys. At the end of June, 2013, one programme-level summative
evaluation report (MAK’s) had been completed. Four more had been received by the end
of November.
The lack of progress on evaluation at the institutions is partly explained by delays in the
project timetables, which left little time for evaluative work. In some cases the projects
themselves were not complete. However, it also points to weak evaluation capacity
among the teams, reflecting weak research capacity in general. Discussions with the
institutions during the final evaluation visits suggest weak appreciation of the rationale
for summative evaluation, especially of outcomes. This may have been fuelled by the
lack of clearly defined outcomes in project plans and in the institution-level logical
frameworks.
There are two areas where useful evaluative research has been produced by the
institutions. UEW3 evaluated the single implementation project around Moodle and
online courses. The research was originally very limited in scope – based on two pilot
courses, principally relying on diaries kept by the course developers. These diaries were not
completed as planned; instead, the research focus was broadened as more courses
were developed, and the methodology switched mainly to focus group
discussions. The report – ‘Using Moodle for Teaching and Learning at University of
Education, Winneba’ – should make a useful contribution to policy, strategy and practice
at UEW because it has an engaged stakeholder audience. It may also offer learning to
wider groups.
The other significant evaluative research by the institutions was the case studies. The
initiative emanated from the 2011 inter-institutional workshop, where several efforts
were made to boost the research activity around ET.
Eight topics were originally proposed,33 with the intention of publishing the case studies
as a compilation. The initiative was coordinated and supported by members of the
support team.
By the end of December, 2013, nine case studies34 had been completed: one from MAK,
two from UDSM, one from UEW, two from UI, a joint case study from KU and UDSM, and
two by members of the ETI support team. The case studies have been compiled for
publication by Saide into a booklet titled: Unlocking the Potential of ICT in Higher
Education: Case studies of the Educational Technology Initiatives at African Universities.
The case study titles were as follows:
 ‘The Design and Feasibility of Adoption of a Special Purpose e-Portfolio in an African
University: The case of Makerere University’;
 ‘Developing and Using Animations and Simulations to Teach Computer Science
Courses: The case of University of Dar es Salaam’;
 ‘The Experience of Course Migration from the Blackboard to the Moodle LMS: A case
study from the University of Dar es Salaam’;
 ‘Using the Moodle Learning Management System for Teaching and Learning at the
University of Education, Winneba’;
 ‘Designing and Developing e-Content in Higher Education: The University of Ibadan
model’;
 ‘Students’ Acceptance of Mobile Phones for Distance Learning Tutorials: A case study
of the University of Ibadan’;
 ‘Determinants of Successful Diffusion of Technology Innovations in Higher Education
Institutions in Africa: A social network approach’;
 ‘An Investigation of the Deployment of the Moodle Virtual Learning Environment at
Eight African Universities’; and
33 One from MAK, two from UCM, two from UDSM, two from UI, and one from UJ.
34 From a slightly different breakdown of contributors than originally planned.
 ‘Embedding Quality Improvement in Online Courses: A case study of seven African universities’.
In several cases, a version of the case study has also been submitted to an international,
peer-reviewed journal for publication and has either been published or is under review.35
The two studies at UI have also been presented at conferences and seminars.
In addition, case study research was conducted on the e-learning fellowships at UJ and
presented at e/merge 2012.36
UCM completed a case study ‘Taking Education to the People (TETTP) – Models and
Practices Used by Catholic University of Mozambique in its Distance Education Program’
and presented it at the Distance Education and Teachers’ Training in Africa (DETA) 2013
conference. At the time of the final evaluation the case study was being prepared for
formal publication. The university was also working on a case study focusing on the use
of technology within its CED (Centre for Distance Education).
With three exceptions, the case studies had a single-project focus. There is a
consensus that these case studies and the work on e-learning fellowships represent good
examples of single-project studies, documenting and reflecting on the project processes,
their outputs and, to a limited extent, their early outcomes. These studies are providing
a profile for ET at the institutions, and may, through wider dissemination, attract further
attention. Because of their high contextual specificity, they are unlikely to further
understanding of ET processes more generally.
The case studies have helped develop the capacity – particularly in research writing – of
the contributors. The process took significantly more time and support than was
originally planned.
Second strand
The second strand of the research dimension – a three-year research strategy with an
overarching framework and research question – did not materialize as planned. This was
predominantly an ETI Part A issue. The methodology – and even the rationale – of the
overarching framework and research question were not fully resolved in Part A, mainly
because of the limited interest in the research agenda proposed by UCT CET that was apparent among the institutional teams.
35 Joel S. Mtebe and Hashim W. Twaakyondo: ‘Developing and Using Animations and Simulations to Teach Computer Science Courses’ – Proceedings of International Conference on e-Learning and e-Technologies in Education (ICEEE), 2012: 240–246; Joel S. Mtebe and Hashim Twaakyondo: ‘Are Animations Effective Tools for Teaching Computer Science Courses in Developing Countries? The case of University of Dar es Salaam’ – International Journal of Digital Information and Wireless Communications (IJDIWC) 2(2), April, 2012: 202–207; Hashim Twaakyondo and Mulembwa Munaku: ‘Experience of Course Migration from Blackboard to Moodle LMS’ – International Journal of Computing and ICT Research, 6(2), December, 2012: 33–45; Ayotola Aremu, Olaosebikan Fakolujo and Ayodeji Oluleye: ‘Designing and Developing e-Content in Higher Education’ – under review at Research in Learning Technology: The Journal for the Association of Learning Technology (ALT-J); John Kandiri and Joel Mtebe: ‘Determinants of the Success of Diffusion of Technology Innovations in Higher Education Institutions in Africa: A Social Network Approach’ – under review at The Electronic Journal of Information Systems in Developing Countries (EJISDC); Brenda Mallinson and Greig Krull: ‘An Investigation of the Deployment of the Moodle Virtual Learning Environment at Eight African Universities’ – under review at International Journal of Education and Development using Information and Communication Technology (IJEDICT); and Andrew Mwanika, Ian Munabi and Tito Okumu: ‘The Design and Feasibility of Adoption of a Special Purpose e-Portfolio in an African University’ – under review at International Journal of Education and Development Using ICT (IJEDICT).
36 e/merge 2012 defined itself as ‘the fourth virtual conference on educational technology in Africa’.
With few research projects being developed by the
the institutions and no strong interest emerging in wider research questions, the
foundation for an overarching framework was lacking. The interim evaluation concluded
that there was insufficient interest in the overarching research programme for there to
be any value in reviving it in what remained of the ETI.
Despite the lack of action on this strand, there remained pockets of interest, in the
institutions, in research beyond the four independent projects. The interest grew as the
projects progressed. Subject matter for research became more tangible and the desire
for recognition grew.
This interest was harnessed not only by the case studies but also by several other
initiatives. Most of these were independent pieces of work by individuals or small groups
at single institutions. They are described below in the sub-section Other research.
Third strand
One initiative, led by a member of the ETI support team, succeeded in bringing several
institutions together in a collaborative research venture. This was the only substantive
output under the proposed third strand. The multi-site research initiative was one of
several collaborative activities proposed by the support team at the 2011 inter-institutional workshop. Unlike the case studies, it was not evaluative research focused
exclusively on ETI project work, but had the wider objective of ‘researching factors
influencing the uptake of technology for teaching, learning and assessment in the seven
institutions’.
Initially, all institutions showed an interest and the principal investigators all initiated the
research project in their institutions. However, because of capacity constraints, two
institutions were unable to complete their projects. The completed multi-site research
report – titled ‘Factors Influencing the Uptake of Technology for Teaching, Learning and
Assessment at Five African Universities’ – is therefore based on results from five
universities: KU, MAK, UDSM, UEW, and UI.
ETI team members at these five institutions engaged in a range of research activities:
research proposal development, primary data collection, analysis, and reporting. There
has been collaboration, leading to some common research questions and data coding.
The multi-site research initiative has been strongly facilitated by the ETI support team
member, who brought the separate outputs together into a single report, including also
contributions from UJ and UCM, which had already carried out independent research
projects.
Progress was much slower than anticipated. However, the stakeholders at KU, MAK, and
UI who were interviewed by the evaluator were satisfied that this had been a good
opportunity for capacity development and a rare example of inter-institutional
collaboration.
The significance of the multi-site research is that it contributes to knowledge of
technology uptake at seven African universities; institutional policy, individual practice,
and pedagogy within these institutions may in turn be informed by the findings of the
institutional studies.
Other research
The most substantial additional piece of research was another case study at UDSM titled
‘The Use of Technologies to Support Teaching and Learning: The case of UDSM’. It was
designed to contribute to the multi-site research report ‘Factors Influencing the Uptake
of Technology for Teaching, Learning and Assessment at Five African Universities’
(mentioned above). This work grew from an earlier initiative by the ETI support team
aimed at producing case study-type work based on ETI projects for a special edition of
Distance Education,37 the Journal of the Open and Distance Learning Association of
Australia. Two papers – not as many as envisaged – by ETI participating institution
members and others were included:
 S. E. Adewumi, J. Dooga, D. C. J. Dakas, D. I. Yakmut and T. J. Mafwil: ‘The e-Learning Fellowship Program at the University of Jos, Nigeria’ – Distance Education, 32(2), 261–268; and
 Joel S. Mtebe, Hilary Dachi and Christina Raphael: ‘Integrating ICT into Teaching and Learning at the University of Dar es Salaam’ – Distance Education, 32(2), 289–294.
Papers stemming from ETI project work were also presented by ETI team members at
numerous conferences, including the 5th, 6th, 7th and 8th e-Learning Africa conferences;
11th International Educational Technology Conference; Euro-Africa ICT Research Forum;
the online conference e/merge 2012: ‘Open to Change’; First African School on Internet
Governance; 18th Biennial ICSD Symposium; 2nd e-Learning Update Conference;
Distance Education and Teachers’ Training in Africa, DETA, 2013 Conference; and 8th
International Conference on e-Learning.
The ETI has provided expert support, and in one case funding, for PhD research by ETI
team members at KU, MAK, and UDSM.
Capacity building for research
Capacity building for research has not been as systematic or substantial as for the
development of teaching and learning products. It has largely responded to expressed
demand. Formal training included:
 Workshops on action research and research methods at UCM; and
 Training at MAK for twelve research assistants in MAK2.
Other research capacity development has tended to be one to one or simply learning by
doing. The multi-site research initiative also provided some capacity development
through collaboration.
The ETI has not left a large footprint on research capacity at these institutions. However,
a small number of individuals – perhaps around ten in all – have acquired valuable skills
and confidence mainly through their work and the support they were given by peers and
the ETI support team.
Other outcomes
Four of the five expected outcomes, identified for the ETI early in the programme, have
been addressed above. An institutional collaboration exercise identified several other
possible outcomes, some of which have been covered alongside the expected outcomes.
A few of these outcomes stand apart and are assessed below, along with one of the
original ETI strategic objectives not covered so far.
Core institutional systems
One of the ETI strategic objectives was: ‘Get core institutional systems to work so that
they support teaching and learning more directly.’
37 Special Issue: Distance Education for Empowerment and Development in Africa. Volume 32, Issue 2 (2011) – Guest Editors: Neil Butcher, Colin Latchem, Monica Mawoyo and Lisbeth Levey.
To the extent that this objective was aimed primarily at the institutions’ LMSs, it can be
said to have been achieved. Considerable attention was paid to installing and/or
improving Moodle systems (whether pre-installed or not) at all the institutions.
The ETI support team commissioned an ‘audit’ of Moodle installations at a sample of the
institutions. It discovered inconsistencies in the operation of Moodle, stemming mainly
from a lack of – or lack of access to – policies and guidelines. The audit led to the
production of guidelines, which were distributed to the participating institutions. By the
time of the evaluation, the programme coordinating teams at all the institutions felt that
Moodle was working well for them.
However, the wording of the objective suggests that the LMSs were not the only target of
this objective. It seems that other systems such as student registration, human
resources and budgeting were originally intended to be supported by the ETI. In the end,
this did not materialize except in one minor instance: KU5.
KU5 was designed to produce a functional specification and a request for proposals for a
harmonized electronic executive management information system. This low-budget
project had delivered the functional specification by the time of the evaluation. However,
the survey on which it was based had attracted a very low response, and any supplier
would have had to undertake another round of user research. No progress had been
made in management take-up of the KU5 functional specification (over a year after it
was produced), and the request for proposals had not materialized. This project cannot
therefore yet be judged a success.
Other capacity development
Capacity development for teaching and learning, and for research have been addressed
above.
There were no other areas where formal training was given. However, capacity is
developed through practice, especially if accompanied by feedback. Programme
coordinating and project team members reported that their project planning and
management skills had been developed in this way. A lack of project planning capacity
emerged as an issue in Part A of the programme. Most of the institutional teams needed
and received considerable hands-on help with this from the support team. Project
management – which involves, for example, assigning roles, supporting team members,
monitoring performance, managing stakeholder relationships, controlling finances, and
addressing the unexpected – was also a challenge for some, and the support team
helped with this too.
All the teams said they now felt more confident in these areas – an important legacy that
the ETI support model has left.
Increased access
Increased access to teaching and learning was not an express objective of the ETI, and it
has not happened on a large scale. Most of the applications of ET are for blended courses
or existing distance education courses. However, there are a few exceptions:
 At UCM, the ETI has led to new distance education courses aimed at under-served
markets. This is also the case at KU with its Executive MBA;
 At UCM, the numbers that can be accommodated by existing distance education
courses – there has been an expansion from 2,500 to 8,000 in the lifetime of the ETI
– have increased partly because of efficiencies from digitization (UCM2); and
 At UI, more people have access to educational content through the radio programmes developed under UI4, although these are mostly extra-mural listeners – people not enrolled on UI’s courses.
Increased awareness of and demand for ET
The ETI was not intended to take place behind closed doors. It was implicit in the
programme that the projects should not just develop capacity and deliver ET-based
teaching and learning products in a number of defined areas, but also demonstrate to wider
audiences – staff and students alike – the potential of ET, thereby promoting awareness
and demand.
There is qualitative evidence of this effect, although to different degrees and in different
ways across the programme.
The UDSM Centre for Virtual Learning has used a newsletter, funded from its ETI allocation, to
spearhead its advocacy of ET. There had been three editions of the newsletter at the
time of writing the current evaluation report.
The UCM team reported that ET had become a regular agenda item at the university’s
Rectorate Council – a contrast with the pre-ETI period when ET hardly figured. Student
representatives, at their most recent six-monthly meeting with the rector, placed ET
issues at the top of their ‘shopping list’. The demonstration effect of the ETI had been
mostly felt in Beira, where it acquired a high profile, but staff at other centres had also
requested help in developing online courses.
The programme coordinating team at UI has been the most systematic of the institutions
in its advocacy work. It conducted a thorough awareness campaign at the beginning of
the ETI, targeting deans and other senior staff. In 2012, it delivered a series of
presentations, faculty by faculty, including demonstrations of new online courses.
The team at UJ reported that few staff now question the rationale for ET – a clear
indicator of progress, which they attribute to the ETI and Carnegie Corporation
programmes. However they were not satisfied with the number who were actively
engaging with ET. They felt that appreciation did not run very deep in most cases.
At MAK, the programme coordinating team found it difficult to have an impact on
attitudes to ET. The university is large and devolved, the team is small, and it reported
that there was considerable scepticism among staff and even students about the value of
investing in ET.
At UEW, wider awareness has not been made a high priority. The institution intended to
engage in promotional activity in 2013.
Individual projects at KU – particularly KU1 and KU3 – engaged in advocacy and
awareness activity, but this has not happened at the programme level. Project personnel
reported that there was little awareness of the ETI in the institution at large.
Enhanced institutional reputation
A successful programme can enhance an institution’s reputation. This can happen
through deliberate marketing of the achievements and their impact on the institution –
including dissemination of case studies – or more organically, by word of mouth. The
effect is not always easy to measure.
In the current evaluation, programme coordinating teams and senior managers were
asked for anecdotal evidence of enhanced reputation. Evidence was generally scarce,
although most institutions had something to report.
UCM had the most to report, although this may partly be a function of the team’s
heightened appreciation of the importance of reputational benefits for their private
institution, which relies on its performance in the educational market place. Two
examples stand out:
 The Mozambique National Institute for Distance Education has encouraged all
distance education providers to engage with UCM to see how it is applying ET to
distance education; and
 UCM has been invited to take part in a national project for capacity building in ICT for
higher education.
UDSM was invited to present its experience to stakeholders of a new World Bank-funded
project on e-learning in five Tanzanian universities. It was also invited to advise on
multimedia as the project moved forward. UDSM distributes its ET newsletter outside the
university, and this may have been a factor.
The UJ team believes that its university won the right to host an international ET
conference38 in 2010 because of a reputation for innovation earned through the Carnegie
Corporation and ETI programmes.
At UI, ETI team members are acting as mentors to project members in a World Bank-funded programme – Step-B – for girls’ education in science. The UI team also expects
its radio broadcasting to promote its reputation for innovation.
MAK’s programme coordinator has participated in several external conferences,
advocating ETI-inspired approaches to ET.
UEW believes it is now the fastest mover in ET among Ghanaian higher education
institutions, although it has no evidence yet that this is appreciated outside the
institution.
38 Conference on the Application of Information and Communication Technologies to Teaching, Research and Administration – it is normally held at Obafemi Awolowo University, in Ile-Ife, Nigeria.
Success Factors and Constraints
This section identifies and assesses the main factors that have helped and hindered the
ETI in achieving its objectives. Some factors are intrinsic to the programme and some
extrinsic. Most relate to the institutions and their immediate environments. There are,
however, factors that relate to the programme as a whole. These concern the
programme design and the work of the ETI support team.
Programme design, management, and facilitation
The ETI as a whole was designed, managed, and facilitated by a team of education
specialists drawn from or contracted by Saide, in association with UCT CET. Neil Butcher
& Associates was contracted by Saide to manage the programme. During Part B, two
educational technology specialists (one from Saide and one from Neil Butcher &
Associates) were assigned to give comprehensive support to the seven institutions,
providing workshops, monitoring progress, and liaising with other sources of support
where necessary. They, in turn, were supported by other Saide staff members and
contract staff. Support in research was provided on a more ad hoc basis.
The participating institutions were given a budget ceiling and, within this, identified the
target areas for their projects. This led to considerable diversity. The shaping of the
projects – particularly those involving Moodle and online courses – was actively
facilitated by the support team, leading to a degree of harmonization, but it is clear from
the diversity of project documents that there was no blueprint. This was seen as a
positive feature of the programme.
Every institutional programme coordinating team felt that the ETI struck a good balance
between devolved project identification and management – local empowerment – and
direction from the support team. Several contrasted the ETI model with some other
externally funded projects, where the local actors had less control over the parameters
of the projects, and yet also less support – beyond funding – in carrying them out.
The ETI as a whole had high-level objectives but did not pre-define how these objectives
should be pursued. The openness of the agenda gave the institutions room to tailor the
projects to their specific needs and learning curves. The support team provided help by
e-mail, phone and, less frequently, face-to-face visits. There were geographical, financial
and availability limitations on the frequency of visits from the support team. However,
none of the coordinating teams expressed dissatisfaction with the availability of the
support team.
Project planning followed an emergent path. The project teams were not committed to
inflexible logical frameworks. Changes in deliverables were agreed if they could be
justified and were in the spirit of the programme at that institution. Substantial mid-programme changes took place at UDSM (both projects), UI (UI3 and UI4) and UCM
(UCM3, UCM4, and UCM5).
The institutions highly appreciated having a dedicated ET specialist as part of the
structure – someone who understood their programme environment, maintained
regular communication, and acted as an advocate when necessary and appropriate.
They felt the support team was on their side. One programme coordinator said it was
like ‘having a brother’ to help them. This relationship, several said, had the effect of
strengthening their sense of accountability for reporting progress and delivering the
outputs.
The funders – the PHEA consortium – had close involvement in Part A of the ETI. They
approved individual projects and subjected some to further scrutiny by external
consultants. This approach delayed the start of two projects; and would have been
difficult to reconcile with the role of the ETI support team if it had continued in that way.
However, after the start of Part B, the funders played a much lower-key role, especially
after the closure of the PHEA consortium office in 2011. The support team reported to
them annually and negotiated high-level changes such as the no-cost extension; but
agreement to changes at project level was devolved to the support team. This enabled
quick and informed decisions.
Other aspects of programme design made a difference according to either the
institutions or the support team:
• There was a relatively long time frame,39 which allowed for overruns and changes of scope;
• The ETI support team had a large and flexible enough central budget to provide additional capacity development and support visits where necessary;
• The hands-on, real-time approach to evaluation was found to be helpful, with the evaluator regarded as someone who understood the institutions, and who acted as a formative resource and not an ‘inspector’. However, the evaluator was based in the UK, and his availability for face-to-face support was restricted to his three- to five-day visits (three per institution) and to the three inter-institutional workshops. While this represented more engagement than in a typical evaluation programme of work, it was a limiting factor in addressing the major evaluation capacity challenge that became apparent in the later stages of programme implementation;
• The seven institutions worked to the same high-level agenda and time frame, providing an element of competition that acted as an incentive, particularly towards the end of the programme. Some members of the teams, though, also identified this as a factor that worked against collaboration;
• After an initial upfront payment to seed project activities, funds were released against planned deliverables and not in advance. This was effective as an accountability mechanism, with minimal disadvantages. It rarely led to problematic delays, partly because some institutions were able to find internal funding for urgent or supplementary interventions when ETI payments could not be released. Institutions also often requested, and had approved, payments outside the normal payment cycles. As has been reported above, there was some flexibility in the deliverables that could be funded, although these changes had to be negotiated with the support team; and
• A condition of involvement in the ETI was that apex management should be involved in the planning of the programme at institutional level at the outset. This lent profile to the programme at the institutions and gave the participants a sense that they were in the spotlight, particularly in the early and late stages of the programme when apex management attention was greatest.
Financial resources
It is likely that some institutions would have accepted and found ways to spend more
funding than was available. There were limits to their absorptive capacity, however, and
as most struggled to deliver their outputs, the ETI budgets seem to have been broadly in
line with needs.
39 The only criticism of the length of the time frame came from MAK, which was the first to be visited in Part A. Part A, for that institution, lasted 15 months, which, according to feedback, led to about half of the personnel that had originally been attracted to the programme dropping out of the group.
Very few teams referred to problems due to shortage of funds. Where they did, it was for
specific, unforeseen activities such as the retreats at UEW and UI. In many cases,
internal funds were found for these activities.
Most projects were, by nature, demonstration projects. With interventions like these, it is important to control their size so that they can be more easily evaluated and improved before wider roll-out.
The volume of funding available for central programme management and other
consultant activity – independent of the issue of availability of support personnel – was
mentioned by no-one as a limiting factor.
ETI support team
ETI support came chiefly in the following forms:
• A watching brief and occasional programme-wide visits;
• Technical workshops and advice relating to specific projects;
• The provision of models and tools for e-course structure and for quality assurance; and
• Follow-up support by e-mail and phone.
The most visible interventions were workshops. The biggest cluster of workshops was for
online course development. Workshops were offered but not imposed. Institutions were
able to draw them down as either single interventions or a series.
There was universal satisfaction with the relevance and quality of the online course
development workshops. Some were not delivered in optimal circumstances – for
example, the length was reduced at the institution’s request; there were problems with
technical set-up; or they suffered from poor or inappropriate attendance. Nonetheless,
these setbacks do not detract from the intrinsic quality of the interventions. There was a
consensus that the early workshops, in particular, were valuable – they were essential
launching pads for the projects.
The models for online course structure and for quality assurance were of critical
importance. Local customization of these models was encouraged, leading to increased
ownership.
The typical workshop addressed both learner-centred teaching and more ET-specific
matters such as instructional design of online courses and use of Moodle. In the interim
evaluation, there were some criticisms of what was seen as the excessive generality of
the initial workshop format; but the teams increasingly acknowledged that the technical
content needed to be linked to pedagogical and contextual issues such as stakeholder
management.
This was a very big agenda and it is not surprising that some projects did not develop
rapid momentum after the delivery of the workshops. A few hardly moved at all until
there was another injection of external facilitation. The reasons for this early variation in
performance do not seem to be related to the nature of the support provided, which was
similar in design across the programme. It is more plausible that it related to local
factors – for example, incentivization or availability of key personnel (explored below). It
is possible, though not certain, that deeper situation analysis and needs assessment at
each institution in Part A or early in Part B would have uncovered some of these issues in
advance.
There were ETI support team workshops in other thematic areas. These experienced
more challenges than the typical Moodle/course development workshops:
• Workshops in multimedia were provided to three institutions, but were initially frustrated by a lack of clarity about their target group;
• There were workshops for Moodle technicians, but these workshops were twice hampered by technical set-up problems;
• The OER workshop at UCM was not productive because of the poor attendance and lack of institutional follow-through; and
• A research methodology workshop at UEW failed to generate traction because of the lack of experience and expertise of those who attended.
The model of support for the research projects was not optimal. Support for research
was somewhat ad hoc, particularly in the first year of Part B. This may have been a
contributory factor in the chronic research challenges faced by UEW and UCM. Support
for the multi-site research initiative since 2011 has been more consistent, although the
pace of progress has also been slow due mainly to other factors.
One aspect of support that enjoyed universal approval was the support team’s
administrative services. They were described as responsive, appropriately persistent,
friendly, and personal.
Inter-institutional workshops
The inter-institutional workshops were a substantial investment. The stakeholder
consensus is that they were money and time well spent. The first workshop took place
on the cusp of Part B, in February, 2010. There is no doubt that it served its primary
purpose of contextualizing the individual programmes and projects – giving the
institutions a perspective on what they were proposing to do and giving them a sense of
belonging to a collective effort. Some of the content sessions were said to have been
useful, particularly the one on OER.
It has been reported above that the first inter-institutional workshop did not make a
noticeable difference in two areas that were integral to the original ETI concept: the
proposed overarching research project, and inter-institutional networking and
collaboration.
The second inter-institutional workshop took place in March, 2011. Judging from the exit
questionnaires, ad hoc feedback from participants, and direct observation, the event
added greater value than the first. The main factor in this is probably timing. Participants
brought to the workshop at least a year of experience of working with ETI projects. They
knew much more about their strengths and weaknesses, and were able to share their
experience to date, and to engage more meaningfully in discussions about what was
needed in the final phase. As the participant base was very stable, they also knew one
another as individuals, which led to increasingly fluid interchange of ideas.
The final inter-institutional workshop, in March, 2012, reflected a confident mood among
the teams. There was clear satisfaction that most projects were drawing to a satisfactory
conclusion, and that no institution had dropped out or been left behind. The workshop
generated detailed planning for the final months of the programme.
Institutional leadership
Institutional leadership has the potential to affect a programme like the PHEA ETI in two
main ways.
One way is the general effectiveness of leadership. A stable, respected leadership that
leads and manages the institution well, and encourages innovation and achievement, is
more likely to provide a conducive environment for demanding programmes like the ETI
than one that lacks these qualities.
It would be invidious for this evaluation to try to assess the effectiveness of institutional
leadership generally in the seven participating institutions. As far as stability is
concerned, most institutions had the same VC40 throughout the programme; and where
the leadership changed, it was usually at the end of a full term of office. The
exception was MAK, which had three VCs during the programme, two of them in an
acting capacity.
The other way in which institutional leadership can affect a programme is in leadership’s
behaviour towards the programme itself and its domain. People follow the behavioural
cues of leaders. If they are seen to favour – and preferably get involved in – an area like
ET, it is both encouraging for the programme members and an inducement for others to
join or pay attention to it. It is even better if this support materializes as policy,
strategy, and favourable structures and processes.
The objective of apex management involvement at the start of the programme was
achieved. There was evidence of dialogue with a member of apex management at each
institution; and at several of them a member of apex management was present at some
time during the Part A workshops. The memoranda of agreement were signed by the
VCs, and accountability was pursued through this channel when needed.
Continuing apex management involvement varied across the institutions and over time
as the ETI rolled out. Unsurprisingly, there was no institution where it was hands-on
throughout the programme. Nevertheless, it was clear from the evaluator’s annual visits
that the ETI was on the radar of at least one member of apex management at every
stage. Considering the relatively modest external funding, this interest is an indicator of
apex management’s buy-in and commitment to ET, reflecting both the relevance of the
latter to apex management’s institutional strategies and the status of the ETI leadership
at each institution.
VC involvement in Part A of the programme was most intense at KU. The VC was a
member of a project team and was the chair of the ETI governance committee. The
weight and diversity of the VC’s responsibilities, especially at such a rapidly developing
institution, meant that the initial level of involvement was not sustained throughout.
In the later stages of the ETI, UEW’s VC was the most actively involved. This was mainly
in response to difficulties the programme was facing at UEW. The VC intervened directly
to replace some of the team members, to fund and attend the pivotal Kumasi e-content
development retreat for UEW2, and to challenge and support the quality of the draft
research reports.
MAK provides an interesting example of the changing effects of institutional leadership.
At the end of Part A, there was concern that the instability of the leadership environment
might derail the ETI programme. Institutional politics were preoccupying the academic
staff. Approvals for important proposals were difficult to get. A change of leadership at
MAK then turned a negative factor into a positive one as the new VC lent his support to
the goals of the programme. As a former dean of computer sciences, he made ICT
development – including the application of ET – a high priority. However, this
momentum was not sustained when instability entered the leadership domain again in
2012.
40 At UCM, the head of the institution has the title of rector and the next level of leadership are vice-rectors. The current evaluation uses the terms VC and DVC to refer to apex management (the top three to four executives).
Although direct VC involvement has been the exception, at the DVC level there was
regular dialogue with the programme at most institutions. At UCM, both vice-rectors in
2012 were using ET in their teaching.
Involvement at the next level – college principals, deputy principals, and deans – is
harder to map. Everywhere there were examples of deans’ involvement. At KU and UCM,
the deans of the Business School and of Economics were actively involved in ETI projects
(KU3 and UCM2). At MAK, a member of the ETI coordinating team was deputy principal
of the College of Education for much of the programme, although her involvement was
not as regular as before she acquired that position. The most systematic and widespread
involvement of deans and heads of department was, as was reported above, at UI where
the programme team conducted two rounds of extensive stakeholder consultation.
Although levels of apex and other senior management involvement varied, they were
probably high for a programme of this size. Most stakeholders regard it as a factor in the
programme’s success. Perhaps the clearest illustration of this is the turnaround in the
latter part of the programme at UEW.
Institutional structures, processes, and culture
This set of factors is a complex one. A university has many structures, and even more
processes, that might affect a programme like the ETI, which cuts across faculty
boundaries. Universities have complex cultures that affect the way people behave within
and towards the programme.
Probably the most potent area in all of this is the area of reward and recognition, both
formal and informal. A programme like the ETI is more likely to prosper where the
institution’s reward system supports the investment of time in its activities and
recognizes its achievements. This can operate informally – the attention of senior
management has already been mentioned – but it is more likely to function if there are
institutionalized incentives for this type of work and achievement.
Universities have traditionally oriented their reward and recognition systems towards
published research. This was the case at all ETI participating institutions. None had an
institution-wide mechanism in place for rewarding either course development or effective
teaching. This was mentioned by all the programme coordinating teams as a constraint
on the ETI. All had experienced difficulties in motivating people, particularly in the early
part of the programme when the learning curve was at its steepest.
There was recognition in the last months of the programme in most of the institutions
that the basis of reward needed to be broadened. UI, UDSM, and UEW were actively
discussing it or already implementing change. This is a positive factor for the future,
even if the ETI itself did not benefit.
Informally the picture is more nuanced. Irrespective of the formal reward system, there
may be kudos in being associated with a particular initiative, particularly if it has a high
profile. The ETI had this potential, especially among younger members of staff, such as at
UDSM. It may also have appealed to people looking for work opportunities outside the
university.
Some programme coordinating teams expressed surprise at the high levels of self-motivation. Clearly this did not sustain activity throughout; all the institutions
experienced periods when progress was slow. The developers of online courses at all the
institutions eventually reached the point, though, where the learning curve began to
flatten and the benefits of ET – including time saved – became more tangible.
Although there were no institutional financial rewards for ETI work, staff were
compensated financially through the project in various ways, especially for weekend
work. UI introduced a team rewards package for results. Payment for work was a
contested aspect of the programme. Some beneficiaries voiced concern about the
message it sent. The compensation rates, however, were much lower than for some
other externally funded programmes such as the one at MAK sponsored by the Carnegie
Corporation. Given the amount of uncompensated work that was necessary, it is unlikely
that people would have participated for long for these financial rewards alone.
Procurement can seriously hamper progress in projects. It was not a major issue in most
participating institutions, partly because there was not much expensive equipment, other
hardware, or long-term consultancy involved. The only institution that had significant
challenges with ETI-funded procurement was UEW, where the nationally determined
procurement process had to be repeated because of a cost underestimate in the original
specification. KU experienced minor delays early in its digitization projects because the
sheer volume of procurement requests was creating bottlenecks.
UI’s tele-classroom project involved the purchase of costly and complex equipment,
which was to be funded by the university itself. There were no fast-track procedures at
UI for centrally funded procurement on this scale and the timeline proved to be too long
for the project to go ahead as planned under the ETI.
Other institutions faced due diligence issues about the engagement of particular
consultants, but found ways to accommodate them.
Consultation processes about change – such as new curricula – took longer at some
institutions than others. This is partly a reflection of institutional culture. UI is probably
the best example of a consultative/consensus-oriented institution where change takes
longer to initiate. In UI’s case, this was a positive factor, ensuring wide acceptance of
change.
UCM is at the other end of the spectrum. As a private university, it has to be
entrepreneurial to compete with the subsidized state sector. Decisions are made very
quickly, sometimes without a clear trail. This suits an innovative programme, but it can
have a downside in perceptions of exclusion in other parts of the institution. UCM needed
to exercise caution in this area.
UJ, for historical reasons, has a culture that implicitly encourages innovation and radical
solutions. This, however, does not guarantee acceptance of rapid change, as the ETI team
experienced.
Rapid change outside the programme can be a challenge. It diverts attention and can be
disruptive in other ways. MAK as an institution underwent the greatest change during
the ETI: change in leadership, but also in structure as faculties were transformed into
colleges, involving changes in personnel, procedures and other areas. This environment
was not conducive to traction in the ETI.
ICT infrastructure and electricity
ICT infrastructure and electricity at the institutions were far from ideal for a programme
such as the ETI. Slow Internet speeds, network and power outages, and shortage of
equipment for staff and students alike are discouraging, in particular for people trying to
develop and deploy online courses.
Internet speeds and cost improved significantly at all the institutions during the ETI,
thanks mainly to completion of various undersea cable projects. UCM, for example,
networked all of its Beira centres with fibre optic cable. UDSM switched to fibre optic
cable in 2011. Most institutions began to provide Wi-Fi access on their main campuses
during the programme, reducing dependence on desktops in computer laboratories.
These developments were not funded by the ETI.
The main problems in the later stage of the ETI were electricity and network instability.
Daily power outages were a serious obstacle to continuity of work at UI and UJ in
particular. Generators did not always provide a satisfactory alternative. At other
institutions, outages were more likely to occur weekly than daily.
External disruption
Disruptions to people’s working lives are bad for a programme like the ETI. Work loses its momentum; plans have to be revised; people are distracted, and motivation and morale are weakened; and critical windows of opportunity for research are lost.
The ETI work at several institutions was disrupted by factors other than power and ICT
outages. The worst example of this was at UJ, which was affected by repeated bouts of
insecurity in the city and its surrounding areas. The university was shut down on
numerous occasions; but, even when it was open, people’s attendance was affected by
fear. This seriously disrupted project work, although the programme coordinating team
did its best to meet off campus to maintain a grip on the programme. Inevitably, though,
this put a brake on progress.
Strikes were another disruptive factor. This particularly affected MAK where, for
example, the piloting of the e-portfolios was seriously delayed by strike action. The
research project was also affected because the team was prevented by strike action from
launching a survey at a critical point in the project’s schedule.
ETI governance
Although it was not a formal requirement, the institutions were encouraged to put in
place some sort of governance structure and process for the ETI. This was reflected in
the ET strategy template. Governance should ensure accountability but also
effectiveness, through monitoring, advice and, where necessary, sanctions.
Arrangements for governance varied widely. UDSM probably had the most engaged
governance arrangement, operating consistently throughout the programme. Its Project
Steering Committee was chaired by the dean to whom the Centre for Virtual Learning
reported; and other members of the Project Steering Committee broadly reflected the
management structure in the areas of concentration in the ETI. In this way, efforts to secure buy-in and decisions were streamlined. The Project Steering Committee met
regularly, requiring reports from the programme team. These reports were presented in
person by the institutional programme coordinator, with other programme participants
attending when appropriate. Reporting to the steering committee created an obligation
to reflect. The committee in turn gave informed feedback.
Most significantly, the Project Steering Committee at UDSM upwardly revised the project
output targets. Although this challenged the team, especially in UDSM2 – the multimedia
project – the steering committee’s decision was respected by the programme team
members. UDSM’s governance system for the ETI seems to have been the most effective
model.
KU had a formal governance group – the M&E Committee – chaired by the VC. It met
regularly and received briefings from the programme coordinator. However, it only met
members of the projects once in the lifetime of Part B, and was seen by some of its
participants as being distant from the programme – more a policing body than a critical
friend.
MAK’s programme was notionally overseen by the university’s director of planning. In
practice, there was little engagement for most of the period. A new director, appointed in
mid-2012, had had almost no engagement with the ETI by the time of the evaluation
visit in November.
Governance of the ETI at UEW lacked clarity for much of its life. The programme was
notionally overseen by the Externally Funded Projects Office but its personnel functioned
more like team members, offering advice and logistical support. In these circumstances,
it was more difficult for it to hold the programme managers to account. Late in 2011,
this arrangement was changed when a governance committee was set up by the VC,
prompted by concerns that the projects would not achieve their objectives. The
committee, led by the VC, called for fortnightly briefings. This committee in turn reported
quarterly to an External Grants Committee, which took a closer interest than before in
the ETI. This level of oversight gave the programme at UEW a sense of purpose and
urgency, which had a highly significant effect on its performance.
At the other institutions, no formal governance or oversight arrangement was in
operation. Contact with senior management took place as needed, usually initiated by
the programme coordinator. For getting permissions, this worked well; but it was not a
satisfactory long-term arrangement. It was most likely a factor in problems not being
picked up, for example, with UCM4 and UCM5. No matter how effective and engaged the
programme coordinator, it is important to have an external perspective, especially on
difficult decisions to be made about performance issues.
Selection of team members
An issue close to that of governance is team member selection – particularly the selection of project leaders – for the ETI. Project leaders were selected in a variety of ways. At most institutions, the programme coordinator had the main voice in the decisions. There
were two main exceptions.
At UJ, project and sub-project leaders were appointed by senior management, without
the programme coordinators having a voice. This proved to be problematic, with two or
three sub-projects operating beyond the influence of the coordinating team.
At UI, team consensus played the dominant role in leader selection. Here seniority
subordinated itself to expertise, experience, and enthusiasm – with positive effects.
Management and support within the institutional programmes
The building blocks of ETI organization were the project teams.41 These teams were not
meant to be autonomous. They were intended to work as part of an overall programme,
facilitated by, and interacting through, a programme team led by a coordinator. In this
way, a supportive environment would be created, synergies between
projects picked up, and relations managed with the wider institutional community.
41 In one or two cases – e.g. with KU5 and KU6, and UCM5 – projects were managed at times by individuals on their own. This may have been a factor in the problems these three projects faced.
Issues such as liaison with governance, sensitization, advocacy, and the development of
strategy for the future were meant to be the concern of the programme coordinating
team.
There was no single model structure for coordination in the programme. Each institution
found its own way of doing it. Styles of management and teamwork also differed. There
was no intention to prescribe a particular model or style, and the current evaluation will
also not attempt to do so. It is important though to point to examples of practices that
seemed to work well in their own context, and others that might with hindsight have
worked better.
UDSM’s two projects were located firmly in the Centre for Virtual Learning; and although
other people were involved, Centre for Virtual Learning personnel drove the projects,
meeting almost daily. In this way, progress was easy to monitor and challenges were
faced, even if they were not always overcome. The capacity and limitations of the personnel were openly acknowledged.
At UCM, there was a core group of three in a wider programme coordinating team. This
core group was also active in three projects, and met on a daily basis. Synergies
between the three projects were identified and exploited and action research took place
regularly. There was a strong contrast for much of the time at UCM between this set-up
and two other projects – the OER in Health Sciences (UCM4) and (initially at least)
research projects (UCM5) – where the core group’s influence was not as great. These
two projects did not move forward satisfactorily for much of Part B. There were
underlying problems, and it is likely that more regular communication across the
coordinating team would have kept a spotlight on them.
UJ had a three-tiered model. There was a programme coordinating group of four to five
members, which saw some change of personnel. Most members of this group were also
involved in an ETI project. They met regularly to discuss both operational and strategic
issues. A type of action research was at work here too. There were notionally two further
tiers at UJ: meetings of the coordinating group with representatives of all three projects;
and meetings of each project team, sometimes with the designated liaison member of
the coordinating group. This worked well in the early part of the programme at UJ, but
less so in the last 18 months owing to security and other challenges. Some sub-projects
worked with relative autonomy. This explains in part the slackening of momentum in
some of the work at UJ.
MAK’s model was less structured. The project teams were notionally autonomous.
However, when coordination and external advocacy were needed, they were provided by
the ETI coordinator, who was also a member of one of the project teams (MAK1). As
head of the e-Learning Unit, he had both the time and competence to provide technical
guidance where it was needed.
At UEW, there was little structure until the last year. The programme team met on an ad
hoc basis – more to compare notes than to coordinate. Challenges were addressed by
project teams rather than by the programme group. This was corrected in the last year
after the creation of the new governance committee.
At UI, the coordinating group consisted of the project leaders, plus the coordinator and
other people who were brought in as and when necessary. It worked in a collegiate
manner and was ultimately effective.
The projects at KU operated independently of one another – more than at any of the
other institutions in the ETI. Occasionally, personnel from two projects met informally.
The programme coordinator worked without the benefit of a coordinating team. Her role
seems mainly to have been to serve as a communication channel between the projects
and the governance committee. This resulted in missed synergies. For example,
although the two digitization projects shared some resources, there was little
coordination between them.
It is difficult to generalize about the style and quality of project management and team
work. There are examples of projects that would have benefited from more systematic
project planning and implementation. The research projects in particular required
meticulous planning and management. There were challenges in this regard at UEW and
UCM.
Other projects, though, clearly benefited from a lack of rigidity in project planning. For
example, two of UCM’s projects (UCM2 and UCM3) traced an evolutionary path,
pragmatically sidestepping seemingly intractable problems, taking advantage of new
pockets of interest.
Local support frameworks
For most project participants, it was their first substantial experience of developing
online courses or other ET applications or producing major pieces of research. Whatever
the quality and quantity of formal training, they needed regular technical support – in
instructional design, use of Moodle, application of multimedia, and research design and
analysis – as they progressed. The ETI support team members – all based in South
Africa – offered periodic support but were not on hand all the time. A local support
framework was needed as well. Most projects worked well as small, mutually supportive
groups, but they usually lacked the complete set of know-how – and often the
confidence – to work effectively without further support.
It was the programme coordinators’ responsibility to build the framework, by identifying
support needs, locating the support within – or occasionally outside – the organization,
and brokering the relationships that would activate it. It took time for some coordinators
to appreciate the importance of a support framework. With hindsight, the process of
establishing a support framework might have benefited from facilitation in Part A – or
early in Part B.
UDSM – whose projects were both about online courses – was fortunate in having a
centralized support framework, with a good mix of technical competencies, already in
place in the form of the Centre for Virtual Learning. This is also where the ETI
management was located. At first, there was a risk that the Centre for Virtual Learning
would keep too much of the programme to itself, leading to challenges for wider buy-in.
In fact, with the expansion in the number of courses under development, the Centre for
Virtual Learning personnel established good working relationships with a wider range of
departments, and a more balanced programme emerged. Centre for Virtual Learning
personnel provided in situ support for their academic counterparts, and an all-day
support desk for staff who preferred to work alone.
MAK also has an e-Learning Unit, which combines technical and pedagogical
competencies but on a very small scale. It was able to provide adequate support to the
online course project (MAK1), but will be challenged to provide adequate support to any
post-ETI expansion.
At UCM, the support framework was devolved and informal. It mainly took the form of
what the programme coordinating team call a ‘peering’ process, whereby personnel who
had already developed online courses coached and mentored others who were earlier in
the journey. This system seemed to work well on a small scale. The coach/mentors were
highly motivated by their first achievements in developing online courses; and, as
lecturers in the IT department, they combined both pedagogical and technical
perspectives. Unfortunately, most of these staff left UCM in the later stages of the
programme, undermining the effectiveness of this mechanism.
At KU, the Institute for Open, Distance and e-Learning has sufficient technical expertise
in most of the relevant areas and even runs courses in the necessary skills. The
university has a good number of academic personnel who have experience in designing
learner-centred courses for distance education. It even runs training in pedagogy, which
is rare in African higher education institutions. However, none of this was leveraged at
first by the ETI projects. It took outside intervention by a member of the support team
to convene the technical resources needed by some project teams.
At UEW, UI, and UJ, support was provided on an ad hoc basis. At times this was effective
enough, but it is probably not a sustainable approach in the long run.
The only institutions, then, with units tasked to provide the mix of support needed for ET
for blended teaching and learning were KU, MAK, and UDSM. It is likely that a dedicated
ET unit of some kind will be an important component of any strategy for further
development. At the time of the evaluation UI was actively planning to establish one.
Project management and processes
There is a large amount of data available about project processes. The evaluation will
highlight a small number of issues that seem of general relevance and interest.
At three institutions – KU (the Executive MBA project), UEW, and UI – slow progress in
developing online courses was transformed by taking team members away on retreats –
residential workshops of three days or more. All the informants – programme
coordinators, project leaders, and team members – spoke very positively about the
experience. Not only did the retreats break the back of the work deficit, but relationships were also reinforced, not least with the technical support personnel. Interestingly, these three institutions had less structured coordination processes than the others, which may be why the retreats’ contributions were so critical.
At least three institutions – UCM, UI, and UJ – employed graduate students in ICT
support roles. This was seen as a pragmatic solution to skills gaps in this area. At MAK,
however, reliance on students to do mainstream technical work in the e-portfolio project
was a major factor in delay because of their inconsistent availability.
Many projects operated outside normal departmental and faculty boundaries. This
sometimes raised legitimacy issues. MAK and UJ reported this as a limiting factor. UI
generally, and KU with the research methodology project (KU2), overcame this through
systematic advocacy and sensitization.
At UDSM, legitimacy of the actors was less of an issue as they operated from a well-established unit. However, they too adopted a consultative approach with departmental
heads, and sought the participation of course owners in order to facilitate the migration
of courses from Blackboard to Moodle. They turned down an offer of funding for the
automatic transfer of these courses, which would have been much quicker but less
satisfactory from an ownership point of view.
At some institutions – MAK especially – a high turnover in lecturers assigned to courses
posed challenges for commitment to online course development. Some courses had dual
direction, which was a problem if only one course director was working with ET. These
concerns were not satisfactorily addressed.
The ETI was dependent chiefly on the programme’s intellectual and social resources at
each institution. No amount of financial resource or external support would have been
able to make up for deficits in those areas. Although there were considerable variations
in ET experience and expertise, the level of commitment and professionalism among the
institutional programme teams was generally high. In the long run, this was the more
important factor.
Change of scope and direction
In a few cases, the scope or direction of ETI projects changed to make them more
challenging. Examples included the following:
• UCM’s digitization of distance education materials (UCM3);
• Mobile phones for distance education at UI (UI4); and
• Both of UDSM’s e-courseware projects.
In UCM3, the principal objective shifted from the simple conversion of paper-based
materials to CDs for easier delivery to distance education students, to a more
fundamental improvement of the quality of the materials prior to digitization. This was a
result of a growing awareness of what quality means from a student’s point of view. The
UCM programme coordinator attributes this change in perspective to the ETI.
The mobile phone component of UI’s distance education project also changed
fundamentally. The original proposal was to use mobile phone messaging and voice
services to complement tutor interaction. UI soon realized the potential of mobile phones
for delivering more complex learning interactions, and developed a mobile delivery
platform for piloting them. This inevitably changed the momentum of this part of the
project.
UDSM’s ETI governance committee decided early in the life of the two projects that their
scope could be expanded. Targets for online courses to be produced or improved – as
opposed to migrated – were doubled. A curriculum review was also set in train within
one of the projects. This considerably increased the workload of the programme
personnel.
Changes mid-project, provided they are adequately resourced, are to be welcomed
because they reflect constructive use of action research and strategic management.
Availability of key personnel
All institutional programmes suffered from personnel availability issues of one sort or
another. No participant was dedicated full time to the ETI; in fact most had positions
that involved substantial commitment to other work.
Even with the most motivated people, there were occasions when they could not find
time to take ETI work forward, especially when it meant working with others with the
same constraints but different timetables. These were not optimal conditions, and led to
delays. In the case of UCM4, it brought the project to a halt from which it never
recovered.
The institutional programme coordinators in particular – because they were chosen for
their leadership skills – were usually in great demand elsewhere. Several had
unrealistically heavy workloads. UCM’s coordinator was such an example, partly because
little work in three of the projects was delegated. UI might have been in a similar
position but for the high degree of delegation to the project teams.
Even if it had been possible to create and fund full-time project posts, however, this
would not have been welcomed. It was important to have the participants working within
the normal fabric of the university and not in bolt-on projects. This was important for
credibility during the programme and for sustainable mainstreaming in the future.
Some degree of formal workload alleviation might have offered a solution to this
problem. If not, it was even more important that the institutions adequately recognized
this type of work in their reward systems.
There was also a longer-term version of the availability issue. Several programmes
experienced the loss of key personnel – either temporarily or permanently.
• At UDSM, both the leader of one of the projects and the programme coordinator went overseas in the final year; but they left orderly handovers and this does not seem to have affected progress;
• KU and UI both lost more than one project leader unexpectedly but managed to find successors without much disruption;
• At UJ, one of the project leaders (who was also a member of the programme coordinating team) left in 2011, and the programme coordinator in 2012. Although the former programme coordinator maintains contact, these departures hampered the teams in the final stage of the ETI at UJ;
• UCM and UEW were not so fortunate. Two of the project leaders – one at UCM and one at UEW – were absent for lengthy periods (at UCM eventually on a permanent basis) but did not capacitate any personnel to take over their work; and
• At UCM, five out of six of its online course developers left the university – a particular problem as they were being used as coaches for others.
All enterprises suffer from availability issues. These can usually be mitigated, though not eliminated, through risk mitigation strategies such as the empowerment of deputies.
Synergy with other programmes
Most institutions had ET programmes running parallel to the ETI. Ideally these
programmes should have been managed either by the same people or in close liaison
with one another. In that way they could have shared resources, applied learning from
one another, and avoided duplication of effort.
Most institutions captured synergies from parallel projects and programmes with ET
dimensions.
• UDSM and UJ both made innovative use of parallel programmes to fill resource gaps in the ETI;
• Carnegie Corporation funds were used to engage a multimedia expert at UJ, who was able to provide inputs to the ETI;
• The e-learning fellowships at UJ were originally funded by a Carnegie Corporation project, which the ETI has further developed;
• UDSM made use in its ETI projects of capacity development – both long and short term – from projects funded by the World Bank and a Finnish government agency; and
• MAK, UEW, and UI applied lessons from the ETI to new or continuing projects funded by the African Development Bank, the World Bank, and the Carnegie Corporation.
At KU, it was not easy for the evaluator to get information about parallel ET projects and
programmes, which left the impression that these interventions operated in silos,
without much harmonization or cross-fertilization.
Synergy between programmes would be addressed by an actively managed institution-wide ET strategy.
Conclusions
This section of the report assesses the results of the ETI programme, identifying what
worked well, and what might have benefited from different conditions or approaches.
The principal objective is to provide possible lessons for future, similar programmes.
Although the five expected outcomes of the ETI can be seen as mutually reinforcing, the
second outcome – improvements in teaching and learning practice – absorbed most
resources in time and money, and could be interpreted as the central or pivotal area of
the programme. This outcome is addressed first, together with the productivity outcome,
with which it is closely related.
Improved teaching and learning practices and productivity
By the end of the programme, significant progress had been made in improving teaching
and learning practices.
• The institutions had developed, or extended their use of, a sustainable LMS – Moodle in all cases;
• About 250 online course modules had been produced or improved, as well as other ET products such as digitized exam papers and educational radio programmes. The standard of the online courses was assessed by the majority of external reviewers as generally good. From the limited student feedback available, the indications were promising. Students expressed a strong appetite for engaging with online courses and other digital products and were expecting improved learning outcomes from this engagement;
• About 120 staff had been intensively trained in online course production, more than 1,300 academic staff had received training in the use of Moodle and/or other digital teaching and learning products, and more than 3,520 students (mainly at UDSM) had received training in the use of Moodle and/or ET products other than online courses. Most institutions were delivering this capacity development themselves on a significant scale by the later stages of the programme, rather than relying on external trainers; and
• Two institutions were already experiencing significant productivity gains from digitization projects. All the projects offered the potential for productivity gains in terms of greater quality and access for a given set of resources.
The numerical targets for ET products and capacity building set in the original project
plans were exceeded overall. Four institutions substantially exceeded their targets for at
least one project. Although there was variation in overall progress among the
institutions, every institution completed at least one project with objectives fully met or
exceeded.
These findings represent a successful overall result for improved teaching and learning
practices. Although the numbers are modest compared to the overall scope of teaching
at these institutions, the progress was made against very low baselines for the use of ET
in all the institutions. The ETI was the largest programme in ET for teaching and learning
at every institution during the programme implementation period. It can therefore be
credited as the main vehicle for helping the institutions ascend the learning curve and for
achieving momentum in ET product development.
Success factors
There is ample evidence that the ETI’s model for support to the component projects, and
the quality of delivery of this support, made a major contribution to the success of the
projects. The model was characterized by two main features:
• Devolved responsibility for project identification and planning; and
• Regular, incremental capacity-building, technical inputs, and other types of tailored support.
Project identification and planning during what was known as Part A of the programme
was substantially devolved to the institutions. The funders set out only very broad
parameters, and these were interpreted flexibly by the ETI support team. Institutions
were given help where necessary with these processes. For example, they were
encouraged not to be too ambitious in the number of projects and in the scale of
research. The advice was offered, not imposed. This approach generated a strong sense
of ownership: the institutions saw the projects as ‘theirs’.
The support in Part B was largely delivered through two ET specialists, each assigned to
three to four institutions. These personnel were based in South Africa, but made regular
visits to their assigned institutions and also mediated other forms of support. The visits
usually had multiple purposes, including the facilitation of workshops, hands-on help
with technology, progress monitoring and assessment, mentoring, and troubleshooting.
Tools and frameworks for course development and quality assurance were offered and
tailored to local needs.
This model was universally appreciated. The support team members were regarded as
valued friends by the institutions. The institutional teams had an incentive to be self-reflective and open about their needs, as this attracted the necessary support.
Capacity building in this model was incremental and cumulative, designed to be
delivered in ‘absorbable’ packages. The capacity-building road map was loosely drawn at
the planning stage and was responsive to experience and need. The incremental support
was designed to give the institutions time to apply the new knowledge and skills in their development work. The capacity building delivered by the support team was regarded as
of high quality.
Several other features of the programme design and execution also promoted successful
results.
• The time allocated to the programme, including the no-cost extension, was generous.
It allowed for overruns and changes in scope and direction. The large majority of the
project timetables turned out to be over-optimistic. The contingency time, plus the
extension, was however enough in almost all cases. Where it was not, it was due to
overarching external factors, particularly the work discontinuities at UJ;
• The funding model was also effective. Funding was sufficient for the work, both planned and unplanned, and allowed for additional support. Given limited absorptive capacity, there would probably have been diminishing or even negative returns on substantially more funding. The processes for presenting business cases and accountability for funding were disciplined but not burdensome. Funding was mostly released in a responsive manner, against planned deliverables and not in advance. This was effective as an accountability mechanism, with minimal disadvantages, and rarely led to problematic delays, partly because some institutions were able to find internal funding for urgent or supplementary interventions when ETI payments could not be released. As has been reported above, there was some flexibility in the deliverables that could be funded, although these changes had to be negotiated with the support team;
• The visible and meaningful involvement of apex management with any institutional
change programme is generally accepted to be an important success factor. It sends
a signal to the participants and the wider institution that the programme itself is
valued, and also that the performance of the programme team matters. The ETI
passed this test. Apex management from every institution was involved in Part A to
an acceptable extent. This was a central strategy in the facilitation of Part A: a
condition of participation. As Part B progressed, involvement varied, but at every
institution the ETI coordinating team had regular access to at least one member of
apex management. At UEW, the involvement of the VC in the later stages, when
institutional reputation was at risk through underperforming projects, was critical to
the satisfactory completion of the work;
• Although the intended community of practice of researchers and practitioners did not
take off, useful networking did take place at the annual inter-institutional workshops.
The networking may not have led to many lasting changes, but it helped people to
feel part of a bigger enterprise. Because the institutions had to report on progress,
the workshops injected elements of competition and accountability;
• The ETI was designed as a programme. There were potential synergies between
projects. Support from outside the project units had to be negotiated. Sustainability
factors, including the development of future ET strategies, needed to be put in place
or planned. There were opportunities for learning and dissemination during and after.
While not all these programme-wide elements were effectively pursued, partly
because they were not fully mapped out, it is unlikely that much progress would have
been made in these areas if the role of programme coordinator had not been created
and filled with people with motivation, vision, respect and the right competencies. In
most institutions, the coordinators performed well. They formed the essential cogs in
the programmatic machine; and
• The multi-institutional nature of the ETI provided a comparative and competitive
perspective. Competition was not confined to the inter-institutional workshops. The
seven institutions worked to the same high-level agenda and time frame, generating
an element of competition, which acted as an incentive, particularly towards the end
of the programme. No-one wanted to be the weakest link.
Challenges
All the institutions faced multiple challenges in developing ET products and capacity. The
development of ET products is difficult and time-consuming, more so for people who had
to learn how to do this from a low base of knowledge, skill and experience. All the
participating institutions underestimated the length and incline of the learning curve. All
the projects overran their planned timetables, and the majority needed the no-cost
extension to complete the development of their online courses and other ET products.
This generally left insufficient time for evaluation and knowledge sharing, which were
weak points in the programme. These now need to be addressed by the institutions
themselves.
The external assessments of the online courses identified weaknesses. The main areas
for improvement were the following:
• More emphasis on course redesign for the online medium rather than simple
digitization of text-based material; and
• More opportunities and incentives for student interactivity and collaboration, e.g.
through forums.
Addressing the first of these deficits may require more capacity building. The second could be addressed by greater uniformity of course structure, with built-in requirements for interactivity and collaboration.
The application of rich multimedia to online courses was a big challenge. Two of the
three projects with multimedia as their central focus made faltering progress. A more
comprehensive package of support for multimedia than the ETI was able to provide is needed.
Quality assurance in the programme mainly came from outside the institutions. This is
not a sustainable option. The institutions need to integrate ET for teaching and learning
into their wider systems for quality assurance.
Capacity gaps became apparent in several areas, including effective approaches to
teaching and learning. None of these was a total surprise to the support team. However,
the depth and universality of these gaps was a factor in the late-running, and in a few
cases underperformance, of projects. A more systematic skills audit in Part A of the
programme, when the nature of the Part B projects was becoming clear, might have
been worthwhile.
The incremental model of capacity building did not always work optimally. There was
sometimes little or no progress between increments, and some capacity building had to
be repeated. The reasons for these deficits were usually grounded in institutional
challenges such as the lack of incentives and the absence of key personnel.
Although self-motivation was high among some project participants, and teams were
incentivized in a number of ways, including through competition and reputational
concerns, this was not always sufficient to maintain momentum, particularly in the
middle period of the programme. No institutional incentive systems recognized the
development of capacity and products in ET for teaching and learning. Even if
participants were not seeking monetary or status rewards for their commitment to the
programme, the lack of recognition was not helpful.
ET strategies
Strategy has a role in planning, implementing, and sustaining improvements in
teaching and learning. Although, at the beginning of the ETI, some institutions had ICT
and overall institutional strategies that referred to ET for teaching and learning, none
had an agreed road map for its development. One institution – UJ – had a draft ET
strategy, but this was not adopted by the institution during the life of the ETI.
Draft ET strategies were produced by each institution under Part A of the programme,
following a template provided by the ETI support team. These strategies addressed
institution-wide issues but confined their focus of implementation almost exclusively to
ETI-funded projects. Most institutions had other, parallel ET initiatives that were not
included in these draft strategies.
Very little progress was made with the ET strategies after Part A. None was
comprehensively reviewed during Part B of the programme; and none had been formally
adopted by June, 2013.
This did not in general impede the ETI teams in effectively pursuing ETI projects and in
some cases – such as UCM, UDSM, and UI – taking steps to prepare the wider university
environment for further progress in ET. Two programme coordinating teams, however,
reported legitimacy challenges to their projects, which inclusion in an adopted strategy
would probably have avoided.
There are complex institutional issues surrounding the development of ET in the longer
term that require strategic approaches, for example, to incentives, infrastructure,
technical support, and prioritization and synergies between parallel programmes. The
chances of sustainability of the progress made through the ETI would be increased by
the effective design, adoption and implementation of ET strategies.
The draft ET strategies need to be revisited, updated in the light of experience since
2009, and institutionalized. At the time of the October–November, 2012, evaluation
visits, all programme coordinators agreed the time was ripe for revisiting the strategies.
Apex management at MAK, UEW, and UI expressed a strong interest in this. Progress in
revision, with help from the ETI support team in two cases, had been made in three
institutions by November, 2013.
Community of practice
A community of practice has been defined as a ‘group of people who share a concern or
a passion for something they do and learn how to do it better as they interact
regularly’.42
The intensive networking that took place during the three ETI inter-institutional
workshops was widely appreciated by the participants. It provided perspective and a
sense of a wider common purpose. Some sharing of practice took place during the
workshops. However, there were only two instances of sustained interaction or
collaboration during the ETI: the multi-site research initiative on the uptake of ET
(conducted in terms of the third strand of the programme’s research dimension), and the
design of the summative evaluation framework. These two instances required strong
facilitation by members of the support team.
The expected community of practice therefore did not take shape in any significant
sense. The reasons are not easy to isolate. There were several, and they probably
worked in combination to discourage the investment of the necessary time and
resources:
• Although the concept of a community of practice was popular among the institutions,
there appears to have been a lack of appreciation of the benefits of collaboration;
• The building and maintenance of a community of practice usually needs active and
consistent facilitation. At successive inter-institutional workshops, intensive efforts
were made to get such a community going, but these efforts did not generally extend
to the periods between the workshops. There was also no tailor-made virtual
platform on which the community’s online interaction could have been built, although
other vehicles might have been used;
• Competition was identified by some informants as a factor that worked against
collaboration. The institutions were not always willing to share the products of their
hard-won achievement; and
• There was limited scope for face-to-face collaboration between the institutions.
Despite the availability of technology for virtual communication, participants in the
ETI worked together best when face to face. The progress made in the inter-institutional workshops and the visits of the support team to the institutions
demonstrated the continuing importance of face-to-face communication. ETI
participants had to find time and other resources to organize and conduct visits to
other institutions for this type of interaction. Compelling reasons would have been
needed to use these scarce resources. In their eyes, there were more pressing
demands, particularly on their time.
The e/merge Africa initiative, led by UCT CET, with Saide involvement in the
development stage, may provide the platform for the community of practice envisaged
by the ETI. It saw the active participation of several members of the UJ e-learning
fellowship group in its pilot activity in 2012. Virtual networking continued through a
Facebook group, with 220 members (four of them members of ETI programme or project
teams) at the end of June, 2013. By the end of December, 2013, membership had grown
to 304, about 14 of whom were participating members of the ETI programme support or
project teams. e/merge Africa is a useful vehicle for exchange of information, although
there is no evidence yet of collaborative spin-offs. The initiative was on the threshold of
a full launch as the ETI came to a close.43

42 Communities of Practice: Learning, meaning, and identity. By Etienne Wenger, Cambridge University Press, 1998.
Research
The ETI was expected to lead to ‘New transferable knowledge of how to develop capacity
in the use of ET for teaching and learning in African higher education… [and]
Dissemination of that new knowledge beyond the ETI players’. The agenda that was
originally set for this research dimension of the ETI was elaborate and ambitious.
Although it was acknowledged by most stakeholders that there were wide gaps in
knowledge about ET in African higher education institutions, this agenda overestimated
the interest and, particularly, the capacity in research among the university personnel
attracted to the ETI.
The small number of independent research projects and the lack of baseline research are
primarily consequences of decisions taken by the institutions in Part A of the
programme. There seem to be two main reasons why research did not resonate in Part
A. One is the lack of research experience and expertise among the majority of personnel
attracted to the programme at the institutions. There was insufficient appreciation of the
possibilities and how to go about leveraging them. Several institutional personnel said
that the research resources deployed in the Part A workshops were too difficult for
people to engage with.
The other side of this coin is that there was such a great appetite for the implementation
projects that they tended to eclipse the less tangible benefits offered by research. These
benefits were particularly elusive because the research would have been difficult to
locate in a department or faculty – the traditional location for academic research where
support and recognition are usually housed.
It is possible that other personnel with greater interest and capacity could have been
attracted to the programme, but there seems to have been no appetite for this among
the stakeholders as momentum in the implementation projects grew.
The second strand of the research dimension, the pan-institutional research project, was
an elaborate model, with a strong academic dimension. There was no consensus among
the ETI support team as to its rationale. Even with greater research capacity and interest
to work with, it may have been a conceptual mistake to combine research and
implementation from the outset. There was no compelling reason to do this. It was also
probably premature. The more logical timing for the type of research envisaged in strand
two was later in the programme, or even on conclusion of the programme, once the
teams had more experience with the ET domain.
The devolved evaluation agenda, too, particularly the scale of the summative evaluations,
was over-ambitious. For it to work well, it would have needed prior capacity
development generally in the teams, sufficient hands-on support from the external
evaluator and/or the ETI support team, and probably a skilled evaluation focal point in
the institutions. The capacity building was limited to short sessions at the
inter-institutional workshops. The external evaluator was based in the UK, and his
availability for face-to-face support was restricted to his three- to five-day visits (three
of these) to the institutions. The support team members did not have evaluation support
high on their agendas. None of the institutions had a skilled focal point. Devolved
evaluation also needed its own breathing space. This space was restricted by the project
overruns.

43 And in January, 2014, the network's convenor, Tony Carr, won a Dewey Winburne Community Service Award for the e/merge Africa peer network (see the report in University World News, at: www.universityworldnews.com/article.php?story=20140117144523531).
Against this background of under-achievement of the original agenda, the ETI produced
several pieces of research. Three of the four research-based projects had concluded or
were near conclusion. One had led to a conference presentation, with publications in the
pipeline. Several single-project case studies, and a number of multi-project case studies,
had also been produced and published and/or presented at conferences, along with
numerous other conference papers.
These pieces of research have been produced with considerable effort. They will
primarily be of use to the institutions concerned. Most of them offer limited scope for the
transfer of knowledge.
Two further research activities, however, may address the ‘transferable’ element in the
outcome: the multi-site research (conducted in terms of the third strand of the
programme’s research dimension), culminating in the completed report ‘Factors
Influencing the Uptake of Technology for Teaching, Learning and Assessment at Five
African Universities’, and the current external summative evaluation.
Wider awareness and appreciation of ET
By the time of the evaluation – in fact in some cases much earlier – there was evidence
of outcomes from the ETI in addition to the five central ones assessed above. Wider
awareness and appreciation of the potential benefits of ET for teaching and learning was
an implicit objective of the programme and an explicit objective of several of the
projects. Some projects built systematic advocacy and sensitization into their planning.
This was done at the programme level by some institutions, most thoroughly at UDSM
and UI. Results of advocacy activity have not been measured, but there is enough
qualitative evidence to suggest that wider awareness and appreciation on a significant
scale has been a positive outcome of the ETI. This is important because in all cases the
ETI projects operated selectively. Large areas of the institutions were not direct
beneficiaries of the online courses and other ET products.
The extent to which the institutions have enhanced their reputations through awareness
and appreciation of progress generated by the ETI is less easy to gauge and
demonstrate. There is anecdotal evidence that this has happened in at least three
institutions.
Sustainability
The progress that these institutions have made in online course development is likely to
be broadly sustainable owing to the demand-side momentum for these products and
approaches. The same applies to the products of the digitization projects. It would be
difficult for any of the institutions to abandon the application of ET products like these
unless it was in crisis.
The best prospects for sustainability and continued forward momentum are where
improvements have become institutionalized in one way or other, for example:
• By being linked to economic viability, as with the improvements in distance education
at UCM;
• By being driven by strong teamwork and a robust technical support unit, as at
UDSM;
• Where apex leadership has identified itself strongly with the progress, as at UEW;
and
• Where there has been a broad and systematic process of awareness raising and
sensitization, as at UI.
Critical mass may play a part. At three institutions (UDSM, UEW, and UI), at least 40
online course modules were produced at each, generating broad-based buy-in to the use
of ET.
A small number of personnel have gained valuable experience and increased capabilities
in research. Because they are relatively isolated, however, the sustainability and
replicability of these benefits are not assured.
Sustainability would also be more likely where institutions captured and applied lessons
from the successes they achieved and the challenges they faced. This has not yet been
done systematically.
Annexes
Annex A: Acknowledgements
This evaluation principally serves as a mirror, reflecting what the main protagonists in
the ETI felt about what they achieved and how. They – the core institutional programme
teams and the support team – were the principal interlocutors for the evaluation. Their
appetite for self-reflection and candidness ensured that the evaluation was able to
produce sufficient plausible findings on which to base the conclusions in the current
report. It would be invidious to single out named individuals – but they know who they
are.
Hundreds of other people were consulted, surveyed and observed during the evaluative
work over the four years of the programme. Listing them would serve no useful purpose.
Finally, it is important to acknowledge the role of the senior management of the seven
participating institutions in their important initial engagement with the ETI, and in their
continued support for the programme.
Annex B: Abbreviations and acronyms
Approx. – Approximately
CD – Compact Disc
CED – Centre for Distance Education
CET – Centre for Educational Technology
DVD – Digital Video Disc
DVC – Deputy Vice-chancellor
ET – Educational Technology
ETI – Educational Technology Initiative
ICT – Information and Communication Technology
IT – Information Technology
KU – Kenyatta University
LMS – Learning Management System
MAK – Makerere University
MBA – Masters in Business Administration
MEd – Masters in Education
m-Learning – Mobile Learning
MOV – Means of Verification
OER – Open Educational Resource/s
PHEA – Partnership for Higher Education in Africa
Q&A – Questions and Answers
Saide – South African Institute for Distance Education
UCM – Universidade Católica de Moçambique (Catholic University of Mozambique)
UCT – University of Cape Town
UDSM – University of Dar es Salaam
UEW – University of Education, Winneba
UI – University of Ibadan
UJ – University of Jos
UK – United Kingdom
US – United States
VC – Vice-chancellor
Annex C: Key documentary sources
ICTs and Higher Education in Africa: Status reports on information and
communication technologies (ICTs) in higher education in eight African countries.
Commissioned for the PHEA ETI. Laura Czerniewicz (ed.), 2009.
Effective Technology Use in African Higher Education Institutions: A proposal for
Phase Two of the PHEA Educational Technology Initiative. Saide and CET at UCT,
April, 2008.
PHEA ETI institutional draft ET strategies and project plans
PHEA ETI Research Toolkit
MAK and KU Strategic Plans
PHEA ETI Programme Annual Reports
Support team workshop and visit reports
PHEA ETI institutional six-monthly progress reports
PHEA ETI institutional case studies
PHEA ETI multi-site research initiative progress reports
Effective Implementation of Technology Innovations in Higher Education Institutions:
A survey of selected projects in African universities. Research for the award of the
Doctor of Philosophy in Management Information Systems of the School of Business,
Kenyatta University. John Kandiri, 2013
PHEA ETI institutional summative evaluation reports
Annex D: ETI projects
KU1: Digitization of Past Examination Papers
KU2: Postgraduate Research Methodology Course
KU3: Online Executive Masters in Business Administration (MBA) Programme
KU4: Chemistry and Communication Skills e-Learning Modules
KU5: Executive Information System Specification
KU6: Digitization of Theses and Dissertations
MAK1: e-Content
MAK2: Research on the Influence of Gender on Perceptions of Staff and Students
on the Use of Educational Technology at Makerere University
MAK3: e-Portfolios
UCM1: ICT Policy, Use Policy and Strategy Development
UCM2: e-Learning
UCM3: Centre for Distance Education: Learning Materials Digitization
UCM4: OER Health Sciences
UCM5: Research into Adoption of e-Learning in Central and Northern
Mozambique
UDSM1: Online Course Migration and Improvement
UDSM2: Computer Science Interactive Courses
UEW1: Baseline Study on the Current State of Educational Technology at UEW
UEW2: Enhancing the Quality of Teaching and Learning through the Use of a
Learning Management System
UEW3: Investigating How Academics/Students Use Web-based Approaches to
Enhance Teaching and Learning
UI1: Capacity Building and Digital Content Development
UI2: Open Courseware Development for Science and Technology
UI3: Tele-classroom Teaching in the General Studies Programme
UI4: Use of Educational Radio and the Mobile Phone for Tutorials in Distance
Learning
UJ1: The Departmental Educational Technology Initiative
UJ2: Educational Multimedia and Simulations Project
UJ3: e-Learning Fellowships
Annex E: Summative evaluation guidance to ETI institutions
May, 2012
Introduction
The PHEA ETI is a pioneering programme which needs to be evaluated comprehensively
– mainly for what we can learn about the value programmes like this can create and how
to maximize their potential.
From a practical point of view, because it is a multi-location programme with diverse
objectives and contexts, the evaluation needs to be conducted at the level of your
institution as well as across the institutions. Apart from the practical issues, you should
also benefit from conducting your own evaluation.
The ETI is more than the sum of its projects. At each of your institutions, there will be
outcomes and processes that cut across and go beyond the projects. These need to be
captured in a summative evaluation at what I call the ‘programme level’ in your
institution. The figure shown at the Johannesburg workshops, recalling the three levels
of evaluation, is not reproduced here.
Saide has decided that the summative evaluations in your institutions should be more of
a managed process than originally intended. This recognizes the reality that designing
and planning your own summative evaluations from scratch would be a lengthy process;
and as the evaluation questions are mostly the same across the institutions it doesn’t
make sense to reinvent the wheel seven times.
This note provides guidance both on the process of the programme-level evaluation and
the evaluation framework and tools. It also suggests a common framework for the
project evaluations.
You should begin to prepare for the summative evaluation process right away by
circulating this note and calling a meeting of your team to discuss the way forward.
Background Q&A
What is a summative evaluation?
A summative evaluation is an overall assessment of a project or programme, conducted
around the time it draws to a close. Typically the evaluation covers the whole results
chain:
• The outputs – quantity and quality;
• The outcomes – changes the programme has helped to bring about; and
• The inputs, activities and processes that led to the outputs and outcomes – how
effective and efficient they were.
A reminder of the results chain: Inputs → Activities/processes → Outputs → Outcomes.
Why conduct a summative evaluation?
There are several reasons:
• It serves evaluation’s two main purposes:
o Accountability. Stakeholders – including those in your own institutions –
will want to know whether the time and other resources invested in the
programme have been worthwhile; and
o Learning. A good summative evaluation will also identify lessons for the
future: strengths to be built on, and weaknesses to be addressed.
• A summative evaluation which tells a good story can also be a valuable advocacy
tool for future investment and partnership.
• Evaluation is research – in fact it is the main type of research that is being
conducted within the PHEA ETI at the moment. The summative evaluation will
therefore be a further opportunity to develop your research capacity.
Aren’t summative evaluations usually conducted by external consultants?
This was true in the past. But current thinking stresses the value of guided
self-evaluation. Although, as external evaluator, I will be summing up the PHEA ETI
programme as a whole, you are in a much better position to produce a nuanced
evaluation for your institution, which in turn will form a building block for my evaluation.
Who should lead the summative evaluation?
The summative evaluation is the responsibility of the ETI coordinator. It is an essential
part of programme management. Some practical aspects of it can be delegated to
another team member.
We are planning to evaluate our projects – isn’t this enough?
Evaluating each of the projects is important. These evaluations should provide the
project’s direct stakeholders with an assessment of the outputs, activities and processes
that led to them, and in some cases also the outcomes – the changes they have helped
to bring about.
The PHEA ETI, however, is intended to be more than the sum of its projects. It is designed
to usher in changes – e.g. in attitudes to and demand for educational technology (ET),
capacity, policy and strategy – which reach beyond the specific objectives of the
individual projects.
It is these wider changes – and why and how they happened – that we want to capture
with your summative evaluation.
Your project evaluations will contribute to the summative evaluation of the PHEA ETI
programme at your institution. Each project has a unique set of activities and outputs. In
some cases, the anticipated outcomes from a project are also unique to that project –
such as the benefits of better access to exam papers as a result of digitization. These
unique features should be evaluated at the project level.
However, there are some outcomes – such as changes in attitudes and enhanced staff
capacity – that are the result of the PHEA ETI as a whole. It makes sense to research
these across the range of stakeholders. It also makes sense to evaluate common
processes – such as advocacy within your institution and programme management – and
external influencing factors across the PHEA ETI and not compartmentalized in each
project.
Won’t this involve us in a lot more work at a time when we are very busy?
The summative evaluation will involve some survey work – mainly on outcomes. But that
should reduce the need to do outcome evaluation at project level, such as in staff
capacity enhancement and improvements in student learning experience.
Other good news is that many of the other ingredients of the summative evaluation are
already in place or in the pipeline.
• The findings and conclusions from your project evaluations will form a major part of
the summative evaluation;
• The conclusions of your case studies and other research can also be incorporated.
The case studies and research facilitated by Monica are mostly evaluative and
therefore cover some of the ground intended for the summative evaluation; and
• Your previous six-monthly reports trace the activities and processes and the learning
from them.
I have clarified with Saide that your summative evaluation will replace the final six-monthly report on the PHEA ETI that is due at the end of June, 2012, so there will be no
duplication of effort. The summative evaluation will therefore be the product that will
release the final tranche of payment. Any specific final reporting requirements not
covered in this guidance note – such as how to provide access to your products
(deliverables such as courseware and research reports) – will be clarified at a later date.
When do we need to complete the summative evaluation at our institution?
The main activity in the PHEA ETI will be completed by the end of June, 2012. Your
summative evaluation of the programme at your institution should be completed by 31
August, 2012.
You should have evaluated all your projects by 30 June.
I will be carrying out field work – mainly visits to each of your institutions – for the
overall final evaluation in October and November, 2012.
The evaluation process
Step 1 Clarify which primary data44 you will collect at project level, and which at
programme level through the summative evaluation.
44 Primary data is information that does not already exist and that you go out and get through surveys, observation etc.
Each project must be evaluated. You should make decisions about what to evaluate at
project level.
Element in the results chain: Outputs
What the element means: These include new e-learning products; research; completed
e-learning fellowship programme etc.
Evaluate at project level? Yes – definitely.

Element in the results chain: Outcomes
What the element means: Changes that the project and its outputs have already started
to generate.
Evaluate at project level? These should be evaluated at project level if the expected
changes are unique to the project. However, if other projects are designed to generate
the same type of outcome – e.g. enhanced staff capacity in e-learning; improved
learning experiences for students – it would make sense to combine the research on
shared outcomes through the programme-level summative evaluation.

Element in the results chain: Activities, processes and external factors
What the element means: We are not just interested in results. We are interested in
how they happened.
Evaluate at project level? Much of this information will be project-specific and can be
captured from the project team itself, either by one-to-one interviews or focus groups,
or a combination of the two. However, there will be activities, processes and external
factors that will be common to more than one project, and some – such as programme
coordination and advocacy within the institution – that have taken place above the level
of the projects and were designed to benefit all. You should make decisions about which
activities, processes and external factors to research through your project evaluations
and which to research at the programme level through the summative evaluation.
Step 2 Finalize your preparations for the evaluations at project level.
Project evaluations will be tailored to the objectives of each project. As mentioned at
Step 1, they should cover outputs, and those outcomes and activities/processes/external
factors that are specific to the project.
The typical evaluation questions at project level and means of answering them are set
out in the model project-level evaluation framework below.
ETI project-level evaluation framework
Evaluation question: To what extent have the project’s expected outputs been achieved
in terms of the planned quantity and timing?
Means of answering: Project records. Project team knowledge.

Evaluation question: What is the quality of those outputs? How does the quality match
up to expectations?
Means of answering: Quality assessments. User/beneficiary survey.

Evaluation question: What changes – outcomes – have the project and its outputs
generated?
Means of answering: User/beneficiary survey. Observation by project team.

Evaluation question: How sustainable do you think those outcomes are likely to be?
Means of answering: Focus group with project team and key stakeholders (e.g. ETI
governance group).

Evaluation question: What were the project activities and processes – and external
factors – that drove good results?
Means of answering: Focus group/interviews with project team and key stakeholders
(e.g. ETI governance group).

Evaluation question: What factors help to explain any less-than-optimal results, such as
delayed outputs, quality weaknesses, or poor uptake of products by end users?
Means of answering: Focus group/interviews with project team and key stakeholders
(e.g. ETI governance group).

Evaluation question: What are the main lessons to carry forward for future similar
projects?
Means of answering: Focus group/interviews with project team and key stakeholders
(e.g. ETI governance group).
If you decide to research outcomes at project level, you should use any of the outcome-oriented model surveys at Annexes b–d that are relevant, adapting and augmenting
them in line with the project context. You may also find Survey VI (Annex g) useful in
guiding your project team/key stakeholder focus group.
Many of you are already well advanced with the planning of your project evaluations. If
you need any more advice on them, please get in touch with me as soon as possible.
As mentioned above, you should have evaluated each project by 30 June.
Step 3 Agree on your programme-level summative evaluation framework.
You should start your programme-level evaluation planning with an evaluation
framework. This consists of the key evaluation questions and the means of answering
them.
Annex a contains our proposal for the framework at this level. You can add evaluation
questions if you feel strongly about them, but we recommend that additions be kept to a
minimum.
Step 4 Plan to capture the data.
As you can see from the framework in Annex a, some of the data will consist of:
• Highlights from the project evaluations; and
• Project records.
This data can be gathered quickly and easily once the projects have been completed and
evaluated around the end of June.
But some primary data – about outcomes and wider processes and external factors – will
need to be collected through surveys of staff, students and senior management. These
will need advance planning. You should start this as soon as possible.
The surveys that need to be conducted to answer the programme-level evaluation
questions are likely to be the following:
I. Interviews and/or focus groups with staff direct beneficiaries. (Evaluation
questions 1a, 1h, 1l, 4–6.)
II. Sample survey (self-complete or interviews) of students who have engaged with
ETI teaching/learning products. (Evaluation questions 1a–1e, 1l.)
III. Sample survey of wider group of staff. (Evaluation questions 1b–1e, 1l.)
IV. Interviews with senior management. (Evaluation questions 1a, 1b, 1e, 1i–1l, 2, 8,
9.)
V. Mini case studies of improved productivity – if you can identify any. (Evaluation
question 1f.)
VI. Focus group with the project team and probably a small number of other key
stakeholders – such as members of the ETI governance group. (Evaluation
questions 1h, 1j, 1l, 2, 4–9.)
It is essential that you complete Surveys I–III. I cannot capture this data for
you. In an exceptional case, if you cannot complete the other surveys – and
therefore also fail to do the overall analysis and write the report – before my
visit in October/November, I can facilitate Surveys IV and VI and possibly help
with Survey V when I am with you. BUT please do not take this as the norm!
As you can see, several evaluation questions can be addressed by a focus group with
your ETI team. Some members of your team will also be direct beneficiaries and should
be included in Survey I as well.
Some of the questions in Survey III will also be included in Survey I.
Model questionnaires or topic guides45 for each of these surveys can be found at Annexes
b–g.
As pointed out at Step 1, there is a trade-off between survey research you do at project
level and what you do at programme level. So, for example, if you have answered
evaluation question 1e (see Annex a) about improvements in teaching and learning
through project evaluations, you may not need to conduct Survey II – sample of
students – for the programme-level evaluation. The same might apply to Survey I,
although I think this is less likely.
The main things you have to decide about data capture are the following:
• Which type of people to include in each of the surveys;
• If a sample is to be used,46 what size that sample should be, and how to select the
sample;
• In the case of Survey I, whether to employ one-to-one interviews or focus groups –
or a combination. (Focus groups add value because of the interaction between
participants; but they are more difficult to organize and need to be facilitated with
skill.);
• In the case of Surveys II and III, whether to use paper or online questionnaires,
or telephone or face-to-face interviews; and
• The logistics of the surveys:
o When and where;
o Who will manage them; and
o Who will carry them out/facilitate the interviews and focus groups (probably
better if they are done by people not centrally involved in the ETI).

45 A topic guide is a list of questions for a focus group or a semi-structured interview. It is similar to a questionnaire except that the questions do not need to be asked in strict order. This allows the focus group or interview to follow a natural flow; and you can add follow-up questions that are not in the guide if they seem relevant.

46 In some cases – such as in Survey II if only a small number of staff can be categorized as direct beneficiaries – you may include everyone in the survey.
You should prepare a plan for data capture so everyone involved knows what needs to
happen, how and when. It is no different from any research exercise. You should
complete this by 31 May. You are welcome to ask for my support with your data capture
planning. Please share your data capture plan with me when it is completed.
Saide has decided to offer a fixed allocation of US$3,000 per institution for honoraria for
the work on the evaluation. This will cover any payment to survey personnel.
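A worked example may help with the sample-size decision above. This standard approximation for estimating a proportion is not part of the original ETI guidance, and the population size and margin of error used below are purely illustrative:

$$ n_0 = \frac{z^2 \, p(1-p)}{e^2} \approx \frac{1.96^2 \times 0.25}{e^2}, \qquad n = \frac{n_0}{1 + (n_0 - 1)/N} $$

Here $z = 1.96$ for 95% confidence, $p = 0.5$ is the most conservative assumed proportion, $e$ is the margin of error, and the second formula applies the finite population correction for a population of size $N$. For an assumed population of $N = 500$ students who engaged with ETI products and $e = 0.07$, this gives $n_0 \approx 196$ and $n \approx 141$ completed questionnaires.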
Step 5 Get the data.
Capture the primary data.
Gather in relevant data already available, in particular from the following:
• Project evaluations;
• Project records;
• Public knowledge;
• Previous six-monthly ETI reports;
• ETI case studies; and
• The research facilitated by Monica.
Data capture should be scheduled for June and July to allow for analysis and report-writing in August.
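For teams drawing their respondent samples electronically, a minimal sketch of a simple random draw for Survey II follows. It is illustrative only and not part of the original guidance; the roster file name, sample size, and the draw_sample helper are hypothetical.

# Illustrative sketch -- not part of the original ETI guidance.
# Draws a simple random sample of student IDs for Survey II from a
# hypothetical roster file containing one student ID per line.
import random

def draw_sample(roster_path, sample_size, seed=2012):
    """Return a reproducible simple random sample of student IDs."""
    with open(roster_path) as f:
        roster = [line.strip() for line in f if line.strip()]
    # Small population: include everyone (cf. footnote 46 above).
    if sample_size >= len(roster):
        return roster
    random.seed(seed)  # fixed seed so the draw can be audited and repeated
    return random.sample(roster, sample_size)

if __name__ == "__main__":
    # The file name and the sample size of 141 are assumed values.
    sample = draw_sample("students_using_eti_products.txt", 141)
    print("Selected %d students for Survey II" % len(sample))

A fixed random seed is used so that the selection can be re-run and checked by someone else, which supports the accountability purpose of the evaluation.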
Step 6 Do the analysis and write the report.
Once the data is gathered in, you should take each evaluation question and see what the
data tells you.
In assessing outcomes, you will need to consider the baselines: what was the state of
things in the area in question (e.g. attitudes to educational technology, staff capability)
before the ETI kicked off? You may find that your submission for participation in the
PHEA ETI is useful for recollecting the baselines for some aspects of the programme.
The evaluation report should ideally consist of the following sections:
1. Executive summary – about 2 pages;
2. Introduction: brief description of the PHEA ETI, generally and at your institution – no
more than 3 pages;
3. Evaluation methodology – about 1 page;
4. Findings:
a. Outputs (evaluation questions 3, 4);
b. Outcomes (evaluation question 1);
c. Means of achieving the outputs: inputs, activities and processes (evaluation
questions 5, 7);
d. Influential external factors (evaluation question 6); and
e. Sustainability (evaluation questions 2, 8).
5. Conclusions and recommendations (including evaluation question 9); and
6. Annexes: e.g. questionnaire formats, people interviewed.
The report should be between 20 and 35 pages, excluding annexes.
Step 7 Disseminate and apply the knowledge the evaluation has promoted.
You should circulate your evaluation report, although few people will read it. What you
should also do is produce a presentation, and possibly a short article, for internal use
e.g. at staff meetings and institutional seminars.
My overall evaluation of the ETI may also be useful, but it will not be available until the
end of 2012, so it is best not to wait for it before discussing lessons learned in your
institution.
Annex a: PHEA ETI institutional programme-level summative evaluation
framework
Evaluation question 1: Which outcomes has the ETI contributed to and to what extent?
(Sub-questions 1a–1l; data sources and instruments follow each question.)

1a Improved staff capabilities – e.g. in ET for teaching and learning, research, project
management: staff covered by the projects; transfer and multiplier effects to other staff.
Data sources and instruments: interviews or focus groups with staff beneficiaries of the
ETI; sample survey of students engaging with ETI products; interviews with senior
management.

1b Change in awareness and attitudes to ET – e.g. greater demand for engagement
among staff in the wider institution. Awareness of strategy and policy.
Data sources and instruments: sample surveys of wider staff; sample surveys of
students engaging with ETI products; interviews with senior management.

1c Increased demand for ET among students.
Data sources and instruments: sample surveys of wider staff; sample surveys of
students engaging with ETI products.

1d Increased access to teaching and learning.
Data sources and instruments: desktop research/observation; sample surveys of wider
staff; sample surveys of students engaging with ETI products – or project evaluations if
they have comprehensively covered these important questions.

1e Improved quality of teaching and learning.
Data sources and instruments: sample surveys of wider staff; sample surveys of
students engaging with ETI products; interviews with senior management – or project
evaluations if they have comprehensively covered these important questions.

1f Practical examples of improved productivity.
Data sources and instruments: project evaluations; mini case studies.

1g ETI research products reach wider audiences e.g. through publication, acceptance for
inclusion in conferences etc.
Data sources and instruments: desktop research/project records; public knowledge.

1h Meaningful and sustained knowledge networking takes place: a. within the
institution; b. between the ETI institutions; c. outside the ETI institutions.
Data sources and instruments: interviews or focus groups with staff beneficiaries of the
ETI; desktop research; focus group with ETI team and other key stakeholders.

1i Practical new expressions of top management commitment to ET for teaching and
learning e.g.: a. approval of strategy; b. operationalization of strategy; c. allocation of
funding.
Data sources and instruments: public knowledge; interviews with senior management.

1j Enhanced institutional reputation.
Data sources and instruments: interviews with senior management; focus group with
ETI team and other key stakeholders.

1k New funding commitments from outside bodies.
Data sources and instruments: interviews with senior management; public knowledge.

1l Any unanticipated outcomes.
Data sources and instruments: interviews or focus groups with staff beneficiaries of the
ETI; focus group with ETI team and other key stakeholders; sample surveys of wider
staff and of students; interviews with senior management.

2 How sustainable and extendable are these outcomes likely to be? What factors and
actions are likely to sustain and extend them?
Data sources and instruments: focus group with ETI team and other key stakeholders;
interviews with senior management.

3 To what extent have the objectives for the project outputs been achieved – quantity,
timeliness, quality, target group uptake?
Data sources and instruments: highlights of project evaluations; project records;
six-monthly ETI reports.

4 What, if any, other outputs have been achieved? To what extent have the objectives
for those outputs been achieved?
Data sources and instruments: interviews or focus groups with staff beneficiaries of the
ETI; focus group with ETI team and other key stakeholders; project records;
six-monthly ETI reports.

5 What factors and processes, internal to the ETI, have made most difference to results
(the outputs and outcomes)?
Data sources and instruments: interviews or focus groups with staff beneficiaries of the
ETI; focus group with ETI team and other key stakeholders; relevant highlights of
project evaluations; project records; six-monthly ETI reports.

6 What factors external to the programme have significantly influenced the results –
both positively and negatively – and how?
Data sources and instruments: interviews or focus groups with staff beneficiaries of the
ETI; focus group with ETI team and other key stakeholders; relevant highlights of
project evaluations; project records; six-monthly ETI reports.

7 Were the programme inputs appropriate and sufficient – and were they used efficiently
and effectively in the different aspects of the programme?
Data sources and instruments: focus group with ETI team and other key stakeholders;
relevant highlights of project evaluations.

8 What are the lessons from the ETI for future initiatives?
Data sources and instruments: focus group with ETI team and other key stakeholders;
interviews with senior management.

9 In retrospect, how relevant has the ETI been to the needs of your institution?
Data sources and instruments: focus group with ETI team and other key stakeholders;
interviews with senior management.
Annex b: SURVEY I – Staff direct beneficiaries
1. How much potential do you think ICTs have for improving access to teaching and
learning at [your institution]?
a Very much b A fair amount c Very little d None at all
2. How much has your view on the potential of ICT for improving access to learning
changed in the last 3 years?
a Very much b A fair amount c Not much d Not at all
3. What factors have led to any change in your view?
4. How much potential do you think ICTs have for improving the quality of teaching
and learning at [your institution]?
a Very much b A fair amount c Very little d None at all
5. How much has your view on the potential of ICT for improving quality changed in
the last 3 years?
a Very much b A fair amount c Not much d Not at all
6. What factors have led to any change in your view?
7. In what ways have you engaged with the PHEA ETI?
8. How would you describe your experience of this engagement? [Probe any
extreme words]
9. In what areas have you acquired new capabilities/competencies – or significantly
enhanced existing ones – as a result of the engagement?
10. How would you summarize your level of competence in these areas? [put an X in
the relevant column]
a) before engagement with the ETI

Competencies | Near fully competent | Some competence, but far from fully competent | Lacking any competence
1
2
3
4
5

b) now

Competencies | Near fully competent | Some competence, but far from fully competent | Lacking any competence
1
2
3
4
5
11. What are the factors in this engagement that have most helped you to develop
your competence?
12. What have been the major challenges or disappointments for you?
13. [For respondents who have said they are still far from fully competent in any area
in Q10] What would help you acquire more competence in this area?
14. What other differences has your engagement made – for you, your students and
other people?
15. What are the factors that have most helped make those other differences?
16. What do you see as the benefits of the PHEA ETI programme as a whole to your
institution?
17. In what areas do you think it has made a difference so far?
18. How much of a difference do you think it has made in each of these areas?
Areas | Very much | A fair amount | A little
1
2
3
4
19. What in particular about the PHEA ETI do you think has helped it to make these
differences?
20. What will help to sustain and extend any improvements in teaching and learning
through the application of ICT at [your institution] over the next three years?
21. Have there been any negative effects of the PHEA ETI? If so, what?
Annex c: SURVEY II – Sample of students who have engaged with ETI products
1. What experience have you had of ICT-facilitated teaching and learning at [your
institution] over the last 2 years?
2. What difference has it made to you in terms of the accessibility of learning
opportunities?
3. How do you personally access ICT-facilitated learning opportunities?
4. How much has the quality of the learning experience changed as a result of the
application of ICT? If you cannot compare the ICT-facilitated learning with its
direct equivalent before, try to compare it with a similar learning experience
without ICT.
a Very much better b Much better c A bit better d No change e Worse
5. In what ways is the learning experience better (or worse)?
6. Has ICT-facilitated teaching and learning, in your view, helped you to reach
higher levels of achievement in the courses in question?
7. What differences have you noticed in teaching staff who have been involved in
the PHEA Educational Technology Initiative (PHEA ETI)?
a. Have you noticed any change of attitude?
b. Have you noticed a change in their effectiveness as teachers?
8. What differences have you noticed in technical staff who have been involved in
the PHEA ETI?
a. Have you noticed any change of attitude?
b. Have you noticed a change in their effectiveness as technical staff?
9. Has the PHEA ETI made any difference to your attitude or the attitudes of your
fellow students towards the use of ICT for teaching and learning?
10. Have you noticed any other change that could be attributed to the PHEA ETI?
Annex d: SURVEY III – Sample of wider staff
1. How much potential do you think ICTs have for improving access to teaching and
learning at [your institution]?
a Very much b A fair amount c Very little d None at all
2. How much has your view on the potential of ICT for improving access changed in
the last 3 years?
a Very much b A fair amount c Not much d Not at all
3. What factors have led to any change in your view?
4. How much potential do you think ICTs have for improving the quality of teaching
and learning at [your institution]?
a Very much b A fair amount c Very little d None at all
5. How much has your view on the potential of ICT for improving quality changed in
the last 3 years?
a Very much b A fair amount c Not much d Not at all
6. What factors have led to any change in your view?
7. How interested are you personally in applying ICT to teaching and learning?
a Very interested b Fairly interested c Not very interested d Not interested at all
8. How much has your interest in applying ICT to teaching and learning changed in
the last 3 years?
9. What factors have led to any change in your interest?
10. Has your attitude to the use of ICT for teaching and learning changed in any
other way in the last 3 years? If so, in what way?
11. What factors have led to any change in your attitude?
12. How much do you know about the PHEA Educational Technology Initiative (PHEA
ETI)?
a Very much b A fair amount c Very little d Nothing at all
13. What do you think the PHEA ETI is aiming to achieve at [your institution]?
14. In what areas do you think it has made a difference so far?
15. How much of a difference do you think it has made in each of these areas?
Areas | Very much | A fair amount | A little
1
2
3
4
16. What in particular about the PHEA ETI do you think has helped it to make these
differences?
17. What will help to sustain and extend any improvements in teaching and learning
through the application of ICT at [your institution] over the next three years?
18. Do you think there have been any negative effects of the PHEA ETI? If so, what?
19. Are you aware of any strategy or policy for educational technology in the
institution?
20. If yes to Q19, how much difference do you think it has made so far?
a Very much b A fair amount c Very little d Nothing at all
Annex e: SURVEY IV – Interviews with senior management
1. What are your views on the current ET strategy/policy in the institution, and how
effective is it?
2. What difference do you think the ETI has made to the strategy/policy and the
way it has been applied?
3. How sustainable are the benefits the ETI has brought to the area of ET
strategy/policy?
4. In what other areas in the institution has the ETI made a significant difference?
Can you point to specific examples? [Probe any outcomes that you think have not
occurred to the respondent, but be careful not to lead him or her to specific
answers.]
5. Which of these contributions by the ETI is the most important in your view?
6. How sustainable are those changes? Are they easily replicable or extendable?
What would it take to:
a. Sustain them?
b. Replicate or extend them?
7. Has the ETI made a difference to the institution’s reputation or standing outside
its walls? Can you give me an example that illustrates this?
8. Has the university management taken – or is it taking – steps to build on the
ETI’s contributions?
9. What are the lessons from the ETI for future initiatives?
10. In retrospect, how relevant has the ETI been to the needs of your institution?
Annex f: SURVEY V – Mini case studies of improved productivity
If you can identify any specific areas where the ETI has led to improvements in quality
and/or output from the institution – especially in courses – at no increase in costs, try to
present the evidence in a mini case study of a few paragraphs.
Annex g: SURVEY VI – Focus group topic guide for ETI core team and other key
stakeholders (we suggest two to three)
This is a key data-gathering instrument, particularly for findings about processes and
external factors in the ETI. It should not be rushed and could be adjourned if necessary.
The number of participants should ideally not exceed 10.
Questions about outcomes
1. [Refer to the evaluation matrix – Annex a] To which of the potential outcomes
listed as evaluation questions 1a to 1k has the ETI made a significant difference
in our institution? What is the evidence?
2. How sustainable are those changes? Are they easily replicable or extendable?
What would it take to:
a. Sustain them?
b. Replicate or extend them?
3. Let’s look more closely at networking. Has the ETI generated any meaningful and
sustained networking:
a. Across departmental boundaries at your institution?
b. Between the ETI institutions?
c. Between your institution and other organizations or people outside the ETI
partnership?
4. Is there any evidence that the ETI has made a difference to your institution’s
reputation?
5. Which of these outcomes is the ETI’s greatest achievement?
6. Apart from the 1a to 1k outcomes, has the ETI made a significant difference in
any other area?
Questions about outputs
7. Which of the outputs from the projects do you think is the most valuable?
8. Apart from the original planned project outputs, have any other outputs or
products emerged from the ETI, such as case studies?
9. To what extent have the objectives – including quality levels – for those outputs
been achieved? Have any exceeded their objectives? Have any disappointed?
What are the reasons for the varying performance against objectives?
Questions about activities, processes, and external factors
10. What factors and processes, internal to the ETI – including its management and
governance – have made the most difference to results (the outputs and
outcomes) – and how?
11. How have you found the process of project evaluation?
12. What factors external to the programme have significantly influenced the results
– both positively and negatively – and how?
13. Were the programme inputs appropriate and sufficient – and were they used
efficiently and effectively in the different aspects of the programme?
14. What are the lessons from the ETI at your institution for future initiatives?
15. In retrospect, how relevant has the ETI been to the needs of your institution?