APPROACHES TO MEASURING
PERFORMANCE IN HIGHER EDUCATION:
A SOUTH AFRICAN CASE STUDY
DRAFT DISCUSSION PAPER
(Still to be edited, do not quote)
CHET Policy/Change Dialogue
Victoria Junction Hotel
Cape Town
9-10 March 2004
Ian Bunting and Nico Cloete
February 2004
SECTION 1:
BACKGROUND
SECTION 2:
HIGHER EDUCATION TRANSFORMATION:
MEASURING SYSTEMIC PERFORMANCE
SECTION 3:
DETERMINING THE “FITNESS FOR PURPOSE”
OF HIGHER EDUCATION INSTITUTIONS
SECTION 4:
“WELL-FUNCTIONING” HIGHER EDUCATION
INSTITUTIONS
SECTION 5:
POLICY TARGETS AND HIGHER
EDUCATION PERFORMANCE
SECTION 6:
PROGRESS AND CHALLENGES
SECTION 1:
BACKGROUND
Policy and Performance
In a recent book entitled Transformation in Higher Education: Global Pressures and
Local Responses (Cloete et al., 2002), the issue is raised as to the appropriate role of
the state and of state policy in an increasingly networked world where distinctions
between market and society, public and private, are blurring. The broad message
emerging from this volume is that the state, increasingly bound into global and local
networks, must itself become more responsive to differences in the higher education
landscape. The mode of responsiveness inherited from the liberation struggle –
deliberative policy participation – will no longer suffice. Instead, the state will have to
make use of information and information networks, and this will become more
technical as information technology proceeds apace.
Cloete et al. (2002) demonstrate that in South Africa the post-apartheid period started
with ‘symbolic policy’, the prime intention being to declare a break with the past, and
to signal a new direction. The need to declare a break with the past implied that the
main items on the policy agenda had to reflect political priorities. This implied that the
new policy issues with respect to higher education in 1994 in South Africa were
mainly concerned with the need to create more equity and democracy in the sector.
The developments in Central and Eastern Europe after the changes of the late 1980s
and early 1990s show a similar pattern (Vlasceanu and Sadlak, 2001). The new
higher education policy signalled a break with the past through an initial emphasis on
the ‘de-ideologising’ of the curricula in higher education, as well as an attempt to
strengthen institutional autonomy. Because of the difficult fiscal situation faced by the
governments in those countries, neither of these two original policy aims improved
the position and functioning of the battered public universities. What followed was a
succession of symbolic policy attempts, none of which were co-ordinated with overall
state policy or with efficient allocation instruments. The result was a constantly
changing policy focus described as 'changing the changes' which led to policy fatigue
and scepticism, especially among the academic staff.
In contrast to symbolic policy, differentiated policy-making means identifying and
agreeing upon particular institutional targets that prescribe the route each institution
is supposed to follow against broad systemic benchmarks, as well as the creation of
an environment of pressure and support necessary to facilitate progress along the
route. This type of policy-making has to be distinguished from ‘comprehensive
policy’, that is, a set of broad general principles and benchmarks for a whole sector
(such as those in the 1997 White Paper). Both before and after 1994, South Africa
traditionally concentrated on ‘comprehensive or grand policy’ thereby neglecting the
difficult priority decisions and differential levers that have to be designed to
implement it. Indeed, ‘comprehensive’ policy was the form that ‘symbolic’ policy took
in the immediate post 1994 period.
Policy differentiation is not necessarily aimed at creating institutional differentiation. It
can have quite the opposite intentions, namely to reduce differentiation. In contrast, a
comprehensive policy that is 'the same for all' often has highly differentiating effects,
as the application of the funding formula has demonstrated in the South African case.
One of the most important factors hindering the design of specific policy levers
between 1994 and 1999 was the absence of an up-to-date information system and a
common set of informational formats, so that benchmarks could be constructed and
each institution’s performance could be compared – both with the performance of
other institutions in the system and with its own performance over time. A new
national
higher education management information system (HEMIS) was introduced for the
2000 academic year, and this has aided policy analyses, including attempts to find
acceptable higher education performance models.
Systemic benchmarks, systemic evaluations based on quasi-experimental designs,
and a host of other sources and forms of information must be commissioned, co-ordinated, monitored, and responded to. If political consultation was the dominant
mode of distributed democracy before, then information and knowledge management
becomes the primary mode of accountable responsiveness in the network society.
Intervening to enforce ‘comprehensive or grand policy’ will increasingly only serve to
destabilise the sector.
A key aspect of differentiating policy is experimentation. Policy is a form of explicit
and deliberate governmental intervention. It is very important that policy-makers
design a policy in such a way (through experiments) that the effects of the
intervention can be assessed; in other words, that knowledge about which measures
and instruments work, and which do not work can be increased. It is risky to make
any statements, or come to any conclusions concerning interventions and their
effects, without using experimental or quasi-experimental research methods. Too
much is assumed concerning the effects of policies and policy instruments and very
little is actually known about these effects. No national system can prosper without
continual monitoring and research that creates the kind of information and analysis
around which collaboration most usefully occurs. This circulates information and
allows all the actors – government, society/market and institutions – to be
responsive.
The above discussion points to the fact that unidirectional comprehensive policy has
not worked in South Africa in the post-1994 period. Instead, a different notion of
higher education transformation, based on a more targeted, differentiated,
information-rich policy interaction between government, institutions and society has
to be developed.
In moving towards a mixed mode of comprehensive and differentiated policy, the South
African government published in 1997 a White Paper on Higher Education
Transformation and in 2001 a National Plan for Higher Education. Both publications
indicated that the government would develop steering mechanisms involving
planning and funding to assist with the transformation of the public higher education
system.
These and other policy documents indicated that this steering through planning and
funding would involve a cyclical process of this kind:
♦ The Ministry of Education would, at the start of a planning cycle, assess the
performance of individual higher education institutions against goals and
targets contained in institutional plans approved by the Ministry.
♦ Institutional goals and targets would be confirmed or adjusted in the light of
the performance measures conducted by the Ministry.
♦ The goals and targets approved by the Ministry would determine, for each
public higher education institution, what totals and categories of students
would be funded by government in the national budget cycle to follow.
♦ The performance of institutions in achieving the same or revised goals and
targets would be measured by the Ministry at the start of the next planning
cycle.
If this cyclical process is to function as an effective steering mechanism for a higher
education system, then the ways in which institutional performance is to be
measured will have to be spelled out in clear and precise ways. One of the purposes
of this CHET discussion document is that of contributing to the South African debate,
by considering various models of performance measurement and raising various
methodological and technical issues. The document does not argue for the adoption
of a specific set of performance measures for public higher education, but records a
process of thinking about performance indicators that can contribute to institutional
improvement. Institutional improvement is understood to mean not only government-driven change, but also institutions and stakeholder groups having access to
information that can help shape the course of the institution.
Beyond South Africa, a very important issue is how higher education institutions in
Africa can break out of the country-specific discourses of the moment, which make a
meaningful cross-country conversation impossible. If a framework for such a
discourse is not developed, then aid and development in higher education as part of
the broad thrust of NEPAD will remain insulated pipe dreams. It is also hoped that
this document can serve as a basis for developing a broader, Africa-wide debate on
performance indicators.
Structure of Report
The discussion document begins with a section which outlines the main points raised
by the CHET publication of 2000 on measuring the transformation of the South
African public higher education system. The data used in the CHET 2000 book are
updated wherever possible, the main purpose being to discuss the methodology
employed in, and the main weaknesses of, this approach. The purpose of this section
is not that of offering a new, more up-to-date, assessment of transformation in the
South African higher education system.
The next section moves to a report published in 2001 by the Minister of Education’s
National Working Group (NWG) on the restructuring of the South African higher
education landscape. The discussion focuses on the NWG’s attempt to define
indicators of the "fitness for purpose" of higher education institutions. The main text in
this section is derived from the published report. The discussion highlights the
key features of the NWG methodology, and of some of the objections raised to it in
South Africa during 2002.
The third section considers a different model which accepts the basic NWG
methodology, but which uses different indicators and benchmarks. The main purpose
of the section is once again that of illustrating the effects of this set of
methodological assumptions and this choice of new indicators and benchmarks. This
model was discussed at an informal seminar in Cape Town in January 2004, and a
range of criticisms was raised. These criticisms are summarized at the conclusion of
this section.
The fourth main section considers a further model which separates systemic
indicators and institutional indicators. The systemic indicators are based on a
variation of CHET 2000, and the institutional indicators on a methodology which
moves away from that of the NWG.
The final section offers a brief reflection on progress and challenges.
SECTION 2:
HIGHER EDUCATION TRANSFORMATION:
MEASURING SYSTEMIC PERFORMANCE
2.1 TRANSFORMATION GOALS
CHET published in 2000 a short book entitled Higher Education Transformation:
Assessing Performance in South Africa. The book offered an assessment of the
performance of the higher education system relative to the goals contained in the
1997 White Paper on higher education transformation.
The 1997 White Paper set four main transformation goals for the higher education
system. It also set, within the framework of these broad goals, a number of subsidiary
goals. These broad and subsidiary goals were set out by CHET (2000:4-7) in the
ways summarized in the table below.
TABLE 1: TRANSFORMATION GOALS OF THE 1997 WHITE PAPER

BROAD GOAL A: INCREASED AND BROADENED PARTICIPATION
A1: Specific goals related to the size and shape of the higher education system
Goal 1: Total student enrolments must increase.
Goal 2: The composition of the student body must over time reflect the demographic reality of the broader South African society.
Goal 3: The participation rates of African, coloured and women students must increase.
Goal 4: Career-oriented programmes, particularly in science and technology, must be expanded.
Goal 5: Postgraduate programmes at masters and doctoral levels must be expanded to meet the needs of the academic labour market.
Goal 6: An environment must be created in which private institutions can play a role in expanding access to higher education.
A2: Specific goals related to the student outputs of the higher education system
Goal 7: Student throughput and output rates must improve.
Goal 8: The success rates of black students must improve.
A3: Specific goals related to staff equity
Goal 9: The number and proportions of blacks and of women on the staff of higher education institutions must improve over time.

BROAD GOAL B: RESPONSIVENESS TO SOCIAL INTERESTS AND NEEDS
B1: Specific goals related to responsiveness
Goal 10: Basic research must continue to develop and grow. Applications-driven research, which addresses critical national needs, must be developed.
Goal 11: The development of basic and applied research must take place within the framework of a national research plan.
Goal 12: The institutional base for research must be expanded. The research capacities of the technikons and of the historically disadvantaged institutions must be extended.
Goal 13: The graduates and knowledge outputs of higher education must to an increasing extent meet the needs of a modernizing economy.

BROAD GOAL C: CO-OPERATION AND PARTNERSHIPS IN GOVERNANCE
C1: Specific goals related to governance
Goal 14: System and institutional governance systems must reflect and strengthen the values and practices of South Africa’s new democracy.
Goal 15: The national governance system must be one which supports all institutions and which encourages inter-institutional co-operation, particularly at a regional level.

BROAD GOAL D: FUNDING
D1: Specific goals related to funding
Goal 16: The proportion of funds available for earmarked purposes must increase.
Goal 17: A targeted programme of institutional redress must be implemented.
Goal 18: Public funding of the higher education system must stabilize, but must be distributed differently.
The CHET book used data available for the period 1997-1999 in its assessment of
the movement of the South African higher education system towards the 18 goals
listed above. The book recognized at the time that this narrow time-frame did not permit an
adequate assessment to be made of the system, particularly since only 1998 and
1999 were strictly post-White Paper years. The book therefore stressed that new
assessments should be made when fuller data sets became available.
In the sections which follow, quantitative data up until the end of the 2002 academic
year are used to up-date the CHET 2000 assessments of the performance of the
higher education system. As will be seen, no attempt has been made to up-date those
assessments which have to be based on qualitative judgments.
It is important to stress at the outset that the point of this up-dating of data is not that
of re-assessing the performance of the South African system. The main purpose is to
illustrate further what is involved in the methodology adopted by CHET 2000, and in
the use it makes of particular kinds of indicator.
2.2 SIZE AND SHAPE OF THE HIGHER EDUCATION SYSTEM
Goal 1: Total student enrolments must increase.
Goal 2: The composition of the student body must over time reflect the demographic reality of the broader South African society.
Goal 3: The participation rates of African, coloured and women students must increase.
Goal 4: Career-oriented programmes, particularly in science and technology, must be expanded.
Goal 5: Postgraduate programmes at masters and doctoral levels must be expanded to meet the needs of the academic labour market.
Goal 6: An environment must be created in which private institutions can play a role in expanding access to higher education.
It is important to note that no attempt was made to find new qualitative data relevant
to Goal 6. No assessments or judgments were therefore made about the higher
education system’s move towards the achievement of this goal.
The indicators linked to Goals 1, 2 and 4 are straightforwardly quantitative ones. The
indicators of the system’s progress towards these three goals are taken by CHET
2000 to be changes which occurred in head count enrolments over a period of time.
The indicators for Goals 3 and 5 are more complex. In the absence of data about the
enrolment of age-cohorts in higher education, CHET 2000 assumed that changes in
head count enrolments by race and gender could serve as proxies for changes in
participation rates. CHET 2000 assumed further, because of a lack of data about the
employment tracks of graduates, that changes in overall masters and doctoral
graduates could serve as indicators of the system’s meeting the needs of the
academic labour market.
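The proxy approach described above reduces to simple arithmetic on head count shares. As an illustrative sketch (the shares are taken from Graph 2 of this document; the function name is our own):

```python
# CHET 2000's proxy logic: lacking age-cohort data, changes in head count
# enrolment shares by population group stand in for changes in
# participation rates. Shares below are system-wide figures from Graph 2.

def share_change(shares, group, base_year, end_year):
    """Percentage-point change in a group's share of head count enrolments."""
    return shares[group][end_year] - shares[group][base_year]

shares = {
    "African":  {1993: 40, 1997: 58, 2002: 60},
    "Coloured": {1993: 6,  1997: 5,  2002: 6},
    "White":    {1993: 47, 1997: 31, 2002: 27},
}

# The African share rose 20 percentage points over 1993-2002,
# while the white share fell by the same amount.
print(share_change(shares, "African", 1993, 2002))  # 20
print(share_change(shares, "White", 1993, 2002))    # -20
```

On this reading, a rising share is interpreted as a rising participation rate, which is exactly the assumption the text flags as a limitation of the indicator.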
Graphs 1-3 which follow set out the data for the indicators linked to Goals 1, 2 and 3.
The key points which emerge from the presentation of these data are these:
♦ Total student enrolments in public higher education increased by a total of 78 000
(or 13%) in 2002 compared to 1997. These changes are sufficient to suggest that
Goal 1 was satisfied over this period.
♦ The proportion of white students in public higher education institutions fell from
47% in 1993, to 31% in 1997, and to 27% in 2002. The proportion of African plus
coloured students increased from 46% in 1993, to 63% in 1997, and to 66% in
2002. The proportion of female students increased from 43% in 1993, to 50% in
1997, and to 54% in 2002. This would support a conclusion that Goal 2 was
satisfied over this period; that the composition of the student body in public higher
education institutions has to an increasing extent begun to reflect the demographic
realities of South African society.
♦ If enrolment changes can serve as proxies for participation rates, then it could be
concluded that the participation rates of African, coloured and Indian students had
improved in 2002 compared to 1993 and 1997. This would satisfy Goal 3.
Graph 1
Head count enrolment totals in public universities and technikons: 1993-2002 (thousands)
        1993  1995  1997  1999  2000  2001  2002
Total    473   567   596   596   600   645   674
Graph 2
Head count enrolments by population group: 1993-2002
           1993  1997  1999  2000  2001  2002
White       47%   31%   28%   27%   27%   27%
Indian       7%    7%    6%    7%    7%    7%
Coloured     6%    5%    5%    5%    5%    6%
African     40%   58%   60%   60%   61%   60%
Graph 3
Head count enrolments by gender: 1993-2002
          1993  1997  1999  2000  2001  2002
Male       57%   50%   48%   46%   46%   46%
Female     43%   50%   52%   54%   54%   54%
Graphs 4 and 5 below set out the data for the indicators linked to Goal 4 above. The
key indicators used in CHET 2000 link growth in (a) technikon enrolments and (b)
science and technology and business/management enrolments to growth in career-oriented
programmes. Graph 4 shows what share technikons had of total public
higher education enrolments between 1993 and 2002. Graph 5 deals with head count
enrolments within three broad fields of specialization. These are the fields of SET
(science, engineering and technology), business (which includes all management
studies and accountancy and finance), and humanities (which includes education,
the fine and applied arts, languages, and the social sciences).
Graph 4 shows that the share which technikons had of total higher education
enrolments did not increase over this period. Their proportion of head count
enrolments was 34% in 1993 and in 1997, and 32% in 2002. Graph 5 shows that the
share which students specializing in science and technology had of the head count
enrolments of universities and technikons increased from 19% in 1993, to 23% in
1997 and to 30% in 2002. It follows that if growth in technikon enrolments is to be a
key indicator of a change to career-oriented enrolments, then the higher education
system has not satisfied this part of Goal 4. However if growth in science and
technology programmes is an indicator of growth in career-oriented programmes,
then this part of Goal 4 has been met by the South African higher education system.
Graph 4
Proportions of public higher education enrolments in universities and technikons: 1993-2002
              1993  1995  1997  1999  2000  2001  2002
Universities   66%   67%   66%   65%   66%   66%   68%
Technikons     34%   33%   34%   35%   34%   34%   32%
Graph 5
Head count enrolments by field of specialisation: 1993-2002
             1993  1995  1997  1999  2000  2001  2002
Humanities    57%   58%   54%   53%   47%   43%   44%
Business      24%   22%   23%   24%   25%   28%   26%
SET           19%   20%   23%   23%   28%   29%   30%
Graph 6 sets out data relevant to the indicators for Goal 5. It shows that masters and
doctoral enrolments grew from 31 000 in 1997 to 47 000 in 2002, an increase of 52%
over this period. If this is an adequate indicator of growth in the academic labour
market, then it could be claimed that the requirements of Goal 5 have been met.
Graph 6
Head count enrolments of masters and doctoral students: 1995-2002 (thousands)
        1995  1997  1999  2000  2001  2002
Total     29    31    35    37    41    47

2.3 STUDENT OUTPUTS OF THE HIGHER EDUCATION SYSTEM
Goal 7: Student throughput and output rates must improve.
Goal 8: The success rates of black students must improve.
Graphs 7 to 9, on the page which follows, set out the data for indicators of student
throughput and output rates. The CHET 2000 assessment of the system’s
performance was incomplete, primarily because relevant data were not available.
Graph 7 relies on a calculation which divides what are termed “FTE degree credits”
by FTE student enrolments. In the South African higher education management
information system (HEMIS), FTE enrolments are derived by multiplying the head
count student enrolments in a course by a fraction which represents the proportion
that course has of a standard full-time curriculum. A degree credit total is derived by
multiplying the students passing courses by the same fractions used in calculating
FTE enrolled student totals. So dividing an FTE degree credit total by an FTE
enrolled total is a way of determining average student success rates for an institution
or higher education system. An indicator of improvements in student output and
throughput rates could therefore be based on changes over time in the proportions of
FTE degree credits to FTE enrolments.
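The HEMIS calculation just described can be sketched in a few lines. The courses, head counts and curriculum fractions below are invented for illustration only; the point is the weighting, not the figures:

```python
# Average student success rate, HEMIS-style: FTE enrolments weight head
# counts by each course's fraction of a standard full-time curriculum;
# FTE degree credits apply the same fractions to passing students only.
# All course data here are hypothetical.

courses = [
    # (students enrolled, students passing, fraction of full-time curriculum)
    (200, 150, 0.25),
    (120, 100, 0.25),
    (300, 180, 0.50),
]

fte_enrolments = sum(enrolled * frac for enrolled, _, frac in courses)  # 230.0
fte_credits    = sum(passed * frac for _, passed, frac in courses)      # 152.5
success_rate   = fte_credits / fte_enrolments

print(f"{success_rate:.0%}")
```

With these invented numbers the sketch yields a rate of about 66%, of the same order as the system averages reported in Graph 7.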
Graph 7
Total (contact + distance) FTE degree credits as % of total FTE student enrolments: 1998-2002
          1998  1999  2000  2001  2002
Average    66%   66%   68%   68%   69%
Graph 8 presents a simple comparison between total graduates and total head
counts in the public higher education system, also between 1998 and 2002. Graph 9
below uses the same method as Graph 7 to determine average success rates for the
system by population group. These data can also be used as indicators of
improvements in student output and throughput rates.
Graph 8
Head count totals of enrolments and graduates: 1998-2002 (thousands)
             1998  1999  2000  2001  2002
Enrolments    596   596   600   645   674
Graduates      89    95    92    96    98
Graph 9
Degree credits as % of FTE student enrolments: 1999-2002
        African  Coloured  Indian  White  Average
1999       57%       68%     76%    81%      66%
2000       62%       70%     74%    82%      68%
2001       62%       70%     70%    80%      68%
2002       64%       68%     70%    77%      69%
The main points about throughput and output rates which emerge from Graphs 7 to 9
are these:
♦ If FTE degree credits divided by FTE enrolments can be taken to be indicators of
student output, then Graph 7 shows that the output rates of the public higher
education system improved steadily over the 5-year period 1998-2002. The ratio
of FTE degree credits to FTE enrolments increased from 66% in 1998, to 68% in
2000, and to 69% in 2002.
♦ If total graduates can be used as indicators of student output, then Graph 8 also
shows that student outputs improved over this period. The graduate total
increased from 89 000 in 1998 to 98 000 in 2002. The graph shows however that
enrolments increased at a more rapid rate than graduates, which may indicate
that throughput rates have not improved, as is required by Goal 7.
♦ Graph 9 shows that the average success rates of African students (measured as
FTE degree credits divided by FTE enrolments) improved from 57% in 1999 to
64% in 2002. That of coloured students was 68% in both 1999 and 2002.
If these are appropriate indicators, then these three graphs suggest that the higher
education system is meeting the demands of Goals 7 and 8. The data suggest that
throughput and output rates are improving, even though the changes up to 2002
have been relatively small. An equity issue does nevertheless remain as far as the
success rates of black students are concerned. Graph 9 shows that the ratio of FTE
degree credits to FTE enrolments averaged 62% for African students, 69% for
coloured students and 80% for white students.
2.4 STAFF EQUITY IN THE HIGHER EDUCATION SYSTEM
Goal 9: The number and proportions of blacks and of women on the staff of higher education institutions must improve over time.
CHET 2000 used four indicators to measure the performance of the public higher
education system in relation to the goal of staff equity. These were changes by
population group and by gender in (a) the permanent academic staff and (b) the
permanent professional executive and support staff. Permanent staff are, for the
purposes of these calculations, staff who contribute to an institutional pension or
retirement scheme. Academic staff include both teaching and research staff.
Professional staff are those occupying posts which require incumbents to hold at
least a four-year higher education qualification. The data supporting these indicators
are set out in Graphs 10 to 13 which follow.
Graphs 10 and 11 show that the proportions of blacks (African + coloured + Indian)
on the permanent academic and on the professional executive and support staff of
public higher education institutions increased between 1998 and 2002. The
proportion of black academic staff increased from 21% in 1998 to 34% in 2002, and
the proportion of black professional executive and support staff increased from 21%
in 1998 to 39% in 2002.
Graphs 12 and 13 show that the proportion of females on the permanent academic
staff did not change in 2002 compared to 1998. This proportion remained 39%. The
proportion of female professional executive and support staff did, however, increase
from 36% in 1998 to 46% in 2002.
If these are acceptable indicators, then it could be argued that the higher education
sector did over this 5-year period move towards the goal of racial equity in its
academic staff complement, and towards the goals of racial and gender equity in its
professional executive and support staff.
Graph 10
Permanent academic staff by population group: 1998-2002
           1998  1999  2000  2001  2002
African     11%   17%   20%   22%   20%
Indian       6%    6%    7%    7%    8%
Coloured     4%    4%    4%    4%    5%
White       79%   74%   69%   68%   66%
Graph 11
Executive & professional support staff by population group: 1998-2002
           1998  1999  2000  2001  2002
African     10%   17%   21%   23%   23%
Indian       7%    5%    6%    6%    7%
Coloured     4%    5%    6%    6%    8%
White       77%   72%   67%   65%   61%
Graph 12
Permanent academic staff by gender: 1998-2002
          1998  1999  2000  2001  2002
Female     39%   37%   38%   38%   39%
Male       61%   63%   62%   62%   61%
Graph 13
Executive and professional support staff by gender: 1998-2002
          1998  1999  2000  2001  2002
Female     36%   44%   48%   47%   46%
Male       64%   56%   52%   53%   54%

2.5 RESPONSIVENESS TO SOCIAL INTERESTS AND NEEDS
Goal 10: Basic research must continue to develop and grow. Applications-driven research, which addresses critical national needs, must be developed.
Goal 11: The development of basic and applied research must take place within the framework of a national research plan.
Goal 12: The institutional base for research, and the research capacities of the technikons and of the historically disadvantaged institutions, must be extended.
Goal 13: The graduates and knowledge outputs of higher education must to an increasing extent meet the needs of a modernizing economy.
Because indicators related to Goal 11 are difficult to formulate, no attempt has been
made to present new data on the performance of the system relative to this goal.
CHET 2000 used the totals of publication units approved for government subsidy as
indications of the extent to which the higher education system has moved towards
Goals 10, 11 and 12. Graphs 14, 15 and 16 which follow set out these data on
research publication outputs for the period 1994-2002, and new data for 1998-2002
on doctoral graduates.
The CHET 2000 analysis was based on publication unit data for 1995-1997. The two
conclusions which it reached were (a) that because there had been no increase in
these totals over these three years, the higher education system had not satisfied the
goal that basic research production should increase, and (b) because most
publication units over this period were produced by the historically white universities,
the system’s research base was not being extended into the historically black
universities and the technikons. If these can be taken to be appropriate indicators,
then similar conclusions can be drawn for the years 1998-2002.
Graph 14 shows that total publication units in the system fell in 1999 to 5030, which
was 570 (or 10%) lower than the total for 1994, and 310 (or 6%) lower than the 1997
total. The graph does however show that the publication unit total grew between
1999 and 2002. Even though the 2002 total was much the same as that for 1994, the
change could indicate that basic research production in the system has improved
over the years since 1999.
Graph 15 is consistent with this last point. It shows that the total of doctoral
graduates in 2002 was 26% higher than the total for 1999.
Graph 16 shows that most publication units between 1997 and 2002 were produced
by the historically white universities. Their share was 87% in 1997 and 86% in 2002.
The total produced by the historically black universities fell by 10% in 2002 compared
to 1997. The total produced by technikons increased from 93 in 1997 to 224 in 2002,
which increased their share of the total for the system from 2% in 1997 to 4% in
2002.
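The sector shares quoted in this paragraph follow directly from the totals in Graph 16. A small sketch of the calculation (data as in Graph 16; the function name is our own):

```python
# Sector shares of subsidy-earning publication units, using the 1997 and
# 2002 totals from Graph 16 (HWU = historically white universities,
# HBU = historically black universities, TECH = technikons).
units = {
    1997: {"HWU": 4626, "HBU": 610, "TECH": 93},
    2002: {"HWU": 4821, "HBU": 559, "TECH": 224},
}

def sector_share_pct(year, sector):
    """A sector's share of the system's publication units, in percent."""
    return 100 * units[year][sector] / sum(units[year].values())

print(round(sector_share_pct(1997, "HWU")))   # 87
print(round(sector_share_pct(2002, "HWU")))   # 86
print(round(sector_share_pct(1997, "TECH")))  # 2
print(round(sector_share_pct(2002, "TECH")))  # 4
```

The rounded shares reproduce the 87%/86% figures for the historically white universities and the 2%/4% figures for the technikons cited in the text.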
Graph 14
Total publication units approved for funding: 1994-2002
        1994  1995  1996  1997  1998  1999  2000  2001  2002
Total   5600  5500  5660  5340  5160  5030  5510  5470  5630
Graph 15
Totals of doctoral graduates in the higher education system: 1998-2002
        1998  1999  2000  2001  2002
Total    760   758   827   801   956
Graph 16
Subsidy-earning publication units by higher education sector: 1994-2002
        1994  1995  1996  1997  1998  1999  2000  2001  2002
HWU     5013  4835  4935  4626  4407  4351  4793  4795  4821
HBU      517   603   613   610   623   505   561   516   559
TECH      68    62   114    93   132   174   156   153   224
CHET 2000, in its discussion of indicators for Goal 13, stressed that the 1997 White
Paper had not attempted to answer the crucial question: what are the skills that a
modernizing economy needs? CHET 2000 argued further that the most appropriate
answer to this question would be one which recognizes that higher education has a
dual function in a knowledge economy: (1) it must produce medium-skilled graduates
for the professional and service sectors of the economy, and (2) it must produce
highly-skilled knowledge producers for high-level innovation. CHET 2000 therefore
took as indicators of the system’s moves towards these goals the totals of graduates
which it had produced.
Graphs 17 to 19 offer information for the period 1998-2002 on the public higher
education system’s production of graduates. Graph 17 takes masters and doctoral
graduates to be the potential high level knowledge producers contemplated by the
1997 White Paper, and compares their total to the total of all other graduates. Graphs
18 and 19 deal with the fields of specialization of graduates. Some of the main points
which emerge from these three graphs are these:
♦ Graph 17 shows that the total of all graduates increased by 10% between 1998
and 2002. Masters and doctoral graduates grew at a faster rate than this: the
2002 total of masters and doctoral graduates was 2174, or 40%, higher than that
for 1998.
♦ Graphs 18 and 19 show that graduate totals and proportions in the field of
science, engineering and technology, which was one of the areas targeted in the
1997 White Paper, remained relatively constant during 1998 to 2002. A total of 23
000 SET graduates (25% of the overall 1998 total) was produced in 1998,
compared to a total of 25 000 in 2002 (25% of the overall 2002 total). Graduates
in business and management, which was the other main area targeted by the
1997 White Paper, increased rapidly over the period. A total of 16 000
business/management graduates (18% of the overall total) was produced in 1998,
compared to a total of 23 000 in 2002 (23% of the overall 2002 total).
The data in these last three graphs suggest that, if CHET 2000’s indicators are
appropriate ones, then the higher education system did, during the period 1998 to
2002, move further towards the achievement of part of the fourth responsiveness
goal. This is the goal of producing the graduates required for a modernizing
economy.
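The growth rates cited in these paragraphs follow directly from the totals in Graph 17. A quick check, sketched in Python:

```python
# Graduate totals for 1998 and 2002, taken from Graph 17
masters_doctoral = {1998: 5368, 2002: 7542}
all_graduates    = {1998: 89072, 2002: 98284}

def growth_pct(series):
    """Percentage growth between 1998 and 2002."""
    return 100 * (series[2002] - series[1998]) / series[1998]

print(f"All graduates: +{growth_pct(all_graduates):.1f}%")          # +10.3%
print(f"Masters + doctoral: +{growth_pct(masters_doctoral):.1f}%")  # +40.5%
print(f"Absolute masters + doctoral increase: "
      f"{masters_doctoral[2002] - masters_doctoral[1998]}")         # 2174
```

These match the rounded figures of 10% and 40% (an increase of 2174) quoted in the text.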
Graph 17
Masters + doctoral and other graduates: 1998-2002

                    1998   1999   2000   2001   2002
Masters+doctoral    5368   6212   6738   7065   7542
All other          83704  89315  85587  89265  90742
Total              89072  95527  92325  96330  98284
Graph 18
Total graduates/diplomates by broad CESM category: 1998-2002 (thousands)

         1998  1999  2000  2001  2002
SET        23    25    24    24    25
BUS        16    19    20    21    23
HUM        50    52    48    52    51
TOTAL      89    96    92    96    98
Graph 19
Proportions of graduates/diplomates in each broad field of study: 1998-2002

        1998  1999  2000  2001  2002
HUM      37%   36%   32%   29%   28%
ED       20%   18%   21%   25%   24%
BUS      18%   19%   22%   22%   23%
SET      25%   26%   26%   24%   25%

2.6
CO-OPERATION AND PARTNERSHIPS IN GOVERNANCE
Goal 14: System and institutional governance systems must reflect and strengthen the
values and practices of South Africa’s new democracy.
Goal 15: The national governance system must be one which supports all institutions
and which encourages inter-institutional co-operation, particularly at a
regional level
No attempt was made to update the CHET 2000 assessments as far as these goals
are concerned. Any adjustments would have had to be based on further qualitative
analyses, which were not available for the period 1998-2002.
2.7
FUNDING
Goal 16: The proportion of funds available for earmarked purposes must increase
Goal 17: A targeted programme of institutional redress must be implemented
Goal 18: Public funding of the higher education system must stabilize, but must be
distributed differently.
It is not clear what indicators could be linked to Goal 17. No attempt was therefore
made to find new data in support of this goal.
The 1997 White Paper considered government funding to be one of the key
mechanisms for ensuring that the higher education system moves towards approved
transformation goals. It set Goal 16 as one of the subsidiary funding goals, on the
understanding that earmarked funding would be a more directive mechanism than
block funding. The indicator related to this goal can therefore be changes in the
proportions of the higher education budget used for block grant and earmarked
funding.
Graph 20 below compares government funding of block grant and earmarked grants
over the period 1997-2002. The graph shows that in nominal terms the amount set
aside for earmarked grants in 2002 was 81% higher than the 1997 allocation. The
nominal increase in block grants in 2002 compared to 1997 was 43%, which led to
the proportion of the government higher education budget allocated to earmarked
growing from 10% in 1997 to 12% in 2002.
Graph 20
Total government higher education funds allocated for earmarked and block grant
purposes: 1997 to 2002 (Rands millions)

            1997  1998  1999  2000  2001  2002
Earmarked    545   694   807   810   912   984
Block       4887  5309  5803  6204  6620  6985
Total       5432  6003  6610  7014  7532  7969
The second part of Goal 18 requires government funds to be distributed differently,
implying by this that the funding formulas which had been in place since the 1980s
should be replaced by a new framework which satisfies the requirements of the 1997
White Paper. Such a new framework was finally adopted during 2003, and was
applied for the first time in the 2004 academic year.
The first part of Goal 18 requires public funding of the public higher education
system to stabilize. Graph 21 sets out a range of indicators which can be used to
measure performance against this goal.
Graph 21
Changes in higher education funding indicators: 1998-2002 [all on base of 1998 = 100]

        FTE    Head    Nominal    Real       % of   % of ed   Nominal   Real
        enrol  counts  gov total  gov total  GDP    total     per FTE   per FTE
1998    100    100     100        100        100    100       100       100
2002    112    113     128        102         91    100       113        93
The various columns in the graph show the following:
♦ Total FTE enrolments in the public higher education system increased by 12% in
2002 compared to 1998. Head count student enrolments grew by 13%. Total
government appropriations for public higher education in nominal Rands grew in
2002 compared to 1998 by the higher proportion of 28%. Growth in real Rands
over this period was 2%. If these are appropriate indicators, then it could be
argued that government funding of higher education has stabilized over this
period.
♦ The effect of the student growth reflected in the first column is that total
government appropriations per FTE enrolled student grew in 2002 compared to
1998 by 13% in nominal Rands. However, in real Rands total government
appropriations per FTE enrolled student dropped by 7%. These could be used as
indicators that government funding of higher education did not in fact stabilize
over the period.
♦ Government expenditure on public higher education as a % of total government
expenditure on all education remained constant between 1998 and 2002.
However, government expenditure on public higher education as a % of GDP fell
by 9% in 2002 compared to 1998. If these are appropriate indicators, then a
picture emerges of government funding of higher education being stable under
one indicator and not under the second.
The indicators linked to the data in Graph 21 therefore give a mixed message about
movement towards the goal of stabilizing the public funding of higher education. One
set of indicators suggests that this goal was achieved during the period 1998-2002,
and another set that it was not.
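The indexing used in Graph 21, where every indicator is rebased so that 1998 = 100, can be reproduced from any pair of raw values, and derived indicators such as funding per FTE student can be read off as ratios of published indices. A minimal sketch (the index values are those reported in Graph 21):

```python
# Rebase a raw indicator so that its 1998 value equals 100, as in Graph 21
def rebase(raw_1998, raw_other):
    return 100 * raw_other / raw_1998

# Published 2002 indices from Graph 21
nominal_total_2002 = 128  # nominal government funding, 1998 = 100
fte_enrol_2002 = 112      # FTE enrolments, 1998 = 100

# Nominal funding per FTE student: the ratio of the two indices, rebased to 100
per_fte_2002 = rebase(fte_enrol_2002, nominal_total_2002)
print(round(per_fte_2002))  # about 114; Graph 21 reports 113 because the
                            # published indices are themselves rounded
```

The small discrepancy against the published per-FTE index of 113 illustrates why such derived indicators should be computed from raw figures rather than from already-rounded indices.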
2.8
METHODOLOGICAL ASSUMPTIONS OF CHET 2000
It should be clear that the CHET 2000 approach was based on a number of key
assumptions about performance measurement systems. The main assumptions
underpinning the CHET 2000 methodology are these:
♦ Higher education institutions are entities to which simple and complex
(including relational) properties can be ascribed. A higher education system is
the sum of its constituent institutions, but properties not ascribable to an
individual institution can be assigned to a higher education system.
♦ The properties of higher education institutions, and of higher education
systems, can change over time.
♦ Indicators are a means of referring to higher education properties; either at a
specific point in time or as these change over time. A distinction has however
to be drawn between (a) descriptive indicators, and (b) performance
indicators.
♦ Descriptive indicators are used to refer to properties which institutions or
higher education systems happen to have. These are properties which either
are not the result of intentional actions of institutions, or are “accidental”
relative to either national or institutional policy goals. Performance indicators,
in contrast to this, must refer to properties which are the result of intentional
actions, and which institutions or systems are expected to have in terms of
national goals.
These assumptions led CHET 2000 to state (a) that the goals contained in the 1997
White Paper were based on a set of properties which the South African higher
education system would be expected to acquire (over a period of time), and (b) that a
set of performance indicators for the higher education system could be extracted
from the 1997 White Paper.
2.9
SOME PROBLEMS WITH THE CHET 2000 METHODOLOGY
The methodological assumptions listed in the subsection above led CHET 2000 to
adopt a model which held that the main ingredients of a higher education
performance measurement system are these:
♦ Sets of government-determined goals for the higher education system.
♦ Sets of properties which can be derived from these goals, and which the
higher education system will possess if these goals are achieved.
♦ Sets of indicators which can be used to refer to these properties.
The detailed discussions in earlier subsections showed that CHET 2000 did not in
any systematic way attempt to link what it had taken to be the main ingredients of a
systemic performance measurement system. The 18 goals selected are derived
directly from the White Paper. But it is not always self-evident what the systemic
properties are that have been derived from these goals. It is also not always clear,
even when properties are linked to goals, what indicators have been selected to refer
to these properties. Some examples of these problems are these:
♦ Goal 6 deals with the creation of a private higher education sector, and
Goal 11 with the establishment of a national research plan. What systemic
property or properties would the South African higher education system have
if these two goals were achieved?
♦ The property related to Goal 13 is that of a higher education system meeting
the needs of a modernizing economy. What are the most appropriate
indicators for this systemic property?
♦ One of the properties related to Goal 18 is that of government funding of the
higher education system stabilizing over time. A range of indicators can be
selected, but these give inconsistent signals, suggesting that the system both
has and does not have the required property.
What could be taken to be a further major weakness in the approach adopted by
CHET 2000 is that it does not permit an overall evaluation to be made of the
South African higher education system. It leaves unanswered these key questions:
♦ How well or how badly is the system doing? CHET 2000 could not answer this,
firstly because no targets or benchmarks were set. Statements such as
“improve” and “increase” offer no real basis for judgment.
♦ How close is the system to achieving the transformation goals of the 1997
White Paper?
♦ This form of assessment offers no internal or international comparisons, nor
does it say anything about whether change is due to government policy,
institutional initiative or societal pressure.
All that CHET 2000 was able to offer was a series of “comments” on the performance
of the higher education system. A summary of the comments published in 2000, and
what could be an updated series of comments based on the new data available, can
be seen in Table 2 below. As can be seen, these columns of comments cannot be
taken to be an adequate overall evaluation of the performance of the South African
higher education system.
Table 2
SUMMARY OF NATIONAL TRANSFORMATION GOALS, WITH COMMENTS FOR THE
PERIOD UP TO 1999 AND POSSIBLE NEW COMMENTS FOR THE PERIOD UP TO 2002

A: Broadened and increased participation

Goal 1: Total student enrolments must increase
Up to 1999: Enrolments began to fall by 1999.
Up to 2002: FTE and head count enrolments reached peaks in 2002.

Goal 2: The composition of the student body must become more representative
Up to 1999: The proportions of black and women students increased rapidly by 1999,
but they were under-represented in key programmes.
Up to 2002: The proportions of black and women students remained at 1999 levels.

Goal 3: The participation rates of African, coloured and women students must increase
Up to 1999: Black participation rates, measured as a % of the age group in higher
education, remained constant.
Up to 2002: Increases in black and female enrolments imply that their participation
rates must have improved.

Goal 4: Career-oriented programmes must be expanded
Up to 1999: Enrolments in SET and in business majors and in technikons showed
movement towards this goal.
Up to 2002: The number and % of enrolments in SET and business majors increased;
technikon enrolments, however, were flat.

Goal 5: Masters and doctoral programmes must be expanded
Up to 1999: Masters and doctoral enrolments increased.
Up to 2002: Masters and doctoral enrolments increased.

Goal 6: Private institutions must play a role in expanding access to higher education
Up to 1999: No clear evidence available on what was occurring in the private higher
education sector.
Up to 2002: No clear evidence available on what was occurring in the private higher
education sector.

Goal 7: Student throughput and output rates must improve
Up to 1999: These rates remained at best constant.
Up to 2002: Throughput and output rates improved during the 5-year period up to 2002.

Goal 8: The success rates of black students must improve
Up to 1999: Success rates of black students were sharply lower than those of white
students.
Up to 2002: Success rates of black students improved, but remained lower than those
of white students.

Goal 9: The proportions of blacks and of women on the academic and other
professional staff must improve
Up to 1999: Changes could be seen in the historically black, but not the historically
white, institutions.
Up to 2002: The % of black and women staff increased, but remained far lower than
those of whites and males.

B: Responsiveness to societal needs

Goal 10: Basic research must continue to develop and grow
Up to 1999: Basic research outputs had fallen sharply.
Up to 2002: Basic research outputs, as well as the total of doctoral graduates,
increased between 1998 and 2002.

Goal 11: A national research plan must be developed
Up to 1999: No national plan had been developed.
Up to 2002: No national plan had been developed.

Goal 12: The institutional base for research must be expanded
Up to 1999: The research bases of historically black universities and of technikons
were under-developed.
Up to 2002: These research bases remained under-developed.

Goal 13: Graduates must meet the needs of a modernizing economy
Up to 1999: Insufficient graduates with the required skills were being produced.
Up to 2002: Graduate totals increased between 1998 and 2002; more masters and
doctoral graduates, and more SET and business graduates, were produced.

C: Governance

Goal 14: Governance systems must reflect the values and practices of South Africa’s
new democracy
Up to 1999: The picture available was a mixed one.
Up to 2002: No new evidence was available.

Goal 15: The national governance system must support all institutions and encourage
inter-institutional co-operation
Up to 1999: The picture available was again a mixed one.
Up to 2002: No new evidence was available.

D: Funding

Goal 16: The proportion of earmarked funding must increase
Up to 1999: This proportion remained constant.
Up to 2002: This proportion increased.

Goal 17: Targeted institutional redress funding must be implemented
Up to 1999: No institutional redress programme had been introduced.
Up to 2002: No institutional redress programme had been introduced.

Goal 18: Public funding of the higher education system must stabilize, but must be
distributed differently
Up to 1999: Government funding increased in real terms, but no new funding
mechanism was introduced.
Up to 2002: Total government funding increased in real terms, but real funding per
FTE student fell.

SECTION 3:
DETERMINING THE “FITNESS FOR PURPOSE” OF
HIGHER EDUCATION INSTITUTIONS

3.1
RESTRUCTURING THE SOUTH AFRICAN HIGHER EDUCATION
SYSTEM
A set of performance measures which differed in many ways from the CHET 2000
model was developed by a National Working Group (NWG), which was established
in April 2001 by the South African Minister of Education. The main purpose of the
NWG was set out in this way:
“The National Working Group will investigate and advise the Minister on
appropriate arrangements for consolidating the provision of higher education
on a regional basis through establishing new institutional and organizational
forms, including the feasibility of reducing the number of higher education
institutions. The investigation forms part of the broader process for the
restructuring of the higher education system to ensure that it contributes to
social and economic development, as outlined in the National Plan for Higher
Education (of March 2001)” (NWG 2001: 56).
The NWG’s formal terms of reference stated that its “investigation must be guided by
the principles and goals for the transformation of the higher education system as
outlined in Education White Paper 3: A Programme for the Transformation of the
Higher Education System” (NWG 2001: 56).
The NWG read these transformation principles together with the goals of the National
Plan for Higher Education (March 2001), and then took as its point of departure:
“---the emphasis in the National Plan on the need to ensure that the higher
education system produces high quality graduates with the appropriate skills
and competencies, as well as the knowledge and research required to
contribute to social and economic development. In short, in line with the
National Plan, the NWG focused on the need to ensure the ‘fitness for
purpose’ of the higher education system; i.e. the extent to which the elements
constituting the structures and operations of the system are suited and well-equipped to fulfill effectively those functions which are its raison d’etre, thus
enhancing the quality of the higher education system” (NWG 2001: 12).
The NWG added that three main properties are:
“---critical to ensuring the ‘fitness for purpose’ of the higher education system.
These are equity, sustainability and productivity. A restructured higher
education system should be socially just and equitable in its distribution of
resources and opportunities, it should meet the requirements of long-term
sustainability and it should enhance the productivity of the system through
effectively and efficiently meeting the teaching, skills development and
research needs of the country” (NWG 2001: 12).
This notion that policy required higher education institutions in South Africa to have a
specific set of properties played a major role in the development of the performance
measures used by the NWG. The NWG claims to have used these three properties
as the basis for a set of performance indicators and benchmarks designed to:
“---provide a framework for assessing quantitatively the equity, sustainability
and productivity properties that in the NWG’s view should characterize healthy
and well-functioning higher education institutions” (NWG 2001: 12).
The NWG did however add some cautionary notes about its selected indicators and
benchmarks:
“The NWG recognizes that the indicators and benchmarks do not reflect
properties, such as leadership, management, performance and academic
standards, which can only be assessed through qualitative judgments and
peer review. The NWG also recognises that the methodology used to derive
some of the indicators, such as graduation rates, is open to discussion. This is
largely due to the limited availability of appropriate data because of
shortcomings in the old SAPSE management information system. However,
despite these concerns, the NWG is convinced that the indicators provide a
useful framework with which to identify some of the strengths and weaknesses
of the higher education system in general and individual institutions in
particular” (NWG 2001: 12-13).
The key methodological assumptions which emerge from this opening section of the
NWG’s report are these:
♦ Sets of properties which a higher education system and its constituent
institutions are expected to possess can be derived from the policy-driven
goals for that system.
♦ Quantitative performance indicators can be used to refer to both the properties
which a system and its constituent institutions in fact possess, and to those
which it ought to possess.
♦ Quantitative indicators which refer to properties which a system or institution
ought to possess can be termed the benchmarks for that system.
The first two of these assumptions are similar to the fundamental ones underpinning
the CHET 2000 methodology. These are the assumptions that sets of properties
which institutions are expected to possess can be derived from national policy goals,
and that quantitative indicators can be used to refer to these properties. The third
assumption places emphasis on an issue not picked up by CHET 2000. This is that
of evaluation: if policy goals generate properties that a system or institution ought to
possess, then it must be possible to use indicators both to refer to where institutions
happen to be in their move towards specific goals, and to evaluate this performance.
In the subsections which follow, the NWG’s sets of indicators and benchmarks will be
presented and discussed in detail.
3.2
EXPECTED FEATURES, INDICATORS AND BENCHMARKS
Table 3 on the next page offers a detailed summary of the NWG’s listing of the
policy-derived features which any public South African higher education institution is
expected to have, and of the indicators and benchmarks which will be used (a) to
refer to the selected features and (b) to indicate whether or not the institution meets
the specific “fitness-for-purpose” criterion. The table is divided into three columns
which deal with (a) the broad features which SA universities and technikons are
expected to have in terms of current government policies, (b) the quantitative
indicators used to refer to those features, (c) the quantitative benchmarks linked to
these indicators (for a full version of the table, which refers also to data sources, see
NWG 2001: 61-63).
The NWG’s first group of features deals with issues of equity in student and staff
participation in higher education institutions. Its indicators were the proportions of
students and staff by race group and gender in individual higher education
institutions. Its benchmarks, in the sense of the properties which an institution
ought to possess, were: 40% of on-campus students and of professional staff being
African, and 50% female.
The NWG’s second group of features dealt with the sustainability of the student
enrolment of an institution, and of the size and shape of these enrolments. Its
benchmark for sustainability was that the annual inflow of students into an institution
should at least match the outflow. Its benchmark for size was that the full-time
equivalent (FTE) enrolment in an institution should be at least 8 000. Its benchmarks
for the shape of a university were that at least 50% of FTE enrolments should be in
science and technology and business/management, and for a technikon that at least
70% of FTE enrolments should be in these fields.
The third set of features deals with the availability and qualifications of academic
staff. Its benchmarks for staff availability were a ratio of at most 20 FTE students per
FTE staff member in universities, and a ratio of at most 25 in technikons. Its
benchmarks for staff qualifications were 50% of permanent academic staff in
universities to have doctorates, and 35% of permanent academic staff in technikons
to have either masters or doctoral degrees.
The fourth set of features deals with student outputs. The benchmarks were set as
ratios of graduates to enrolments, with the underlying assumption being that these
ratios imply that at least 75% of any cohort of undergraduate or postgraduate
students entering an institution should eventually graduate.
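The link between an eventual cohort-completion target and an annual graduates-to-enrolments ratio can be made explicit. In a rough steady state, annual graduates approximate the completion rate times the annual intake, while enrolments approximate the intake times the nominal programme length, so the ratio is roughly completion rate divided by programme length. A sketch under these simplifying steady-state assumptions (the NWG's own derivation is not published in this detail):

```python
def steady_state_graduation_rate(completion_rate, programme_years):
    """Approximate annual graduates/enrolments ratio for a steady-state
    system in which a fixed share of each entering cohort eventually
    graduates from a programme of the given nominal length."""
    return completion_rate / programme_years

# 75% of a cohort completing a 3-year qualification implies a ratio of 25%,
# which matches the undergraduate benchmark in Table 4.
rate = steady_state_graduation_rate(0.75, 3)
print(f"{rate:.0%}")  # 25%
```

The same logic, with shorter nominal durations, yields the higher postgraduate benchmark ratios; students who take longer than the nominal time, or who drop in and out, would push the observed ratio below this approximation.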
The fifth set of features deals with staff outputs. The benchmarks were set as ratios
of research publications and of masters and doctoral graduates per permanent
academic staff member. The benchmark for universities was set as 1 publication unit
per permanent academic staff member per year, and the benchmark for technikons
was set at 0.5. The benchmark for masters and doctoral graduates was set as the
equivalent of either 1 masters graduate per permanent academic staff member per
year or 1 doctoral graduate per permanent academic staff member every three years.
The benchmarks for technikons were set once again at 50% of those of universities.
The final feature dealt with the financial sustainability of institutions. The benchmark
set was that institutions should be given, after use had been made of a wide range of
indicators, a rating of at least “highly likely” to survive as a going concern.
Table 3
NWG’s expected features, indicators and benchmarks
3.3
APPLICATION OF INDICATORS AND BENCHMARKS
The NWG used information for 1999 and 2000 to produce, for each institution, tables
which related data to the indicators and benchmarks. It then produced separate sets
of averages for universities and technikons, and related these to the benchmarks for
institutions. These are set out in Table 4 below.
Table 4
DATA AVERAGES FOR HIGHER EDUCATION SYSTEM
[Note: UNISA and Technikon SA have not been included in these averages]

                                               UNIVERSITIES          TECHNIKONS
EXPECTED FEATURE                              Average  Benchmark   Average  Benchmark
[1] Student equity
    (a) % Africans in contact total             47%      40%         70%      40%
    (b) % females in contact total              53%      50%         49%      50%
[2] Staff equity
    (a) % Africans in professional staff        22%      40%         20%      40%
    (b) % females in professional staff         38%      50%         39%      50%
[3] Enrolment stability
    Retention + replacement rate               102%     100%        104%     100%
[4] Enrolment size
    FTE student total                         10 400    8 000       8 000    8 000
[5] Enrolment shape
    (a) % SET plus Bus/management               44%      50%         66%      70%
    (b) % Humanities                            56%      20%         34%      none
[6] Students : academic staff
    Ratio of FTE students to FTE
    instruction/research staff                   15       20          26       25
[7] Academic staff qualifications
    % of permanent academic staff with:
    (a) doctoral degrees                        41%      50%          —       none
    (b) doctoral + masters degrees               —       none        28%      35%
[8] Undergraduate outputs
    3-year qualification graduates
    divided by enrolments                       20%      25%         13%      25%
[9] Postgraduate outputs
    (a) masters graduates / enrolments          19%      33%         14%      33%
    (b) doctoral graduates / enrolments         13%      20%          5%      20%
[10] Research outputs
    Subsidy publication units divided
    by permanent academic staff                 0.5       1          0.03     0.5
[11] Research student outputs
    Masters and doctoral graduates
    divided by permanent academic
    staff (weighting: M = 1; D = 3)             0.7       1          0.1      0.5
[12] Financial status
    Rating as going concern                     3.5       4          2.6      4
Note: these data were circulated to higher education institutions in 2001 but were not
published in the NWG’s formal report.
The data in Table 4 were then represented by the NWG in the form of two radar
graphs (see NWG 2001: 14). Copies of these two graphs can be seen on the page
which follows.
These graphs were drawn after each benchmark had been taken to be a value of 1,
and each average had been converted to a fraction by dividing it by its related
benchmark. For example, the benchmark for academic qualifications was taken to =
1, for both universities and technikons. The university average of 41% of academic
staff with doctorates was then divided by the benchmark value of 50%, giving a value
of 0.82 relative to the norm of 1. The technikon average of 28% of academic staff
with either a masters or doctoral degree was divided by the norm of 35%, giving a
value of 0.80 relative to the norm of 1. Similar calculations were made in respect of
the other averages and norms set out in Table 4 above.
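The conversion just described, each average divided by its benchmark with 1 as the norm, together with the NWG's mechanical count of benchmarks met, can be sketched as follows (treating "benchmark met" as a ratio of at least 1 is an assumption consistent with the NWG's counting, and the illustrative figures are the two staff-qualification examples from the text):

```python
def radar_value(average, benchmark):
    """Convert an average to a fraction of its benchmark (norm = 1)."""
    return average / benchmark

# University: 41% of academic staff with doctorates vs a 50% benchmark
print(round(radar_value(0.41, 0.50), 2))  # 0.82

# Technikon: 28% with masters/doctoral degrees vs a 35% benchmark
print(round(radar_value(0.28, 0.35), 2))  # 0.8

def benchmarks_met(pairs):
    """Mechanical count of indicators at or above benchmark (ratio >= 1)."""
    return sum(1 for avg, bench in pairs if avg / bench >= 1)

# e.g. enrolment stability (102 vs 100) met; doctorates (41 vs 50) not met
print(benchmarks_met([(102, 100), (41, 50)]))  # 1
```

Plotting the resulting fractions on a radar graph against a circle at 1 gives exactly the visual comparison the NWG used for sectors and institutions.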
Radar graph 1: norms and averages for universities
Radar graph 2: norms and averages for technikons
Radar graphs 1 and 2 were used by the NWG as ways of determining whether or not
the university and technikon sectors were “healthy and well-functioning”; whether
they met “fitness for purpose criteria”. The NWG concluded that neither sector met
these requirements. It made mechanical counts of the 12 points on each graph, and
said this:
“--the university sector satisfied only 4 of the 12 benchmarks, namely student
equity, enrolment stability, enrolment size and staff availability. The university
sector does not meet any of the output benchmarks, and on average its
financial stability and staff equity profile is below the benchmark. ---The (radar
graph) for the technikons --- is similar to that of universities. --- it indicates that
the technikon sector is weaker than the university sector in relation to output
benchmarks” (NWG 2001: 13).
The NWG applied the same radar graph methodology in its discussions with
individual universities and technikons. It did this by adding a third line to the overall
summaries for universities and technikons. The new line represented that institution’s
data scores divided by the benchmarks for the system. These institutional radar
graphs were not published, but were made available to the institutions which asked
to see their radar graphs. Examples of the actual radar graphs for four universities
can be seen on the pages which follow. Because these graphs were not published,
the institutions reflected in the graphs have not been identified. The same four
institutions will however appear in the analyses in later sections of this paper.
Radar graph 3: University X
Radar graph 4: University Y
Radar graph 5: University Z
Radar graph 6: University W
The NWG’s conclusion would have been that none of these four universities met its
“fitness for purpose” criteria, and that none could as a consequence be described as
“well-functioning” institutions. It would have summarized the situation in the way set
out in Table 5 below:
Table 5
Summary of institutional radar graphs

Institution    Benchmarks met   Problem areas
University X   7 out of 12      Student and staff equity; graduate throughputs;
                                research outputs
University Y   4 out of 12      Staff equity; academic staff qualifications; graduate
                                throughputs; research outputs
University Z   4 out of 12      Enrolment size & shape; academic staff
                                qualifications; research outputs; financial
                                sustainability
University W   5 out of 12      Student and staff equity; graduate throughputs;
                                research outputs

3.4
OBJECTIONS TO THE NWG’S MODEL OF INDICATORS AND
BENCHMARKS
The NWG’s systemic and institutional radar graphs generated considerable
controversy after their release during 2001. Examples of the kinds of objection raised
against the NWG can be seen in a paper written by Hugh Amoore (“What we
Measure”, SA Association of Institutional Research, October 2001) and one written
by Anthony Melck (“Indicators and Benchmarks for Universities and Technikons”, SA
Universities Vice-Chancellors Association, November 2001).
Some of the main objections raised by Amoore and Melck can be summarized in this
way:
♦ Database: A system of indicators and benchmarks requires a database which is
stable, and in which there are no definitional ambiguities. The NWG used data
derived from the SA higher education management information system (HEMIS),
which was (in 2001) new and in need of further refinement (Melck).
♦ Definitions of indicators and benchmarks: The methodology employed by the
NWG does not make sufficiently clear what the distinctions are between statistical
indicators and benchmarks. It is also not clear how the NWG’s
benchmarks/indicators are supposed to relate to the policy objectives to which
they are linked (Melck).
♦ Use of indicators and benchmarks: Indicators and benchmarks should be
developed on a time-series rather than a “snap-shot” basis, as was done by the
NWG. The NWG should also have made use of qualitative indicators and
benchmarks (Melck).
♦ Uniform indicators and benchmarks: Most of the NWG’s indicators and
benchmarks applied to both universities and technikons. The use of a single set of
benchmarks for all public higher education institutions could lead to a process of
homogenization in the sectors, which would be contrary to government policy.
Undifferentiated sets of norms should not be applied across the higher education
sector (Melck).
♦ Flawed indicators: Some of the indicators selected are technically flawed, and
cannot serve the functions intended by the NWG. Key examples are those of
enrolment stability and graduation rate (Amoore).
♦ Inappropriate benchmarks: The benchmarks selected by the NWG are not
appropriate for SA universities and technikons. They do not represent even
reasonable aspirations for most SA universities and technikons (Amoore).
Other objections were raised to the NWG’s use of indicators and benchmarks.
Because they were used in a report on the restructuring of the higher education
system, the NWG’s radar graphs were seen as tools for justifying the higher
education mergers being pressed on an unconvinced higher education sector.
Benchmarks, it was argued, simply could not be used to develop institutional
matches. The legacy of using indicators and benchmarks to justify decisions based
mainly on geographic, equity and political arguments has been a rather negative
image for performance indicators.
SECTION 4:
“WELL-FUNCTIONING” HIGHER EDUCATION INSTITUTIONS
4.1
EVALUATING HIGHER EDUCATION INSTITUTIONS: CHET 2003
The CHET research team formulated in 2003 a model for performance measurement
which was based on the methodology adopted by the NWG, but which did not accept
all of the NWG’s sets of expected properties and benchmarks.
The CHET 2003 model took the central aspect of the NWG methodology to be its
emphasis on evaluations, rather than just descriptions, of the performance of higher
education institutions relative to national policy goals. The CHET 2003 judgments
were however to be more explicitly evaluative than those of the NWG. These
judgments were supposed to determine the degree to which an institution was a
“well-functioning” one, rather than whether it met the “fitness-for-purpose” criteria of
the NWG.
The CHET 2003 model begins with the formulation of a set of policy-derived features,
indicators and benchmarks which could be used to determine to what extent the
evaluation “well-functioning” can be applied to individual higher education institutions.
Table 6, which appears on the next page, lists a total of 14 properties which a
well-functioning South African higher education institution could be expected to have,
in terms of current national policies. These properties have been grouped into 5
subsets:
♦ Academic programmes
The first two properties which a well-functioning institution is supposed to have are
a reasonable spread of academic programmes by major and by level of study. The
indicators are full-time equivalent (FTE) student enrolments by broad field of study
and the proportion of head count enrolments in postgraduate programmes. The
benchmarks for FTE student enrolments are derived from the 2001 National Plan for
Higher Education. Since similar quantitative benchmarks for postgraduate
programmes have not been formulated in policy documents, those set out in Table 6
were derived from empirical data for South African universities with high proportions
of postgraduate enrolments.
♦ Students
The six properties in this subset deal with issues of student equity and of student
output. In terms of equity, a well-functioning institution is expected to have a student
body which is representative of the total South African population, and to have
educational processes which are fair. In terms of output, a well-functioning institution
is expected to have reasonable undergraduate success rates and high proportions of
its head count enrolment graduating each year. The indicators chosen for equity are
head count enrolments by race and gender, and undergraduate success rates by
race group. The output indicators are undergraduate success rates, and head count
totals of graduates divided by head count enrolments. The only benchmarks which
are derived directly from a policy document (in this case the 2001 National Plan) are
those dealing with the expected ratios between graduates and enrolments. The
others can be found in funding formula documents and in various planning directives
issued by the national Department of Education.
Table 6
The CHET 2003 account of indicators and benchmarks for well-functioning
institutions
♦ Administrative and academic staff
The properties in these two subsets deal with staff equity, with the qualifications and
research activities of academic staff, and with their availability to students. A
well-functioning higher education institution is expected to have a professional staff
which is representative of the total South African population. It is also expected to
have an academic staff body which is well qualified and is active in research. It is,
finally, expected to have adequate numbers of academic staff to meet the teaching
needs of students. The indicators chosen for staff equity are head count totals of
professional administrative and academic staff by race and gender. The indicator
chosen for academic staff qualifications is the proportion of staff with doctorates, and
that for research activity is the ratio between total research outputs and total
permanent academic staff members. The indicator chosen for academic staff
availability is the ratio between FTE enrolled students and FTE academic staff
members. The benchmark for research activity is derived directly from the new
government funding framework. The others are adapted versions of indicators
employed by the NWG.
♦ Finances
The property selected is that a well-functioning South African higher education
institution must be financially stable and financially sustainable. The indicator chosen
is a complex one, based on assessments of an institution’s ability to meet its short to
medium term financial commitments. The benchmark is a positive value when the
Department of Education’s capitalisation formula is applied to that institution.
The CHET 2003 model used the same basic radar graph methodology as that of the
NWG. It used HEMIS data available for the period up to 2002 to relate the ratios and
proportions of individual institutions to the benchmarks set out in Table 6. In the case
of each of the 14 indicators in the radar graphs, the benchmark was standardized as
1, and the institutional value was taken to be its ratio or proportion divided by the
benchmark. Examples of the radar graphs generated under the CHET 2003 model
appear on the pages which follow. As will be seen, the CHET 2003 graphs have been
simplified by omitting the line representing the average values for the university
system.
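The normalization step just described can be sketched in a few lines of code. The sketch below is illustrative only: the indicator names and figures are invented, and the actual models used 12 (NWG) or 14 (CHET 2003) indicators rather than three.

```python
# Sketch of the radar-graph normalization described above: each benchmark is
# standardized as 1, and an institution's plotted value is its own ratio or
# proportion divided by the benchmark. Indicator names and figures below are
# hypothetical, not data for any actual institution.

def normalize(institution_values, benchmarks):
    """Return benchmark-relative values (reaching the benchmark maps to 1.0)."""
    return {k: institution_values[k] / benchmarks[k] for k in benchmarks}

def benchmarks_met(normalized):
    """Count indicators on which the institution reaches the benchmark."""
    return sum(1 for v in normalized.values() if v >= 1.0)

benchmarks = {"success rate": 0.80, "graduates/enrolments": 0.20,
              "staff with doctorates": 0.40}
university = {"success rate": 0.72, "graduates/enrolments": 0.15,
              "staff with doctorates": 0.44}

norm = normalize(university, benchmarks)
print(benchmarks_met(norm))  # → 1: only one hypothetical benchmark is met
```

The final count of benchmarks met is the summary figure reported in Tables 5 and 7.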
The examples chosen are the same four universities displayed in the NWG radar
graphs 3 to 6, to enable a comparison to be made of the NWG and the CHET 2003
evaluations. Table 7 below compares these different evaluations, and shows that a
major change in evaluation occurs in the case of University Y under the CHET 2003
model.
Table 7
Comparison of institutional radar graphs: NWG and CHET 2003

University X
  Benchmarks met: 7 out of 12 (NWG); 8 out of 14 (CHET 2003)
  Problem areas (NWG): Student and staff equity; graduate throughputs; research outputs
  Problem areas (CHET 2003): Student and staff equity; student outputs

University Y
  Benchmarks met: 4 out of 12 (NWG); 9 out of 14 (CHET 2003)
  Problem areas (NWG): Staff equity; academic staff qualifications; graduate throughputs; research outputs
  Problem areas (CHET 2003): Staff equity; student outputs; staff research outputs

University Z
  Benchmarks met: 4 out of 12 (NWG); 4 out of 14 (CHET 2003)
  Problem areas (NWG): Enrolment size & shape; academic staff qualifications; research outputs; financial sustainability
  Problem areas (CHET 2003): Postgraduate shape; student outputs; staff research outputs; academic staff availability; financial sustainability

University W
  Benchmarks met: 5 out of 12 (NWG); 8 out of 14 (CHET 2003)
  Problem areas (NWG): Student and staff equity; graduate throughputs; research outputs
  Problem areas (CHET 2003): Student and staff equity; student outputs

Radar graph 7: University X under CHET 2003
Radar graph 8: University Y under CHET 2003
Radar graph 9: University Z under CHET 2003
Radar graph 10: University W under CHET 2003
4.2
OBJECTIONS TO THE CHET 2003 MODEL
The CHET 2003 model of institutional evaluation was discussed at an informal
seminar organized by CHET in Cape Town in January 2004. The participants at the seminar
raised objections to the CHET 2003 model which were in many ways similar to those
raised against the NWG model.
The NWG-type objections raised at the January 2004 seminar included references to
technical flaws in the indicators, and to their forcing homogeneity on a system which
is supposed, in national policy terms, to be moving towards institutional diversity.
Other objections concerned the reliance solely on quantitative indicators, and
the absence of qualitative indicators from the CHET 2003 set. Objections were also
raised to the use of “snap-shot” indicators based on averages across time.
Arguments were raised to the effect that any indicator set must include time-series
data.
Objections other than those listed above were also raised at the January 2004
seminar. These further objections included the following:
♦ No clear account has been offered of what the purpose is of the indicators
proposed in CHET’s 2003 model. These indicators and their associated
benchmarks could be interpreted as offering a methodology for the monitoring of
higher education institutions rather than as simply “grading” them (in the sense of placing
them in some kind of value-laden ranking order). A distinction has to be drawn
between the monitoring of institutional performance against sets of national goals
and the grading of institutions.
♦ If the model is intended to be one which evaluates institutional performance,
then it cannot be based primarily on quantitative indicators and benchmarks. An
evaluative process presupposes that qualitative indicators have been used.
♦ The notion of “well-functioning” is difficult to understand and define. The use of
this term does not offer any advance on the NWG’s notion that judgments could
be made of the performance of institutions and of the higher education system in
terms of their “fitness-for purpose”.
♦ The CHET 2003 model confuses indicators and benchmarks which can
reasonably be applied only to the system with those intended for the evaluation of
individual institutions. If continued use is to be made of the notion of “well-functioning”, then the properties of a well-functioning system must be defined
independently of those of a well-functioning institution.
♦ The radar graphs used in the NWG and the CHET 2003 models are misleadingly
simple. The radar graphs do not, for example, permit different weightings to be
given to different properties and indicators. The graphs suggest that each
indicator carries an equal weighting, and that the final assessment of an institution
or a system involves a simple count of the numbers of benchmarks met and not
met (as is suggested, for example, in Table 7 above).
♦ The radar graphs suggest also that average ratios and average proportions can
be used as the basic units in analyses of institutions. This could generate
problematic results in a higher education policy analysis.
♦ The use of the “benchmark” in CHET 2003 is misleading. These were not based
on a standard benchmarking exercise, which might need to consider such matters
as “best practice” across national higher education systems. The “benchmarks”
employed in this model are in effect national policy targets. The purpose of the
model would become clearer if the term “policy target” were used instead of that
of “benchmark”.
SECTION 5:
POLICY TARGETS AND HIGHER EDUCATION PERFORMANCE
5.1
POSSIBLE CHANGES TO THE CHET 2003 MODEL
The comments and objections raised at the January 2004 seminar have shown that
the methodology on which the CHET 2003 model is based may have to be dropped
and replaced by a very different one. The major changes which need to be examined
include these:
♦ The CHET 2004 model will use indicators as ways of referring to policy-driven
higher education goals. A distinction will be drawn between systemic and
institutional goals, and two sets of indicators will as a consequence be
constructed.
♦ The set of systemic and the set of institutional indicators will not be linked to
benchmarks, if these are understood to be embedded in the evaluation or grading
of a higher education system and higher education institutions.
♦ Radar graphs which relate institutional data averages to benchmark norms will
not be constructed.
♦ The sets of systemic and institutional indicators will be linked instead to
quantitative targets, which will be derived, directly or indirectly, from national
policy documents.
♦ Whenever possible, time series data rather than snapshot or average data will be
employed. These data will be presented in a series of bar graphs, each one of
which will reflect the quantitative targets the South African higher education
system and its constituent institutions are expected to achieve.
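The shift described in these points, from benchmark-relative snapshots to time-series values tracked against policy-derived targets, can be sketched as follows. The series figures are hypothetical, loosely echoing the masters-and-doctoral example discussed later in this section (the 10% head count target):

```python
# Minimal sketch of the CHET 2004 approach outlined above: an indicator is a
# time series of system values compared year by year against a quantitative
# policy target, rather than a snapshot graded against a benchmark.
# The yearly values below are hypothetical.

target = 0.10  # e.g. 10% of head counts to be masters & doctoral students
series = {1999: 0.058, 2000: 0.062, 2001: 0.066, 2002: 0.069}

for year, value in sorted(series.items()):
    status = "target met" if value >= target else "below target"
    print(year, f"{value:.1%}", status)
```

Each such series corresponds to one of the bar graphs presented in the remainder of this section.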
The CHET 2004 model, with its sets of indicators and targets, is set out in the two
subsections which follow.
5.2
INDICATORS AND TARGETS FOR THE HIGHER EDUCATION SYSTEM
A set of 10 systemic indicators and targets which could be employed in a new model
are presented in Table 8 which follows. Most of these goals are versions of those
which appeared in the CHET 2000 model discussed in an earlier section of this
paper. Table 8 has been limited to just 10 systemic goals to illustrate what kinds of
changes would need to be made if this methodology, rather than those of CHET
2000 and CHET 2003, were adopted.
The size and shape goals in Table 8 are modified versions of Goals 1, 4 and 5 of
CHET 2000 (see Table 1 in subsection 2.1). The indicators employed are the same
as those of CHET 2000, but this earlier model did not employ either targets or
benchmarks. The figures included in the third column of the table are therefore new,
as are the references to the sources of the targets. The first two size and shape
targets can be derived from current policy documents (the 2001 National Plan for
Higher Education) and the 2003 Ministerial Statement on Higher Education Funding.
Table 8
Adapted goals, indicators and targets for the higher education system: CHET 2004

Size and shape of the system

Goal 1: Total student enrolments must increase.
  Indicator: Head count enrolments by instruction mode
  Target: 460 000 contact & 270 000 distance heads by 2005
  Source of target: MS on funding: December 2003

Goal 2: Enrolments in science/technology (SET) and business/management (BUS) must grow.
  Indicator: FTE enrolments by broad field of study
  Target: 40% SET and 20% BUS
  Source of target: Adapted from National Plan for HE: 2001

Goal 3: Masters and doctoral enrolments must grow.
  Indicator: Head count enrolments by qualification type
  Target: 10% of head counts to be masters & doctoral students
  Source of target: None: based on current enrolment patterns

Student equity

Goal 4: The participation of disadvantaged students in higher education must increase.
  Indicator: Head count totals of African students in contact programmes
  Target: 60% of contact students to be African
  Source of target: Adapted from National Plan for HE: 2001

Goal 5: The participation of female students in higher education must increase.
  Indicator: Head count totals of female students
  Target: 50% of contact + distance students to be females
  Source of target: None: but based on gender equity = equality

Goal 6: The fairness of educational processes must improve.
  Indicator: Success rates by race in contact programmes
  Target: Success rates to be equalised
  Source of target: Equity planning directives of DoE

Staff equity

Goal 7: The participation of disadvantaged staff groups in the professional staff complement of the system must improve.
  Indicator: Proportions of permanent academic and professional administrative staff by race
  Target: 40% of permanent professional staff to be African
  Source of target: Adapted from equity planning directives of DoE

Goal 8: The participation of females in the professional staff complement of the system must improve.
  Indicator: Proportions of permanent academic and professional administrative staff by gender
  Target: 40% of permanent professional staff to be female
  Source of target: Adapted from equity planning directives of DoE

Graduate and research outputs

Goal 9: The output of graduates must improve.
  Indicator: Head count totals of graduates divided by head count enrolments
  Target: Annual total of graduates to = 20% of head count enrolments
  Source of target: Adapted from National Plan for HE: 2001

Goal 10: Research outputs must improve.
  Indicator: Output of research masters and doctoral graduates plus research publication units relative to total of permanent academic staff
  Target: Weighted output total to equal average total of permanent academics for past 3 years
  Source of target: Adapted from MS on funding: December 2003

Notes: (1) MS on funding = Ministerial Statement on Higher Education Funding, released in
December 2003
(2) DoE = national Department of Education
(3) The weightings employed in the target for Goal 10 are: masters graduate = 1,
publication unit = 1, doctoral graduate = 3
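As one concrete illustration of Table 8, the Goal 10 target can be computed from the weightings given in note (3). The staff and output figures below are hypothetical, not actual HEMIS data:

```python
# Sketch of the Goal 10 calculation: weighted research output (masters
# graduate = 1, publication unit = 1, doctoral graduate = 3) compared against
# the target, which is the average number of permanent academics over the
# past three years. All figures below are hypothetical.

def weighted_output(masters_grads, doctoral_grads, publication_units):
    return 1 * masters_grads + 3 * doctoral_grads + 1 * publication_units

permanent_academics = [14200, 14500, 14800]   # hypothetical 3-year head counts
target = sum(permanent_academics) / len(permanent_academics)  # 14500.0

output = weighted_output(masters_grads=5200, doctoral_grads=900,
                         publication_units=6300)              # 14200
print(output >= target)  # → False: the output falls short of the target
```

A shortfall of this kind is what bar graph D2 later reports for the system as a whole.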
The first three bar graphs which follow on the next page measure the performance of
the South African higher education system relative to the targets linked to Goals 1 to
3 in Table 8 above.
Bar graph A1 shows that the higher education system had up to 2002 met only the
distance education head count target. Graph A2 shows that the proportion of FTE
students in the system in business and management courses increased steadily
between 1999 and 2002, and had by 2002 exceeded the target. The system’s
proportion of SET students remained below the target proportion of 40% throughout
the period. Graph A3 shows that the proportion of masters and doctoral students in
the system’s head count enrolment total increased between 1999 and 2002, but at
6.9% in 2002 remained below the set target of 10%.

Bar graph A1: head count enrolments for HE system; relative to CHET 2004 targets
(thousands)

Bar graph A2: proportions for system of FTE enrolments in SET and business
courses; relative to CHET 2004 targets
Bar graphs B1 to B3 deal with student equity goals. Goals 4, 5 and 6 in Table 8 relate
directly to CHET 2000 Goals 2, 3 and 8 (see Table 1 in subsection 2.1). Graph B1
shows that the proportion of African students in the higher education system’s
contact student total increased steadily over the period 1999-2002, and at 58% in
2002 was only 2 percentage points below the target of 60%. Graph B2 shows that
the proportion of female students in the higher education system remained above the
target of 50% in each year of the period 1999-2002.

Bar graph B3 deals with issues of process equity. It compares, by race group,
average undergraduate success rates in contact courses. The target set for Goal 6 of
Table 8 is an equalization of these undergraduate success rates. Graph B3 shows
that the system has not achieved this target. The success rates of white and Indian
students in contact programmes were, throughout the period 2000-2002,
considerably higher than those of African and coloured students. The gap between
the average success rate for African undergraduates and white undergraduates in
contact programmes was 15 percentage points in each of the three years.

Bar graph A3: proportion of head count enrolments of masters plus doctoral students
in the system; relative to CHET 2004 target

Bar graph B1: proportion of African students in the system’s head count total of
contact students; relative to CHET 2004 target

Bar graph B2: proportion of female students in the system’s head count total of
contact students; relative to CHET 2004 target

Bar graph B3: success rates in undergraduate contact courses by race group;
relative to CHET 2004 target
Graph C1 deals with staff equity goals and targets. It shows what shares African and
female staff had of the total of academic and professional administrative staff in the
South African higher education system during the period 2000 to 2002. The
proportion of female staff was on or above the target of 40% in each year of this
period. The proportion of African staff was about half of the target in each year.
Bar graph C1: proportions of African and of female staff in the system’s total of
academic and professional administrative staff; relative to CHET 2004 target
Bar graph D1 deals with the goal of student output efficiency. It shows, for each year
in the period 1999 to 2002, the proportion which graduates made up of the system’s
head count enrolment total. The target proportion of 20% is based on an
expectation that about 67% of any cohort of students entering the higher education
system will eventually complete their qualifications. The graph shows that the South
African higher education system is not meeting this target, and further that the
proportion of students graduating declined between 1999 and 2002.
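The link between the 20% annual target and the 67% cohort expectation can be made explicit with a steady-state approximation: if enrolments are stable and students stay enrolled for an average of d years, graduates divided by enrolments is roughly the cohort completion rate divided by d. The average duration used below is an assumed figure chosen to illustrate the arithmetic, not one taken from the paper:

```python
# Steady-state sketch of the link between the 20% annual graduation target and
# the expectation that about 67% of an entering cohort eventually completes.
# In a stable system, graduates/enrolments ≈ completion_rate / average years
# enrolled. The duration below is an assumption for illustration only.

completion_rate = 0.67      # share of a cohort expected to complete
avg_years_enrolled = 3.35   # hypothetical average enrolment duration (years)

annual_graduation_ratio = completion_rate / avg_years_enrolled
print(round(annual_graduation_ratio, 2))  # → 0.2, i.e. the 20% target
```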
Bar graph D1: graduates as a proportion of head count enrolments in the higher
education system; relative to target in CHET 2004
Graph D2 deals with Table 8’s final goal of research productivity. It relates weighted
totals of research masters and doctoral graduates plus the total of research
publication units approved for government subsidy to a target total which the
permanent academic staff complement of the system is expected to achieve. The
graph shows that the South African higher education system’s research output,
measured in this way, did not achieve the target in any of the years
2000 to 2002.
Bar graph D2: weighted research output totals for the system; relative to the
CHET 2004 target
5.3
INDICATORS AND TARGETS FOR HIGHER EDUCATION INSTITUTIONS
Examples of the kinds of institutional indicators and targets which CHET 2004 could
generate are set out in Table 9 which follows. This table has, like Table 8, been
limited to just 10 goals, because its purpose is that of illustrating what kind of
performance indicator model could flow from adoption of the CHET 2004
methodology.
The student equity goals, indicators and targets in Table 9 are modified versions of
those used in CHET 2004 for the higher education system (see Goals 4 to 6 of Table
8 in subsection 5.2). The student efficiency goals, indicators and targets have been
drawn directly from CHET 2003, with the main change being that the term
“benchmark” has been replaced by the term “target” (see Goals 6 to 8 of Table 6 in
subsection 4.1). The staff equity goals, indicators and targets have also been drawn
directly from CHET 2003, again with the main change being the use of the term
“target” rather than that of “benchmark” (see Goals 9 and 10 of Table 6). The staff
qualification and output goals, indicators and targets have been drawn directly, in the
ways described for staff equity, from CHET 2003 (see Goals 11 and 12 in Table 6).
Table 9
Adapted goals, indicators and targets for higher education institutions: CHET 2004

Student equity

Goal 1: The participation of disadvantaged students in on-campus programmes must increase.
  Indicator: Head count enrolments by instruction mode and by race group
  Target: 40% of students in contact programmes to be African
  Source of target: MS on funding: December 2003

Goal 2: The participation of female students in all programmes must increase.
  Indicator: Head count enrolments by gender
  Target: 50% of contact + distance students to be females
  Source of target: None: but based on gender equity = equality

Goal 3: Educational processes must be fair.
  Indicator: FTE degree credits as % of FTE enrolments in contact programmes by race group
  Target: Difference between African and white ratios to be no more than 5 percentage points
  Source of target: None: but based on National Plan for HE: 2001

Student efficiency

Goal 4: Undergraduate success rates must be high.
  Indicator: Undergraduate FTE degree credits as % of FTE enrolments in contact programmes
  Target: Average of 80%
  Source of target: None: but based on National Plan for HE: 2001

Goal 5: High proportions of undergraduate enrolments must graduate each year.
  Indicator: Undergraduate qualifiers as % of undergraduate enrolments
  Target: Average of 20%
  Source of target: Adapted from National Plan for HE: 2001

Goal 6: High proportions of masters & doctoral enrolments must graduate each year.
  Indicator: Masters plus doctoral graduates as % of M + D enrolments
  Target: Average of 25%
  Source of target: Adapted from National Plan for HE: 2001

Staff equity

Goal 7: The participation of disadvantaged staff groups in the professional staff complement of the system must improve.
  Indicator: Proportions of permanent academic and professional administrative staff by race
  Target: 40% of permanent professional staff to be African
  Source of target: Adapted from equity planning directives of DoE

Goal 8: The participation of females in the professional staff complement of the system must improve.
  Indicator: Proportions of permanent academic and professional administrative staff by gender
  Target: 40% of permanent professional staff to be female
  Source of target: Adapted from equity planning directives of DoE

Staff qualifications and outputs

Goal 9: Academic staff must be well qualified.
  Indicator: % of permanent academic staff members with doctorates
  Target: Average for universities to be 40% with doctorates
  Source of target: Adapted from NWG: 2001

Goal 10: Academic staff must be active in research.
  Indicator: Output of research masters and doctoral graduates plus research publication units relative to total of permanent academic staff
  Target: Ratio for universities to be 1.25 weighted units per permanent academic staff member
  Source of target: Adapted from MS on funding: December 2003

Notes: (1) MS on funding = Ministerial Statement on Higher Education Funding, released in
December 2003
(2) DoE = national Department of Education
(3) The weightings employed in the target for Goal 10 are: masters graduate = 1,
publication unit = 1, doctoral graduate = 3
Some examples of the kinds of graphs which the indicators and targets in Table 9
would generate appear on the pages which follow. These graphs have been limited
to those dealing with student equity and student efficiency.
Bar graphs E 1 to E3 deal with issues of access equity and of equity in educational
processes. The four universities reflected in the graphs are the same as those used
as examples in earlier sections. The universities have, once again, not been
identified.
Bar graphs E1 and E2 show how these universities are performing relative to (a) a
target that African students have at least a 40% share of head count student
enrolments in contact or on-campus programmes, and (b) a target that female
students have at least a 50% share of total contact plus distance student head count
enrolments.
Bar graph E1: proportions of African students in contact student enrolments;
relative to CHET 2004 target
Bar graph E2: proportions of female students in contact plus distance student
enrolments; relative to CHET 2004 target
Graphs E3A and E3B deal with what could be described as issues of educational
process equity at two of the four universities used as examples. The graphs express
as ratios the total of contact degree credits (or FTE passes in courses) for the
university to its total contact FTE student enrolments. These calculations include
undergraduate plus all postgraduate contact courses. The equity requirement is that
the ratios for African students (considered to be proxies for disadvantaged students)
should not be more than 5 percentage points below the average for the institution. If
the African student ratio is more than 5 percentage points below the average, then
educational processes at that institution cannot be described as fair.
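This fairness rule is straightforward to state as a test. The sketch below uses hypothetical ratios; `tolerance` encodes the 5-percentage-point margin described above:

```python
# Sketch of the process-equity test described above: educational processes are
# treated as unfair if the degree-credit ratio for African students falls more
# than 5 percentage points below the institutional average. Figures below are
# hypothetical, not data for any actual institution.

def processes_fair(african_ratio, institutional_average, tolerance=0.05):
    """True unless the African-student ratio is more than `tolerance`
    below the institution-wide average."""
    return african_ratio >= institutional_average - tolerance

print(processes_fair(0.74, 0.78))  # → True  (gap of 4 percentage points)
print(processes_fair(0.68, 0.78))  # → False (gap of 10 percentage points)
```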
Bar graph E3A: University X contact programmes: comparison of average ratios of
FTE degree credits to FTE enrolments for African students and for all students in the
institution
Bar graph E3B: University W contact programmes: comparison of average ratios of
FTE degree credits to FTE enrolments for African students and for all students in the
institution
Bar graphs F1 to F3 deal with issues of student efficiency. Graph F1 represents
average undergraduate success rates in contact programmes, for which a target of
80% has been set. Graphs F2 and F3 show what proportions of a given year’s head
count enrolments completed their qualifications. The targets set are adaptations of
those employed in the 2001 National Plan. The target for undergraduates is that at
least 20% of the annual enrolments should graduate in that year, and the target for
masters and doctoral enrolments is a ratio of 25%.
Bar graph F1: average undergraduate success rates in contact courses;
relative to the target set in CHET 2004
Bar graph F2: ratio of undergraduate qualifiers in a given year to total head
count enrolments in that year; relative to target in CHET
2004
Bar graph F3: ratio of masters + doctoral qualifiers in a given year to total
head count masters + doctoral enrolments in that year;
relative to target in CHET 2004
SECTION 6:
PROGRESS AND CHALLENGES
The introduction stressed that this CHET discussion document does not argue for the
adoption of a specific set of performance measures for public higher education in
South Africa. Its main purpose has been that of contributing to the South African
debate on the use of planning and funding as steering mechanisms for the public
higher education system. It concentrates on various models of performance
measurement, because South African national higher education policy stresses that
assessments of institutional performance and the performance of the system will be
central aspects of future planning and funding cycles.
The performance measurement models discussed in the document are a selection of
ones used or proposed in South Africa over the past five years. This is not intended
to be a comprehensive selection, and it omits, for the sake of brevity, consideration of
models of the kind proposed in December 2002 by Stuart Saunders and John
Fielden (see list of references for details). The Saunders/Fielden proposal is based
on a set of 16 objectives which can be drawn from the higher education policies of
the South African government. They set a total of 22 quantitative indicators for these
national objectives, but linked either benchmarks or targets to only 6 of the indicators.
The final model proposed in Section 5 of this document is still a tentative one, in the
sense that it has not yet been subject to critical debate and analysis. It will be
reconsidered in the light of the proceedings of the March 2004 seminar. There are at
least two key issues which need to be discussed at the seminar:
Targets and higher education performance measurement
The notion of a “target” was introduced in Section 5 of the document to suggest that
performance measurement in South Africa need not involve evaluations, in the sense
of gradings, of institutions. The following issues concerning the setting of targets still
need to be resolved (these questions clearly overlap, and have not been placed in
any specific order of priority):
♦ Is the use of the term “target” more appropriate than that of “benchmark”? Does
the model set out in Section 5 rely, even implicitly, on the notion of
benchmarking?
♦ Should targets be limited to quantitative ones only?
♦ Should all targets selected be based directly on government policy directives?
Should “international benchmarks” be considered when targets are set for the
South African higher education system?
♦ Should institutional targets be set in terms of appropriate averages across the
whole higher education system? Should they be based on those of a selected
group of institutions?
♦ How are targets to be selected: by government only, or by government in
consultation with the higher education sector and higher education stakeholders?
♦ The examples of possible targets discussed in Section 5 are all internal to the
higher education system. Should external developmental targets, such as labour
market absorption and the resolution of high-skill shortages, be set for the higher
education system?
The use of performance measurement models across national higher
education systems
CHET hopes that this discussion of performance indicators will contribute to broader
discussions in Africa about the measurement of higher education performance. The
explicit use of only South African policy directives and data could be taken to imply
that these models are not “exportable”. So a crucial issue which needs to be raised is
this: must any higher education performance measurement model be national
system-specific?
CHET’s view is that while the targets being developed are South Africa-specific, the
underlying policy principles on which they are based are fairly universal. These
include the principles of equity, efficiency and quality in higher education. It should
therefore be possible to place these indicators and targets in a framework of a more
general, international policy discourse. This could encourage a broader discussion to
take place around, for example, Africa-wide higher education indicators and targets.