Office of the Pro Vice-Chancellor (Education)
UNIVERSITY OF WESTERN SYDNEY
SUBMISSION TO THE
ADVANCING QUALITY IN HIGHER EDUCATION
PERFORMANCE MEASUREMENT DISCUSSION PAPERS
February 2012
Contact person:
Professor Kerri-Lee Krause
Pro Vice-Chancellor (Education)
Email: k.krause@uws.edu.au Phone: 02 96787453
Address: University of Western Sydney, Locked Bag 1797, Penrith NSW 2751
Overview
The University of Western Sydney (UWS) welcomes the opportunity to comment on the following
three discussion papers distributed as part of the Government’s Advancing Quality in Higher
Education (AQHE) initiative:
i) Development of Performance Measurement Instruments in Higher Education;
ii) Review of the Australian Graduate Survey; and
iii) Assessment of Generic Skills.
UWS endorses the principles of accountability, transparency and clear communication strategies to
guide student decision-making with respect to university and course choice.
We attach the highest importance to continuous quality improvement in higher education and are
committed to the informed use of a range of measures and indicators to assess the quality of
students’ educational experiences and outcomes within our institution and across the sector.
Notwithstanding our recognition of the need for robust and rigorous performance measurement
instruments, we draw attention to two key issues which underpin our responses to each of the
discussion papers:
a) Context: decisions about the various performance measurement instruments must be
informed by consideration of a range of other policy initiatives in the sector, including: the
Base Funding Review; the introduction and implementation of the TEQSA standards,
particularly the teaching and learning standards; mission-based Compacts; and existing
institutional quality assurance frameworks.
b) Purposes: the Discussion Papers mention multiple purposes to which the performance
measurement data may be put, including compliance; quality assurance and quality
improvement; publication on the MyUniversity website; informing student choice; and
facilitating transparency. Seeking to address multiple purposes through
the use of a single instrument or a suite of instruments is fraught with methodological
dangers, including those relating to validity of measurement. In the discussion papers, the
lines of demarcation between and among these various purposes are blurred and unless this
is addressed, the Government runs the risk of promoting a reductionist approach to
evidencing the quality of a richly diversified Australian higher education sector in the
international marketplace.
Responses to the three Discussion Paper questions follow.
Discussion Paper 1: Development of Performance Measurement Instruments in Higher Education
1. Are there other principles that should guide the development of performance measurement
instruments?
The five principles identified in the paper (p. 6) are sound, but they are incomplete. First, while fitness
for purpose is important, equally important is the issue of fitness of purpose. The list should also
include the following principles:
a) Prioritising of indicators – which indicators are most telling, for whom and in what context?
b) Context – what is the conceptual framework in which these performance measurement
instruments are developed, applied and interpreted? Do the measures take into account the
needs of 21st century graduates in a transdisciplinary world, or are the conceptual
frameworks underpinned by dated monodisciplinary thinking?
c) Validity – how valid are the measures? What steps have been taken to validate scales and
items? Have the outcomes of existing institutional survey approaches and outcomes been
taken into account in validating the instruments?
d) Clarity – to what extent are there shared understandings of the terminology used in the
instruments, for example ‘generic skills’ as compared with ‘competency’ or ‘capability’?
e) Balance – to what extent do the instruments strike a balance between the need for feasible,
cost-effective, quantitative outcome measures and a valid approach to capturing the
nuances and complexities of a diversified higher education sector?
f) Outcomes – while Government may have identified a suite of purposes for which data are
designed to be used, steps need to be taken to avoid perverse behaviours and outcomes
such as ‘teaching to the test’. This is a significant risk in relation to the proposed use of the
Collegiate Learning Assessment instrument for institutional comparison purposes across the
sector.
2. Is it appropriate to use a student lifecycle framework to guide the development of performance
measurement instruments?
There is considerable research evidence to support the use of a student lifecycle approach for
understanding and monitoring the student experience. However, a number of cautions must be
taken into account when considering the use of such a framework to guide development of
performance measurement instruments. These include:
a) Avoid a ‘one-size-fits-all’ approach to the student lifecycle. In a tertiary landscape where
pathways and choice are paramount, a simplistic lock-step depiction such as that in Figures
1-3 fails to capture the complexity and range of student lifecycles in the sector.
b) Monitor sampling over the lifecycle rigorously and with care. There is merit in monitoring
the same cohort over time, rather than gathering data from a different group of first year
and final year students. Also important is attention to the influence of variables such as
pathways, disciplinary differences, course combinations and the like.
3. Are there other aspects that should be included in the student lifecycle framework?
Further to the points raised in (2) above,
a) the lifecycle approach outlined in Figure 1 fails to reflect the complexity of the experience of
students who may transition to Year 2 from TAFE or a pathways college such as UWSCollege,
for example.
b) a more realistic depiction of the student lifecycle would include reference to engagement
with university well before the application/admissions and enrolment stage; that is, in the
early years of schooling and in community contexts. This is particularly important in
supporting the early engagement of under-represented students in higher education. At
UWS we recognise that our ‘pre-entry’ phase begins as early as primary school and it is a
priority to foster successful, supportive transition experiences over an extended period of
time. This allows for engagement not only with students, but with their families,
communities and schools. Acknowledgement in the performance measurement framework
should be given to the extensive pre-entry work involved in widening access and
participation.
c) the framework as it is currently presented suggests a lock-step linear pathway for
undergraduate study. Evidence suggests that this is far from the reality, particularly in an
environment where pathways are encouraged. The diagram, for example, fails to take
account of possible movement from university to TAFE and back again. Further work is
needed in this depiction to ensure that it reflects the complexity of choices, pathways and
cross-institutional movements that form part of the Australian tertiary sector.
d) Figure 2 needs to take account of a broader range of instruments and approaches currently
in use, including employer surveys, studies of successful graduates and the like. There
are many valid approaches in addition to student surveys that should be acknowledged in
this framework.
4. What concerns arise from an independent deployment method?
The primary concerns include:
a) privacy in relation to providing confidential student data to external agencies.
b) costs of paying external agencies to administer the survey.
c) engagement with students – there is some evidence to suggest that students respond more
effectively to personal approaches from senior people (e.g., Faculty Deans/DVCs) within
their own university whom they respect and associate with their course experience.
Receiving a letter of invitation from a stranger may be a disincentive to survey engagement.
Nevertheless, while there are various concerns regarding an independent deployment method, we
also recognise the benefits of such an approach, including consistency and comparability.
5. What are the obstacles for universities in providing student details (such as email address, first
name and phone numbers) to an independent third party?
The major obstacles pertain to privacy legislation relating to use of student records by an
independent third party. This could be addressed through appropriate use of HEIMS data to pre- or
post-populate forms; however, there are a number of questions to be resolved if this option were to
be pursued. These include whether it is possible to access HEIMS data at a more granular level;
presumably, student consent would also be required for use of these data.
6. Would universities agree to change their privacy agreements with their students to permit
disclosure of personal information to third parties for the purposes of undertaking surveys?
It is possible that universities might consider changing their privacy agreements; however, the more
pressing questions relate not to use of the information for undertaking surveys, per se, but to use
and ownership of the emergent data. For instance, consideration would need to be given to whether
the data were ‘owned’ by Government or a commercial enterprise.
Also important would be agreements in relation to availability and accessibility of datasets for
institutional use. Student perspectives on this issue would need to be considered, as would legal
advice on the matter. In order for students to consider agreeing to such disclosure they would wish
to see evidence of beneficial outcomes. This highlights the importance of closing the feedback loop
with students and engaging them in decision-making in relation to productive use of student survey
data for enhancement purposes.
Policy and legal implications in relation to these matters would need to be taken into account, as
would associated costs that would no doubt be incurred during any consultation arising from
institutional deliberations of this nature.
7. What are the other important issues associated with administration of survey instruments?
Further to the issues outlined in the Discussion Paper, there are four key considerations to be taken
into account:
a) sector-wide trust in the rigour of the process is paramount. Universities need to be assured
of the transparency, consistency and independence of the process. The reliability and
consistency of approach needs to be accompanied by timely delivery of data in easily
accessible forms.
b) survey engagement of staff and students in universities is critical. Institutions also need
support, including opportunities to share good practice and ideas for successful engagement
of university communities with survey instruments.
c) costs of administration and associated human resources required by the institution. While
these Discussion Papers mention a small number of surveys, there are a number of
additional institutional surveys to consider. Coordinating these surveys effectively requires
considerable resources and this should be appropriately recognised.
d) flexible approaches to survey administration. For instance, serious consideration could be
given to the possibility of biennial rather than annual administration of national surveys to
reduce the administrative burden for institutions. This would provide institutions with time
to analyse and respond to the survey outcomes. It would also provide a more balanced
approach that takes into account the value of institutional surveys as a complement to those
at the national level.
8. What are key considerations in choosing between a sample or census approach to collection of
performance data?
For a multicampus institution like UWS, one of the most important considerations in sampling is the
representativeness of the sample and its capacity to provide meaningful data at campus and course
level. This is particularly important if the institution is to use the survey data for the dual purposes of
quality assurance and improvement. While certain statistical approaches to sampling may be
sufficient for broad-brush, impressionistic depictions of the student experience at the institution
level, they are inadequate for informing improvement strategies at campus and course level.
Similarly, a one-size-fits-all sampling approach that may work for an institution with a predominantly
homogeneous student population of full-time, campus-based school leavers is meaningless in another
context characterised by a heterogeneous student population that includes mature-age, part-time
students studying in mixed mode or predominantly online. The diversity of
institutions, course offerings and student populations must therefore be paramount in guiding
decisions about sampling or census approaches.
The multiple purposes of the survey data also need to be accounted for in any decision-making
about sampling methodology. In choosing a sample or census approach, consideration must be given
to complementary data collection strategies, such as focus groups using qualitative methods, and
the value they may add. Equally important in this discussion is the response rate and the advice
available to assist institutions on strategies for raising response rates, where applicable. These
matters, which are fundamental to effective survey implementation, have not been mentioned in
the Discussion Papers. No matter which approach is selected, the rationale for selection must align
with the purpose of the performance measurement instrument and appropriate quality assurance
protocols must be developed and applied.
9. What are the advantages and disadvantages of central sampling of students?
Advantages of central sampling include:
i) consistency of approach;
ii) a holistic, whole-of-sector perspective;
iii) possible cost efficiencies;
iv) capacity for student tracking; and
v) expertise, which may be particularly lacking in small institutions.
Disadvantages of central sampling of students include:
i) student and institutional disengagement and lack of buy-in as the survey is administered by unknown contractors;
ii) ownership of and access to data;
iii) cost of third party provision;
iv) complexity of negotiations with a third party, establishing legal agreements etc.; and
v) perceived intrusion by government.
10. What are appropriate uses of the data collected from the new performance measurement
instruments?
It is difficult to answer a broad question such as this out of context, for one needs to take account of
such factors as the nature of the instruments in question, the sampling methodology, response
rates, institutional demographics and characteristics, agreed institutional performance targets and
mission-based compacts.
Notwithstanding the need to clarify these issues, appropriate uses of performance measurement
data include to:
a) ensure consistent quality across the sector.
b) provide longitudinal tracking on institution-level and sector-wide performance over time.
c) enable informed choice.
d) inform community, industry and international stakeholders.
e) identify institutional areas for improvement and guide local improvement plans.
f) identify and acknowledge areas of excellence and leading practice.
g) assess proportionate risk.
h) complement institutional data as part of a coordinated suite of survey tools.
i) facilitate benchmarking.
A related and equally important question is that of inappropriate uses of performance data. Of
particular concern here is the potential for perverse behaviours in relation to ‘teaching to the test’
for an instrument that is clearly not fit for purpose as a national performance measurement
instrument, such as the Collegiate Learning Assessment. This issue is expanded in responses to
Discussion Paper 3 below. Other inappropriate uses of the data include the development of league
tables based on aggregated data that mask institutional differences and fail to take account of the
complex interactions between student demographics and outcome data.
11. Are there other issues that arise when considering the overlap of existing and new
instruments?
There does not appear to be a problem with overlap at this stage. Most importantly, there needs to
be a joined-up approach to deployment of student surveys across the sector and a recognition of the
considerable burden on institutions of uncoordinated approaches in this regard.
Discussion Paper 2: Review of the Australian Graduate Survey
1. Is joint administration of the GDS and CEQ under the AGS still appropriate?
There would be merit in splitting the GDS and CEQ because they cover two different subject matter
areas. Further, splitting them would allow different survey techniques, such as sample or census, to
be applied to each. A split approach also allows for different timing; for example, the CEQ could be
undertaken as soon as students complete their course, rather than around the graduation period.
If students complete the CEQ straight after they finish, their responses are more likely to be based on
their most recent experience rather than on what they remember from months previously. If the
survey is undertaken in the year they complete their degree, the data will be ready in the first half of
the following year rather than at the end of the following year. This would enable universities to act
earlier on any observations in the data. It also raises the question of whether both the UES and the
CEQ are needed if students were to receive them both in the same year.
In regard to the GDS, consideration could be given to administering this survey later because many
students may still be looking for employment, or may not have actively started seeking employment
in their chosen profession.
In summary, splitting the two surveys allows for more flexibility in what, how and why the different
data are collected. We acknowledge this would add to the cost, so a cost-benefit analysis
would be needed, along with consideration of the possibilities of increasing the flexibility of
approach to include biennial administration, for example.
2. Will the GDS and CEQ adequately meet future needs for information in the student driven
environment?
It is unlikely that any survey instrument would adequately meet future information needs without
ongoing expert review and consideration of alternative sources of information. In order to provide
timely, responsive information regarding graduate destinations and career pathways, the value of
sources such as employers and industry representatives involved in course advisory boards needs to be
recognised. Similarly, targeted feedback from successful graduates across the spectrum of work
contexts adds considerable value to the information available to prospective students and to the
community.
While the CEQ has provided data on a range of scales over many years, the future and fitness for
purpose of the CEQ should be reviewed in light of the development of the UES and the range of
alternative approaches to evidencing quality of teaching. These include the outcomes of the current
ALTC/OLT project investigating teaching quality indicators.¹

¹ ALTC/OLT grant on measuring and reporting teaching quality (led by Marnie Hughes-Warrington and involving Monash, Melbourne, Griffith and UWS).
3. Should the basis of the AGS be modified to improve fit with other indicators or to reduce
student burden?
Yes, the AGS should be reviewed in the context of the other indicators as well as other data sources
(e.g., employers), mentioned earlier. Since the AGS is administered post graduation, it does not
necessarily represent a respondent burden; nevertheless there would be merit in ensuring that the
appropriate methodologies are in place to ensure maximum engagement from graduates. This may
include more targeted approaches (e.g., contacting successful graduates and/or following up with
qualitative approaches to investigate the experiences of those not employed) and greater use of
feedback from employer groups.
4. Would a survey sample be a more appropriate option? What are the implications for the
development of the UES for the CEQ?
The CEQ could be sample based if the sample is correctly prepared and representative.
In making decisions about sampling, it is important to take account of demographic subgroups or
campus-based groups that have small numbers. In this case, a census approach may be more
appropriate. The principle of flexibility should apply to ensure that representative datasets are
obtained. The risk of moving to a sampling approach is the potential loss of data at the course level
due to unrepresentative samples. This is a serious issue that needs to be considered in light of the
survey’s purpose.
It is possible that the UES may replace the CEQ, but before making that decision, consideration
would need to be given to the principles of effective survey implementation noted earlier, including
purpose, audience, context and sampling approach.
5. Is the current partially decentralised mode of delivering the AGS still appropriate?
If the primary purpose of the AGS data is to provide standardised information on the MyUniversity
website, then the partially decentralised mode of delivery may need to be called into question in the
interests of achieving uniformity of procedures. Some institutions may be disadvantaged due to
limited resources and skill sets in the partially decentralised mode.
Given the emphasis on providing reliable information to be published on the MyUniversity website,
one has to ask whether we can accurately and confidently compare data that are collected in
different ways by different institutions. It is apparent that each university has a different way
of collecting the data and also different levels of resourcing available to assist with this collection.
This also applies to the coding of the data. While there are guides to follow, there may be different
understandings and use of these guides across different institutions.
The GDS needs to be census based: due to the variability of the responses and the detailed data
collected for some data items, such as occupation and duties, it is very difficult to produce a
representative sample that will allow for the collection of data at this detailed level.
6. How can the timeliness of AGS reporting be improved?
As mentioned earlier, splitting the survey is likely to have a positive impact on the timeliness of
reporting. Centralised data processing should provide economies of scale which will improve
processing timelines. Centralised data collection could also help; however, much depends on the level
of resources that are allocated and who is undertaking the centralised data collection and
processing.
7. Are current funding arrangements for the AGS appropriate? What alternative funding
arrangements should be considered?
It will be important that funding for the AGS is assured. Alternative funding arrangements
are uncertain at this stage, but it would be a mistake to consider institutions as an alternative source
of funding to support the ongoing deployment of the AGS.
8. Will AGS data continue to be reliable enough to meet the needs of the sector in the future? How
can data reliability best be improved?
As with any measure, the AGS will need to be reviewed on an ongoing basis to ensure that it is
sufficiently responsive to the changing nature of the university student experience and career paths.
We have already identified the possibility of separating the GDS and the CEQ. In relation to the GDS,
the instrument should be reviewed to ensure that it takes account of institutional and labour market
contexts, regional differences, employment issues and disciplinary differences in terms of
careers and jobs (e.g., compare Creative Arts/Music outcomes to those of professions such as Law or
Education). The instrument also needs to take account of informal work experiences and changing
attitudes and approaches to work. These are conceptual, research-based developments that need to
inform the construction of the GDS instrument and its scales.
Similarly, with the CEQ, there is a need to ensure that it is sufficiently reflective of the changing
student experience and changing modes of learning and engagement, including online and blended
learning modes. The AGS will not be sufficiently reliable unless it reflects changes in the world of
work and in the ways in which students engage with learning in both formal and informal contexts.
The instrument also needs to be informed by developments in surveying techniques and
methodologies to ensure that it is up to date in its approach.
9. Would moving the AGS to a centralised administrative model improve confidence in results?
Yes, it is likely to improve confidence as a result of the more standardised approach facilitated by a
centralised model.
10. Would moving the AGS to a sample survey basis improve data quality?
Yes, it is possible that moving the AGS to a sample survey basis could improve data quality, but this
would require professionals with expertise in survey design and management. For example, the ABS
may be helpful in this regard.
11. Does the AGS adequately measure the diversity of the graduate population and how might it
be strengthened in this regard?
A standardised instrument such as the AGS is limited in its capacity to gauge the full diversity of the
graduate population. Any instrument that comprises predominantly pre-determined, closed items is
limited in this regard. Its rigour may be strengthened if the sample is representative; however, it
may be difficult to achieve a sufficiently representative sample at the level of field of education.
Attention to enhancing the uniformity of data coding approaches would also enhance the value of
the data.
Discussion Paper 3: Assessment of Generic Skills
Introductory comments
UWS endorses the primacy of direct assessment of learning outcomes; however, we draw attention
to the importance of contextualised assessment practices that take account of disciplinary and
transdisciplinary contexts, rather than isolated approaches to skills assessment alone. The Discussion
Paper’s focus on generic skills fails to take account of the broader notion of graduate capabilities
that are recognised internationally as fundamental for 21st century graduates. These include ethical
values, emotional intelligence, sustainability literacy and the capacity to manage and navigate
change. There is a need for a more sophisticated approach to assessing and evidencing
graduate capabilities in ways that are:
 responsive to industry and community needs;
 sufficiently nuanced to reflect the comprehensive range of skills, attitudes and values that
one would expect from a university graduate;
 reflective of national and international developments in such areas as sustainability and
global citizenship; and
 based on valid, sustainable and transparent approaches to measuring and reporting learning outcomes.
There are several dangers inherent in focusing on generic skills alone and in a decontextualised way.
These include the privileging of one or more skills or skill sets over others. For this reason, any
assessment of generic skills must take account of the context in which graduate capabilities are
developed.
The development of a sector-wide approach to direct assessment of learning outcomes must take
into consideration the TEQSA teaching and learning standards in order to ensure a coordinated
approach in this regard. The process should also be informed by sector-wide learning standards
projects, including
 Barrie et al.’s (2011)² investigation of types of assessment tasks and assurance processes that provide convincing evidence of student achievement;
 Krause & Scott et al.’s (2011)³ Learning and Teaching Standards study of inter-university peer review and moderation of coursework at unit and program level, involving 11 universities across 11 disciplines;
 Freeman’s (2011)⁴ project seeking to collaboratively develop and implement a national model of expert peer review for benchmarking learning outcomes against nationally agreed threshold learning outcomes developed under the ALTC 2010 Learning and Teaching Academic Standards project;
 Oliver’s (2011)⁵ OLT Fellowship engaging curriculum leaders of undergraduate courses across disciplines to work with their colleagues, industry partners, students and graduates to define course-wide levels of achievement in key capabilities, articulated through standards rubrics, and to evidence student achievement of those standards (through student portfolios and course review processes, for example);
 the IRU peer review of learning outcomes project;
 the Group of Eight Verification of Standards project; and
 discipline-specific initiatives supporting the development of discipline learning standards (e.g., the Learning and Teaching Academic Standards project and resultant discipline standards), and cross-disciplinary projects identifying sources of learning standards data and assurance strategies relating to these data.

² Barrie, S. & colleagues (2011). Assessing and Assuring Graduate Learning Outcomes: Project summaries. Available online: http://www.itl.usyd.edu.au/projects/aaglo.
³ Krause, K., Scott, G. & colleagues (2011). Learning and Teaching Standards Project: Inter-university peer review and moderation of coursework and assessment. Available online: http://www.uws.edu.au/LATS
⁴ Freeman, M. (2011). Achievement Matters: External Peer Review of Learning and Teaching Academic Standards for Accounting.
⁵ Oliver, B. (2011). Assuring graduate capabilities: evidencing levels of achievement for graduate employability. Available online: http://tiny.cc/boliver
Approaches to assessing and evidencing learning outcomes need to be sufficiently flexible to
accommodate sector diversity, yet they also need to be defensible and robust, scalable and
sustainable. The value of expert peer review in the discipline is widely endorsed, as is the need to
ensure that any framework takes account of existing institutional quality and standards frameworks
in order to streamline reporting and monitoring demands.
Several questions arise in response to the Discussion Paper. These include:
1. How will the national data system be developed and implemented to ensure there is timely,
comparable, valid and reliable performance data available to TEQSA for its assessment of
proportionate risk?
2. To what extent will the assessment of learning outcomes acknowledge the critical role of
peers and expert judgement?
3. Who is the audience for the reporting on learning outcomes data?
4. If a sector-wide test such as the CLA is to be implemented, how will perverse behaviours –
such as ‘teaching to the test’ – be addressed?
5. To what extent have national and international expert views on assessing learning outcomes
for sector-wide comparative purposes been considered? Feedback from international
experts (names can be provided) highlights the dangers of misusing instruments such as the
CLA for purposes other than those for which it was intended. Even if a locally adapted version were
to be introduced, it should be remembered that the CLA was designed for local improvement
purposes rather than for performance measurement purposes.
Notwithstanding the many reservations regarding the use of the CLA for institutional comparison
and performance measurement purposes, there may be value in piloting its use to support local
institutional improvement as part of the mission-based Compacts agreements, for example.
UWS therefore recommends that:
 consideration be given to trialling an Australian version of the CLA for individual institutional
use and as part of the Compacts process for the purposes of institutional improvement;
 the use of the CLA be considered as a complement to discipline-based peer reviewed
approaches to assessing learning outcomes and graduate capabilities, such as the examples
listed above; and
 DIISRTE resist the notion of using the CLA as part of a summative approach to producing a
single outcome score of questionable validity to be published on the MyUniversity website.
The CLA was never intended for sector-wide performance measurement and comparison
purposes. Such an approach runs the risk of yielding counter-productive and detrimental
outcomes that will negate the possibilities it may hold for improvement.
1. Are there other uses of the assessment of generic skills?
This question follows a discussion entitled ‘direct assessment of learning outcomes’. It is of
considerable concern that ‘learning outcomes’ appear to be equated with ‘generic skills’ in the way
this question is raised. UWS strongly advocates for a more holistic approach to learning outcomes
that takes account of assessment practices in the discipline and that encompasses a more
comprehensive approach to graduate capabilities, rather than a reductionist approach to skills-based
assessment.
We argue that there are other uses of the assessment of learning outcomes, including assessment of
disciplinary-based outcomes and of graduate capabilities encompassing skills, knowledge and values.
These uses include community and industry engagement opportunities that go beyond the provision of
information alone.
Direct assessment of learning outcomes also serves as a catalyst for engaging staff in scholarly, peer
supported discussions regarding strategies for ongoing assurance and monitoring of academic
standards across the sector.
2a. Which criteria should guide the inclusion of discipline specific assessments in the development
of a broader assessment of generic skills?
2b. Are there other criteria, not listed above, which need to be considered?
In addressing this question, we urge clarity in the use of terminology, particularly in relation to the
use of the term ‘generic skills’. To what does this refer? Which specific skills does it include? There
seems to be some confusion between reference to ‘generic skills’ and ‘generic skills in the discipline’.
These issues need to be clarified as part of a coherent, informed approach to assessing learning
outcomes. For this reason, we urge the use of the term ‘graduate capabilities’, which is more holistic
and avoids focusing on skills in isolation from disciplinary and transdisciplinary knowledge and
values.
Any consideration of discipline-specific assessments needs to address such criteria as:
 clarifying fitness for purpose and fitness of purpose;
 audience;
 contextual factors;
 explanatory notes provided to external stakeholders to support informed interpretation of
the data;
 existing institutional and cross-sectoral approaches to assessment and associated quality
assurance processes in this regard, including benchmarking, peer review and moderation;
 resourcing to support peer review, moderation and validation within and across universities.
3. What factors should guide the design of performance measurement instruments to assess
generic skills?
In addition to the factors listed in response to item (2) above, the following principles should guide
the design of performance measurement instruments to assess generic skills and graduate
capabilities:
i. validity and reliability of proposed instruments;
ii. clarity of definitions and nomenclature pertaining to the notion of ‘generic skills’;
iii. informed discussion and decision-making relating to which graduate capabilities, including
generic skills, are assessed, which are not, how these are selected for inclusion in
performance measurement instruments, by whom and under what conditions;
iv. use and ownership of data produced by such measurement instruments;
v. consideration of capacity building needs of staff and resourcing of institutions to engage
with these instruments for both quality assurance and improvement purposes; and
vi. strategies for managing competing tensions inherent in any attempts to design performance
measurement instruments relating to learning outcomes, including achieving a balance
between:
 a focus on summative evaluation and a focus on formative evaluation of teaching and
learning quality and outcomes;
 minimum or threshold learning outcomes on the one hand, and excellence and
international standards on the other;
 a focus on generic and discipline-specific knowledge, skills and values;
 compliance in relation to bodies such as TEQSA (summative assessment) and
improvement targets such as those negotiated through the Compacts process
(formative assessment);
 a focus on internal quality improvement (i.e., encouraging an open focus on areas of
weakness) and external accountability and quality assurance (i.e., making performance
data public);
 the drive to have a single number to represent learning outcomes on a public website
like MyUniversity, and the need to validly represent the complex aspects of quality and diversity
in performance not just between, but also within, universities and private providers;
 sector diversity and comparability of performance; and
 a rapid growth of student numbers in a deregulated environment on the one hand and
ensuring consistently high standards, quality and capacity to deliver on the other.
4. Is value-add an appropriate measure of generic skills?
In order to address this question in an informed way it would be important to have additional
information about:
 the range of other approaches available to evidence student learning outcomes. It should be
noted that the CLA focuses on institution-level rather than student-level learning outcomes, so
the use of such an instrument to determine the ‘value add’ of an institution to student learning
outcomes is questionable;
 the nature of the measure(s) being considered – their purpose, validity – including construct
and predictive validity, reliability, psychometric properties, and conceptual underpinnings;
and
 the proposed sampling and tracking methodology to ensure consistency of sample
characteristics over time – for example a repeated measures approach.
At this stage, UWS would not support a value-add measure of generic skills for the reasons listed
above. Rather, we strongly encourage a more considered and staged approach to determining
robust alternatives, including approaches to documenting the outcomes of peer review and expert
judgements for the purposes of publication on the MyUniversity website. In the interim, it would be
preferable to use the proxy measures currently in use (such as the CEQ) while a more considered
and informed approach is developed. We fully support the need for transparent, efficient and simple
ways of publishing learning outcome data, but we contend that there are several alternatives that
should be considered. Moreover, any decision in this regard should be informed by the outcomes of
the TEQSA learning and teaching standards deliberations to ensure a coordinated approach.
5. Is it necessary to adjust measures of generic skills for entry intake and how should this be done?
The characteristics of a university’s student cohort should be taken into account in order to ensure
meaningful interpretation of data that takes account of a range of student demographics and
background variables. This is particularly important in universities with high proportions of students
from under-represented groups. Any ‘adjustment’, however, needs to be approached with caution.
Considerable expertise and wide consultation would be required to ensure that the principles of
equity and fairness are applied and that statistical approaches are robust and defensible.
6. Are there other more appropriate measures of generic skills?
As outlined earlier in this paper, the answer to this question lies, first, in a more nuanced and
sophisticated interpretation of the notion of ‘generic skills’ to encompass graduate capabilities and,
second, in taking into account all the factors outlined in response to (2) and (3) above.
UWS contends that there are more appropriate measures which include:
 consideration of existing internal approaches to evidencing student learning outcomes;
 the value of expert judgement and peer review in relation to assessment of graduate
capabilities and disciplinary outcomes in context, rather than in isolation; and
 innovative approaches to combining quantitative and qualitative assessment and
publication of data relating to learning outcomes which draw on the findings of several
national projects and sector-wide collaborations (as mentioned in the introductory
comments to this section – see p.10).
7. What would be an appropriate measure of generic skills for reporting university performance?
An appropriate measure might include information on the following, published on the MyUniversity
website, for each university:
i. the number of courses in each field of education that have engaged in inter-university peer review and moderation of learning inputs (i.e., course design features, assessment, course learning objectives) and achievement of outcomes as evidenced in actual samples of student work. In this exercise, the focus is on exit outcomes as demonstrated in final year undergraduate assessment outcomes, particularly in capstone courses. A representative sample of student work across the range of grade bands (A to F) is blind peer reviewed by at least two partner institutions and the outcomes are shared within departments for the purposes of assessment quality enhancement and standards setting⁶;
ii. hyperlinks could be included to provide illustrative samples of student work and further details about how the university manages the process and what steps are taken to ensure consistency of assessment standards;
iii. the number of courses with national and international accreditation, based on documentation that includes assessment samples to demonstrate learning outcomes; and
iv. the number of courses using portfolio approaches to evidence students’ development of graduate capabilities and generic skills.
These are just some of the ways in which measures might be reported. We acknowledge that these
do not represent examples of a value-add approach such as that represented in the CLA. However,
unlike the CLA, these measures represent evidence of student learning and they focus on
assessment quality which is recognised internationally as the key to development of robust
measures of student learning outcomes.
8. What level of student participation is desirable and for what purposes?
Representativeness of the relevant student cohorts is key to any rigorous approach to evidencing
student learning outcomes. Having said this, UWS has serious concerns about the existing
approaches to and cost-benefit of recruiting students to engage in the CLA. This issue needs to be
considered with care to ensure that meaningful, representative data are produced.
9. What design features or incentives would encourage student participation?
Above all, students need to see the assessment process as meaningful, contextualised and
characterised by integrity. They will typically want to see that there is ‘something in it for them’ –
this might include important learning from the process, a connection to their future career, an
understanding of how this might enhance the quality of teaching and course delivery at their
university, and the intrinsic satisfaction of being able to reflect back on achievements resulting from
their course of study. While incentives may be useful in the short term, there would be merit in
consulting with students to investigate whether these are needed and, if so, what kinds of incentives
are preferred.
⁶ Further information about approaches to publishing outcomes of peer review and moderation of course-level learning outcomes on the MyUniversity website in simple, accessible ways is available from Kerri-Lee Krause, co-director of the national learning and teaching standards project funded by the OLT. This will be an outcome of the project, due for completion in late 2012.