FOR GENERAL DISTRIBUTION
Attachment One: The Assessment Context at the University of
Saskatchewan and Beyond
The most significant recent assessment initiative at the University of Saskatchewan was Systematic
Program Review (SPR). Between 1999 and April 2005, over 150 degree programs were evaluated
following a rigorous process that included preparation of a self-study, a site visit by a review
team composed of national and international experts in the discipline, a final report including
assignment of an assessment category, and a response from the program head(s) to the SPR Executive
(the Provost and the Dean of Graduate Studies and Research). A great deal of energy was expended
in the colleges and departments across the campus in putting together the documentation for the
reviews, in discussing priorities and plans, in developing itineraries and meeting with review teams,
in interpreting reports, in examining outcomes, and in addressing recommendations. All of this effort
emanated from the Provost’s and Council’s joint interest (through the Planning Committee) in
ensuring the quality of degree programs offered by the University. This exercise, among other things,
confirmed the generally high quality of educational programming, both graduate and undergraduate, at
the University of Saskatchewan.
The advent of Integrated Planning in 2002 and its subsequent implementation made it increasingly
obvious that changes to SPR would be required. For one thing, the quality control review of
academic programs provided by SPR needed to be more closely linked to university, college, and
department planning and priority-setting processes. For another, because Integrated Planning was
intended to provide a comprehensive and synoptic view of all of the University’s activities, it was
clearly no longer possible to exclude whole sections of the campus from an assessment system similar
to that required for degree programs. This was most evident when the College Plans Review
Committee (CPRC), a subcommittee of the Planning Committee, reviewed college and major
administrative unit plans for the First Integrated Plan. Those colleges which had not undergone SPR
did not have an independent external comparison point for initiatives proposed in their plan; major
administrative units, which did not undergo a systematic review similar to SPR, had no external
assessment against which to benchmark.
Other major deficiencies in the SPR process highlighted by the advent of Integrated Planning, and
which are relevant in this context, include: 1
1) The focus of the Strategic Directions on the student experience (undergraduate and graduate)
means that new ways need to be found to assess the teaching and learning
environment/experience of students, to measure student opinion and communicate it to the
general campus community. Student perceptions of their educational experience are not
confined to degree programs (as they were for SPR).
2) SPR did not sufficiently emphasize the evaluation of research because of its focus on
instructional programs; further, its treatment of research activities of faculty was uneven.
Given the prominent place afforded research in the Strategic Directions and other planning
documents, the selective inclusion of research activities for evaluation is no longer sufficient.
1 For a complete overview of the strengths and deficiencies of SPR, see the review of SPR conducted in the
fall of 2004 by Dr. Trudy Banta (Indiana University-Purdue University Indianapolis) and Dr. Janet Donald
(McGill University). The review report, along with the response from the SPR Executive, was widely
distributed on campus. The Planning Committee also called for comments on both (March – May 2005)
and sponsored a discussion at Council in May 2005 on SPR. Copies of the report, the Self Study, the SPR
Executive’s response, and the consultation results are available from the Integrated Planning Office.
3) SPR did not provide clear distinctions between ‘formative’ and ‘summative’ review
processes; rather, it blended both. It was formative in that programs submitted self-studies
and considered how they might best address recommendations from review teams which
challenged them to meet emerging developments in the discipline. It was summative in that a
final assessment grade was assigned to the program, and, in the case of graduate programs,
the Dean of Graduate Studies and Research used these assessments to allocate graduate
scholarships to programs or to suspend admissions to programs until issues identified in the
review were addressed. Because no similar formal process occurred at the undergraduate
level, specific outcomes for many programs were not obvious or delineated, leaving many to
wonder whether changes occurred in undergraduate programs which obtained ‘C’ ratings or
whether all of the effort expended had any impact.
4) Because SPR emphasized instructional programs while Integrated Planning focuses on
colleges and major administrative units, the link between program quality and planning was
not obvious.
Even as SPR was occurring, assessment initiatives of a different variety and scale were being carried
out under the sponsorship of a number of colleges, departments, and University-level offices:
o Attachment Two, while not an exhaustive list, provides an indication of the number of major
student surveys undertaken at the University of Saskatchewan over the period 1994 to 2006.
As even a casual glance will attest, we conduct an astonishing number of surveys in any
given year. Unfortunately, little is known about their results. Students seldom receive
enough feedback to make them aware of the opinions gathered or the actions taken in response. Surveys on similar
topics are devised independently and used for different purposes by different units. There is
no central repository for the survey information once completed and no obvious ‘home’ for
survey results or instruments. Surveys are expensive, and investing in too many of them will
produce survey fatigue, especially if students cannot see how their feedback is producing
visible results. The creation of a visible accountability loop to students will be an important
outcome of a University-sponsored assessment initiative.
o Attachment Three provides a list of the selective reviews of academic and administrative
units from the mid-1990s to 2006. Some of these assessment activities are episodic; for
example, the reviews of the University Libraries or of the Basic Sciences in Medicine. Some
are systematic and not included in this Attachment; for example, many of the professional
colleges have accreditation processes which require reviews according to professional
requirements in a pre-determined timeframe. Even with this significant exclusion, the list
provided in Attachment Three is extensive, particularly when placed in the context of SPR.
As in the case of surveys, the results of these reviews and their impact on institutional
decision-making processes are unknown beyond the unit or review sponsor. They certainly
add to the paper and assessment burden on campus. An important consideration in the
development of future review processes will be coordinating these efforts and disseminating
their results so that they inform planning and decision-making processes.
In addition to surveys and occasional unit reviews, Integrated Planning itself created a substantial
body of work, ranging from the Foundational Documents (each of which set goals for the
University for the next eight to ten years) to the Integrated Plan (describing over 70 initiatives for
development over the First Planning Cycle, 2003/04 to 2006/07) to the Multi-Year Operating Budget
Framework (describing our projections on the provincial grant, salaries, tuition, and plans to achieve
our financial targets). Each of these initiatives requires review and evaluation; indeed, the
Integrated Plan promises intermittent evaluation of Integrated Planning itself. The Provost’s 8th
Academic Agenda Address (February 2006) delivered the first ‘report card’ on Integrated Planning,
primarily on the initiatives contained in the First Integrated Plan. 2 More of this type of reporting at
regular intervals will be necessary to meet the University’s transparency goals.
It is not only University Council that is interested in outcomes and results. ‘Best practices’ in
governance for Boards of Governors are increasingly centered on four themes – planning,
accountability, risk management, and advocacy. 3 Boards are asking whether the appropriate systems
are in place to assure that the goals of the institution are clearly defined, that the resources and
strategies required to meet these goals are well understood and carefully crafted, and that there is
sufficient reporting and monitoring to determine progress towards the goals. Add to this the
heightened public interest in accountability of public institutions and the continued commitment of
provincial auditors to move beyond policies and procedures towards outcomes and outputs. The
Board is certainly looking to the University administration for metrics that would permit national and
international comparisons. At the very least, the Board will expect assessment against our stated
goals and against peer institutions in other jurisdictions.
An additional side-effect of Integrated Planning is the increased interest among campus units in
evaluation of all types: the reasons students are not retained from first to second year; the
effectiveness of particular services or programs for students; learning outcomes; comparisons with
other similar units in other universities; and so on. In the past year, the Human Resources Division
has undertaken ‘workplace assessments’ at the request of some Deans and, in early 2006, it published
the results of its first employee opinion survey. The ‘Planning Parameters’ prepared by the Provost’s
Committee invite colleges to identify benchmarks for their academic programs and research efforts,
intensifying the need to obtain information about comparisons with peers. The Foundational
Documents all require data and evidence to support claims of progress towards goals. Some central
coordination is certainly required to ensure that information that is already available is used as a
starting point, even if additional tailoring is required to suit different purposes.
In Canada, assessment at the institutional level is a relatively new phenomenon, but it is not a
completely alien concept. In Alberta, for example, all of the universities are required to submit
performance reports annually to the provincial Department of Learning. That province has also
created a Quality Council which is intended "to facilitate the development and expansion of degree-granting
opportunities" by reviewing proposals from both public and private institutions wishing to
offer new degree-level programs. In Ontario, the Council of Ontario Universities (COU) has
established guidelines for the regular audit of the universities’ policies and procedures for the conduct
of periodic quality reviews of undergraduate programs and, through the Ontario Council on Graduate
Studies (OCGS), it has, since 1963, conducted quality assurance reviews of graduate programs. The
Association of Universities and Colleges of Canada (AUCC) has adopted a set of principles of
institutional quality assurance in Canadian higher education which describe the purpose, scope and
frequency, key characteristics, and communication guidelines to be followed by member institutions.
For example, one of the key principles is that “the institution has in place a formal, approved,
transparent policy committing it to ensuring the quality and continuous improvement of its academic
programs.” 4 The Government of Saskatchewan recently participated in the development of a Degree
Framework policy aimed at providing learning outcomes and expectations associated with each
degree level, to be used as a template when new degree programs are introduced.

2 See www.usask.ca/vpacademic/integrated-planning for the report card and a link to the Provost’s address.
A second report card was produced in Fall 2007 and a final report is anticipated by January 2009.
3 This information became available through a confidential report prepared by Ken Snowdon and
Associates, Report on Board Governance, for a Brock University Special Committee on Board
Governance, June 2006.
4 See www.aucc.ca/qa/principles/index_e.html for the complete set of principles.
In the United States, where assessment in higher education has had a much longer history, and where
interest is now primarily focused on learning outcomes and closely associated with institutional research
activities, assessment efforts are generally organized around the following key functions:
o Coordinating and conducting program review, accreditation, institutional-level assessment
activities, and assessment of administrative and educational support units
o Collecting data from students, faculty, and staff, as well as utilizing existing institutional data,
in the preparation of reports for decision-makers
o Supporting planning and decision-making through analysis and presentation of information
o Acting as a campus clearing house for information about best practices in assessment
o Conducting university-wide assessment activities and assisting others in creating and
implementing focused assessment projects (assessment plans, survey instruments, focus
groups, interviews, assessment workshops)
o Conducting inter-institutional comparisons
o Administering centralized surveys and assessment projects
o Administering course and teacher evaluations
o Responding to internal and external requests for information about institutional
characteristics
While not all of the above key functions are manifested in exactly the same way at every university,
as a group they provide a good indication of the typical tasks associated with offices of ‘academic
assessment’, ‘institutional effectiveness’, ‘institutional assessment’, ‘institutional research and
planning’, and ‘assessment and institutional research’ that are increasingly prevalent in the USA. The
most important observation is the close linkage between ‘research’ and ‘assessment’, as
well as between ‘assessment’ and ‘decision-making’. 5
5 For examples of assessment initiatives in the United States, see: <websites>