Improving Assessment Efficiency And Overcoming Barriers: Relating
Programmatic Accreditation To Institutional Assessment
Joni E. Spurlin, Dianne Raubenheimer, Eleanor Nault and Sarah A. Rajala
This presentation discusses how two institutions have improved the efficiency of their assessment processes and overcome barriers to assessment by relating program assessment to institutional assessment needs. By enabling data to be used for multiple processes and at multiple levels - from program to department, to college, or across the institution - both institutions have improved decision-making. A framework to help others is presented, as well as practices that did and did not work.
The goals of the presentation are to:
Provide examples of two institutions' use of programmatic assessment to support and enhance institutional assessment activities.
Provide a framework for others to help improve institutional assessment through programmatic assessment and accreditation processes.
Discuss barriers to effective assessment and how program accreditation needs may help overcome some of these barriers.
The framework presented is based on lessons learned from the two institutions, including:
Not all parties understand how program processes meld with the institutional process: building communication and trust is key.
As administrators change, the barriers to assessment may reappear.
Assessment practices are evolving, not static - they must change to meet the needs of employers, accreditors, and society.
Many of our established processes have resulted in program improvements.
Best practices have been shared across programs around the institutions.
Differences in formative and summative assessment processes need to be outlined and acknowledged by all those who use results for decision-making.
"Statements of Good Practices" are based on what leaders in the field have identified. For a summary, see Linda Suskie's compilation, "What is good assessment: A synthesis of Principles of Good Practice": http://planning.iupui.edu/508.html
A Framework To Enable Others To Build A Culture Of Data-Driven Decision-Making And Improve
Efficiency In Decision-Making
The purpose of this framework is to enable you to discuss these issues at your home institution. We are
presenting the components of the framework, some barriers to achieving effective assessment processes,
and some solutions. We will expand on these topics through illustrations from two institutions: NC State
University (NCSU) and Clemson University, drawing specifically on experiences with program assessment
in engineering, and broadly on experiences with institutional assessment.
I. Set Assessment Expectations
II. Develop Responsible Assessment Processes
III. Communicate Expectations and Processes
IV. Document and Use Results

Copyright Joni E. Spurlin, Dianne Raubenheimer, Eleanor Nault and Sarah A. Rajala, 2007. Permission is granted for
this material to be shared for non-commercial, educational purposes, provided that this copyright statement appears on
the reproduced materials and notice is given that the copying is by permission of the authors. To disseminate otherwise
or to republish requires written permission from the authors.
June 2007 International Assessment & Retention Conference
I. Set Assessment Expectations
A. Statements of Good Practices
 Expectations are discussed at program and institutional levels so that expectations are congruent
and clear to all engaged in assessment processes.
Faculty, assessment professionals, and leadership discuss the ramifications of programmatic accreditation, regional accreditation, and accountability issues.
 The institution promotes an atmosphere of critical reflection about teaching, learning, research and
services.
 Assessment reflects what stakeholders really care about.
 Assessment evidence is publicly available, visible and consistent.
B. Barriers
 Attitudinal
o Differences in how assessment is valued.
 Knowledge
o Expectations seem to be different and expressed differently depending on level and
purpose of assessment.
o Lack of understanding of accreditation, assessment, and accountability, including the differences and relationships among them.
 Practical
o If assessment is not valued, time and resources are put elsewhere.
C. Solutions at Program Level
 Discuss faculty expectations, including workload and purposes of processes.
 Review programmatic accreditation expectations.
 Identify what is needed for making program and curricular decisions.
 Set expectation that program assessment results will be used at institutional level.
D. Solutions at Institutional Level
 Identify what administrators need, how often, and why.
 Discuss how program assessment results will be used for decision making at institutional level;
discuss how institutional results will be used for decision making at program level.
 Review regional accreditation expectations.
Articulate institutional responses to external pressures, such as the legislature and the concerns of Secretary of Education Margaret Spellings' Commission on the Future of Higher Education.
E. Solutions at Both Levels
Develop a common language between programs and across the institution: language that can be used by everyone on campus for all purposes and expectations.
o NCSU common language
http://www.ncsu.edu/uap/academic-standards/uapr/process/language.html
o Clemson University definitions (attachment)
 Develop common vision of what is expected by assessment processes at all levels (and what is not
expected).
o NCSU’s assessment vision: http://www2.acs.ncsu.edu/UPA/assmt/Guide_Principles.htm
http://www2.acs.ncsu.edu/UPA/assmt/best_practice_stmt.htm
 Improve communication (see III).
II. Develop Responsible Assessment Processes
A. Statements of Good Practices
 Good assessment processes are meaningful, manageable, effective and efficient.
 Effective assessment is: a) owned by all; b) valued, recognized, and rewarded; and c) supported
and sustained by campus leaders and resources at all levels.
 Assessment works best when it is ongoing, not episodic.
Assessment activities ask important questions; reflect institutional mission and values; reflect programmatic goals and objectives for learning.
Assessment methodology includes both formative assessment, for the purpose of giving feedback and making improvement, and summative assessment, for the purpose of identifying levels of success.
Assessment evidence is relevant to the objectives/outcomes of programs and services and to the needs of stakeholders and constituencies. Evidence is collected through a variety of methods and from multiple sources.
Good assessments are cost-effective, yielding value that justifies the time and expense we put into them.
B. Barriers
 Attitudes
o Faculty feel overwhelmed: they feel they are responding to multiple processes instead of
one, e.g., programmatic needs, program accreditation, institutional needs, institutional
accreditation and accountability.
o Acceptance (or not) of assessment methods (e.g., viewed as not scientific enough) determines whether results are seen as useful.
o “We’ve not had to do this before! Why now?” (e.g., hard to move some who have been at
the institution a long time)
o Fear of being judged.
o ‘Real’ research is more important than assessment or teaching.
 Knowledge
o Faculty and staff don’t know how to conduct assessment, e.g., difference between direct
and indirect data sources and appropriate measures.
 Practical
o Assessment professionals are not engineers or may not be faculty members, and are
therefore not seen as peers.
o Time and expense it takes to develop and implement assessment processes.
o Only “safe” questions assessed, because then results will require little or no change.
C. Solutions at Program Level
 Establish a base or improve processes based on expectations discussions.
 Use data for decision-making at the program level.
 Ensure ownership of academic program assessment processes by faculty.
 Make relevant data easily available to faculty and administrators.
 Provide resources for assessment processes.
D. Solutions at Institutional Level
 Establish a base or improve processes based on expectations discussions.
 Develop "one" flexible process that meets many needs.
o Clemson University: use of WEAVEonline (attachments)
o NCSU undergraduate processes: http://www.ncsu.edu/uap/academicstandards/uapr/UAPRindx.html
 Determine who leads the process: Faculty-driven vs. leadership-driven.
o NCSU - Moved away from an institutional-level standing committee of faculty reviewing
assessment reports to assigning academic associate deans the responsibility for ensuring
assessment processes meet the needs of academic programs and institution.
o Clemson University is implementing an electronic system with a hierarchy of review personnel, a variety of reports, and multiple cross-references for varied programmatic and institutional purposes (SACS, ABET, CU Goals, Strategic Plan).
 Enhance methods that allow program data to be used for decision-making at the institutional level.
o Determine who can use programmatic information so that an individual or committee can identify needs across multiple programs, units, divisions, or the institution.
 Improve institutional data-collection/assessment processes to meet program needs.
o NCSU student survey processes: http://www2.acs.ncsu.edu/UPA/survey/index.htm
E. Solutions at Both Levels
 Increase knowledge about assessment and assessment methodology.
o Professional development of faculty, staff, and administrators.
o Those responsible for assessment and accreditation need to act responsibly and be
accountable by communicating knowledge.
 Move toward direct assessment of student learning as evidence.
o Not all programs may need to be held to the same rigor of methodology.
 Develop processes of providing and tracking data, e.g. assessment professionals filtering relevant
data, electronic databases for managing data.
 Establish assessment professional’s credibility through intellectual discussions with faculty. Build
trust over time by sticking with the processes and expectations.
 Identify the best place for assessment professionals: Decentralized vs. centralized location of
assessment professional.
o Advantages of decentralized assessment professionals:
 Provide better use of data for program needs.
 Better relationship with programmatic faculty members and departmental and
college-level administrators.
 Understand which institutional data are relevant to program needs = increased
efficiency.
 Possess detailed knowledge of program accreditation needs
 Easier to apply pressure to improve assessment processes.
o Disadvantages of decentralized assessment professionals:
 Harder to coordinate processes, results for institutional decision-making, etc.
o Advantages of centralized assessment professionals:
 Easier coordination of processes.
 Easier to meet institutional needs for regional accreditation and accountability
 Easier to establish common language and processes for all programs (academic
and academic support).
 Easier to identify center of accountability for assessment. Can coordinate from
center rather than independent structures. Will make “transparency” easier for
accountability (Spellings’ commission).
 Can coordinate standard assessments (SSI, NSSE) that can be used by both
programs and the institution.
o Disadvantages of centralized assessment professionals:
Harder to meet the needs of all programs.
Harder to maintain a detailed understanding of all the different programmatic accreditation processes.
 Improve communications
III. Communicate Expectations and Processes
A. Statements of Good Practices
One key to effective assessment is to communicate expectations and processes regularly, frequently, and on an ongoing basis.
 Assessment works best when guided by the curiosity and intellectual dialogue that characterize the
culture of higher education.
 Communicate with all parties, including stakeholders and constituents.
 Embed assessment into campus conversations.
B. Barriers:
 Attitudes:
o Lack of trust of leadership’s motives.
o Limited communication because of fear of resistance and hostility.
 Knowledge
o Miscommunication, especially because of differences in perspectives, expectations and
values.
Practical
o Faculty and administrators get tired of conversations about assessment.
o Despite communication, some don’t or won’t hear.
o Institutional leadership does not communicate with program level leadership.
o Lack of assessment momentum after an accrediting body leaves.
C. Solutions at Program Level
 Communicate repeatedly about vision of assessment.
 Engage faculty in regular communication process, through meetings, emails, newsletters,
websites.
Produce a ripple effect: Identify one faculty member per program as the liaison. That faculty liaison works with his or her own program's faculty members or committees. Hold regular college/division meetings with all liaisons to discuss methods and concerns.
 Model processes: Have those faculty with experience as accreditation reviewers model the process
in their programs (or with similar programs within colleges/divisions).
 Develop attitudes - “SACS is us” - Encourage faculty to take responsibility for working with
professional societies or regional accreditation to ensure criteria/standards are appropriate.
Identify exemplars of good practice and showcase them. Faculty will be exposed to new processes and encouraged by their peers in other programs to improve assessment processes. In particular, facilitate discussions about using data to close the loop, and show examples.
D. Solutions at Institutional Level
 Communicate repeatedly expectations and processes across institution, with leadership, faculty,
staff and other assessment professionals:
o Workshops, best practices roundtables, institutional newspaper articles, websites.
 Encourage communication up and down the levels.
 Ensure that administrators acknowledge programmatic assessment processes, reports, data, use of
data.
Establish credibility for processes, e.g., by sharing with others the written documentation from accreditation or Spellings reports.
Make the findings accessible and transparent (Spellings' Commission).
IV. Document and Use Results
A. Statements of Good Practices
 Assessment results substantially contribute to: a) enhancing the quality of programs and services,
and ongoing assessment processes; b) highlighting excellence; c) making evidence-based
decisions; and d) informing planning and resource management.
 Credible evidence relates to learning, organizational effectiveness, and demonstration of
accountability.
 Results are shared in an informed, objective, and fair manner with multiple audiences.
 Results lead to reflection and action by faculty, staff, and students.
 Results and decisions are documented fairly, ethically, and responsibly.
B. Barriers
 Attitudes
o Faculty feel that they are responding to multiple processes instead of one, e.g.,
programmatic needs, program accreditation, institutional needs, institutional accreditation
and accountability.
 Knowledge
o Faculty not knowledgeable about assessment data analyses.
o Turnover in personnel resulting in loss of institutional memory (especially when little or
no documentation).
 Practical
o TRUST!
o Sharing data honestly and openly takes TRUST in the system, administrators, etc.
o Use of results is not documented. A practical, integrated approach to assessment is needed both within the program and for the institution.
C. Solutions at Program Level
 Analyze and use data/evidence with integrity.
 Demonstrate on-going assessment activities.
 Use results for curricular and program improvements and enhancements.
 Increase feeling of ownership of data and results by faculty.
o The more faculty are involved in designing rubrics, gathering data, etc., the greater the trust.
Limit required assessment activities: programs need only assess one or two programmatic outcomes and one institutional goal per year.
D. Solutions at Institutional Level
 Analyze and use data/evidence with integrity.
 Ensure transparency of data and results, to increase use of program and institutional data for
regional accreditation and accountability needs.
Determine the best, most transparent methods for harvesting results from programmatic assessment for institutional decision-making.
E. Solutions at Both Levels
 Ensure data is NOT used for punitive purposes.
 Link planning and budgeting to assessment:
o NCSU’s compact planning: http://www2.acs.ncsu.edu/UPA/compactplan/index.htm
o NCSU’s student affairs example:
http://www.ncsu.edu/student_affairs/sara/planning/index.php
 Use electronic database, management system:
o Clemson institutional management system (see attached).
o NCSU programmatic management system (engineering).
Joni E. Spurlin, Ph.D., University Director of Assessment, North Carolina State University. For the past
16 years, she has provided leadership and expertise to faculty, administration and staff in development of
tools of assessment, institutional effectiveness and planning processes. She has worked with faculty on
improving outcomes assessment for engineering, computer science, liberal arts, education, business and
nursing and allied health programs. Email: Joni_Spurlin@ncsu.edu
Dianne Raubenheimer, Ph.D., is Director of Assessment in Academic Affairs in the College of
Engineering at NC State University where she works with faculty and administrators on various assessment
and evaluation processes. She has worked with faculty in different contexts to develop and implement plans
to assess the effectiveness of instruction and to use the data to improve teaching and student learning.
Email: Dianne_Raubenheimer@ncsu.edu
Sarah A. Rajala, Ph.D., James Worth Bagley Chair and Head of Electrical and Computer Engineering at
Mississippi State University. Previously, she was Professor of Electrical and Computer Engineering and
Associate Dean in Engineering at North Carolina State University. During her twenty-eight year career,
she has provided leadership and scholarly contributions in the areas of engineering education and
assessment and in the analysis and processing of images and image sequences. Email:
rajala@ece.msstate.edu
Eleanor Nault, Ph.D., Director of Assessment at Clemson University, works throughout the institution to further institutional effectiveness. Her service includes participation on national advisory boards, evaluation of federal grants, and serving as an institutional liaison to the SC Commission on Higher Education and as a SACS/COC site visitor. In addition to serving as a consultant, she furthers her research on the institutional culture of assessment, accreditation, and accountability. Email: Nault@clemson.edu
All four presenters are also contributing authors or editors of the upcoming book: Designing Better
Engineering Education through Assessment: A Practical Resource for Faculty and Department Chairs on
Using Assessment and ABET Criteria to Improve Student Learning