Program Learning Outcomes Assessment Handbook

University of Texas at El Paso
Program Learning Outcomes Assessment Handbook
Provost's Office
915-747-5725
provost@utep.edu
http://www.utep.edu/provost

Acknowledgements:
Parts of this handbook have been adapted or adopted from the sources listed in the Bibliography at the end of this document; specific sections are referenced.
Table of Contents
Introduction
What is Assessment?
Misconceptions about Assessment
Student Learning Outcomes
Rationale for Assessment
The Role of the Provost's Office
The Role of the College, Department Assessment Coordinator,
and the Faculty in Program Learning Outcome Assessment
A Recommendation from UTEP-College of Business Administration.
Starting the Work
Step 1. Revisit the Mission and General Goals for your Program.
Program Mission Statements of Various Qualities
Examples of Mission Statements at UTEP
Worksheet for Step 1
Step 2. Identify the 4-6 most important student learning outcomes
Examples of Program Learning Outcomes at UTEP
Worksheet for Step 2
Step 3. Identify where in the curriculum your learning outcomes and
objectives are covered: Curriculum and syllabus analyses
Worksheet for Step 3
Syllabi Analysis
Step 4. Identify assessment methods and techniques.
Step 5. Collect, tabulate, analyze the data, and report the results.
Step 6. Using the Evaluation Results: Closing the Loop!
Evaluation, Reflection, and Taking Action
Monitoring the Impact of the Changes and comparing them to Past Data
Communicate Conclusions and the Actions of Improvement to be implemented
Step 7. Determine who will implement the Improvement Actions
Appendix A
Appendix B
Bibliography
Introduction
Academic assessment at UTEP consists of three major components. First, each academic
department and program conducts annual Program Learning Outcomes Assessments (PLOA).
This is a continuous assessment of student performance on the program learning outcomes for
each undergraduate program as required by the Southern Association of Colleges and Schools
(SACS) and included in the commitments UTEP made in its mandatory Quality Enhancement
Plan (QEP)1,2,3. The UTEP-QEP states, “We will measure student learning outcomes using a
distributed assessment procedure, but under centralized reporting and oversight.”
The second component of program assessment is the Periodic Program Unit Review (PPUR)
which is a "rolling" review that will occur in five-year cycles for all academic units. Each year a
select group of academic units will complete a comprehensive PPUR of all their undergraduate
and graduate programs and unit activities as promised in the UTEP-QEP4. This handbook does
not cover PPUR.
Finally, UTEP has also committed to a regular review of the university Core Curriculum5. The
responsibility to lead this effort lies with the UTEP Faculty Senate’s Curriculum Review
Committee6, but will also involve data reporting and review activities by the departments and
programs that offer courses in the core curriculum. This handbook does not cover core
curriculum review.
To assist the departments and programs in completing these tasks, the Provost’s Office is
developing online tools and a website that should be functional by spring 2009. The website and
its tools will help departments report data, store relevant documentation, and have these easily
accessible and continuously available for updates and improvements. The Provost’s Office will
work with Deans and Chairs to standardize reporting formats as much as possible. This
handbook will be part of the website.
SACS requirements include the following standards that address the need to assess student
learning outcomes7 (note that only relevant items are presented here):
• 3.3.1 The institution identifies expected outcomes, assesses the extent to which it achieves these outcomes, and provides evidence of improvement based on analysis of the results in each of the following areas:
  3.3.1.1 Educational programs, to include student learning outcomes
• 3.4.10 The institution places primary responsibility for the content, quality, and effectiveness of the curriculum with its faculty.
• 3.4.11 For each major in a degree program, the institution assigns responsibility for program coordination, as well as for curriculum development and review, to persons academically qualified in the field. In those degree programs for which the institution does not identify a major, this requirement applies to a curricular area or concentration.
• 3.5.1 The institution identifies college-level general education competencies and the extent to which graduates have attained them.

1 UTEP Quality Enhancement Plan, revised edition, September 2006, paragraph 5.2.2, #3.
2 UTEP Quality Enhancement Plan, revised edition, September 2006, paragraph 9.4, items 5-7.
3 SACS Commission on Colleges (2008). The Principles of Accreditation: Foundations for Quality Enhancement (3rd Ed.), p. 7. Available at http://www.sacscoc.org/pdf/2008PrinciplesofAccreditation.pdf.
4 UTEP Quality Enhancement Plan, revised edition, September 2006, paragraph 5.2.2, #7.
5 UTEP Quality Enhancement Plan, revised edition, September 2006, paragraph 5.2.2, #4.
6 See UTEP Quality Enhancement Plan, revised edition, September 2006, Timeline Table in Section 8.
7 SACS Commission on Colleges (2008). The Principles of Accreditation: Foundations for Quality Enhancement (3rd Ed.), p. 27.
The purpose of this handbook is to help academic programs understand PLOA requirements and
develop an effective and efficient assessment plan that is simple to implement and maintain
without imposing a large burden on chairs and faculty. The objective of every PLOA assessment
plan should be to improve the academic programs. This handbook briefly introduces the various
steps in the process of developing a plan and provides recommendations, example worksheets,
and matrices that departments and programs can use to complete each of the steps. Appendices A and B contain rubrics developed at California Polytechnic State University and shared by Dr.
Morrobel-Sosa that can be used to evaluate the quality of program learning outcomes and the
progress of the PLOA development process.
What is Assessment?
According to Palomba (1999) to assess is “to examine carefully” and assessment can be defined
as “the systematic collection, review, and use of information about educational programs
undertaken for the purpose of improving student learning and development." The American Association for Higher Education (AAHE; http://www.aahe.org/) developed nine principles of good practice for assessing student learning. These are:
1) Assessment of student learning begins with educational values.
2) Assessment of learning is most effective when learning is understood as multidimensional,
integrated, and revealed in performance improvements over time.
3) Assessment works best when the programs it seeks to improve have clear, explicitly stated
purposes.
4) Assessment requires attention to outcomes but also an equally important attention to the
experiences that lead to those outcomes.
5) Assessment works best when it is ongoing and not episodic.
6) Assessment fosters wider improvement when representatives from across the educational
community are involved.
7) Assessment makes a difference when it begins with issues of use and illuminates questions
that people really care about.
8) Assessment is most likely to lead to improvement when it is part of a larger set of conditions
that promote changes.
9) Through assessment educators meet responsibilities to students and the public.
Misconceptions about Assessment
While assessment defined as such is a laudable effort, numerous misconceptions persist among
faculty and administrators in higher education. The University of Central Florida’s Academic
Program Assessment Handbook lists several misconceptions about program assessment many
hold. These misconceptions also exist at UTEP and are listed here.
1. The results of assessment will be used to evaluate faculty performance on merit or tenure
and promotion evaluations. This is not the case. Program assessment solely serves to provide
data about the quality of academic programs that will help faculty improve them where
necessary.
2. Our program is working well, the students are learning, and therefore we don’t need to
bother with assessment. While individual faculty may know that students are learning in
their classes, most programs have no system in place to determine how effective their entire
curriculum is in terms of student learning. Even if faculty believe that the quality of their
program is good, often that opinion is based on anecdotal evidence or “gut feeling” rather
than valid and reliable assessments. Most likely there is room for improvement. Continuous
assessment to determine how to best improve the educational experience of our students must
be an integral part of departmental activities. Rather than trusting unsubstantiated claims by
programs that they do what they say they do, external stakeholders such as SACS and the
Texas Higher Education Coordinating Board (TXHECB) now require data that provide
evidence for those claims. To retain our institutional accreditation, carefully assessing all our
programs is the only option.
3. We will just assign a single faculty member to plan and conduct the assessment. It is a
good idea to have one or two faculty members take responsibility and lead the assessment
process for the department, but it is important that everyone is involved at all stages of the
process. Each person in the department contributes different perspectives and ideas for
improving the academic program and combining that wealth of ideas creates a much stronger
end product.
4. Administration will use the results to eliminate departments and programs. This is a
formative assessment process that will provide substantive feedback to help improve
programs through assessment by their own faculty, not the administration. Program
assessment is not a summative evaluation aimed at eliminating programs; at UTEP we aim to
grow our programs, not to eliminate them.
5. Assessment is a waste of our time and does not benefit the program or the students.
Nothing can be further from the truth. All programs established learning outcomes for our
last SACS reaffirmation of accreditation process in 2006, but many programs have no data to
show how their students have performed on these learning outcomes over the past 2 years,
and thus have no clear information about potential improvements. We can no longer simply
claim we do certain things and have no evidence to show that we actually do them.
6. We will just do assessment in the year SACS comes back. In the past this may have worked, but it no longer does. SACS demands that its accredited programs engage in ongoing and continuous assessment, and requires updates every five years.
7. Program assessment sounds like a good idea, but it is time consuming and complex. The
most time consuming part is formulating relevant and clear learning outcomes that are the
essential goals of your program. These will drive the measurement process and data
collection methods. Once that process has been developed, the collection and evaluation of
the data should take little time beyond the everyday activities of faculty.
“To be effective, assessment should be a comprehensive, systematic and continuous process that
is used as a means for self-improvement, based on multiple measures and sources that are
meaningful to the program. Programs should use assessment as a management tool, coordinated
by one person but reviewed by a committee or the entire faculty. Program assessment should
involve the participation and input of all faculty, staff and students”9.
As a result of such an approach, all programs at UTEP should be able to present evidence to their
students that they are actually accomplishing what they claim to accomplish. Programs
accredited by discipline-specific organizations, such as engineering and nursing, have been doing
this for a long time to assure society that they graduate competent engineers and nurses. Only
through well-designed and executed assessment can we know that we are achieving our desired outcomes. It is also the only way by which we can improve our programs and increase student learning.

9 University of Central Florida, Program Assessment Handbook, 2008 edition.
Student Learning Outcomes
Student learning outcomes are defined as the accumulated knowledge, skills, and attitudes
students develop during a course of study. As the experts in their discipline, the faculty in the
departments and programs at UTEP are asked to publicly state these program learning outcomes,
because they have the specialized knowledge needed to write relevant, clear, and precise
discipline-specific learning outcomes and know how to assess student achievement. Faculty from
each program must critically evaluate the results of their analyses and determine if these results
suggest that improvements in the curriculum or pedagogy are necessary. If so, the program must
determine which actions can be taken to improve student performance on the program learning
outcomes. It must then implement these changes, collect further data and determine whether the
changes made a difference in student achievement. Besides being required by SACS, such
careful assessment informs each department and program about the quality of their “product”
and is of great intrinsic value. We should be able to provide clear and compelling evidence of the
educational value we provide our UTEP students.
Rationale for Assessment
Part of the rationale for continuous assessment of program learning outcomes is grounded in the
belief that systematically collecting relevant data improves awareness of how well students
obtain knowledge, skills, and attitudes as a result of the learning experiences in a relevant
curriculum aligned with dynamic professions and changing disciplines.
Continuous assessment of the academic program will enable us to
1. Devote time to activities that we value most;
2. Help us decide how to improve instruction, strengthen curricula, and create effective academic policies;
3. Strengthen our ability to show that our graduates are well-prepared to succeed in their chosen professions;
4. Always have recent data on hand that will satisfy the requirements of accrediting and funding agencies, without having to engage in a mad scramble to meet deadlines;
5. Develop policies to effectively allocate funding and/or resources to areas that are producing valued outcomes;
6. Increase the effectiveness and truthfulness of our communications about the value of a UTEP education to the community and other stakeholders;
7. Provide the faculty with feedback it needs to strengthen and grow its program.
The Role of the Provost’s Office
The Provost’s Office has the primary responsibility for coordinating academic assessment at
UTEP. The Provost collects all assessment plans developed by all academic programs at UTEP
and facilitates the assessment process by providing various electronic tools. These tools will
allow Chairs, Deans, and the Provost’s Office to respond rapidly to inquiries and communicate
the results of assessments and reviews on a regular basis to the President, SACS, the TXHECB,
and the public at large.
This assessment handbook is an effort by the Provost’s Office to provide a step by step approach
to developing an assessment plan and making annual program learning outcome assessment an
integral part of departmental activities without substantially increasing the burden on faculty and
administrators. The various handbooks and guides listed in the Bibliography have additional
worksheets and information that may be helpful in clarifying the process.
This 7-step approach to developing and implementing a useful and feasible plan can serve as a
review of already completed work and your progress in that process or as a tool to help you get
started.
1. Revisit the mission and general goals for your program.
2. Identify the most important student learning outcomes and objectives of the program.
3. Identify where in the curriculum your learning outcomes and objectives are covered and
whether the learning activities are integrated with the learning outcomes to ensure that your
students have the opportunity to achieve the outcomes you desire.
4. Select relevant, valid, reliable, and objective assessment methods, techniques or tools you
will use for each of the learning outcomes. You should carefully check that learning
outcomes, assessments, and learning experiences are tightly integrated.
5. Develop a process to tabulate, analyze, and report the results for review and evaluation.
6. Develop a process for evaluation of the results. After evaluation of the analysis results, the
faculty should decide on appropriate corrective actions to improve students’ learning and
determine how to best document the impact of the changes.
7. Determine who will implement these actions. What are the possible implications and
consequences for individual faculty members? How and to whom is the responsibility for
implementation delegated?
The Role of the College, Department Assessment Coordinator, and the
Faculty in Program Learning Outcome Assessment
The greatest effort and responsibility resides in the academic departments and programs. The
departments and programs are strongly encouraged to communicate and coordinate with their
Dean for continual input on all aspects of the PLOA plan development, receive feedback, and
ensure that timelines are met. To ensure successful implementation, faculty should identify one
of their members as their Assessment Coordinator. This should not be the Department Chair or
the Program Director. It is important that someone in addition to the Chair or Director can
represent the department on the issues related to PLOA. The Assessment Coordinator ensures
that assessments are conducted at the appropriate time and in the appropriate manner, and that data are
compiled, analyzed, and presented for evaluation by the faculty. The Assessment Coordinator
also monitors the implementation of improvements and ensures that their effects are measured.
The Assessment Coordinator and Department Chair ensure that the data are entered in the
appropriate reporting tool and that the required reports are submitted on schedule.
Upon completion of the assessment plan, the Dean reviews it, requests necessary revisions, and
submits the final version of the plans of all departments and programs in the College to the
Provost’s office. The Dean is responsible for the quality of all plans in his or her College. The
Provost’s Office will not review or critique the assessment plans. The Deans also are responsible
for the implementation of all plans in their Colleges.
Appendix A provides a rubric that can be used in the development and evaluation of the program
learning outcomes. Appendix B provides a rubric departments and Deans can use to determine
the progress and the quality of their assessment plans. Another example of such a rubric can be
found on the Texas A&M University assessment website at
http://assessment.tamu.edu/asmt_help/asmt_feedback_rubric.pdf.
Departments, programs and the Deans are encouraged to use these tools to determine where
they are in the process at this moment, because much was accomplished in 2005 and 2006 that
should be retrieved and carefully examined before starting anew.
A Recommendation from UTEP-College of Business Administration.
All programs in the College of Business Administration (COBA) at UTEP are accredited by
AACSB, a discipline specific accreditation agency. In light of accreditation, COBA recently
revised its program assessment procedures and created a “Centralized Assessment Process” that
integrated 1) curriculum coverage of educational material related to learning objectives; 2)
measurable activities in the curriculum and coursework related to learning objectives; and 3)
collection and reporting of the effectiveness of the curriculum in achieving learning objectives.
In the Centralized Assessment Process all graduating seniors participate in a business simulation
and related exam in their capstone class. Their individual performance on specific exam
questions provides excellent feedback about their performance on associated learning
objectives. One instructor is charged with supervising and collecting the assessment activities in
the multiple sections of the capstone course. The Associate Dean consolidates the materials in a
report to the College Curriculum Committee for consideration and evaluation.
The capstone course or project creates a separate, independent and objective educational activity
and measurement process, especially if faculty not involved in the activity or outside reviewers
conduct the assessments. If such objectivity is not present, it is difficult to identify problems associated with program learning outcomes. Second, when assessments of learning outcomes are distributed among many courses, the process and its feedback become "noisier" and more difficult to interpret. As the system becomes more complex, it also becomes more difficult to maintain. This
suggests that a program should try to capture learning outcomes in a single capstone experience
that is graded by independent faculty and/or other skilled professionals on criteria and standards
related to all learning outcomes. This is a recommendation that deserves serious consideration.
Somewhat akin to a capstone experience, internships or service learning activities at a work site
may offer even more authentic assessments short of actual employment or professional
experience, and should be part of the assessment plan.
The 7-step process presented in this handbook can easily be applied to the development of a
capstone course or learning experience that a department or program might want to use as their
point of assessment.
Starting the Work
As you embark on this collective journey, it may be helpful to keep these basic questions in
mind:
1. What are we really trying to accomplish with our program?
2. How well are we actually accomplishing those goals?
3. Using the answers to the first two questions, how can we improve what we are doing as a
program?
4. What and how does our program contribute to the development and growth of our students?
5. How can we improve student learning in our program?
Step 1. Revisit the Mission and General Goals for your Program.
A mission is a broad statement of what your program does, and for whom it does it. It should
provide a clear description of the purpose of the program and reflect how the program
contributes to the education and careers of students graduating from the program. The mission of
your department or program should be aligned with the College and University missions, but be
specific to your program’s unique identity.
It is important that everyone in your department or program, including your students, is very
clear about what you are collectively trying to achieve. The mission should state what you do
and for whom you do it. It should stir excitement and garner commitment from everyone, and
guide you in the development of your program learning outcomes. Revision of your mission may
be necessary because the discipline has changed over time, you have new faculty, new resources,
a revised strategic plan, etc.
Program Mission Statements of Various Qualities
(Adapted from the University of Central Florida Academic Program Assessment Handbook, June 2008 Edition)
Poor: The mission of the Paso del Norte Physical Activity Education Program is to provide a
broad education on the benefits of physical activity.
The statement is very vague and does not distinguish this particular program from other physical
activity programs. It lacks information about the primary functions of the program and does not
identify the stakeholders. Additionally, there is no indication that the program’s mission is
aligned with UTEP’s mission.
Better: The mission of Paso del Norte Physical Activity Education Program is to educate
students from diverse backgrounds in the principles of physical activity education that will
prepare them for both current and future professional challenges in physical activity education.
This statement is better because it identifies the stakeholders as well as a primary function of the
program. However, it still is not a distinctive statement that sets the program apart from others.
Best: The mission of Paso del Norte Physical Activity Education Program bachelor’s degree
program is to educate students from diverse backgrounds in the fundamental skills, knowledge,
and practice of Physical Activity Education through carefully designed courses and internships
in order to prepare them for (1) Physical Activity Education positions in service organizations,
schools, and private industries and (2) graduate programs in Physical Activity Education or
related disciplines. The program promotes scholarship, service and a spirit of innovation and
entrepreneurship in an environment that is inclusive, diverse, and collaborative.
This statement includes a purpose, the primary functions of the program, the primary
stakeholders, it is distinct, and supports the UTEP mission. It is not brief and would be difficult
to memorize, but a slogan (or brand) that captures the essence of this mission may cover that.
Examples of Mission Statements at UTEP
The following missions are examples from various academic departments and programs at UTEP
and other universities. Some are very detailed and others are not. Since the true value of a
mission statement is its unique application by the department or program, these statements are
only samples to be examined and generate ideas. They are not an endorsement. The UTEP
mission is listed to help faculty review their departmental and program missions, as they should
be aligned with the university’s mission.
The University of Texas at El Paso Mission
The University of Texas at El Paso is dedicated to the advancement of the El Paso region
through education, creative and artistic production, and the generation, interpretation, application
and dissemination of knowledge. UTEP embraces its role as an intellectual, cultural and
socioeconomic asset to the region, offering programs to meet human resource needs and
contribute to the quality of life. As a public university, UTEP is committed to providing access
and opportunity to the people of the El Paso region and the State of Texas. UTEP’s mission of
ensuring access is coupled with a commitment to excellence reflected in rigorous programs,
which prepare students to make significant contributions to their professions, their communities
and the world. As a research/doctoral institution, UTEP fosters a climate of scholarly inquiry,
with a special focus on applying innovative interdisciplinary approaches to explore and address
major issues that confront the multicultural, U.S.-Mexico border region.
Department of Electrical and Computer Engineering
The Department of Electrical and Computer Engineering will:
• Dedicate itself to provide its students with the skills, knowledge and attitudes that will allow
its graduates to succeed as engineers and leaders.
• Maintain a vital, state-of-the-art research enterprise, which provides its students and faculty
with opportunities to create, interpret, apply, and disseminate knowledge.
• Prepare its graduates for life-long learning to meet intellectual, ethical, and career challenges.
• Recognize and act upon the special mandate to make high quality engineering education
available to the residents of El Paso and the surrounding region.
Department of Geological Science
The Department of Geological Science seeks to instill core knowledge in the geosciences in non-major undergraduate students, including those seeking a degree that leads to teaching at the primary and secondary levels. We also seek to equip geoscience major undergraduate students
primary and secondary levels. We also seek to equip geoscience major undergraduate students
with the knowledge, attitudes, and skills necessary for them to become successful professionals
and leaders in the local to national arena. We also seek to develop graduate students into
professionals that can be employed in industry, academia, and government and have outstanding
careers. Central to this mission is an active research program that involves students at all levels
and serves as a focal point for geoscience, environmental, and materials expertise for our region,
the state of Texas, and beyond to national and international venues.
Department of Information and Decision Sciences
The Department of Information and Decision Sciences Mission is to provide the highest quality
of education to the citizens of El Paso and the West Texas region and is in line with the mission
of the College of Business Administration. In support of The University of Texas efforts, The
Department provides a wide access to quality higher education which allows our graduates to be
competitive in the job market in the fields of Production and Operations Management and
Computer Information Systems.
Department of Kinesiology
The Department creates, interprets, applies, and disseminates knowledge in order to facilitate
students in becoming responsible professionals who can meet life's intellectual and ethical
challenges, value diversity, be life-long learners, and establish successful careers as teachers,
scholars, leaders, and promoters of physically active, healthy lifestyles in a multicultural society.
Department of Psychology
The mission of the Department of Psychology is to engage in the discovery of knowledge about
behavior and mental processes, and the biological, social, and psychological processes that
underlie them, and to share that knowledge and the processes for obtaining it with our students
and the scholarly community. The Department integrates graduate and undergraduate education,
teaching, research, and service activities to further the overall mission of the university.
Department of Teacher Education
We seek to prepare citizens with the skills needed for lifelong learning and personal growth and
to participate in positive societal change. Therefore, we prepare teachers who:
• Demonstrate the pedagogical and content knowledge, skills, and dispositions that help all
students learn;
• Design learning environments that are infused with educational analysis and critique in the
context of culture, community, power and knowledge;
• Understand the political and historical implications of diverse educational models; and
• Lead to improved educational programs in the communities they serve.
To inform the field of educational research about the best practices in teaching and learning in diverse, and particularly multilingual, communities such as ours, we conduct and make public quality research focusing on educational experiences on both sides of the U.S.-Mexico border. We align with the university vision to create and maintain multicultural, inter-American educational research collaborations.
Worksheet for Step 1

All departments and programs at UTEP should already have a mission. This worksheet can help you determine if your mission statement is effective and clearly in line with the current activities of your department or program. Stakeholders (consider also asking students, alumni, and employers) complete this exercise individually first, after which you can identify the degree of agreement or disagreement on each item. You can then compare notes, discuss the differences, and work on a possible revision. Be aware that revising the mission can take quite some time and probably should be done in a dedicated workshop. Examples of departmental mission statements at UTEP are provided in the preceding section.

Write (or copy and paste) your mission statement here and then complete the checklist:

Complete this checklist by placing a checkmark in the "yes" or "no" column.
Criteria (Yes / No):
• Does your mission clearly state the purpose of your program?
• Does it indicate the primary functions or activities of your program?
• Does it indicate for whom you do it?
• Is your mission statement brief and memorable?
• Is your mission statement distinctive, unique to your program?
• Does it clearly support the College and UTEP missions?

List reasons for your choices here and discuss them with other stakeholders:
Step 2. Identify the 4-6 most important student learning outcomes

As a department or program you stand for something. When students decide to enter your major
they have certain expectations about your program and the promises it holds for them. By
publishing clearly written learning outcomes (a.k.a. learning goals), you present the students with
your promise, a promise about what they can expect to achieve when they complete your program
successfully. As a faculty, this is your commitment to your students. The assessment process
presents evidence to them that you honor that commitment.
During Step 2 you should again invite input from all stakeholders in your program, including
staff, students, alumni and professionals in the community who will employ your graduates to
create the most relevant learning outcomes possible.
In the past many faculty thought that SACS wanted to know every learning objective covered in
every course in the curriculum. That is no longer the case. SACS does not want to see how every
student performed on every learning objective for every course. It wants to see evidence of how
your students performed on the program learning outcomes, and what you did to improve the
situation. As a unit you need to agree on 4-6 key program learning outcomes that answer the
question: “What will students who graduate from our program be able to DO intellectually,
physically, and emotionally?” What minimal skill set should these “new players” on our team
have acquired by the time they leave our undergraduate training program? While SACS is mostly
interested in undergraduate students at this time, you should strive to develop the undergraduate
curriculum such that the transition to graduate studies is virtually seamless for the students.
Drafting student learning outcomes can be difficult. It is challenging to reach the level of
specificity required in a relevant, measurable student learning outcome. Therefore, faculty
members might want to plan a mini-retreat or set aside specific meetings that center solely on
formulating the learning outcomes so that undivided attention can be focused on this task for a
period of time. To begin drafting the program’s student learning outcomes, it may be helpful to
start with a very general phrasing of the outcome, and then get more specific with each revision. It
may take several iterations to progress from lofty, idealistic student learning goals to one or more
specific, measurable outcomes.
Your learning outcomes should distinguish the program’s graduates from other University
students. You can achieve this by clearly answering the question: “What knowledge, skills, or
attitudes distinguish the graduates of our program from other students on campus?” To arrive at
answers to this question, it is recommended that the faculty identify what constitutes the “ideal
student.” The worksheet on the next page can be used as a basis for discussions and work
sessions on the various questions about skills, knowledge, values and attitudes that you believe
this student has acquired or strengthened as a result of your program. Individual participants
should be given some time to think through these items first before continuing the discussion in
either small groups or the entire body present. Having small groups discuss the items first and
then report them on large easel pads usually leads to richer results.
Here are some examples of learning outcomes from A Program Guide for Outcomes Assessment at Geneva College (April 2000), as presented in the UCF Handbook (University of Central Florida Academic Program Assessment Handbook, June 2008 Edition):
“Poor Learning Outcome Statement: Students should know the historically important systems of
psychology.
This is poor because it says neither what systems nor what information about each system
students should know. Are they supposed to know everything about them or just names? Should
students be able to recognize the names, recite the central ideas, or criticize the assumptions?"
“Better Learning Outcome Statement: Students should understand the psychoanalytic, Gestalt,
behaviorist, humanistic, and cognitive approaches to psychology.
This is better because it says what theories students should know, but it still does not detail
exactly what they should know about each theory, or how deeply they should understand
whatever it is they should understand.”
“Best Learning Outcome Statement: Students should be able to recognize and articulate the
foundational assumptions, central ideas, and dominant criticisms of the psychoanalytic, Gestalt,
behaviorist, humanistic, and cognitive approaches to psychology.
This is the clearest and most specific statement of the three examples. It provides even beginning
students an understandable and very specific target to aim for. It provides faculty with a
reasonable standard against which they can compare actual student performance.”
A simple guide to writing learning outcomes can be found on the Texas A&M University assessment website at http://assessment.tamu.edu/asmt_help/writing_learning_outcomes.pdf
Examples of Program Learning Outcomes at UTEP
Department of Civil Engineering
1. Graduates will be educated in the fundamental concepts of engineering and science to create
intellectual curiosity in order to provide for a successful career and life-long learning.
2. Graduates will be able to design effective civil engineering systems.
3. Graduates will have the ability to function on multidisciplinary teams.
4. Graduates will serve as productive members of society and the profession by recognizing the
social, ethical, environmental, and political implications of engineering decisions.
5. Graduates will be able to communicate effectively to technical and non-technical audiences.
6. Graduates will have exposure to real-life problems, including hands-on experience.
Department of Computer Sciences (includes instruments and process of assessment)
The overarching educational objective of the B.S. in Computer Science program at UTEP is to
produce graduates who will be in a profession or in a graduate program that utilizes their
technical expertise, the foundation of which was obtained at UTEP. Specifically, graduates of
UTEP's undergraduate program in Computer Science will be able to:
1. Use the theoretical and technical computer science knowledge to specify requirements,
develop a design, and implement and verify a solution for computing systems of different
levels of complexity. (Objective 1)
a. Apply mathematical foundations, algorithmic principles, and computer science theory in
the modeling and design of computer-based systems.
b. Estimate the feasibility and effort required to build a particular computing system.
c. Identify and specify requirements for computing systems by selecting appropriate
modeling techniques and tools.
d. Design, implement, and verify computing systems of varying complexity by using
appropriate techniques and tools and by selecting appropriate design patterns,
architectures, languages, and testing approaches.
e. Evaluate a system with respect to criteria such as performance, complexity, correctness,
and usability.
f. Determine the impact of an architecture or platform on software design and
implementation alternatives.
g. Apply problem-solving techniques to solve real-world problems.
2. Convey technical information in both oral and written formats.
a. Present technical information orally.
b. Write a professional technical report.
c. Formulate and pose incisive, technical questions.
3. Work in teams.
a. Participate as a productive member of a team.
b. Solve common problems in team dynamics.
4. Apply a professional code of ethics in the daily practice of their profession.
a. Project the potential impacts of technical decisions on the individuals, organizations and
external constituencies involved, and identify ethical and legal implications.
b. Apply the insights embodied in professional codes of ethics.
5. Stay current in their profession.
a. Describe the importance of and options available for continuing education.
b. Describe the role of professional societies.
c. Articulate the benefits of graduate studies.
Assessment Instruments used in Computer Science
The instruments used to assess the program include: Graduating Senior Survey; Alumni Survey;
College of Engineering Employer’s Survey; Senior Exit Interviews; Departmental Advisory
Board; Teaching Evaluations; Course Assessment; Industry Feedback; Advising Survey. The long
version of this document includes description, frequency and timing of instrument, data that is
collected, means of data collection, sources of the data, and use of each instrument.
Process used in Computer Science
The Department’s process for improvement and evaluation of attainment of Program Educational
Objectives (PEOs) and Program Outcomes (POs) is a cyclical, continuous-improvement model
consisting of three phases. Fig. 1 shows the information flow into and out of each phase. The
open rectangles represent information stores, the sources of information are denoted by a square,
and the rounded boxes represent the processes associated with a phase.
Phase 1: Short-term Program and Curriculum Review
Each semester, faculty review their course assessments and teaching evaluations. There are four
main sub-committees (each representing a grouping of courses) that review whether course
outcomes are met. The courses associated with each of the subcommittees are scheduled for
formal assessment every other year, or as deemed necessary. Results from the review are
discussed at a faculty meeting.
Phase 2: Long-term Program and Curriculum Review
Approximately every two to three years, the Department reviews the scheduled recommendations
from Phase 1 and assessment results. Changes made at this level are significant and have broad
impact on the program and, thus, require deeper analysis and discussion with the Department
faculty at large.
Phase 3: Educational Objectives and Outcomes Review
Once every five years, the Department meets to evaluate the POs for relevance. During this
evaluation, recommendations, assessment results, and national trends are reviewed and discussed
by the faculty. If warranted, the POs and the PEOs are revised to meet the current needs of our
constituencies.
There are many resources available on websites of other universities. It is recommended that you
visit the sites of programs in your discipline to avoid duplicating work. Some professional organizations, such as the American Psychological Association, have published learning outcomes on their websites for various programs (see http://www.apa.org/ed/pcue/taskforcereport2.pdf).
Each department or program should look for and examine these resources carefully in light of the
department or program’s unique nature and the mission and vision of UTEP. Clearly, simply
copying the learning outcome statements from a program at another university would be a
mistake.
Worksheet for Step 2

1. Describe "the perfect student" who just graduated from your program in terms of his or her knowledge, skills, values, and attitudes. What would you like this "golden nugget" to look like as a result of your curriculum and pedagogy? Try to be as specific and comprehensive as possible when you identify:
• What does this "ideal student" know and understand?
• What can this "ideal student" do physically or mentally?
• What does this "ideal student" value?

2. Organize knowledge, skill, and attitude (values) characteristics in logical groupings.

3. Now think about which of the characteristics you identified for your "ideal student" can be directly attributed to the experiences currently presented to them in your program. If you cannot identify experiences to support certain desired characteristics, write that down too (expand or shrink the table as needed).

Characteristic of our "Ideal Student" | Attributable Learning Experiences in Our Curriculum

Characteristics not Addressed by Current Learning Experiences:

4. Use the work from Items 2 and 3 to either evaluate and revise, or formulate, your key learning outcomes. Try to follow the guidelines in the table below (adapted from the University of Central Florida Academic Program Assessment Handbook, June 2008 Edition).

Learning Outcome Characteristics (Yes / No):
• Our learning outcomes (LO) are aligned with our mission statements and goals
• Our LO are relevant to our discipline
• Our LO clearly indicate the level and type of competence that is required of graduates of our program
• Our LO are written clearly, precisely (just the right level of detail), and unambiguously, using action verbs
• Our LO are measurable
• Our LO can be measured by more than one assessment method
• We have the resources to conduct the necessary measurements of our LO
• Our LO are for our program, not a specific course
• Our LO can be understood by our undergraduate students; they are simple, focused statements, not a bundle of different things
• Our LO describe intended learning outcomes, not actual outcomes
• Our LO describe learning results and not the learning process
• We have the resources and capabilities in our program to successfully pursue our learning outcomes

Thoughts on Revisions of Learning Outcomes:
Step 3. Identify where in the curriculum your learning outcomes and objectives are covered: Curriculum and syllabus analyses

Once faculty members have decided upon the essential program learning outcomes, they need to
identify where in the curriculum and coursework the students receive the opportunities to learn
the knowledge, practice the skills, and develop the attitudes and values incorporated in the
learning outcomes. It is the faculty’s responsibility to ensure that the curriculum and courses are
designed to offer students sufficient opportunities to practice the skills and gain the knowledge
and attitudes necessary for success. In other words, learning outcomes, assessments, and learning
activities within the curriculum need to be tightly integrated.
To start the analysis of the curriculum, create a simple overview of curricular coverage of the
learning outcomes by completing a Curriculum Map of Learning Outcomes. The map helps you
chart which courses address and assess the program student learning outcomes you developed.
As your assessment plan changes, the Assessment Coordinator should update the Map.
Worksheet for Step 3

Curriculum Map of Learning Outcomes: Steps to Complete Your Curriculum Map

1. List and number your learning outcomes (expand or shrink as needed):
o Learning Outcome 1 (LO1):
  - Objective 1.1:
  - Objective 1.2:
o Learning Outcome 2 (LO2):
o Learning Outcome 3 (LO3):
o Learning Outcome 4 (LO4):
o Learning Outcome 5 (LO5):
o Learning Outcome 6 (LO6):

2. Completing the map. You can do this with the entire faculty on a large flip chart or on the computer projection screen. Alternatively, you can have the faculty complete the map individually and have the Assessment Coordinator compile all the information. Make sure all learning outcomes are covered at minimum in one course or significant learning activity at the "very important" level, or at the "moderately important" level in several courses (see below; a small scripted sketch of such a coverage check also follows this worksheet).

To complete the map, faculty members place a marker under the program learning outcome that is covered in their class and for which they have an appropriate assessment tool or method. Faculty should indicate whether this learning outcome is not important (NI), somewhat important (SI), moderately important (MI), or very important (VI) in each course.

Curriculum Map of Learning Outcomes (expand or shrink as needed)
Program:______________________ Date:___________
Course | LO 1 | LO 2 | LO 3 | LO 4 | LO 5 | LO 6

Important Thoughts about LO Coverage:

3. Redundancies and Gaps. After collectively reviewing the Curriculum Map and the integration of learning outcomes and their assessments, faculty should look for redundancies and gaps in the coverage of the learning outcomes. Determine whether each and every program learning outcome receives sufficient attention in various courses to ensure that the students experience enough practice opportunities to successfully attain the program learning outcomes. Not every course needs to address multiple learning outcomes; covering all learning outcomes and how this occurs is a function of the entire curriculum.

4. Collecting Data across the Curriculum and Time. It may be necessary to collect data on several measures across time or in a sequence of key courses or learning experiences to truly document that students achieved your learning outcomes. This type of measurement can be an excellent demonstration of "value added" to the student's knowledge, skills, and attitudes as a result of your curriculum. Student portfolios are probably the best tool to show progression of competencies, and Alverno College's Diagnostic Digital Portfolio is a great example of an assessment system based on mastery of knowledge and skills using student portfolios (a demonstration of this approach can be found at http://ddp.alverno.edu/; retrieved September 15, 2008).
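For programs that prefer to keep the Curriculum Map in a spreadsheet or a short script rather than on a flip chart, the following is a minimal sketch of one way to record the map and check the coverage rule described in item 2 above. The course numbers, outcome labels, and importance ratings in it are hypothetical placeholders rather than actual program data, and the script is only an illustration of the bookkeeping, not a required tool.

```python
# Minimal sketch: record a Curriculum Map of Learning Outcomes and flag gaps.
# Importance codes follow the worksheet: NI, SI, MI, VI.
# Courses and ratings below are hypothetical placeholders.

OUTCOMES = ["LO1", "LO2", "LO3", "LO4", "LO5", "LO6"]

# Each course maps to the outcomes it addresses and the importance rating.
curriculum_map = {
    "COURSE 1301": {"LO1": "VI", "LO2": "MI"},
    "COURSE 2310": {"LO2": "MI", "LO3": "VI"},
    "COURSE 4350": {"LO4": "VI", "LO5": "SI"},
}

def coverage_report(cmap, outcomes):
    """Check the worksheet rule: each outcome should be covered in at least one
    course at the 'very important' (VI) level, or at the 'moderately important'
    (MI) level in several courses."""
    report = {}
    for lo in outcomes:
        vi = [c for c, ratings in cmap.items() if ratings.get(lo) == "VI"]
        mi = [c for c, ratings in cmap.items() if ratings.get(lo) == "MI"]
        covered = bool(vi) or len(mi) >= 2
        report[lo] = {"VI courses": vi, "MI courses": mi, "covered": covered}
    return report

if __name__ == "__main__":
    for lo, info in coverage_report(curriculum_map, OUTCOMES).items():
        status = "OK" if info["covered"] else "GAP - needs attention"
        print(f"{lo}: {status}  VI={info['VI courses']}  MI={info['MI courses']}")
```

Kept in a form like this, the map is easy for the Assessment Coordinator to update and to regenerate the gap report from as courses or outcomes change.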
Syllabi Analysis

By closely analyzing the course syllabi, determine the alignment between courses on the program learning outcomes and the assessment tools and methods used, especially for courses offered in a preplanned sequence. First, the faculty need to determine whether program learning outcomes are or should be covered in a specific course. Then they should look for a strong integration of learning outcomes, assessments, and learning activities. Check that there is a logical connection and that the assessment tools used in each class allow the faculty to measure performance on the learning outcomes and possibly track performance improvement over time. If integration is lacking, the course needs to be revised. At minimum, the faculty should examine the links between the stated learning outcomes (referred to by some as objectives or goals) and the assessments used in the course. Often, learning outcomes are stated but not assessed, or assessments are included in the course that are not linked to any learning outcomes (a small scripted version of this cross-check follows below). Additional information about constructing a syllabus and a checklist can be found on the Center for Effective Teaching and Learning (CETaL) website at http://academics.utep.edu/Default.aspx?tabid=36893.

Important Thoughts on Syllabi Alignment with Program Learning Outcomes:
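Where a department keeps its syllabus information in electronic form, the cross-check described above can be sketched as a small script: list the outcomes each syllabus states and the assessments it contains, then flag outcomes with no linked assessment and assessments linked to no stated outcome. The course, outcome, and assessment names below are hypothetical placeholders, and the sketch assumes the links have already been extracted from the syllabi by hand.

```python
# Minimal sketch: check that each outcome stated in a syllabus has at least one
# linked assessment, and that no assessment is left unlinked to a stated outcome.
# The syllabus entries below are hypothetical placeholders.

syllabi = {
    "DEPT 3301": {
        "stated_outcomes": ["LO1", "LO2", "LO5"],
        # assessment name -> outcomes it is linked to in the syllabus
        "assessments": {"Midterm exam": ["LO1"], "Term paper": ["LO2"], "Quizzes": []},
    },
}

def check_syllabus(name, syllabus):
    linked = {lo for outcomes in syllabus["assessments"].values() for lo in outcomes}
    unassessed = [lo for lo in syllabus["stated_outcomes"] if lo not in linked]
    unlinked = [a for a, outcomes in syllabus["assessments"].items() if not outcomes]
    print(f"{name}: outcomes stated but never assessed: {unassessed or 'none'}")
    print(f"{name}: assessments not linked to any stated outcome: {unlinked or 'none'}")

if __name__ == "__main__":
    for course, syl in syllabi.items():
        check_syllabus(course, syl)
```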
Step 4. Identify assessment methods and techniques.

The unit of analysis for assessing student learning outcomes is the program, not individual
courses, faculty, or students. The better the integration of the assessments into existing
student work (e.g. courses, capstone projects, service learning, etc), the greater the likelihood
that your assessment plans will succeed. The next step is to determine how you can
efficiently integrate program learning outcomes assessment into the existing curriculum. It is
very likely that you only need to make small modifications to courses or the curriculum to be
able to effectively assess student performance on program learning outcomes. Examples of
commonly used assessment tools are provided at the end of this section.
Caution! Assessment of student learning outcomes can be conducted using many different
quantitative and qualitative instruments and methods, and therein lies the danger of trying to do
too much. Select or develop instruments and methods that are simple to use, require little extra
time or effort, and still provide the necessary data for a specific learning outcome, not more and
not less.
Course Grades are (usually) not enough
It is often asked, "Aren't course grades a satisfactory measure of student performance?" In
general, grades have significant shortcomings when used as a single measure of assessing
student learning.
• An exam grade is usually the result of how well individual students may have learned the
prescribed information being assessed on that particular exam. Unless the exam is designed
to be a relevant, valid, and reliable assessment of a program learning outcome, it may not
provide useful information.
• A course grade often is a composite of many activities that have little relevance to program
learning outcomes (for example, when attendance or general educational outcomes are part
of the grade) and thus cannot be used in identifying student performance on program learning
outcomes.
• Grading is often approached differently by individual faculty members even when teaching
different sections of the same class, because of individual variations in how tools are used
and standards are interpreted.
• Program learning outcomes sometimes span multiple courses, and individual course syllabi
often do not align well with the program’s learning outcomes.
• If the course addresses multiple learning outcomes and contains assessments for each of
those, the course grade is a composite and lacks the detail necessary to evaluate performance
on the individual learning outcomes. In this case, results of the individual assessments for
each learning outcome should be used.
Worksheet for Step 4

This exercise expands upon the previous one in Step 3 and asks you to look in greater detail at the assessment tools used to determine student performance on each program learning outcome in the various courses and experiences. To help identify what kinds of assessments of your learning outcomes are currently being conducted, complete the following table. List the course number or the learning experience and their embedded assessments (such as course exams and student presentations) and non-embedded assessments (such as portfolios, an institutional writing exam, and standardized tests) in the matrix. This allows you to determine whether the assessments are appropriate and sufficient. Some advocate that each program learning outcome should be assessed using at least three different measures (a small scripted tally of measures per outcome follows this worksheet).

Assessment Key (change as needed): P=Paper; E=Exam; PO=Portfolio; L=Lab; S=Standardized Test; O=Oral Presentation; I=Internship

Curriculum Map of Learning Outcomes (expand or shrink as needed)
Program:______________________ Date:___________
Course | LO 1 | LO 2 | LO 3 | LO 4 | LO 5 | LO 6
Math 1320 (example row, with S and E entered under the relevant learning outcomes)
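For programs that want to tally this worksheet electronically, here is a minimal sketch that counts how many distinct assessment methods touch each learning outcome and flags outcomes with fewer than the three measures suggested above. The course rows and assessment codes are hypothetical examples built on the worksheet's key; they are not actual program data.

```python
# Minimal sketch: count distinct assessment methods per learning outcome.
# Assessment key (from the worksheet): P=Paper, E=Exam, PO=Portfolio, L=Lab,
# S=Standardized Test, O=Oral Presentation, I=Internship.
# All course rows below are hypothetical placeholders.

assessment_matrix = {
    "MATH 1320": {"LO1": ["S"], "LO3": ["E"]},
    "DEPT 3301": {"LO1": ["E", "P"], "LO2": ["O"]},
    "DEPT 4390": {"LO1": ["PO"], "LO2": ["P"], "LO4": ["I"]},
}

MIN_MEASURES = 3  # "Some advocate ... at least three different measures."

def measures_per_outcome(matrix):
    """Gather the distinct assessment methods used for each learning outcome."""
    tally = {}
    for course, outcome_map in matrix.items():
        for lo, methods in outcome_map.items():
            tally.setdefault(lo, set()).update(methods)
    return tally

if __name__ == "__main__":
    for lo, methods in sorted(measures_per_outcome(assessment_matrix).items()):
        flag = "" if len(methods) >= MIN_MEASURES else "  <-- fewer than three measures"
        print(f"{lo}: {sorted(methods)}{flag}")
```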
When deciding how to analyze your data consider basing your data collection techniques and analyses on questions directly tied to your learning outcomes. These questions will guide the selection or design of your data collection tools. Questions that may be worth asking are for example: • To what degree have students become proficient in quantitative scientific skills and written and oral presentation skills? • Have the students fallen short of, met or exceeded criteria and standards we set? • Are there subgroups of students that differ significantly from each other in what they have learned? • Have students’ performances on all learning outcomes increased over time? Is this improvement similar or different for all learning outcomes? • Did performance stall or plateau prior to senior capstone projects? Why did this happen? • Are the assessment tools (rubrics, tests) still valid and reliable or do they need to be re‐examined in light of other significant changes? • What are the strengths and weaknesses in our curriculum based on the data analysis? • How does the analysis inform the improvements we need to make in the learning experiences and the teaching techniques we employ? Once you have determined what measures, instruments, and methods of data collection can answer your questions, here are some items to consider when developing your process of assessment. 1. Create a detailed timeline for the assessment cycle and collect data each semester. Make this a habit and normal part of the department’s business. 2. Have designated faculty members be responsible for collecting the appropriate data and report them to the Assessment Coordinator 3. The Assessment Coordinator compiles the data from the various sources, enters the data in the software tool and stores it, analyzes the data, and formats the results in a clear manner. 4. Develop a process for the faculty to interpret the results of the analysis and evaluate the program. 5. Develop a process for the faculty to decide on improvement measures that need to be taken. 6. Decide who will be responsible for implementing these improvements and the kind of support these persons will receive? 7. How will you determine that the changes were successful? Is the same assessment plan the right tool to show positive change? Further Investigation of your Assessment Tools. Integration of learning outcomes, assessments, and learning activities that prepare the students for the assessments is critical to an effective course and curriculum. Now that the types of assessments and where they are used in the curriculum have been identified, it is important to complete one more step. Determine how tightly the learning outcomes, their assessments, and the learning activities are connected. This is especially important when you have multiple sections of the same course, taught by different instructors. The discussion should focus on relevance, validity, reliability, and objectivity of the assessment tool or method and the learning activities in relation to the learning outcomes. The focus of this exercise is to create the best possible integration to support student performance using constructive comments and collaboration. In the table on the next page enter the course, the learning outcome(s) covered in that course, the assessment tool(s) used to measure student performance, and whether the tool(s) has sufficient relevance, validity, reliability, and objectivity. 
Completion of this table helps you identify "broken links" and whether or where you lack quality assessment tools.
Integration of Learning Outcomes and Assessment Tools (expand as needed)
Program: ______________________ Date: ___________

Course & section | Learning Outcome covered | LO-Related Assessment | Relevance, Validity, Reliability, Objectivity | LO-Related Learning Activities

Important Thoughts about Integration of Key Components:
A Few More Thoughts about Assessment
Data that You can Use. Select assessment methods that are under your control and provide
information you can do something with. An assessment method that is influenced by external
factors beyond the control of the program will give you data that will not help you make a
change in that aspect of the student learning process.
For Qualitative Assessments Create a Scoring Rubric. Rubrics clarify criteria and standards
and are an important tool for evaluating student work, especially if it is difficult to create
objective tests. Rubrics exist for many purposes and there are many types. Most likely someone
has already developed a rubric that you can use or adapt to your needs. Search the internet or
contact your network of colleagues so you do not have to “spin your wheels” creating your own.
When multiple sections of one course are taught simultaneously, faculty should make sure that
the same rubric is used in all sections and that all faculty are trained to apply it consistently. When
rubrics are used to grade a capstone project that serves as the assessment of the learning
outcomes, at least two faculty members should score the student work to ensure reliability. If
these two faculty members disagree significantly, a third member should score the work.
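The scoring procedure just described (two raters, with a third brought in only when the first two disagree significantly) can be sketched in a few lines of Python. This is an illustration rather than a procedure required by this handbook; the one-point disagreement threshold and the five-point scale are assumptions a program would set for itself.

    from statistics import median
    from typing import Optional

    def reconcile_scores(score_a: float, score_b: float,
                         score_c: Optional[float] = None,
                         threshold: float = 1.0) -> float:
        """Return a final rubric score from two (or, if needed, three) raters."""
        if abs(score_a - score_b) <= threshold:
            # The two raters agree closely enough: average their scores.
            return (score_a + score_b) / 2
        if score_c is None:
            raise ValueError("Significant disagreement: a third rater is needed.")
        # With three scores, the median reduces the influence of the outlier.
        return median([score_a, score_b, score_c])

    # Example on a 5-point rubric: raters give 2 and 4 (disagreement), the third gives 4.
    print(reconcile_scores(2, 4, score_c=4))  # prints 4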
Rubrics are Helpful for Assessing Learning Outcomes.14 Many faculty members consider
using rubrics when looking for ways to grade, give feedback, and assess learning outcomes.
Many resources exist on how to construct a valid, reliable, and objective rubric that can be used
to assign a grade to student work. Below is a brief explanation of rubrics. Keep in mind that
rubrics can be designed such that they track a student’s performance across several courses or
learning experiences and show improvements in performance over time rather than in a single
instance. In this case the rubric should be part of a portfolio of work.
For most educators, a rubric is a printed set of scoring guidelines (criteria) and standards for
evaluating work (a performance or a product) and for giving feedback. A rubric helps address
several important issues:
1. It identifies the key elements (criteria) of the work that will be judged.
2. It indicates the differences between good and poor work (standards) on each criterion.
3. It is a tool to ensure that judgments (or scores) of work or performance are valid and reliable.
4. It helps both performers and judges be clearer about what is expected for excellence.
Why should we use rubrics? The process of designing criteria and standards must be
primarily centered on teaching and learning. In other words, faculty must first identify the key
elements (criteria) of a work or performance, and then develop the standards that discriminate
between poor and excellent accomplishments on those key elements. Subsequently, these ratings
can be converted into scores or grades. A carefully designed rubric:
1. Focuses instruction by identifying the key elements and minimum standards of a skill,
knowledge, or attitude;
2. Helps the instructor provide feedback that is focused and meaningful;
3. Characterizes the desired results in a relatively objective manner by clearly describing
expectations;
4. Operationalizes performance standards such that students know in advance what is expected
of them;
5. Develops self-assessment competence in students when given in advance and used consistently;
6. Can be developed with the involvement of students, helping them understand the issue in
greater depth.
For more information on rubric construction you can visit http://rubistar.4teachers.org/index.php
or a multitude of other websites.
14 Adapted from the University of Texas at Dallas, Assessment Workbook, March 2006.
Use Course-Embedded Direct Methods of Assessment. Plan to use direct methods of
assessment as much as possible, selecting work that students produce as part of the
curriculum. Many direct assessment methods are already built into and embedded in the courses
and the curriculum. Identify and critically examine the work products your undergraduates
already produce as part of the program’s curriculum, and determine which of these are relevant,
valid, and reliable assessments of your program learning outcomes. Students often take these
assignments more seriously and perform better than they might on a standardized test given
outside of class.
1. Written work:
a. Demonstrates knowledge of important content on an exam or in a paper.
b. Shows analysis, application, synthesis, and evaluation capabilities
c. Displays writing skills
d. Produces reflections on what, how, when, and why they learned
2. Portfolios of student work:
a. Are assessed systematically using a rubric.
i. May be evaluated to determine student learning over time, or may be comprised
of the student’s best work.
ii. Encourage student self-reflection
b. Videotape of oral presentations or performances with self, peer, and or instructor
evaluations using a rubric; include recordings of subsequent performances to document
improvements
c. Senior thesis or “capstone” course experiences
d. Field or service learning projects
e. Performance on in-class tests, assuming they are valid, reliable and objective
Please note that many of these course-embedded assessments need to be evaluated using
a rubric.
In some cases independent measures such as student performance on state or national
certification exams may show that your students achieved the program’s learning outcomes. For
example, the TExES, the state-wide teacher certification exam in Texas, provides a criterion-based, valid, reliable, and independent assessment of knowledge of prospective teachers across
the state. It can be regarded as a clear indication of how well a program prepared students for the
knowledge component of the teaching profession. In many cases, employer ratings of recent
graduates on specific items of interest may also provide valuable information.
Indirect Methods of Assessment can supplement the Evidence. Indirect methods include
student, alumni, or employer surveys, student exit interviews, curriculum and syllabus analysis,
course evaluations, or use of external reviewers. Surveys are particularly good for revealing
students’ attitudes and opinions about what they learned. Surveys are also useful to evaluate
outcomes that are realized in students’ post-college careers. Some of your program’s learning
outcomes may be better assessed through the use of indirect methods which can provide
feedback that is useful in interpreting the results of direct assessments. If several of these tools
are used, for example a graduation survey and an alumni survey, the faculty should ensure that
each uses a similar protocol or prompts so there is consistency in the questions that respondents
are asked to address in relation to the program learning outcomes. Indirect methods alone are not
sufficient to assess student learning outcomes; they supplement direct methods.
A Mix of Qualitative and Quantitative Methods Can Work to Your Advantage. Using
multiple measures, an approach also referred to as triangulation, is recommended because it yields
richer data and lets you place greater trust in your final conclusions. Consider using both
qualitative and quantitative assessment methods. Quantitative methods assign numerical scores
to student work, while qualitative methods often focus on the quality of the work without assigning a
numerical value to it. Using well-developed rubrics allows one to give numerical scores for
qualitative assessments, which makes the reporting, analysis, and evaluation components much
easier. Adding the indirect measures described above allows you to paint a richer picture of your
accomplishments.
Creating Your Own Tools. If you decide to design your own measurement tool keep these
things in mind:
1. Locally developed measurement tools tend to have tremendous content validity15. Work
together to develop your own tool, for example a rubric that can be used in multiple classes
to track performance improvement over time. You could use it for formative and/or
summative assessment of student performance.
2. Simple, course-embedded tools are best because they contribute to lasting and more
meaningful learning.
3. Collect information that informs you about student performance on program learning
outcomes. Only collecting relevant data leads to an effective and efficient system and well-informed decision-making.
4. To increase efficiency, you can randomly sample students rather than test everyone, but make
sure that you obtain enough data to trust your outcomes and conclusions (see the sketch after this list).
5. If you do collect data on random samples, obtain enough data to show stakeholders, with
confidence, that you are doing what you say you do. Collect data across the full range of students
and don't throw out the data you don't like. If you find that more than 50% of your students
cannot perform a skill listed in your learning outcomes, document it, present the improvements
you instituted, and report the results of those actions. Reviewers do not expect
perfection; they want to see that you did something about the problems you identified.
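As a purely illustrative aid to items 4 and 5 above, the Python sketch below draws a reproducible random sample of student work to score; the roster size, sample size, and random seed are assumptions a program would replace with its own.

    import random

    roster = [f"student_{i:03d}" for i in range(1, 121)]  # e.g., 120 majors (placeholder)
    sample_size = 30                                      # an assumed target, not a required number

    random.seed(2009)  # fix the seed so the sample can be reproduced and documented
    sampled_students = random.sample(roster, k=sample_size)

    # Score only the sampled work, but report the full sampling frame so reviewers
    # can see that the sample was drawn across the whole range of students.
    print(f"Scoring {len(sampled_students)} of {len(roster)} students:")
    print(sampled_students[:5], "...")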
The UT-Dallas Assessment Workbook16 (March 2006) presents several charts on pages 25-31
that should help faculty identify effective assessment methods and tools they could use (it can be
retrieved from http://sacs.utdallas.edu/hotdox/assessment-workbook-2006-03-01.pdf).
15 Content Validity: the degree to which a test (usually in educational settings) adequately samples what was covered in a course. Thomas, J., Nelson, J., & Silverman (2005). Research methods in physical activity. Champaign, IL: Human Kinetics.
16 Can be retrieved from http://sacs.utdallas.edu/hotdox/assessment-workbook-2006-03-01.pdf
Some Important Considerations to Make Your Plan Work
• Clearly answer the question: what skills, knowledge, or attitudes should your students possess
at the end of their course of study?
• Clearly formulate the minimum performance criteria and standards the program would like
students to meet for each learning outcome.
• Identify which courses and learning experiences (e.g., papers, exams, presentations) are best
suited for the assessment.
• Choose assessments that provide data that are easy to understand and interpret.
• Consider whether the assessments you select allow you to detect the effects of improvements.
• Protect the confidentiality of students whose learning will be assessed.
• Keep the students informed of what you’re trying to accomplish.
• Ensure that the learning experiences in class prepare students to succeed rather than fail and that
their preparatory work flows seamlessly into the assessment.
• Try to make assessments part of the regular course work and learning activities in the
curriculum rather than an additional task for students and faculty outside normal coursework.
• Keep it simple: develop a process that requires little extra time or effort.
Step 5. Collect, tabulate, analyze the data, and report the results.
Analyzing the Data
How data is analyzed and results are reported depends on the type of data collected (qualitative
vs. quantitative, cross sectional vs. longitudinal, categorical vs. scale) and the audience. For the
majority of data analyses MS Excel will suffice; it is widely available and easy to use. Data
entered in Excel can be imported into SPSS for more sophisticated statistical analyses if
necessary. Regardless of how you analyze the data, the results should be reported in a clear, easy
to understand manner, so that it facilitates optimal evaluation. Patterns, problems, and questions
should become apparent while summarizing and evaluating the data.
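For programs that prefer a scripted alternative to Excel, the Python sketch below tabulates embedded assessment scores by learning outcome and reports the share of students meeting a criterion. The file name, column names, and the 70% criterion are assumptions, not a required format.

    import csv
    from collections import defaultdict

    CRITERION = 70.0  # assumed minimum percentage score counted as "met"

    scores_by_outcome = defaultdict(list)
    with open("plo_scores.csv", newline="") as f:   # assumed columns: outcome, score
        for row in csv.DictReader(f):
            scores_by_outcome[row["outcome"]].append(float(row["score"]))

    for outcome, scores in sorted(scores_by_outcome.items()):
        met = sum(1 for s in scores if s >= CRITERION)
        pct_met = 100 * met / len(scores)
        avg = sum(scores) / len(scores)
        print(f"{outcome}: n={len(scores)}, mean={avg:.1f}, "
              f"{pct_met:.0f}% met the {CRITERION:.0f}% criterion")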
Organizing Your Results
The simplest method is most likely a matrix that displays your program learning outcomes, how
they were assessed, what the minimum performance criteria were, whether improvements were
deemed necessary, and what types of improvements were planned. The matrix on the next page
is for illustration purposes only, but may provide ideas on how to organize your results, and
report the evaluation and improvement actions. Note that in electronic documents links to other
supporting documents and evidence can be inserted. This allows one to create a concise
“executive summary” that links to more detailed information. This approach works well if the
links are bookmarks to other sections in the same document or to documents stored in an
accessible location.
The Provost’s Office is at this time (September 2008) pursuing an online reporting tool called
Digital Measures that may simplify data entry and reporting of the results. Digital Measures
converts entered data into MS-Word reports that can then be edited and commented upon.
Below are two examples of how one could conceptualize and organize the results of the analyses
relative to the program learning outcomes. Please note that when Digital Measures is fully
functional the approach illustrated below may no longer be necessary or the accepted way of
reporting the results of your assessment process.
The Provost’s Office aims to provide instructions on how to use Digital Measures and examples
of data entries and reports before the conclusion of the fall 2008 semester and prior to the start of
data collection in the spring of 2009.
Example 1: Program Learning Outcomes Assessment and Review Matrix (based on Eder, 2004)

Columns: Program Goals | Expectation for Satisfactory Performance | Where, When, and How Monitored; Assessment Tools | Outcome of Analysis | Program Follow-up

Academic Knowledge and Reasoning at BS level (Application, Analysis, Synthesis, Evaluation)
Expectation: Satisfactory performance: 85% of enrolled students achieved > 70% on specific criteria for learning outcomes in KIN courses; pass rate on TExES > 90%.
Monitored with: PETE courses' learning outcome assessments; annual TExES certification exam.
Outcome of analysis: [Bar chart: Summary of Skills; Percent Met Standards for Teamwork, SRLC, Tech fluency]
Follow-up: ____ None required   _X__ Follow-up required (link to page that outlines the actions to be taken, with timelines)

Skills / competencies (teaching preparation and execution skills; self-regulated learning competencies; cooperative learning, social skills, and technological fluency; effective use of instructional technology)
Expectation: Satisfactory performance: 85% of enrolled students achieved > 70% on criteria: test performance, rubric for SRLC, rubric for teamwork and social skills.
Monitored with: PETE courses' learning outcome assessments; performance during internships; professional portfolio.
Outcome of analysis: [Bar chart: Summary of Skills; Percent Met Standards for Teamwork, SRLC, Tech fluency]
Follow-up: ____ None required   _X__ Follow-up required (link to page that outlines the actions to be taken, with timelines)

Values and Attitudes (engaged professionally; engaged in a physically active, healthy lifestyle; contributes to community; positive work habits)
Satisfaction criteria: > 70% of majors are members of the KIN Club; 100% of PETE majors are members of TAHPERD; > 75% of students received satisfactory rubric scores on the portfolio; > 50% satisfied or better on exit and other surveys.
Assessment data: KIN Club and TAHPERD membership as a percentage of registered majors; portfolios and reflections; exit survey and/or interview; student satisfaction survey; clinical site assessments and surveys.
Outcome of analysis: [Bar chart: Summary of Values and Attitudes; Percent Met Standards for Healthy Lifestyle, Citizenship, Integrity, Work habits]
Follow-up: _X__ None required   ____ Follow-up required
Step 6. Using the Evaluation Results: Closing the Loop!
Evaluation and a determination of how well the program learning outcomes were achieved by the
students can only occur when you are able to compare the actual data to a predetermined target.
At this stage you put a value on the results of your analysis to determine how well the program
achieved its goals. If the results suggest you missed the targets set, and students performed below
expectations on one of the program learning outcomes, you should be able to “drill down” into
the measurement process and tools used for that learning outcome, the learning experiences
students were offered, the pedagogy used, and other variables to determine where improvements
should be introduced.
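The comparison described above can be illustrated with a short Python sketch that sets each outcome's measured result against its predetermined target and flags shortfalls for follow-up. The outcome names, targets, and results below are invented for illustration only.

    targets = {   # target: percent of students expected to meet the criterion
        "LO 1 - written communication": 85,
        "LO 2 - quantitative reasoning": 85,
        "LO 3 - oral presentation": 80,
    }
    results = {   # measured: percent of students who actually met the criterion
        "LO 1 - written communication": 88,
        "LO 2 - quantitative reasoning": 72,
        "LO 3 - oral presentation": 81,
    }

    for outcome, target in targets.items():
        actual = results[outcome]
        flag = "follow-up required" if actual < target else "target met"
        print(f"{outcome}: target {target}%, actual {actual}% -> {flag}")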
Evaluation could lead to changes in many aspects of a program such as the learning outcomes,
pedagogical practices, the measurement tools and methods used to document student
performance, and the information collected. Consequently, it is important that the tools and
methods used provide the capability to accurately identify the practices that need improvement.
Evaluation, Reflection, and Taking Action
To gain a rich perspective on the data, assessment results should be disseminated widely and a
variety of stakeholders should be engaged in the evaluation to ensure the obvious is not
overlooked. Don't forget to ask the students; they experienced the assessment plan and can give
the "inside out" point of view that faculty will never be able to see. Use a thorough evaluation as a
basis for targeted improvements in the program. Once the various perspectives have been
compiled consider the following actions.
• Return to the program’s mission, goals, and outcomes. How do the results line up with
previous expectations? Did students do better or worse? Did you make incorrect assumptions or
use incorrect information during the previous planning stage? What are the outside
stakeholders telling you?
• Review the performance levels set earlier in the process. Were those expectations met? Are
the established standards adequate, or did you set the bar at the wrong height? What level of
performance is good enough for undergraduates, graduates?
• Evaluate the assessment instruments, rubrics, and methods. Are they optimally efficient and
effective in light of the questions you seek answers to? Are they the correct tools to use?
• Determine the kinds of corrective actions that will have the greatest potential to improve
student learning. In other words, try to identify the “low hanging fruit,” improvements that
do not require large amounts of resources but lead to significant increases in the quality of
student learning. Again, don’t forget to include the students’ voice in the deliberations.
• Clearly articulate what is to be done, by whom, by when and how data will be collected to
assess the impact. Make sure these actions are aimed directly at improving student learning.
• Determine the implications and consequences of the plan on department policies, curriculum,
resources allocations, faculty effort, the students’ experience of the program, etc. and
prioritize improvement actions based on high impact, low cost.
As the faculty decide on actions to improve student learning, they should set specific targets
wherever possible so they have goals to work towards. For example, if your discipline has a national
database of student performance measures (related to a certification exam for example), you can
compare your student performance with the national means. The faculty should also state what
proportion of the students should achieve a specific performance level. If previously measured
performance data on a learning outcome are available, they can be used as the baseline for setting
new targets for the next cycle.
Monitoring the Impact of the Changes and comparing them to Past Data
Upon implementation of changes the entire assessment cycle starts again. A carefully developed
assessment plan can likely be used to measure the impact of the changes. However, for a number
of different reasons, such as a change in learning outcomes, it may be necessary to make
modifications in the plan.
A well-designed assessment plan and careful evaluation of the results enables the faculty to make
targeted improvements or affirm current practices efficiently and effectively. This is the main
reason assessments are conducted.
Communicate Conclusions and the Actions of Improvement to be implemented
Honest communication to stakeholders, especially students, is critical in creating trust and
confidence that the program has their best interest in mind. Besides the students, the Dean will
want to know and so will the Provost, the accrediting bodies, future employers of your students,
high school seniors and other prospective students, as well as the community at large. The final
program report needs to tell them what happened, why it happened that way, what the faculty
learned from it, and what it intends to do differently to improve student learning.
The Department Chairs and Assessment Coordinator have the final responsibility for ensuring
that everyone on the teaching staff has had an opportunity to participate in the development of
the assessment plan, received the final copy, and understood the implications for their teaching
practice. Consider the following when communicating the results:
• Celebrate and publicize your successes. At UTEP we tend to forget to let people know what
we do well. Promote the program vigorously, but use accurate data and evidence to do so.
• Identify the shortcomings and don’t try to hide or minimize them, but present the actions you
will take to improve these weaknesses and explain what you expect of these improvements.
• Consider whether the results should be presented differently to different audiences such as
prospective students, the Dean and other administrators, the rest of the UTEP community,
and beyond.
• Avoid "data dumps," especially for lay audiences. Ask for assistance to format your outcomes
in an effective manner, especially if you are considering publishing a newspaper article and
placing the final report on your program’s website.
Step 7. Determine who will implement the Improvement Actions
Given that these are the program learning outcomes, the entire faculty bears responsibility for
helping students achieve them. However, the work to help students achieve the program learning
outcomes mostly occurs in the courses. The Curriculum Map helped you identify in which
courses students experience learning activities and tests related to the program learning outcomes.
Most likely, the program learning outcomes are covered in several courses so that responsibility
for helping students achieve acceptable performance falls on several faculty members. The
question of who does what, how, where, and when to improve the situation is an important one.
Clearly, unless one course is responsible for one program learning outcome, several faculty
members will have to shoulder the task of developing learning experiences and material pertinent
to the learning outcomes.
It is advisable for several faculty members who each teach a course or lead designated learning
activities related to a specific learning outcome to discuss what precisely each one does to help the
students learn. This discussion will help identify gaps and redundancies that can be eliminated to
better align the efforts in different course sections and maximize student learning. Faculty can
also ask colleagues at other institutions about their most effective strategies and best practices. In
addition, numerous disciplines now have extensive sources and sometimes specific journals
dealing with teaching and learning. These sources should be examined by a committee of faculty,
especially those who teach a sequence of courses focused on a select set of program learning
outcomes. A one-semester reading group of faculty and possibly students focused on identifying
best teaching practices in the discipline may be able to identify highly effective improvements
that could have major impacts on student learning.
Appendix A
The following rubric, borrowed from California Polytechnic State University, can be used to determine
the quality of a department or program's assessment plan. Note that this is a draft; it should not be
referenced, nor will it be used by the Provost’s Office to judge assessment plans. The evaluation of the
quality of assessment plans is performed by the Dean of each college and school.
Appendix B
The following rubric, also borrowed from California Polytechnic State University, can be used to
determine how well a department or program is progressing in the assessment plan development
process.
Bibliography
California Polytechnic State University: http://www.academics.calpoly.edu/accountability/. Documents
contributed by Dr. Anny Morrobel-Sosa.
Palomba, C. (1999). Assessment essentials: planning, implementing, and improving assessment in higher
education. San Francisco: Jossey-Bass.
Texas A&M University, Guidelines for Academic Program Review, August 2007. Retrieved August 25,
2008, from http://ogs.tamu.edu/forms/documents/AcadProgReviewGuidelines_2007.pdf
University of Central Florida, Program Assessment Handbook, June 2008. Retrieved August 25, 2008
from http://www.assessment.ucf.edu/
University of Delaware, Student Learning Outcomes Assessment Manual. Retrieved August 25, 2008
from http://www.assessment.udel.edu/The%20Assessment%20Office/manual.html
University of Texas at Dallas, Assessment Workbook, March 2006. Retrieved August 25, 2008, from
http://www.utdallas.edu/oee/assessment/index.html
University of Virginia, Assessment Guide. Retrieved August 30, 2008, from
http://www.web.virginia.edu/IAAS/assessment/assessment.shtm
The following annotated list of websites is a selection of a larger list developed by the University
of Texas at Dallas, SACS Leadership Team for their UTD Assessment Workbook: A Resource for
Departments and Programs, March 2006. URLs were checked for functionality on September
16, 2008.
The SACS site and several others of universities accredited by SACS
• SACS: Principles of Accreditation — These are the principles accredited universities must
meet for reaffirmation: http://www.sacscoc.org/principles.asp
• Auburn University: http://www.auburn.edu/administration/specialreports/sacsdocuments.html
• Georgia Tech: http://www.assessment.gatech.edu/SACS/index.php
• UNC Chapel Hill: http://www.unc.edu/inst_res/SACS/sacs.html
• Texas A&M International University: http://www.tamiu.edu/sacs/
Helpful guidance and information on assessment
• http://www.bridgew.edu/AssessmentGuidebook/ This is an excellent guide on assessment
developed by the faculty at Bridgewater State University. Particularly helpful are Chapter 4:
Establishing Learning Outcomes, Chapter 5: Assessment Tools (good explanations of various
indirect and direct assessment methods with clear examples), and examples of Rubrics.
• http://www.skidmore.edu/administration/assessment/faq.htm#academicassessment In this
area of Skidmore College's website, you will find brief answers to questions about the
rationale of assessment. The jargon is explained clearly and concisely. Two questions and
answers might be of interest and helpful: The question about primary trait analysis (also
related to developing rubrics) and the question about the connection between assessment of
majors and assessment of general education.
• http://www.k-state.edu/assessment/manual/index.htm Kansas State's Office of Assessment
has one of the best and most user-friendly faculty manuals available. A brief, one-page tips
sheet is found under the "Assessment Tips" link.
• http://www2.acs.ncsu.edu/UPA/assmt/resource.htm This is a meta-site for assessment
resources.
Sites for Bloom’s Taxonomy for use in writing learning outcomes:
There is a new version of Bloom’s taxonomy. Departments and programs should use it instead of
the original version.
• http://www.coe.uga.edu/epltt/bloom.htm#end This is the most thorough and up-to-date
explanation of Bloom's Taxonomy on the internet. There is an animation that shows verbs
along with types of assessment for each level of Bloom's classification system. In addition,
readers can click on a PowerPoint "test" to practice using this classification system. It shows
a graphic of the old and new versions of the taxonomy.
• http://eprentice.sdsu.edu/J03OJ/miles/Bloomtaxonomy(revised)1.htm The chart at the end of
this site is very helpful. It includes activities and products that students could do based on
Bloom's levels.
• http://www.kurwongbss.qld.edu.au/thinking/Bloom/blooms.htm This site is designed for
public school teachers but it contains many links to explanations of Bloom's revised levels as
well as excellent science examples in different areas.
Help with writing learning objectives
• http://www.utsouthwestern.edu/vgn/images/portal/cit_56417/53/40/416402Creating_Learnin
g_Objectives.pdf This PowerPoint presentation from UT Southwestern provides a helpful
introduction to learning objectives.
• http://edweb.sdsu.edu/Courses/EDTEC540/objectives/ObjectivesHome.html This site
contains a helpful tutorial on writing learning objectives.
• http://www.missioncollege.org/workforce/work_experience/learning.html Although the
Mission College website is oriented toward writing objectives for workforce purposes, it
provides helpful and easy-to-follow steps that could easily be used for classroom learning.
• http://tlt.its.psu.edu/suggestions/research/Write_Objectives.shtml This website from Penn
State gives details about common problems faculty have when writing learning objectives.
• http://captain.park.edu/facultydevelopment/writing_learning_objectives.htm The resource
links and references at the end of this website are helpful, and it links to many other resources
related to teaching and learning.