CNM HANDBOOK FOR
OUTCOMES ASSESSMENT
Central New Mexico Community College
Ursula Waln, Senior Director of Outcomes and Assessment
2015-16 Edition
Assessment You Can Use!
Contents
Overview
    Purpose of this Handbook
    Why CNM Conducts Formal Learning Outcomes Assessment
What Constitutes a Good Institutional Assessment Process?
    Table 1: Institutional Effectiveness Goals
Introduction to the CNM Student Outcomes Assessment Process
    The Greater Context of CNM Learning Outcomes Assessment
        Figure 1: CNM Student Learning Outcomes Assessment Context Map
        Table 2: Matrix for Continual Improvement in Student Learning Outcomes
        Figure 2: Alignment of Student Learning Outcomes to CNM Mission & Values
    Course SLOs
    Program and Discipline SLOs
    General Education SLOs
    Embedded Outcomes
Connections between Accreditation & Assessment
    Institutional Accreditation
    Program Accreditation
Developing Student Learning Outcome Statements
    Table 3: Sample Program SLOs
    Figure 3: Bloom’s Taxonomy of the Cognitive Domain
    Figure 4: Bloom’s Taxonomies of the Affective and Psychomotor Domains
    Table 4: Action Verbs for Creating Learning Outcome Statements
    Figure 5: Webb’s Depth of Knowledge Model
    Table 5: Marzano’s Taxonomy
Designing Assessment Projects
    Why Measurement Matters
    Assessment Cycle Plans
    Alignment of Course and Program SLOs
        Figure 6: The CNM Assessment Process
        Table 6: A Model for SLO Mapping
        Table 7: Sample Rubric for Innovative Thinking in the Transformation of Ideas into Art Forms
    Developing an Assessment Focus at the Course Level
    Planning an Assessment Approach at the Course Level
        Table 8: Common Assessment Measures
        Table 9: Descriptors Related to Assessment Measures
        Table 10: Sample Likert-Scale Items
    Using Descriptive Rubrics to Make Assessment Coherent
        Figure 7: Using Rubrics to Pool Findings from Diverse Assessment Approaches
    Rubric Design
        Table 11: Sample Rubric for Assessment of Decision-Making Skill
    Using Formative Assessment to Reinforce Learning
Collecting Evidence of Learning
    Embedded Assessment
        Classroom Assessment Techniques (CATs)
    Collecting Evidence beyond the Classroom
    Sampling Student Learning Outcomes
    IRB and Classroom Research
Analyzing, Interpreting, and Applying Assessment Findings
    Analyzing Findings from Individual Measures
    Interpreting Findings
    Applying Findings at the Course Level
    Pooling and Applying Program Findings
Glossary
References
Appendix 1: CNM Assessment Cycle Plan Form
Appendix 2: Guide for Completion of Cycle Plan
Appendix 3: CNM Annual Assessment Report Form
Appendix 4: Guide for Completion of Assessment Report
Appendix 5: Rubric for Assessing Learning Outcomes Assessment Processes
Appendix 6: NMHED General Education Core Course Transfer Module Competencies
OVERVIEW
Purpose of this Handbook
This handbook was designed to assist CNM faculty and program leaders in assessing student learning outcomes
and applying the findings to optimize student learning.
Why CNM Conducts Formal Learning Outcomes Assessment
CNM is dedicated to ensuring that all academic courses and curricula meet the highest standards of relevance and
excellence. Thus, we are collectively committed to conducting ongoing, systematic assessment of student
learning outcomes across all areas of study. CNM’s assessment processes inform decisions at course, program,
and institutional levels. The resulting evidence-based changes help ensure that the education CNM students
receive remains first-rate and up-to-date.
Assessment of student learning progress toward achievement of expected course outcomes is a natural part of
the instructional process. However, the results of such assessment, in the absence of a formal, structured
assessment process, may or may not factor into decisions for change. Having an intentional, shared approach
that connects in-course assessment to broader program outcomes, documents and applies the findings, and
follows up on the impact of changes benefits the college, programs, instructors, and students.
As a publicly funded institution, CNM has a degree of responsibility to demonstrate accountability for the use
of taxpayer funds. As an educational institution, CNM needs to be able to demonstrate achievement of the
learning outcomes it claims to strive for or, if failing to do so, the ability to change accordingly. As an
institution accredited by the Higher Learning Commission (HLC), CNM must be able to demonstrate that it
regularly assesses student outcomes and systematically applies the information obtained from that assessment to
continually improve student learning. Accreditation not only confirms institutional integrity, but it also enables
CNM to offer students Financial Aid, courses that transfer readily to other institutions, and degrees that are
recognized by employers as valid. Most importantly, however, as an institution that strives to be its best, CNM
benefits from the ability of its faculty and administrators to make well-informed decisions.
Programs are improved through genuine appraisal of student learning when that appraisal is used to implement
well-considered enhancements. Assessment can help perpetuate successful practices, address obstacles to
students’ success, and encourage innovative strategies. Often, when a program’s faculty begins to develop
assessment methodologies related to program outcomes, the outcome statements themselves get refined and
better defined, and the program components become more coherently integrated. Evidence obtained through
assessment can also substantiate programs’ requests for external support, such as project funding, student
services, professional development, etc.
For faculty, assessment often leads to ideas for innovative instructional approaches and/or better coordination of
program efforts. Connecting classroom assessment to program assessment helps instructors clarify how their
teaching contributes to program outcomes and complements the instructional activities of their colleagues.
Active faculty engagement in program assessment develops breadth of perspective, which in turn facilitates
greater intentionality in classroom instruction, clearer communication of expectations, and more objective
evaluation of students’ progress.
Moreover, faculty who focus on observing and improving student learning, as opposed to observing and
improving teaching, develop greater effectiveness in helping students to change their study habits and to
become more cognizant of their own and others’ thinking processes.
In addition, the CNM assessment process provides ample opportunity for instructors to receive recognition for
successes, mentor their colleagues, assume leadership roles, and provide valuable college service.
Ultimately, students are the greatest beneficiaries of the CNM assessment process. Students receive a more
coherent education when their courses are delivered by a faculty that keeps program goals in mind, is attentive
to students’ learning needs, and is open to opportunities for improvement. Participation in assessment,
particularly when instructors discuss the process in class, helps students become more aware of the strengths
and weaknesses in their own approach to learning. In turn, students are able to better understand how to
maximize their study efforts; assume responsibility for their own learning; and become independent, lifelong
learners. And, students benefit from receiving continually improved instruction through top-notch programs at
an accredited and highly esteemed institution.
WHAT CONSTITUTES A GOOD INSTITUTIONAL ASSESSMENT PROCESS?
The State University of New York’s Council on Assessment developed a rubric that does a good job of
describing best practices in assessment (2015). The goals identified in the SUNY rubric, which are consistent
with what institutional accreditation review teams tend to look for, are listed in Table 1. The rubric is available
at http://www.sunyassess.org/uploads/1/0/4/0/10408119/scoa_institutional_effectiveness_rubric_2.0_.pdf.
Table 1: Institutional Effectiveness Goals
Plan
    Design: The institution has a formal assessment plan that documents an organized, sustained assessment process covering all major administrative units, student support services, and academic programs.
    Outcomes: Measurable outcomes have been articulated for the institution as a whole and within functional areas/units, including for courses and programs and nonacademic units.
    Alignment: More specific subordinate outcomes (e.g., course) are aligned with broader, higher-level outcomes (e.g., programs) within units and these are aligned with the institutional mission, goals, and values.
    Resources: Financial, human, technical, and/or physical resources are adequate to support assessment.
    Culture: All members of the faculty and staff are involved in assessment activities.
Implementation
    Data Focus: Data from multiple sources and measures are considered in assessment.
    Sustainability: Assessment is conducted regularly, consistently, and in a manner that is sustainable over the long term.
    Monitoring: Mechanisms are in place to systematically monitor the implementation of the assessment plan.
    Communication: Assessment results are readily available to all parties with an interest in them.
Impact
    Strategic Planning and Budgeting: Assessment data are routinely considered in strategic planning and budgeting.
    Closing the Loop: Assessment data have been used for institutional improvement.
Source: The SUNY Council on Assessment
INTRODUCTION TO THE CNM STUDENT OUTCOMES ASSESSMENT PROCESS
CNM’s faculty-led Student Academic Assessment Committee (SAAC) drives program assessment. Each of
CNM’s six schools has two voting faculty representatives on the committee who bring their school’s
perspectives to the table and help coordinate the school’s assessment reporting. In addition, SAAC includes four
ex officio members, one each from the College Curriculum Committee (CCC), the Cooperative for Teaching
and Learning (CTL), the Deans Council, and the Office of Planning and Institutional Effectiveness (OPIE). The
OPIE member is the Senior Director of Outcomes and Assessment, whose role is facilitative.
SAAC has two primary responsibilities: 1) providing a consistent process for documenting and reporting
outcomes results and actions taken as a result of assessment, and 2) facilitating a periodic review of the learning
outcomes associated with the CNM General Education Core Curriculum.
SAAC faculty representatives have put in place a learning assessment process that provides consistency and
facilitates ongoing improvement while respecting disciplinary differences, faculty expertise, and academic
freedom. This process calls for all certificate and degree programs, general education areas, and discipline areas
to participate in what is referred to for the sake of brevity as program assessment. The goal is assessment of
student learning within programs, not assessment of programs themselves (a subtle but important distinction).
The faculty of each program/area develops and maintains its own assessment cycle plan, outlining when and
how all of the program’s student learning outcomes (SLOs) will be assessed over the course of five years.
SAAC asks that the cycle plan, which can be revised as needed, address at least one SLO per year. And, SAAC
strongly recommends assessing each SLO for two consecutive years so that changes can be made on the basis of
the first year’s findings, and the impact of those changes can be assessed during the second year. At the end of
the five-year cycle, a new 5-year assessment cycle plan is due.
Program faculty may use any combination of course-level and/or program-level assessment approaches they
deem appropriate to evaluate students’ achievement of the program-level learning outcomes. For breadth and
depth, multiple approaches involving multiple measures are encouraged.
A separate annual assessment reporting form (see Appendix 3) is used to summarize the prior year’s assessment
findings and describe changes planned on the basis of the findings. This form can be prepared by any
designated representative within the school. Ideally, findings from multiple measures in multiple courses are
collaboratively interpreted by the program faculty in a group meeting prior to completion of the report.
For public access, SAAC posts assessment reports, meeting minutes and other information at
http://www.cnm.edu/depts/academic-affairs/saac. For access by full-time faculty, SAAC posts internal
documents in a Learning Assessment folder on a K drive.
In addition, the Senior Director of Outcomes and Assessment publishes a monthly faculty e-newsletter called
The Loop, offers faculty workshops, and is available to assist individual faculty members and/or programs with
their specific assessment needs.
The Greater Context of CNM Learning Outcomes Assessment
CNM learning outcomes assessment (a.k.a., program assessment) does not operate in isolation. The diagram in
Figure 1 illustrates the placement of program assessment among interrelated processes that together inform
institutional planning and budgeting decisions in support of improved student outcomes.
Figure 1: CNM Student Learning Outcomes Assessment Context Map
The CNM General Education Core Curriculum is a group of broad categories of educational focus, called “areas”
(such as Communications), each with an associated set of student learning outcomes and a list of courses that
address those outcomes. For example, Math is one of the CNM Gen Ed areas. “Solve various kinds of equations,
simplify expressions, and apply formulas” is one of the Math student learning outcomes. And, MATH 1315,
College Algebra, is a course that may be taken to meet the CNM Math Gen Ed requirement.
Similarly, the New Mexico General Education Core Course Transfer Curriculum is a group of areas, each with
an associated set of student learning outcomes – which the New Mexico Higher Education Department
(NMHED) refers to as “competencies” – and a list of courses that address those outcomes. CNM currently has
126 of its own Gen Ed courses included in the State’s transfer core. This is highly beneficial for CNM’s
students because these courses are guaranteed transfer between New Mexico postsecondary public institutions.
As part of the agreement to have these courses included in the transfer core, CNM is responsible for continuous
assessment in these courses and annual reporting to verify that students achieve the State’s competencies. The
NMHED’s competency statements are influenced to some degree by input from CNM. And, the core
competencies are taken into consideration as CNM establishes and plans assessment of its own general
education outcomes. The CNM Gen Ed outcomes are cross-walked to the State competencies for reporting
purposes. (For more detail on the NM Gen Ed Core, see Appendix 6.)
Specific course outcomes within the programs and disciplines, along with the embedded outcomes of critical
thinking and life skills/teamwork, provide the basis for program and discipline outcomes assessment. Program
statistics (enrollment and completion rates, next-level outcomes, etc.) also inform program-level assessment.
Assessment findings, in turn, inform curricular decisions.
While learning outcomes assessment is separate and distinct from program review (which is an annual viability
study that looks primarily at program statistics [such as enrollment, retention, and graduation rates] and
curricular and job market factors), assessment does inform the program review process through its influence on
programming, curricular, instructional, and funding decisions. By keeping the focus of assessment on student
learning rather than demonstration of program effectiveness, the indirect, informative relationship between
program-level learning outcomes assessment and program viability studies enables CNM faculty to apply
assessment in an unguarded manner, to explore uncertain terrain, and to acknowledge findings openly. Keeping
the primary focus of assessment on improvement, versus program security, helps to foster an ethos of inquiry,
scholarly analysis, and civil academic discourse around assessment.
Along with program assessment, a variety of other assessment processes contribute synergistically to ongoing
improvement in student learning outcomes at CNM. Table 2 describes some of the key assessment processes
that occur regularly, helping to foster the kind of institutional excellence that benefits CNM students.
Table 2: Matrix for Continual Improvement in Student Learning Outcomes
For a full-size, more legible version of the above matrix, see the SAAC SharePoint site at
https://share.cnm.edu/academicaffairs/SAAC/SitePages/Home.aspx.
As illustrated in Figure 2 below, outcomes assessment at CNM is integrally aligned to the
college’s mission and values. CNM’s mission of “Changing Lives, Building Community” points the way, and
course instruction provides the foundation for realization of that mission.
In the process of achieving course learning outcomes, students develop competencies specific to their programs
of study. Degree-seeking students also develop general education competencies. The CNM assessment
reporting process focuses on program-level and Gen Ed learning outcomes, as informed by course-level
assessment findings and, where appropriate, program-level assessment findings.
Figure 2: Alignment of Student Learning Outcomes to CNM Mission & Values
(The figure depicts CNM’s mission, “Changing Lives, Building Community,” supported by the college’s values: Be Caring, Be Courageous, Be Ethical, Be Connected, Be Inspiring, and Be Exceptional.)
Course SLOs
At CNM, student learning outcome statements (which describe what competencies should be achieved upon
completion of instruction) and the student learning outcomes themselves (what is actually achieved) are both
referred to as SLOs. Most CNM courses have student learning outcome statements listed in the college’s
curriculum management system. (Exceptions include special topics offerings and individualized courses.) The
course outcome statements are directly aligned to program outcome statements, general education outcome
statements, and/or discipline outcome statements. While course-specific student learning outcomes are the focus
of course-level assessment, the alignments to overarching outcome statements make it possible to apply course-level findings in assessment of the overarching outcomes (i.e., program, Gen Ed, or discipline outcomes).
Program and Discipline SLOs
At CNM, the word program usually refers to areas of study that culminate in degrees (AA, AS, or AAS) or
certificates. The designation of discipline is typically reserved for areas of study that do not lead to degrees or
certificates, such as the College Success Experience, Developmental Education, English as a Second Language,
English for Speakers of Other Languages, and GED Preparation. Discipline, however, does not refer to general
education, which has its own designation, with component general education areas. Note, however, that the
word program is frequently used as short-hand to refer to degree, certificate, general education, and discipline
areas as a group – as in program assessment.
Each program and discipline has student learning outcome statements (SLOs) that represent the culmination of
the component course SLOs. Program and discipline SLOs are periodically reviewed through program
assessment and/or, where relevant, through program accreditation review processes and updated as needed.
General Education SLOs
With the exception of Computer Literacy, each distribution area of the CNM General Education Core
Curriculum has associated with it a group of courses from which students typically can choose (though some
programs require specific Gen Ed courses). Computer Literacy has only one course option (though some
programs waive that course under agreements to develop the related SLOs through program instruction). Each
of the courses within a Gen Ed area is expected to develop all of that area’s SLOs. And, each Gen Ed course
should be included at least once in the area’s 5-year assessment cycle plan.
The vast majority of CNM’s general education courses have been approved by the New Mexico Higher
Education Department (NMHED) for inclusion in the New Mexico Core Course Transfer Curriculum (a.k.a.,
the State’s transfer core). Inclusion in this core guarantees transfer toward general education requirements at
any other public institution of higher education in New Mexico. To obtain and maintain inclusion in the State’s
transfer core, CNM agrees to assess and report on student achievement of associated State competencies in each
course (See Appendix 6). However, because the NMHED permits institutions to apply their own internal
assessment schedules toward the State-level reporting, it is not necessary that CNM report assessment findings
every year for every course in the State’s transfer core. CNM Gen Ed areas may apply the CNM 5-year
assessment cycle plan, so long as every area course is included in the assessment reporting process at some
point during the 5-year cycle.
The current CNM general education SLOs have been cross-walked to the State’s core competencies to facilitate
application of internal assessment to fulfill the State’s assessment reporting requirements. The CNM Senior
Director of Outcomes and Assessment currently prepares the State reports by cross-walking information
provided in the CNM annual assessment reports to the State’s competencies.
In the case of IT 1010, which is included within the CNM General Education Core Curriculum but not the
State’s transfer core, assessment reporting is expected to address the CNM area SLOs only.
Note: With the exception of Computer Literacy (which is a CNM-specific area), the CNM general education
areas and outcomes will be revised, effective with the 2016-2018 catalog, to directly correspond to the areas and
competencies described by the New Mexico Core Course Transfer Curriculum. For assessment conducted in the
2015-2016 academic year, areas have the option of adopting the revised outcomes early.
CNM’s General Education Core Curriculum has two separate but related sets of areas and outcomes: one for
Associate of Arts (AA) and Associate of Science (AS) degrees, and one for Associate of Applied Science
(AAS) degrees. For AA and AS degrees, 35 credits are required; whereas, for the AAS degrees, 15 credits are
required. Following are the current student learning outcome statements associated with each area:
CNM General Education SLOs for AA/AS Degrees
Communication
1. Produce audience appropriate communication that displays consideration of ethical principles and
diverse points of view.
2. Communicate clearly, concisely, and with purpose in oral and written form.
3. Apply standard oral and written English in academic and workplace communication.
4. Analyze, evaluate, and appropriately apply oral and written communication.
5. Identify, categorize, evaluate, and cite multiple resources necessary to produce projects, papers, or
performances.
Math
1. Solve various kinds of equations, simplify expressions, and apply formulas.
2. Demonstrate computational skills with and without the use of technology.
3. Generate and interpret a variety of graphs and/or data sets.
4. Demonstrate problem solving skills within the context of mathematical applications.
5. Demonstrate ability to write mathematical explanations using appropriate definitions and symbols.
Lab Sciences
1. Employ critical thinking skills to judge the validity of information from a scientific perspective.
2. Apply the scientific method to formulate questions, analyze information/data and draw
conclusions.
3. Properly operate laboratory equipment to collect relevant and quality data.
4. Utilize mathematical techniques to evaluate and solve scientific problems.
5. Communicate effectively about scientific ideas and topics, in oral and/or written formats.
6. Relate science to personal, social or global impact.
Social/Behavioral Sciences
1. Analyze relevant issues utilizing concepts and evidence from the social/behavioral sciences.
2. Evaluate alternative explanations of social/behavioral phenomena with regard to evidence and
scientific reasoning.
3. Identify research methods used in the social/behavioral sciences.
4. Describe how the social context can affect individual behavior, and how individual behavior can
affect the social context.
5. Contrast the implications of individual choices from individual, community, and global
perspectives.
Humanities & Fine Arts
1. Distinguish historical periods and respective cultural developments from a global perspective.
2. Demonstrate an ability to understand, analyze, and synthesize concepts logically based on written
and verbal communication.
3. Recognize how culture, history, politics, art, and religion impact society.
4. Participate in and/or critically evaluate the arts.
Computer Literacy
1. Demonstrate knowledge of basic computer technology and tools
2. Use software applications to produce, format, analyze and report information to communicate
and/or to solve problems
3. Use internet tools to effectively acquire desired information
4. Demonstrate the ability to create and use various forms of electronic communication adhering to
contextually appropriate etiquette
5. Demonstrate the ability to create, name, organize, save and retrieve data and/or information in an
electronic file management system
CNM General Education SLOs for AAS Degrees
Written Communication
1. Communicate clearly and appropriately to specific audiences in a logical manner
2. Ethically identify, categorize, evaluate and cite multiple resources to produce projects, papers or
performance
3. Apply standard English in academic and workplace communication to produce substantially error-
free prose in written form (i.e. follow basic grammar rules and punctuation)
4. Analyze, evaluate and appropriately apply oral and written communication
Math
1. Interpret and write mathematical explanations using appropriate definitions and symbols
2. Solve various kinds of equations, simplify expressions and apply formulas
3. Ability to compute with or without the use of technology
4. Solves problems within the context of mathematical applications
Human Relations
1. Describe how the socio-cultural context affects behavior and how behavior affects the socio-
cultural context
2. Identify how individual perspectives and predispositions impact others in social, workplace and
global settings
Computer Literacy
1. Demonstrate knowledge of basic computer technology and tools
2. Use software applications to produce, format, analyze and report information to communicate
and/or to solve problems
3. Use internet tools to effectively acquire desired information
4. Demonstrate the ability to create and use various forms of electronic communication adhering to
contextually appropriate etiquette
5. Demonstrate the ability to create, name, organize, save and retrieve data and/or information in an
electronic file management system
As noted, general education assessment cycle plans should include assessment within all area courses that serve
as Gen Ed options. Because this can mean conducting assessment in a significant number of courses, it is
recommended that sampling techniques be employed to collect evidence that is representative without
overburdening the faculty. See Collecting Evidence of Learning for further information on sampling techniques.
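As a purely hypothetical illustration (not CNM policy or data), the short Python sketch below shows one simple way an area might draw a random sample of course sections for artifact collection; the section labels and sample size are invented for the example.

    # Hypothetical sketch: simple random sampling of course sections for
    # assessment evidence. Section identifiers and the sample size are invented;
    # each area decides what and how much to sample.
    import random

    sections = ["Course A-01", "Course A-02", "Course A-03",
                "Course B-01", "Course B-02", "Course C-01"]

    random.seed(2016)                     # fixed seed so the draw can be reproduced
    sampled = random.sample(sections, 3)  # select 3 sections without replacement
    print("Sections selected for artifact collection:", sampled)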
Embedded Outcomes
At the end of both the AA/AS and the AAS general education outcomes lists posted on the college web site is a
section addressing the embedded outcomes of Critical Thinking and Life Skills/Teamwork. These outcomes are
expected to be developed within the existing coursework of degree and certificate programs.
Note: These embedded areas will no longer be included in the revised Gen Ed areas of the 2016-18 catalog.
Critical Thinking
1. Identify and distinguish elements of ideas and/or practical approaches to problems.
2. Distinguish fact from non-factual opinion.
3. Evaluate theories, logical approaches, and protocols related to problems.
4. Apply appropriate processes to resolve problems.
Life Skills/Teamwork
1. Demonstrate a positive work ethic.
2. Demonstrate appropriate professional behavior.
3. Manage time efficiently.
4. Communicate effectively.
5. Demonstrate the ability to work independently and collaboratively.
Just as development of these SLOs is embedded within program instruction, their assessment is currently
embedded in program assessment. For reporting purposes, programs are asked to describe how the outcomes are
embedded within their program outcomes. With the connections to program assessment thus established, these
embedded outcomes do not require separate assessment. Note that assessment reports addressing general
education and discipline areas do not need to demonstrate how the Critical Thinking and Life Skills/Teamwork
outcomes are embedded. (Enter N/A.)
CONNECTIONS BETWEEN ACCREDITATION & ASSESSMENT
Institutional Accreditation
CNM is accredited by the Higher Learning Commission (HLC), one of six regional institutional accreditors in
the U.S. Information regarding the accreditation processes and criteria is available at www.hlcommission.org.
HLC accreditation criteria that have particular relevance to assessment of student outcomes are listed below:
3.A. The institution’s degree programs are appropriate to higher education.
2. The institution articulates and differentiates learning goals for its undergraduate, graduate, postbaccalaureate, post-graduate, and certificate programs.
3. The institution’s program quality and learning goals are consistent across all modes of delivery and
all locations (on the main campus, at additional locations, by distance delivery, as dual credit,
through contractual or consortial arrangements, or any other modality).
3.B. The institution demonstrates that the exercise of intellectual inquiry and the acquisition, application, and
integration of broad learning and skills are integral to its educational programs.
2. The institution articulates the purposes, content, and intended learning outcomes of its undergraduate
general education requirements. The program of general education is grounded in a philosophy or
framework developed by the institution or adopted from an established framework. It imparts broad
knowledge and intellectual concepts to students and develops skills and attitudes that the institution
believes every college-educated person should possess.
3. Every degree program offered by the institution engages students in collecting, analyzing, and
communicating information; in mastering modes of inquiry or creative work; and in developing
skills adaptable to changing environments.
3.E. The institution fulfills the claims it makes for an enriched educational environment.
2. The institution demonstrates any claims it makes about contributions to its students’ educational
experience by virtue of aspects of its mission, such as research, community engagement, service
learning, religious or spiritual purpose, and economic development.
4.B. The institution demonstrates a commitment to educational achievement and improvement through
ongoing assessment of student learning.
1. The institution has clearly stated goals for student learning and effective processes for assessment of
student learning and achievement of learning goals.
2. The institution assesses achievement of the learning outcomes that it claims for its curricular and co-curricular programs.
3. The institution uses the information gained from assessment to improve student learning.
4. The institution’s processes and methodologies to assess student learning reflect good practice,
including the substantial participation of faculty and other instructional staff members.
5.C. The institution engages in systematic and integrated planning.
2. The institution links its processes for assessment of student learning, evaluation of operations,
planning, and budgeting.
Program Accreditation
Many of CNM’s technical and professional programs are accredited by field-specific accreditation bodies.
Maintaining program accreditation contributes to program quality by ensuring that instructional practices reflect
current best practices and industry standards. Program accreditation is essentially a certification of instructional
competency and degree credibility. Because in-depth program assessment is typically a major component of
accreditation reporting, the CNM assessment report accommodates carry-over of assessment summaries from
accreditation reports. In other words, reporters are encouraged to minimize unnecessary duplication of reporting
efforts by copying and pasting write-ups done for accreditation purposes into the corresponding sections of the
CNM assessment report form.
DEVELOPING STUDENT LEARNING OUTCOME STATEMENTS
Student learning outcome statements (SLOs) identify the primary competencies the student should be able to
demonstrate once the learning has been achieved. SLOs derive from and reflect your teaching goals, so it is
important to start with clearly articulated teaching goals. What do you want to accomplish? From that, you can
develop SLOs that identify what your students need to learn to do. To be used for assessment, SLO statements
need to be measurable. For this reason, the current convention is to use phrases that begin with action verbs. Each
phrase completes an introductory statement that includes “the student will be able to.” For example:
Table 3: Sample Program SLOs
Upon completion of this program, the student will be able to:
• Explain the central importance of metabolic pathways in cellular function.
• Use mathematical methods to model biological systems.
• Integrate concepts drawn from both cellular and organismal biology with explanations of evolutionary adaptation.
• Develop experimental models that support theoretical concepts.
• Perform laboratory observations using appropriate instruments.
• Critique alternative explanations of biological phenomena with regard to evidence and scientific reasoning.
At the course level as well as the program level, SLOs should focus on competencies that are applicable beyond
college. Rather than address specific topics or activities in which the students will be expected to engage,
identify the take-aways (up to 10 maximum) that will help students succeed later in employment and/or in life.
Tips:
• Focus on take-away competencies, not just activities or scores.
• Focus on student behavior/action versus mental processes.
• Choose verbs that reflect the desired level of sophistication.
• Avoid compound components unless their synthesis is needed.
Cognitive processes such as understanding
and affective processes such as appreciation
are difficult to measure. So, if your goal is to
have students understand something, ask
yourself how they can demonstrate that
understanding. Will they explain it,
paraphrase it, select it, or do something else?
Is understanding all you want, or do you also
want students to apply their understanding in
some way? Similarly, if your goal is to
develop student appreciation for something,
how will students demonstrate that
appreciation? They could tell you they
appreciate it, but how would you know they
weren’t just saying what they thought you
wanted to hear? Perhaps if they were to
describe it, defend it, contrast it with
something else, critique it, or use it to create
something new, then you might be better
able to conclude that they appreciate it.
The verbs used in outcome statements carry
important messages about the level of skill
expected. Benjamin Bloom’s Taxonomy of
the Cognitive Domain is often used as a
guide for writing SLOs. Representing a
progression in sophistication, the taxonomy
is often depicted in a layered triangle,
showing foundational skills at the bottom.
Bloom originally presented his model in 1956
(Taxonomy). A former student of Bloom’s,
Lorin Anderson, revised the taxonomy in
2000, flipping the top two levels and
substituting verbs for Bloom’s nouns (A
Taxonomy). A side-by-side comparison is
provided in Figure 3. Both versions are
commonly called “Bloom’s Taxonomy.”
Periods at the Ends?
As a rule of thumb, periods are used at the
ends of items in numbered or bulleted lists
if each item forms a complete sentence
when joined to the introductory statement.
SLO statements fit this description.
Figure 3: Bloom’s Taxonomy of the Cognitive Domain (original and revised versions)
Figure 4: Bloom’s Taxonomies of the Affective and Psychomotor Domains
It is worth noting that Benjamin Bloom also developed taxonomies for the affective and psychomotor domains
(see Figure 4). Bloom’s taxonomies imply that the ability to perform the behaviors at the top depends upon
having attained prerequisite knowledge and skills through the behaviors in the lower rungs. Not all agree with
this concept, and the taxonomies have been used by many to discourage instruction directed at the lower level
skills. Despite debate over their validity, Bloom’s taxonomies can be useful references in selecting verbs for
outcome statements, describing stages of outcome development, identifying appropriate assessment methods,
and interpreting assessment information.
Following is a handy reference created and shared by Oregon State University, listing verbs corresponding to
the levels of the revised taxonomy. The taxonomy itself essentially consists of cognitive processes. The
associated outcome verbs represent more observable, measurable behaviors.
Table 4: Action Verbs for Creating Learning Outcome Statements
Other models have been developed as alternatives to Bloom’s taxonomies. While all such conceptual models
have their shortcomings, the schema they present may prove useful in the selection of appropriate SLO verbs.
For example, Norman Webb’s Depth of Knowledge, first published in 2002 and later illustrated as shown in
Figure 5 (from te@chthought), identifies four knowledge levels: Recall, Skills, Strategic Thinking, and
Extended Thinking (Depth-of-Knowledge). Webb’s model is widely referenced in relation to the Common Core
State Standards Initiative for kindergarten through 12th grade.
Figure 5: Webb’s Depth of Knowledge Model
In 2008, Robert Marzano and John Kendall published a “new taxonomy” that reframed Bloom’s domains as
Information, Mental Processes, and Psychomotor Procedures and described six levels of processing
information: Retrieval, Comprehension, Analysis, Knowledge Utilization, Meta-Cognitive System, and Self
System (Designing & Assessing Educational Objectives). In Marzano’s Taxonomy, shown in Table 5, the four
lower levels are itemized with associated verbs. The image was shared by the Adams County School District
of Colorado (wiki.adams):
Table 5: Marzano’s Taxonomy
In a recent publication, Clifford Adelman of the Institute for Higher Education Policy advocated for the use of
operational verbs in outcome statements, defining these as “actions that are directly observed in external
contexts and subject to judgment” (2015). The following list of verb categories is derived from that publication.
For effective learning outcomes, select verbs that:
1. Describe student acquisition and preparation of tools, materials, and texts of various types
access, acquire, collect, accumulate, extract, gather, locate, obtain, retrieve
2. Indicate what students do to certify information, materials, texts, etc.
cite, document, record, reference, source (v)
3. Indicate the modes of student characterization of the objects of knowledge or materials of
production, performance, exhibit
categorize, classify, define, describe, determine, frame, identify, prioritize, specify
4. Describe what students do in processing data and allied information
calculate, determine, estimate, manipulate, measure, solve, test
5. Describe the ways in which students format data, information, materials
arrange, assemble, collate, organize, sort
6. Describe what students do in explaining a position, creation, set of observations, or a text
articulate, clarify, explicate, illustrate, interpret, outline, translate, elaborate, elucidate
7. Fall under the cognitive activities we group under “analyze”
compare, contrast, differentiate, distinguish, formulate, map, match, equate
8. Describe what students do when they “inquire”
examine, experiment, explore, hypothesize, investigate, research, test
9. Describe what students do when they combine ideas, materials, observations
assimilate, consolidate, merge, connect, integrate, link, synthesize, summarize
10. Describe what students do in various forms of “making”
build, compose, construct, craft, create, design, develop, generate, model, shape, simulate
11. Describe the various ways in which students utilize the materials of learning
apply, carry out, conduct, demonstrate, employ, implement, perform, produce, use
12. Describe various executive functions students perform
operate, administer, control, coordinate, engage, lead, maintain, manage, navigate, optimize,
plan
13. Describe forms of deliberative activity in which students engage
argue, challenge, debate, defend, justify, resolve, dispute, advocate, persuade
14. Indicate how students valuate objects, experiences, texts, productions, etc.
audit, appraise, assess, evaluate, judge, rank
15. Reference the types of communication in which we ask students to engage:
report, edit, encode/decode, pantomime (v), map, display, draw/ diagram
16. Indicate what students do in groups, related to modes of communication
collaborate, contribute, negotiate, feed back
17. Describe what students do in rethinking or reconstructing
accommodate, adapt, adjust, improve, modify, refine, reflect, review
Rewriting Compounded Outcomes: Often in the process of developing assessment plans, people realize their
outcome statements are not quite as clear as they could be. One common discovery is that the outcome
statement actually describes more than one outcome. While there is no rule against compounding multiple
outcomes into one statement, doing so provides less clarity for students regarding their performance
expectations and makes assessing the outcomes more complicated. Therefore, if compounded outcomes actually
represent separate behaviors, it may be preferable to either rewrite them or create separate statements to
independently represent the desired behaviors.
If a higher-level behavior/skill essentially subsumes the others, the lower-level functions can be dropped from
the statement. This is a good revision option if a student would have to demonstrate the foundational skill(s) in
order to achieve the higher level of performance. For example, compare the following:
• Identify, evaluate, apply, and correctly reference external resources.
• Use external resources appropriately.
To use external resources appropriately, a student must identify potential resources, evaluate them for relevance
and reliability, apply them, and correctly reference them. Therefore, the second statement is more direct and
inclusive than the first. One could reasonably argue that the more detailed, step-by-step description
communicates more explicitly to students what they must do, but the second statement requires a more holistic
integration of the steps, communicating the expectation that the steps will be synthesized into an outcome that is
more significant than the sum of its components. Synthesis (especially when it involves evaluation) is a high-order cognitive function.
To identify compound outcomes, look for structures such as items listed in a series and/or coordinating
conjunctions. Let’s look at two examples with the compound structures in bold and the coordinating
conjunctions underlined:
• Integrate concepts drawn from both cellular and organismal biology with explanations of evolutionary adaptation.
• Use scientific reasoning to develop hypotheses and evaluate contradictory explanations of social phenomena.
In the first example above, the behavior called for is singular but requires that the student draw upon two
different concepts simultaneously to make sense of a third. This is a fairly sophisticated cognitive behavior
involving synthesis. The second example above describes two separate behaviors, both of which involve
scientific reasoning. One way to decide whether to split apart such statements is to consider whether both
components could be assessed together. In the first example above, assessment not only could be done using a
single demonstration of proficiency but probably should be done this way. For the second example, however,
assessment would require looking at two different outcomes separately. Therefore, that statement might be
better rewritten as two:
• Develop hypotheses based in scientific reasoning.
• Evaluate contradictory explanations of social phenomena through reasoned, scientific analysis.
DESIGNING ASSESSMENT PROJECTS
Why Measurement Matters
Assessment projects have two primary purposes:
1. To gauge student progress toward specific outcomes
2. To provide insight regarding ways to facilitate the learning process
Measurement approaches that provide summative success data (such as percentages of students passing an exam
or grade distributions) may be fine if your aim is to demonstrate achievement of a certain acceptable threshold
of proficiency. However, such measures alone often fail to provide useful insight into what happened along the
way that either helped or impeded the learning process. In the absence of such insights, assessment reporting
can become a stale routine of fulfilling a responsibility – something to be completed as quickly and painlessly
as possible. At its worst, assessment is viewed as 1) an evaluation of faculty teaching that presumes there’s
something wrong and it needs to be fixed, and 2) a bothersome process of jumping through hoops to satisfy
external demands for accountability. However, when assessment is focused on student learning rather than on
instruction, the presumption is not that there’s something wrong with the teaching, but rather that there’s always
room for improvement in student achievement of desired learning outcomes.
When faculty embrace assessment as a tool to get information about student learning dynamics, they are more
likely to select measurement approaches that yield information about how the students learn, where and why
gaps in learning occur, and how students respond to different teaching approaches. Faculty can then apply this
information toward helping students make their learning more efficient and effective.
So, if you are already assessing student learning in your class, what is to be gained from putting that assessment
through a formal reporting process? For one thing, the purposeful application of classroom assessment
techniques with the intention of discovering something new fosters breadth of perspective that can otherwise be
difficult to maintain. The process makes faculty more alert to students’ learning dynamics, inspires new
instructional approaches, and promotes a commitment to professional growth.
Frustration with assessment processes often stems from misunderstandings about what constitutes an
acceptable assessment measure. CNM’s assessment process accommodates diverse and creative approaches.
The day-to-day classroom assessment techniques that faculty already use to monitor students’ progress can
serve as excellent measurement choices. The alignment of course SLOs to program SLOs makes it feasible for
faculty to collaboratively apply their classroom assessment techniques toward the broader assessment of their
program, even though they may all be using different measures and assessing different aspects of the program
SLO. When faculty collectively strive to more deeply understand the conditions under which students in a
program learn best, a broader picture of program dynamics emerges. In such a scenario, opportunities to better
facilitate learning outcomes can more easily be identified and implemented, leading to greater program
efficiency, effectiveness, and coherence.
Assessment Cycle Plans
The Student Academic Assessment Committee (SAAC) asks program faculty to submit plans every five years
demonstrating when and how they will assess their program SLOs over the course of the next five years. For
newly approved programs/general education areas/discipline areas, the following must be completed by the 15th
of the following October:
• Enter student learning outcome statements (SLOs) in the college’s curriculum management system.
• Develop and submit to SAAC a 5-year assessment cycle plan (using the form provided by SAAC).
At least one outcome should be assessed each year, and all outcomes should be assessed within the 5-year cycle.
SAAC strongly recommends that each outcome be assessed for at least two consecutive years. Cycle
plans for general education areas should include all courses listed for the discipline within the CNM General
Education Core Curriculum. And, assessment within courses having multiple sections should include all
sections, whenever feasible.
All programs/areas are encouraged to assess individual SLOs across a variety of settings, use a variety of
assessment techniques (multiple measures), and employ sampling methods as needed to minimize the burden on
the faculty. (See Collecting Evidence of Learning.)
Choosing an assessment approach begins with considering what it is the program faculty wants to know. The
assessment cycle plan should be designed to collect the information needed to answer the faculty’s questions.
Much educational assessment begins with one or more of the following questions:
1. Are students meeting the necessary standards? (Standards-based assessment)
2. How do these students compare to their peers? (Benchmark comparisons)
3. How much is instruction impacting student learning? (Value-added assessment)
4. Are changes making a difference? (Investigative assessment)
5. Is student learning potential being maximized? (Process-oriented assessment)
Determining whether students are meeting standards usually involves summative measures, such as final
exams. Standards-based assessment relies on prior establishment of a target level of achievement, based on
external performance standards or internal decisions about what constitutes ‘good enough.’ Outcomes are
usually interpreted in terms of the percentage of students meeting expected proficiency levels.
Finding out how students compare to their peers typically involves comparing test or survey outcomes with
those of other colleges. Benchmark comparisons, based on normative group data, may be used when available.
Less formal comparisons may involve looking at summary data from specific institutions, programs, or groups.
Exploring how an instructional program or course is impacting student learning, often termed ‘value-added
assessment,’ is about seeing how much students are learning compared to how much they knew coming in.
This approach typically involves either longitudinal or cross-sectional comparisons, using the same measure
for both formative and summative assessment. In longitudinal analyses, students are assessed at the beginning
and end of an instructional unit or program (given a pre-test and a post-test, for example). The results can then
be compared on a student-by-student basis for calculation of mean gains. In cross-sectional analyses, different
groups of students are given the same assessment at different points in the educational process. For example,
entering and exiting students are asked to fill out the same questionnaire or take the same test. Mean results of
the two groups are then compared. Cross-sectional comparisons can save time because the formative and
summative assessments can be conducted concurrently. However, they can be complicated by demographic
and/or experiential differences between the groups.
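For those who like to see the arithmetic spelled out, the following sketch (in Python, with entirely hypothetical scores) illustrates the two comparisons described above: per-student gains averaged across a longitudinal pre-/post-test, and a difference of cohort means for a cross-sectional comparison. It is offered only as an illustration, not as a required tool or format.

# Hypothetical pre-/post-test scores for the same five students (longitudinal).
pre_scores = {"A": 52, "B": 61, "C": 45, "D": 70, "E": 58}
post_scores = {"A": 74, "B": 69, "C": 66, "D": 85, "E": 71}

# Per-student gains, then the mean gain across students.
gains = {s: post_scores[s] - pre_scores[s] for s in pre_scores}
mean_gain = sum(gains.values()) / len(gains)
print(f"Mean gain (longitudinal): {mean_gain:.1f} points")

# Cross-sectional comparison: entering and exiting cohorts take the same test.
entering = [48, 55, 60, 52, 63]
exiting = [71, 78, 69, 80, 74]
mean_difference = sum(exiting) / len(exiting) - sum(entering) / len(entering)
print(f"Difference of cohort means (cross-sectional): {mean_difference:.1f} points")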
Studying the effects of instructional and/or curricular changes usually involves comparing results obtained
following a change to those obtained with the same measure but different students prior to the change. This is
essentially an experimental approach, though it may be relatively informal. (See the CNM Handbook for
Educational Assessment.)
Exploring whether student learning is being maximized involves process-oriented assessment strategies. To
delve into the dynamics of the educational process, a variety of measures may be used at formative and/or
summative stages. The goal is to gain insights into where and why student learning is successful and where and
why it breaks down.
The five assessment questions presented above for program consideration can also be used independently by
faculty at the course level. Because it is built upon the alignment of course and program SLOs, as discussed in
the following section, the CNM assessment model allows faculty to use different approaches within their own
classes based on what they want to know. Putting together findings from varied approaches to conduct a
program-level analysis, versus having everyone use the same approach, yields a more complete portrait of
learning outcomes.
Alignment of Course and Program SLOs
As illustrated in Figure 6, the process of program assessment revolves around the program’s student learning
outcome statements. Although some assessment techniques (such as employer surveys, alumni surveys, and
licensing exams) serve as external measures of program SLOs, the internal assessment process often begins
with individual instructors working with course SLOs that have been aligned to the program SLOs.
The alignment between course and program SLOs allows flexibility in how course-level assessment is
approached. When faculty assess their students’ progress toward their course SLOs, they also assess their
students’ progress toward the overarching program SLOs. Therefore, each instructor can use whatever
classroom assessment techniques work best for his/her purposes and still contribute relevant assessment
findings to a collective body of evidence. If instructors are encouraged to ask their own questions about their
students’ learning and conduct their own assessment in relation to their course SLOs, not only will course-related assessment be useful for program reporting, but it will also be relevant to the individual instructor.
Depending upon how the program decides to approach assessment, faculty may be able to use whatever they are
already doing to assess their students’ learning. Or, they might even consider using the assessment process as an
opportunity to conduct action research. Academic freedom in assessment at the course level can be promoted by
putting in place clearly articulated program-level outcomes and defining criteria associated with those
outcomes. Descriptive rubrics (whether holistic or analytic), rating scales, and checklists can be useful tools for
clarifying what learning achievement looks like at the program level. Once a central reference is in place,
faculty can more easily relate their course-level findings to the program-level expectations.
Figure 6: The CNM Assessment Process
Clear alignment between course and program SLOs, therefore, is prerequisite for using in-class assessment to
support program assessment. Questions of assessment aside, most people would agree that having every course
SLO (in courses comprising a program) contribute in some way to students’ development of at least one
program SLO contributes to the overall integrity and coherence of the program. To map (a.k.a. crosswalk)
course SLOs to program SLOs, consider using a matrix such as the one below. Use shading (or X’s) in the
intersecting cells where the course outcomes align to the program outcomes.
Table 6: A Model for SLO Mapping

                  Program SLO #1    Program SLO #2    Program SLO #3
Course SLO #1
Course SLO #2
Course SLO #3
Course SLO #4
The more clearly defined (and agreed upon by program faculty) the program SLOs are, the easier it is to align
course SLOs to them and the more useful the pooling of assessment findings will be. For this reason, schools
are encouraged to carefully analyze their program SLOs and consider whether any of them consist of several
component skills (often referred to as elements). If so, identification of course SLO alignments (and ultimately
the entire assessment process) will be easier if each of the component skills is identified and separately
described. Consider developing a rubric, normed rating scale, or checklist for each such program SLO, clearly
representing and/or describing what achievement of the outcome looks like.
For example, the SLO “Demonstrate innovative thinking in the transformation of ideas into forms” contains two
component skills: innovative thinking and transforming ideas. Each of these elements needs to be described
separately before the broader learning outcome can be clearly understood and holistically assessed. Some
course SLOs may align more directly to development of innovative thinking while others may align more
directly to the transformation of ideas into forms.
The advantage of using a descriptive rubric for this sort of SLO deconstruction is that you can define several
progressive levels of achievement (formative stages) and clearly state what the student behavior looks like at
each level. Faculty can then draw connections between their own course SLOs (and assessment findings) and
the stages of development associated with the program SLO.
Following is a sample rubric, adapted from the AAC&U Creative Thinking Value Rubric (available at
www.aacu.org). See the section on Using Rubrics to Make Assessment Coherent for more information on
developing and getting the most out of rubrics.
Table 7: Sample Rubric for Innovative Thinking in the Transformation of Ideas into Art Forms

Innovative Thinking
  Advanced (3): Extends a novel or unique idea, question, format, or product to create new knowledge or a new application of knowledge.
  Intermediate (2): Creates a novel or unique idea, question, format, or product.
  Beginning (1): Reformulates available ideas.

Transforming Ideas
  Advanced (3): Transforms ideas or solutions into entirely new forms.
  Intermediate (2): Connects ideas in novel ways that create a coherent whole.
  Beginning (1): Recognizes existing connections among ideas or solutions.
Developing an Assessment Focus at the Course Level
Assessment becomes meaningful when it meets a relevant need, for example when it:
 Starts with a question you care about.
 Can confirm or disprove what you think.
 Can shed light on something you want to better understand.
 Can reveal whether one method works better than another.
 Can be of consequence to your future plans or those of your colleagues.
 Has implications for the future of the profession and/or broader society.

Assessing for Assessment’s Sake?
Assessment is a tool, not an end in itself. And, like any tool, its purpose is to help you accomplish something. It is much more fun to produce something worthwhile than to demonstrate mastery of a tool.
Before you can formulate a coherent course-level approach to assessment, it is necessary to connect your broad
teaching goals with specific, assessable activities. Begin by asking yourself what you and your students do to
frame the learning that leads to the desired outcome. Sometimes identifying specific activities that directly
contribute to the outcome can be a challenge, but doing so is important for assessment to proceed.
Once you have connected specific activities with the outcome, decide what you want to know about your
students’ learning. What is your goal in conducting the assessment? What are you curious about? See the five
assessment questions listed in the Assessment Cycle Plans section above for ideas you can apply here as well.
Connecting Assessment with Specific Activities
Imagine a program is assessing the SLO “Analyze and evaluate texts written in different literary and non-literary genres, historical periods, and contexts.” The instructor’s closely related course SLO is “Interpret and analyze diverse and unfamiliar texts in thoughtful, well-informed, and original ways.” Having noticed during discussions following written analyses that some students alter their views upon hearing other students’ interpretations, the instructor decides to assess the intentional use of guided class discussion to develop interpretive and analytic skills. To assess the impact, she will have students revise and resubmit their papers following the guided discussion. Then, she will compare the performance on the papers pre- and post-discussion, using a rubric, and look at individual gains.
A natural inclination is to focus assessment on problem areas. However, it is
often more productive to focus on what you think is working than to seek to
confirm what is not working. Here are some reasons why this is so:
1. Exploring the dynamics behind effective processes may offer
insights that have application to problem areas. Stated another way:
understanding conditions under which students learn best can help
you identify obstacles to student learning and suggest ideas for how
these can be addressed (either within or outside of the program).
2. Students who participate in assessment that confirms their
achievements gain awareness of what they are doing effectively and
are thereby helped to develop generalizable success strategies. This
benefit is enhanced by faculty discussing with students the
assessment process and the outcomes.
3. Exploring successes reinforces instructor motivation and makes the
assessment process more encouraging, rather than discouraging.
4. The process of gathering evidence of success and demonstrating the
application of insights gained promotes your professional
development and supports your program in meeting public
accountability expectations.
5. Sharing discoveries regarding successful practices could contribute
significantly to your professional field.
However, an important distinction exists between assessing student learning and assessing courses. And,
encouragement to explore successful practices should not be misconstrued as encouragement to use the
assessment reporting process to defend one’s effectiveness as an instructor. To be useful, assessment needs to
do more than just confirm that a majority of students at the end of a course can demonstrate a particular learning
outcome. While this may be evidence of success, it does not reveal much at all about what contributed to the
success. Assessment that explores successful practices needs to delve into the questions of what is working, how
it is working, why it is working, whom it is working best for, when it is working best, and under what conditions
it is working best.
Here are some questions that might help generate some ideas:
 Which of the course SLOs related to the program SLO(s) scheduled for assessment is most interesting or relevant to you?
 Is there anything you do that you think contributes especially effectively to development of the course outcome?
 Have you recently tried anything new that you might want to assess?
 Have students commented on particular aspects of the course?
 Have students’ course evaluations pointed to any particular strengths?
 Are there any external influences (such as industry standards, employer feedback, etc.) that point to strategies of importance?
Again, please see the Assessment Cycle Plans section for more information on formulating assessment
questions and identifying appropriate assessment approaches. The information there is applicable at the course
level as well.
Planning an Assessment Approach at the Course Level
How can you best measure students’ achievement toward the specific outcome AND gain insights that will help
you improve the learning process? The choice of an appropriate measurement technique is highly context
specific. However, if you are willing to consider a variety of options and you understand the advantages and
disadvantages of each, you will be well prepared to make the selection.
Please keep the following in mind:
 Assessment does not have to be connected with grading. While grading is a form of assessment,
there is no need to limit assessment to activities resulting in grades. Sometimes, removal from
the grading process can facilitate collection of much more revealing and interesting evidence.
 Assessment does not have to involve every student equally. Sometimes sampling methods make
manageable an assessment that would otherwise be unreasonably burdensome to carry out. For
example, you may not have time to interview every student in a class, but if you interview a
random sample of students, you may be able to obtain essentially the same results in a lot less
time. If you have an assignment you grade using course criteria and want to apply it to program
level criteria, you may be able to use a sample rather than re-evaluate every student’s work.
 Assessment does not have to be summative. Summative assessment occurs at the end of the
learning process to determine retrospectively how much or well the students learned. Formative
assessment, on the other hand, occurs during the learning process, during the developmental
phases of learning. Of the two, formative assessment often provides the greatest opportunity for
insight into the students’ learning dynamics. In addition, formative assessment enables you to
identify gaps in learning along the way, while there is still time to intervene, instead of at the
end, when it’s too late to intervene. Consider using both formative and summative assessment.
 It is not necessary that all program faculty use a common assessment approach. A diverse
array of assessment approaches can be conducted concurrently by different faculty teaching a
wide range of courses within a program and assessing the same outcome. The key is to have
group agreement regarding how the outcome manifests, i.e., what it looks like when students
have achieved the learning outcome and what criteria are used for its assessment. This can be
accomplished with a very precisely worded SLO, a list of SLO component skills, descriptive
rubrics (see Using Rubrics to Make Assessment Coherent below), descriptions from industry
standards, normed rating scales, checklists, etc. Once a shared vision of the outcome has been
established, all means of assessment, no matter how diverse, will address the same end.
 Assessment does not have to meet the standards of publishable research. Unless you hope to
publish your research in a peer-reviewed journal, your classroom assessment need not be flawless
to be useful. In this regard, assessment can be an opportunity for ‘action research.’
 Some assessment approaches may need IRB approval. See IRB and Classroom Research
regarding research involving human subjects. And, for further information, see the CNM
Handbook for Educational Research.
 Assessment interpretation does not need to be limited to the planned assessment techniques. It
is reasonable to bring all pertinent information to bear when trying to understand strengths and
gaps in student learning. Remember, the whole point of assessment is to gain useful insights.
 Assessment does not have to be an add-on. Most of the instructional approaches faculty use in
their day-to-day teaching lend themselves to assessment of program outcomes. Often, turning an
instructional approach into a valuable source of program-level assessment information is just a
matter of documenting the findings.
It may be helpful to think of assessment techniques within the five broad categories identified in Table 8.
Table 8: Common Assessment Measures

Written Tests: misconception checks, preconception checks, pre- or post-tests, pop quizzes, quizzes, standardized exams

Document/Artifact Analysis: artwork, displays, electronic presentations, exhibitions, homework, journals, portfolios, projects, publications, research, videos, writing assignments

Process Observations: auditions, classroom circulation, demonstrations, enactments, experiments, field notes, performances, process checklists, simulations, speeches, tick marking, trial runs

Interviews: calling on students, case studies, focus groups, formal interviews, in-class discussions, informal interviews, oral exams, out-of-class discussions, study-group discussions

Surveys: alumni surveys, clicker questions, employer surveys, feedback requests, show of hands, student surveys
The lists of techniques above are by no means comprehensive. Plug in your own classroom assessment
techniques wherever they seem to fit best. The purpose in categorizing techniques thus is to demonstrate not
only some common characteristics but also the variety of options available. Yes, interviews and surveys are
legitimate assessment techniques. You need not limit yourself to one paradigm. Consider the possibility of
using techniques you may not have previously thought sufficiently valid, and you may begin to see that much of
what you are already doing can be used as is, or adapted, for program-level assessment. You may also begin to
have more fun with assessment and find the process more relevant to your professional interests.
Some concepts to help you select a measure that will yield the type of evidence you want are presented in the
following table. Each pair (left and right) represents a continuum upon which assessment measures can be
positioned, depending upon the context in which they are used.
Table 9: Descriptors Related to Assessment Measures

Direct: Student products or performances that demonstrate specific learning has taken place (WCU, 2009)
Indirect: Implications that student learning has taken place (may be in the form of student feedback or third-party input)

Objective: Not influenced by personal perceptions; impartial, unbiased
Subjective: Based on or influenced by personal perceptions

Quantitative: Expressible in terms of quantity; directly, numerically measurable
Qualitative: Expressible in terms of quality; how good something is

Empirical: Based on experience or experiment
Anecdotal: Based on accounts of particular incidents
Note that objectivity and subjectivity can apply to the type of evidence collected or the interpretation of the
evidence. Empirical evidence tends to be more objective than anecdotal evidence. And, interpretation of
outcomes with clearly identifiable criteria tends to be more objective than interpretation requiring judgment.
Written tests with multiple-choice, true-false, matching, single-response short-answer/fill-in-the-blank, and/or
mathematical-calculation questions are typically direct measures and tend to yield objective, quantitative data.
Responses to objective test questions are either right or wrong; therefore, objective test questions remain an
assessment staple because the evidence they provide is generally viewed as scientifically valid.
On the other hand, assessing short-answer and essay questions is more accurately described as a form of
document analysis. When document/artifact analyses and process observations are conducted using objective
standards (such as checklists or well-defined rubrics), these methods can also yield relatively direct, quantitative
evidence. However, the more observations and analyses are filtered through the subjective lens of personal or
professional judgment, the more qualitative the evidence. For example, consider the rating of performances by
panels of judges. If trained judges are looking for specific criteria that either are or are not present (as with a
checklist), the assessment is fairly objective. But, if the judges evaluate the performance based on knowledge of
how far each individual performer has progressed, aesthetic impressions, or other qualitative criteria, the
assessment is more subjective.
Objectivity is highly prized in U.S. culture. Nonetheless, some subject matter requires a degree of subjectivity
for assessment to hit the mark. A work of art, a musical composition, a poem, a short story, or a theatrical
performance that contains all the requisite components but shows no creativity, grace, or finesse and fails to
make an emotional or aesthetic impression does not demonstrate the same level of achievement as one that
creates an impressive gestalt. Not everything worth measuring is objective and quantitative.
Interviews and surveys typically yield indirect, subjective, qualitative, anecdotal evidence and can nonetheless
be extremely useful. Soliciting feedback from students, field supervisors, employers, etc., can provide insights
into student learning processes and outcomes that are otherwise inaccessible to instructors.
Note that qualitative information is often translated to numerical form for ease of analysis and interpretation.
This does not make it quantitative. A common example is the use of Likert scales (named after their originator,
Rensis Likert), which typically ask respondents to indicate evaluation or agreement by rating items on a scale.
Typically, each rating is associated with a number, but the numbers are symbols of qualitative categories, not
direct measurements.
Table 10: Sample Likert-Scale Items

Please rate the clarity of this handbook:   Poor (1) ☐   Fair (2) ☐   Good (3) ☐   Great (4) ☐

Assessment is fun!   Disagree (1) ☐   Somewhat Disagree (2) ☐   Somewhat Agree (3) ☐   Agree (4) ☐
In contrast, direct numerical measures such as salary, GPA, age, height, weight, and percentage of correct
responses yield quantitative data.
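The sketch below (in Python, with hypothetical responses and labels) illustrates the point about Likert data: because the numeric codes stand for ordered qualitative categories rather than true measurements, summarizing responses as frequency counts and a median category is generally safer than averaging the codes.

from collections import Counter
from statistics import median_low

# Hypothetical responses to "Assessment is fun!" coded 1-4.
labels = {1: "Disagree", 2: "Somewhat Disagree", 3: "Somewhat Agree", 4: "Agree"}
responses = [4, 3, 3, 2, 4, 1, 3, 4, 2, 3]

# Frequency of each category (an appropriate summary for ordinal data).
counts = Counter(responses)
for code in sorted(labels):
    print(f"{labels[code]:>18}: {counts.get(code, 0)}")

# The median category is a safer single summary than a mean of the codes.
print("Median response:", labels[median_low(responses)])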
Using Descriptive Rubrics to Make Assessment Coherent
Descriptive rubrics are tools for making evaluation that is inherently subjective more objective. Descriptive
rubrics are scoring matrices that differ from rating scales in that they provide specific descriptions of what the
outcome looks like at different stages of sophistication. They serve three primary purposes:
1. To communicate learning outcome expectations (to students and/or to faculty).
2. To facilitate fairness and consistency in an instructor’s evaluation of multiple students’ work.
3. To facilitate consistency in ratings among multiple evaluators.
Used in class, descriptive rubrics can help students better understand what the instructor is looking for and
better understand the feedback received from the instructor. They can be connected with individual
assignments, course SLOs, and ad hoc assessment activities. Used at the program level, rubrics can help
program faculty relate their course-level assessment findings to program-level learning outcomes.
Rubrics are particularly useful in qualitative assessment of student work. When grading large numbers of
assignments, even the fairest of instructors can be prone to grading fatigue and/or the tendency to subtly shift
expectations based on cumulative impressions of the group’s performance capabilities. Using a rubric provides
a framework for reference that keeps scoring more consistent.
Additionally, instructors who are all using the same assessment approach in different sections of a course can
use a rubric to ensure that they are all assessing outcomes with essentially the same level of rigor. However, to
be effective as norming instruments, rubrics need to have distinct, unambiguously defined performance levels.
A rubric used to assess multiple competencies can be referred to as non-linear; whereas, a rubric used to assess
several criteria related to a single, broad competency can be described as linear (Tomei, p. 2). Linear rubrics are
perhaps most relevant to the program assessment process. A linear rubric that describes progressive levels of
achievement within several elements of a broad program SLO can serve as a unifying and norming instrument
across the entire program. This is an important point: such a rubric can be used for making sense of disparate
findings from a wide variety of assessments carried out in a wide variety of courses – so long as all are related
to the same program SLO. Each instructor can reference some portion of a shared rubric in relating his or her
classroom assessment findings to the program SLO. It is not necessary that every assessment address every
element of the rubric.
The Venn diagram presented in Figure 7 illustrates how a rubric
can serve as a unifying tool in program assessment that involves a
variety of course assessment techniques. To make the most of this
model, faculty need to come together at the end of an assessment
period, pool their findings, and collaboratively interpret the totality
of the information. Collectively, the faculty can piece the findings
together, like a jig-saw puzzle, to get a broader picture of the
program’s learning dynamics in relation to the common SLO.
Figure 7: Using Rubrics to Pool Findings from Diverse Assessment Approaches
For example, students in entry-level courses might be expected to
demonstrate beginning-level skills, especially during formative
assessment. However, if students in a capstone course are still
demonstrating beginning-level skills, then the faculty, alerted to
this, can collectively explore the cross-curricular learning processes
and identify strategies for intervention. The information gleaned
through the various assessment techniques, since it all relates to the
same outcome, provides clues. And, additional anecdotes may help
fill in any gaps. As previously noted, assessment interpretation does
not need to be limited to the planned assessment techniques.
Rubric Design
To be effective, descriptive rubrics need to focus on competencies, not tasks, and they need to validly describe
demonstration of competency at various levels.
Rubric design is highly flexible. You can include as many levels of performance as you want (most people use
3-5) and use whatever performance descriptors you like. To give you some ideas, below are some possible
descriptor sets:
 Beginner, Amateur, Professional
 Beginning, Emerging, Developing, Proficient, Exemplary
 Below Expectations, Satisfactory, Exemplary
 Benchmark, Milestone, Capstone
 Needs Improvement, Acceptable, Exceeds Expectations
 Needs Work, Acceptable, Excellent
 Neophyte, Learner, Artist
 Novice, Apprentice, Expert
 Novice, Intermediate, Advanced
 Rudimentary, Skilled, Accomplished
 Undeveloped, Developing, Developed, Advanced
A recommendation for description
writing is to start with the most advanced
level of competency (unless that level
exceeds expectations) and work down to
the lowest level.
Tip: The process of agreeing on outcome
descriptions can help clarify goals and
values shared among program faculty.
To facilitate collaborative development
of the descriptions in a program-level
rubric, start with a brainstorming
session. List and group ideas to identify
emerging themes.
Table 11: Sample Rubric for Assessment of Decision-Making Skill

Identifies decisions to be made
  Novice (0): Recognizes a general need for action
  Developing (2): Owns the need to decide upon a course of action
  Advanced (4): Clearly defines decisions and their context and importance

Explores alternatives
  Novice (0): Overlooks apparent alternatives
  Developing (2): Considers the most obvious alternatives
  Advanced (4): Fully explores options, including complex solutions

Anticipates consequences
  Novice (0): Considers only desired consequences
  Developing (2): Identifies the most likely consequences
  Advanced (4): Analyzes likelihood of differing outcomes

Weighs pros and cons
  Novice (0): Identifies only the most obvious pros and cons
  Developing (2): Recognizes pros and cons and compares length of lists
  Advanced (4): Weighs pros and cons based on values/goals/ethics

Chooses a course of action
  Novice (0): Does not choose own course of action
  Developing (2): Chooses a course based on external influences
  Advanced (4): Expresses logical rationale in choosing a course

Acts on decision
  Novice (0): Does not follow through
  Developing (2): Follows through
  Advanced (4): Acts on decision, observes outcome, & reflects upon process

(Blank columns labeled (1) and (3) carry point values but no descriptors, and a Score column with a TOTAL row provides space to record ratings.)

Using Formative Assessment to Reinforce Learning
As contrasted with summative assessment, the purpose of which is
to evaluate student learning at the end of a learning process,
formative assessment aims to monitor student learning in progress.
In and of itself, formative assessment does not necessarily reinforce
learning. However, if intentionally used as an instructional
technique, formative assessment provides timely feedback that can
help both the students and the instructor adjust what they are doing
to move more directly toward the learning goals. The key is using
the assessment both to provide feedback to students and to redirect
one’s teaching approach. The feedback students receive affirms
their learning progress (providing positive reinforcement) and also
lets them know where they have not yet met the outcome goals.
And, understanding how well students are grasping the learning and
where the learning is breaking down helps the instructor take
appropriate steps to intervene and get the students back on track,
which also reinforces learning.
To inform future teaching strategies, it is not so much the identification of gaps in the learning process that is
beneficial, but rather the identification of ways to correct those gaps.
NCTM Position
Through formative assessment, students develop a clear understanding of learning targets and receive feedback that helps them to improve. In addition, by applying formative strategies such as asking strategic questions, providing students with immediate feedback, and engaging students in self-reflection, teachers receive evidence of students’ reasoning and misconceptions to use in adjusting instruction. By receiving formative feedback, students learn how to assess themselves and how to improve their own learning. At the core of formative assessment is an understanding of the influence that assessment has on student motivation and the need for students to actively monitor and engage in their learning. The use of formative assessment has been shown to result in higher achievement. The National Council of Teachers of Mathematics strongly endorses the integration of formative assessment strategies into daily instruction.
-- NCTM, July 2013
Performance levels can be arranged from low to high or from high to low. As shown in the sample rubric in Table 11, blank columns (given numerical values but no descriptors) can be inserted between described performance levels for use when demonstration of learning falls somewhere in between the descriptions. However, some argue that well-written rubrics are clear enough that no in-between or overlap can occur.
Together, the methods used to redirect the learning process and the instructor’s subsequent observation of how
well those methods worked can lead to insights with implications for future instructional approaches. To this
end, a focus on strengths, with the intention of identifying successful teaching strategies, is likely to be more
productive than a focus on weaknesses in teaching and/or learning.
Possible responses to formative assessment include, but certainly are not limited to:
 Clarifying the learning goals.
 Reinforcing concept scaffolding.
 Providing additional examples and/or practice.
 Having students help other students learn (share their understanding).
 Modeling and/or discussing techniques and/or strategies.
 Teaching metacognitive processes.
 Reframing the learning within the epistemology of the discipline (i.e., showing how it fits the nature and
limitations of knowledge within the discipline and/or the methods and processes used within the discipline).
Using formative assessment to facilitate classroom discussions can help students learn to view their own
learning process with greater breadth of perspective and objectivity. With practice, students can learn to switch
back and forth between learning and observing their own learning process. Conducting formative assessments
and talking with students about the results can help students develop metacognitive skills. Learning to critically
evaluate their own progress and motives as learners encourages students to accept responsibility for their own
learning outcomes. And, seeing their progress along the way helps students develop an intrinsic appreciation for
the learning process. Nothing motivates like realizing one is becoming better at something.
Measurement approaches used in formative assessment are not inherently different from those used in
summative assessment. However, because formative assessments are typically less formal and carry less (or no)
point value for grading, instructors are more apt to be creative in using them. See the section Classroom
Assessment Techniques (CATs) for fifty activities that are commonly used as formative assessment processes.
One might rightly question whether the use of formative assessments provides the data necessary to
demonstrate programs’ achievement of their SLOs. After all, as a publicly funded institution, CNM does need
to demonstrate some level of program success to satisfy needs for public accountability. If only one assessment
approach were used to assess student progress toward a program SLO, then for accountability purposes, it
would be desirable for that assessment approach to be summative (to show that the outcomes were achieved to
an acceptable level). However, the CNM assessment model allows (but does not require) programs to draw
upon multiple measures from multiple courses for a more comprehensive picture of the learning dynamics
leading up to and including the achievement of any given program SLO. Viewed together, findings from
formative assessments across multiple courses, all helping to develop a shared program SLO, will typically
provide more actionable information than will a single summative assessment. When formative and summative
findings are analyzed in combination, the potential for actionable insights is magnified.
For example, if a program uses an external licensing exam as the sole indicator of student progress toward a
particular outcome and 90% of students who take the exam pass it, we may deduce that the program is doing an
acceptable job of developing the learning outcome. However, we receive little information that can be used to
inform a plan of action. If we focus on how the results are less than perfect, we might ask what happened with
the 10% who did not pass. Where along the way did their learning break down? Even if we know that the
questions they missed pertained to the target SLO and know what the questions were, we may have to rely on
anecdotal information (such as the students’ prior course performance, faculty knowledge of extenuating
circumstances, etc.) to understand why those students missed those questions. Alternatively, if we focus on the
performance of the successful students, we might ask what helped these students to be successful. Again, in the
absence of other assessment information, the answers probably lie in anecdotal observations. And, while there is
nothing wrong with bringing anecdotal information to bear on the interpretation of assessment findings, the
conclusions so derived typically are not generalizable.
In this example, the faculty could attribute the outcomes to factors beyond the control of the program and
conclude that no changes need to be made. That would be fine if all that mattered were demonstration of a
certain threshold of success for public accountability purposes. However, CNM as an institution values the
impact of its education on students’ lives and the community. Public accountability is not the only reason (nor
even the most important) for conducting assessment. When optimizing student learning is the primary goal,
there is always room for improvement in learning outcomes. Standing alone, a summative assessment with
results that represent anything less than 100% success, therefore, suggests the need for a plan of action, even if
the action is merely to recommend changes in college practices, supportive services, etc., to address the factors
that are beyond the control of the program.
If the program faculty in the example above were to come together to consider the findings of formative
assessments conducted in several different courses, along with the licensing exam results, together these might
provide insight into the development of component skills that comprise the broader program SLO. A holistic
look at the learning process might reveal patterns or tendencies. The faculty might identify successful practices
that could be implemented more broadly, or earlier, to reinforce student learning in additional ways.
COLLECTING EVIDENCE OF LEARNING
Evidence of student learning can be as informal as an opinion poll or as formal as the data collected in a
randomized control trial. Because evidence of learning can take so many different forms, there is no single
method for collecting it. Perhaps the most important consideration for program assessment purposes is that the
evidence be documented in a way that enables the faculty to reference and interpret it at a later date. Also
important is keeping the collection process from getting in the way of teaching and learning. The latter can be
accomplished by:
1. Embedding assessment in the instructional process.
2. Collecting evidence beyond the classroom.
3. Using sampling methods.
Embedded Assessment
Assessment is already embedded in assignments and tests, so instructors who wish to can often use what they
already have in place. However, if instructors collect grading information at the level of specific grading
criteria, rather than report letter-grade distributions, the evidence will be more analyzable, and therefore more
useful. Program rubrics can greatly facilitate this level of data collection. (See Using Rubrics to Make
Assessment Coherent.)
Why not just count grades? First, for grades to provide useful information about students’ achievement of
specific course SLOs, they need to relate directly to the SLOs. A comprehensive final exam, for example, may
contain 50 questions, of which perhaps only 5 relate to the specific SLO being assessed. In such a case, the 5
specific test questions need to be analyzed separately because learning related to the SLO cannot be accurately
represented by the overall test grade. Second, even if a grade does represent the target learning outcome only, a
summary of the grade distribution does not provide as much useful information as would a breakout showing
the grading criteria used and the average level of performance on each. Third, the assignment of a letter grade is
not, in itself, an assessment; it is a symbolic summary used to evaluate performance, not to identify specific
strengths and weaknesses in the performance. The assessment lies in the analysis that supports the grade. Many
ungraded instructional activities offer excellent fodder for assessment simply by creating the opportunity to
analyze strengths and weaknesses in student performance.
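The following sketch (in Python) illustrates the kind of breakout described above. The gradebook data, item numbers, and answer pattern are hypothetical; the point is simply that the items aligned to the target SLO are scored separately from the overall exam grade.

# Hypothetical results: each student's answers on a 50-question exam,
# stored as True (correct) / False (incorrect). Items 12, 19, 27, 33, and 41
# are assumed to be the five questions aligned to the target SLO.
slo_items = [12, 19, 27, 33, 41]
exam_results = {
    "Student 1": {q: (q % 3 != 0) for q in range(1, 51)},   # placeholder answer pattern
    "Student 2": {q: (q % 4 != 0) for q in range(1, 51)},
}

# Report the overall exam score alongside the score on the SLO-related items only.
for student, answers in exam_results.items():
    overall = sum(answers.values()) / len(answers)
    slo_score = sum(answers[q] for q in slo_items) / len(slo_items)
    print(f"{student}: overall {overall:.0%}, SLO-related items {slo_score:.0%}")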
In their book Classroom Assessment Techniques: A Handbook for College Teachers, Thomas Angelo and
K. Patricia Cross describe 50 classroom assessment techniques that can be used to collect evidence of learning (pp.
119-362). These are summarized below for easy reference. Most can be used as formative and/or summative
assessments. All can have instructional value, particularly if the information collected is discussed with students
and/or used to inform follow-up instruction. And, as one might expect, the relative usefulness of each technique
is highly dependent upon the subject area and the instructor’s goals, teaching philosophy, etc.
Classroom Assessment Techniques (CATs)
I. Assessing Course-Related Knowledge and Skills
A. Prior Knowledge, Recall, and Understanding
1. Background Knowledge Probe: Having students respond to a short questionnaire/test, typically
at the beginning of a course, unit, or new topic
2. Focused Listening: Having students write down key words or concepts following a lesson, then
using those later to provide clarification
3. Misconception/Preconception Check: Having students write answers to questions designed to
uncover prior knowledge or beliefs that may impede learning
4. Empty Outlines: Providing students with an empty or partially completed outline and having
them fill it in
5. Memory Matrix: Giving students a table with column and row headings and having them fill in
the intersecting cells with relevant details, match the categories, etc.
6. Minute Paper: Giving students one minute to answer some variation on the questions “What was
the most important thing you learned during this class?” and “What important question remains
unanswered?”
7. Muddiest Point: Asking students to jot down answers to the question “What was the muddiest
point in _______?”
B. Skill in Analysis and Critical Thinking
8. Categorizing Grid: Giving students a table with row headings and having students match by
category and write in corresponding items from a separate list
9. Defining Features Matrix: Giving students a table with column and row headings and having them
enter + or – to indicate whether or not the column heading corresponds to the row heading
10. Pro and Con Grid: Having students list pros and cons side-by-side
11. Content, Form, and Function Outlines: Having students outline the what, how, and why related
to a concept
12. Analytic Memos: Having students write a one- or two-page analysis of a problem or issue as if
they were writing to an employer, client, stakeholder, politician, etc.
C. Skill in Synthesis and Creative Thinking
13. One-Sentence Summary: Having students write one sentence that tells who does what to whom,
when, where, how, and why (symbolized as WDWWWWHW)
14. Word Journal: After students read a text, having them write a single word that best summarizes
the text and then write a couple of paragraphs explaining why they chose that word
15. Approximate Analogies: Having students complete the analogy A is to B as ___ is to ___, with
A and B provided
16. Concept Maps: Having students illustrate relationships between concepts by creating a visual
layout of bubbles and arrows connecting words and/or phrases
17. Invented Dialogues: Having students use actual quotes or compose representative quotes to
create a dialogue between differing characters/personas
18. Annotated Portfolios: Having students create portfolios presenting a limited number of works
related to the specific course, a narrative, and maybe supporting documentation
D. Skill in Problem Solving
19. Problem Recognition Tasks: Presenting students with a few examples of common problem types
and then asking them to identify the particular type of problem each represents
20. What’s the Principle?: Presenting students with a few examples of common problem types and
then asking them to state the principle that best applies to each problem
21. Documented Problem Solutions: Having students not only show their work, but also explain
next to it in writing how they worked the problem out (“show and tell”)
22. Audio- and Videotaped Protocols: Recording students in the act of working out solutions to
problems and then studying it with the student(s)
E. Skill in Application and Performance
23. Directed Paraphrasing: Having students paraphrase part of a lesson for a specific audience and
purpose
24. Application Cards: Handing out an index card (or slip of scratch paper) and having students
write down at least one ‘real-world’ application for what they have learned
25. Student-Generated Test Questions: Having students anticipate possible test questions and write
them out
26. Human Tableau or Class Modeling: Having students create “living” scenes, do enactments, or
model processes
27. Paper or Project Prospectus: Having students create a brief, structured plan for a paper or
project, anticipating and identifying the elements to be developed
II. Assessing Learner Attitudes, Values, and Self-Awareness
A. Students’ Awareness of their Attitudes and Values
28. Classroom Opinion Polls: Having students indicate their opinions on specific issues via written
(or clicker) responses
29. Double-Entry Journals: Having students write down the ideas, assertions, and arguments they
find most meaningful and/or controversial and then explain the personal significance of or
respond to the topic
30. Profiles of Admirable Individuals: Having students write a brief, focused profile of an
individual – in a field related to the course – whose values, skills, or actions they admire
31. Everyday Ethical Dilemmas: Presenting students with a case study that poses an ethical
dilemma – related to the course – and having them write anonymous responses
32. Course-Related Self-Confidence Surveys: Having students write responses to a few questions
aimed at measuring their self-confidence in relation to a specific skill or ability
B. Students’ Self-Awareness as Learners
33. Focused Autobiographical Sketches: Having students write one to two pages about a single,
successful learning experience in their past relevant to the learning in the course
34. Interest/Knowledge/Skills Checklists: Giving students a checklist of the course topics and/or
skills and having them rate their level of interest, skill, and/or knowledge for each
35. Goal Ranking and Matching: Having students write down a few goals they hope to achieve – in
relation to the course/program – and rank those goals; then comparing student goals to
instructor/program goals to help students better understand what the course/program is about
36. Self-Assessment of Ways of Learning: Presenting students with different approaches to learning
and asking students to identify which approaches they think work best for them
C. Course-Related Learning and Study Skills, Strategies, and Behaviors
37. Productive Study-Time Logs: Having students record how much time they spend studying, when
they study, and/or how productively they study
38. Punctuated Lectures: Stopping periodically during lectures and having students reflect upon and
then write briefly about their listening behavior just prior and how it helped or hindered their
learning
39. Process Analysis: Having students keep a record of the steps they take in carrying out an
assignment and then reflect on how well their approach worked
40. Diagnostic Learning Logs: Having students keep a record of points covered that they understood
and those they didn’t understand as well as homework problems they completed successfully and
those they had trouble with; then having them reflect on their strengths and weaknesses as
learners and generate possible remedies
III. Assessing Learner Reactions to Instruction
A. Learner Reactions to Teachers and Teaching
41. Chain Notes: Handing out note cards (or slips of scratch paper) in advance (for responses) and
then passing around an envelope with a specific question for each student to answer at the
moment in time when the envelope reaches them (e.g., “Immediately before this reached you,
what were you paying attention to?” or “What exactly were you doing during the minute or so
before this reached you?”)
42. Electronic Mail Feedback: Posing a question to students about the teaching and allowing
students to respond anonymously via the instructor’s electronic mailbox
43. Teacher-Designed Feedback Forms: Having students respond anonymously to 3 to 7 questions
in multiple-choice, Likert scale, or short-answer formats to get course-specific feedback
44. Group Instructional Feedback Technique: Having someone else (other than the instructor) poll
students on what works, what doesn’t, and what could be done to improve the course
45. Classroom Assessment Quality Circles: Involving groups of students in conducting structured,
ongoing assessment of course materials, activities, and assignments and suggesting ways to
improve student learning
B. Learner Reactions to Class Activities, Assignments, and Materials
46. RSQC2: Periodically having students do one or all of the following in writing: Recall,
Summarize, Question, Comment, and Connect
47. Group-Work Evaluations: Having students answer questions to evaluate team dynamics and
learning experiences following cooperative learning activities
48. Reading Rating Sheets: Having students rate their own reading behaviors and/or the interest,
relevance, etc., of a reading assignment
49. Assignment Assessments: Having students rate the value of an assignment to them as learners
50. Exam Evaluations: Having students provide feedback that reflects on the degree to which an
exam (and preparing for it) helped them to learn the material, how fair they think the exam is as
an assessment of their learning, etc.
-- Adapted from Angelo & Cross, Classroom Assessment Techniques
Collecting Evidence beyond the Classroom
The Higher Learning Commission, in compliance with the U.S. Department of
Education’s 2009 revision to EDGAR, 34 CFR §602.16(a)(1)(i), “Accreditation and
preaccreditation standards,” has adopted the policy quoted below. This policy,
one of a group of policies generally referred to as “the Federal Compliance
Criteria,” has implications for program assessment, particularly in regard to the
collection of external data.
While most programs have access to any applicable licensing-exam outcomes, they
may not have direct access to data on course completion rates and job placement.
Fortunately, programs are not expected to collect this data on their own. The Office
of Planning and Institutional Effectiveness (OPIE) provides support for collecting
course enrollment and completion data. (See https://www.cnm.edu/depts/planning
for a data request form.) And, CNM’s Job Connection Services provides support
for the collection of employment outcomes data. (Find detailed reporting at
https://www.cnm.edu/depts/advisement/job-connection/information.)
Review of Student Outcome Data
An institution shall demonstrate that, wherever applicable to its programs, its consideration of outcome data in evaluating the success of its students and its programs includes course completion, job placement, and licensing examination information.
-- HLC Policy FDCR.A.10.080
Other forms of external data can also be quite useful, and surveys, interviews, and focus groups can be good
techniques for collecting such data. As with classroom assessments, the usefulness of the information gathered
depends to some extent on the quality of the questions asked or problems posed. Faculty who would like
assistance with questionnaire development and/or review are encouraged to contact the Senior Director of
Outcomes and Assessment in OPIE. For assistance with survey administration, contact the OPIE institutional
research staff (https://www.cnm.edu/depts/planning).
Feedback from alumni, area employers, high school career guidance counselors, or instructors in 4-year
programs to which students transfer may be of particular relevance and value to some programs. If survey
response rates tend to be low, consider conducting one-on-one interviews or inviting people to campus to
participate in focus groups. Providing food might help entice them to come. For example, interviewing
employers who could but do not hire your alumni might provide some good insights and/or build some bridges.
Sampling Student Learning Outcomes
Sampling can facilitate the collection of assessment information while preventing the collection process from
becoming overly intrusive and/or burdensome for faculty and students. Consider the value of sampling in terms
of 1) sampling learning outcomes in a variety of contexts, and 2) sampling student learning products within a
single context. Sampling a widely representative group of courses within a program, rather than assessing the
entirety of a program competency in a single course, not only distributes the workload more fairly, but also
yields more useful information. And, in contrast to assessing every student’s competency at some final stage in
the learning process, sampling learning outcomes at different levels of development paints a more
comprehensive picture of how students effectively achieve proficiency.
For example, while assessing performance on a final exam in an upper-level course may give the faculty
information about how well students demonstrate an outcome toward the end of the program, it sheds little light
on the learning processes upon which that outcome was built (or what happened with students who did not
persist to that stage). If, however, the program has a well-designed rubric in place to represent progressive
levels of competency and the program faculty uses that rubric to identify courses in which learning expectations
correspond with the proficiency levels, a sampling of assessments from each of those courses may reveal
information related to students’ early misconceptions, educational expectations, learning approaches, study
strategies, and program persistence.
Ideally, a program’s faculty collectively identifies courses that offer appropriate opportunities for assessment of
common program outcomes at various benchmarks of progression through the program. The mapping of course
SLOs to program SLOs facilitates identification of courses for consideration. While including assessment
activities in all courses with SLOs that align to a given program SLO is certainly an option, the faculty involved
in course delivery may have insights regarding the relative usefulness of sampling learning in specific contexts.
Not all courses will necessarily offer equally useful assessment opportunities. The professional judgment of the
faculty can inform a joint decision about which aspects of which courses will provide the best opportunities for
sampling learning outcomes.
Sampling can also refer to the process of analyzing small but sufficiently representative collections of the
products of student learning when the analysis of such products is too time consuming to be practically applied
to every student. Applying a competency-based rubric to assess a random selection of projects, papers, etc.,
from a course with multiple sections and several hundred students would be an example. Note that in this type
of approach the assessment process is not tied to the grading process. However, an instructor may use coursespecific grading criteria for an assignment and also sample the work product of that assignment for program
assessment. Please see the CNM Handbook for Educational Research for a more detailed discussion of
sampling methodologies.
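For instructors who keep electronic records, drawing such a random sample can be as simple as the following sketch (in Python); the identifiers, sample size, and seed are arbitrary choices shown only for illustration.

import random

# Hypothetical pool: IDs of all submitted projects across sections (several hundred).
all_submissions = [f"project_{i:04d}" for i in range(1, 401)]

# Draw a reproducible random sample to score with the program rubric.
random.seed(2016)          # fixed seed so the same sample can be re-created later
sample = random.sample(all_submissions, k=40)
print(f"Scoring {len(sample)} of {len(all_submissions)} submissions, e.g.: {sample[:3]}")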
IRB and Classroom Research
If you are interested in conducting in-depth, structured classroom assessment that might be viewed as “research
involving human subjects,” your project may require prior approval by the Institutional Review Board (IRB).
Project characteristics that warrant IRB approval include, but are not limited to, providing one or more
interventions to particular students so that you can compare their learning outcomes with those of students who
did not receive the intervention. Please see the CNM Handbook for Educational Research for further discussion.
ANALYZING, INTERPRETING, AND APPLYING ASSESSMENT FINDINGS
The CNM assessment model presents two opportunities for analysis and interpretation of assessment findings:
1) at the level of the individual course, and 2) in the broader context of the program’s pooled information.
Analyzing Findings from Individual Measures
Assessment data may or may not require in-depth statistical analysis. Sometimes the findings are obvious and/or
can only be described in very specific terms. But, when a great deal of data has been collected, or when the
interpretation is not so obvious, data analysis may help clarify the significance of the information. Some reasons
for conducting a statistical analysis include:
 Wanting to compare outcomes between two or more groups and know whether the differences are statistically significant (cross-sectional analyses).
 Wanting to compare pre- and post-assessment scores to determine whether statistically significant gains were made (longitudinal analyses).
 Wanting to understand the degree to which a change in one factor is associated with a change in another (dispersion and correlation studies).
For guidance/assistance with running statistical analyses, contact the Senior Director of Outcomes and
Assessment in OPIE and/or see the CNM Handbook for Educational Research.
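For those comfortable with a scripting approach, the sketch below shows how each of the three analyses listed above might be run using Python and the SciPy library; the scores are hypothetical, and a spreadsheet or statistical package can accomplish the same thing.

from scipy import stats

# Hypothetical scores for two cross-sectional groups (e.g., entering vs. exiting students).
entering = [48, 55, 60, 52, 63, 57, 50]
exiting = [71, 78, 69, 80, 74, 66, 75]
t_cross, p_cross = stats.ttest_ind(entering, exiting)
print(f"Cross-sectional t-test: t = {t_cross:.2f}, p = {p_cross:.4f}")

# Hypothetical pre- and post-assessment scores for the same students (longitudinal).
pre = [52, 61, 45, 70, 58, 64]
post = [74, 69, 66, 85, 71, 80]
t_paired, p_paired = stats.ttest_rel(pre, post)
print(f"Longitudinal paired t-test: t = {t_paired:.2f}, p = {p_paired:.4f}")

# Association between two factors (e.g., hours of study and rubric score).
hours = [2, 5, 1, 7, 4, 6]
scores = [60, 75, 55, 88, 70, 82]
r, p_corr = stats.pearsonr(hours, scores)
print(f"Correlation: r = {r:.2f}, p = {p_corr:.4f}")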
Interpreting Findings
The following considerations may be helpful when interpreting the results of assessment:
• Remember that maximizing student learning is the goal of outcomes assessment, not evaluation of the program or its faculty. Insight into student learning is better facilitated when findings are analyzed on the basis of separate performance criteria than when passing-score totals or grades are tallied. When findings are further analyzed in terms of learning development along a trajectory, previously hidden learning patterns may emerge, offering further insight.
• Interpretation of findings in relation to target outcomes reflects expectations for student learning, not program success. While having 70% of students in a program meet a minimal threshold on an assessment may offer evidence that the program is performing adequately, interpreting such a result as meeting target expectations suggests a goal that 70% of students will acquire the desired outcomes. What constitutes “good enough” when we are talking about student learning? If there is room for improvement, shouldn’t we strive to facilitate it?
• Keep in mind the original purpose and design of the assessment, and avoid reading too much into the results. What were you looking for? Did the assessment provide the desired information? If the findings carry unexpected implications, will a new assessment approach be needed to further explore these?
• Avoid letting assessment results alone dictate action; on the other hand, be careful not to be overly dismissive of unexpected or undesirable data. When assessment findings suggest one thing and observations and/or knowledge of extenuating circumstances suggest another, consider the potential validity of both, and apply professional judgment in your interpretation.
Applying Findings at the Course Level
Within the context of your own class, your assessment findings may have implications for prioritization of
course learning goals, identification of pre-requisite skills, instructional emphasis, teaching methodologies,
outside study expectations, and/or use of class time. In some cases, the findings may merely suggest a need for
refining the assessment process itself. These are all factors that are within your control and are, therefore, worth
reviewing in light of new information about your students’ learning. The process of teasing out the implications
requires, however, that you conduct an active critical analysis.
Focusing on what you can do does not mean that students’ failures are all attributable to your teaching. Anyone
who has much experience working in higher education knows that many students fail for reasons that have
nothing to do with course instruction. Faculty strive for continual professional improvement in order to improve outcomes for those students for whom the instructor can make a difference. If you could help more of your
students develop genuine proficiency just by changing something you do in your teaching approach, wouldn’t
you want to do so?
A good way to begin is identifying what is working well, as opposed to looking for where the failures lie. By
analyzing whom it works best for, under what conditions it works best, and why it works, you may discover
opportunities for broader application, tweaking, and/or creative variation. You may identify reasons why it
doesn’t always work. You may learn something about your students’ preconceptions, misconceptions, attitudes,
study habits, and/or backgrounds that you had not previously recognized. You may see new ways to bridge gaps
and better integrate students’ understanding.
Taking the professional insights you gain through the analysis and interpretation of assessment findings and
applying those insights to make changes that improve student learning is the whole point of conducting
assessment in the first place. This is what makes assessment relevant and useful. And, this is why it is important
to choose meaningful assessment approaches from the beginning.
Pooling and Applying Program Findings
Frequent references have been made throughout this handbook regarding the value of developing a
collaborative assessment process within each program. Assessment can be made especially meaningful through:
• The strategic collection of information at the program level.
• The sharing, compilation, discussion, and interpretation of findings among the program faculty.
• The application of findings toward future planning.
Program assessment leaders are encouraged to hold faculty meetings for the purpose of pooling and
interpreting assessment findings prior to the writing of the annual assessment report. The discussions that arise
out of such meetings can be invaluable. Keeping the focus on student learning and what conditions seem to be
effective in promoting it, not on instructor effectiveness, will make the process most productive. The following
are some suggestions for facilitating the group process:
• Provide ample time for the meeting.
• Include all full-time program faculty, and invite part-time faculty.
• Remind those present to please share the floor equally; consider the views of others; critique ideas, not people; express disagreement civilly; and seek solutions, not culpability.
• Present any program-level outcome information (e.g., licensing exam results, employer survey results, course completion rates, alumni employment outcomes).
• Have each faculty member who conducted assessments related to the program SLOs targeted for the year’s reporting present his/her course-level findings.
• Take notes (separated by SLO if more than one was assessed) for all to see.
• Encourage discussion and sharing of anecdotal observations.
• Trace progressions in development of specific skills through time in the program.
• Identify key stages in the development of the learning outcome(s).
• Consider developing diagrams to group or connect ideas.
• Identify common themes.
• Identify obstacles to student success.
• Identify effective strategies.
• Discuss implications.
• Connect the implications to ideas.
• Identify goals (what the faculty would like to see happen).
• Identify resources needed (if any).
• Determine whether follow-up assessment in the coming academic year would be useful.
• Develop a plan (with or without suggestions for external actions) toward improvement in student learning outcomes.
• Update the assessment cycle plan as needed, based on the current findings.
Once the program faculty has collectively analyzed, interpreted, and applied the assessment findings, the
writing of the assessment report will be relatively easy. More importantly, the faculty will be better informed,
have a clearer sense of the interconnectedness of program course work, and have ownership of an assessment
process that is relevant to them.
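As one hedged sketch of what pooling can look like mechanically, the Python example below combines hypothetical course-level rubric results into a program-level summary broken out by performance criterion rather than by overall pass rate. The course names, criteria, and ratings are invented for illustration.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical course-level findings reported by individual instructors:
# (course, SLO criterion, rubric ratings on a 1-4 scale).
course_findings = [
    ("ENG 1101", "thesis_clarity",  [3, 2, 4, 3, 2, 3]),
    ("ENG 1101", "use_of_evidence", [2, 2, 3, 2, 3, 2]),
    ("ENG 2210", "thesis_clarity",  [3, 4, 4, 3, 4, 3]),
    ("ENG 2210", "use_of_evidence", [3, 3, 2, 3, 4, 3]),
]

# Pool ratings across courses by criterion so the program faculty can see
# where students are progressing and where development appears to stall.
pooled = defaultdict(list)
for course, criterion, ratings in course_findings:
    pooled[criterion].extend(ratings)

for criterion, ratings in sorted(pooled.items()):
    print(f"{criterion}: n={len(ratings)}, mean={mean(ratings):.2f}")
```

Summaries like this are only a starting point for the faculty discussion described above; the numbers do not interpret themselves.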
GLOSSARY
Anecdotal Evidence: Usually refers to individual examples observed or stories heard. Typically not viewed as scientifically reliable, but may suggest new hypotheses.

Area: Usually refers to disciplines within General Education.

AT: The School of Applied Technologies.

BIT: The School of Business Information and Technology.

Case Studies: Anecdotal reports prepared by trained observers. When numerous case studies are analyzed together, the findings are often considered empirical evidence.

CHSS: The School of Communication, Humanities & Social Sciences.

Construct: A theoretical entity or concept such as aptitude, creativity, engagement, intelligence, interest, or motivation. Constructs cannot be directly measured, so we assess indicators of their existence. For example, IQ tests assess prior learning as an indicator of intelligence.

Construct Validity: The ability of an assessment approach to actually represent the construct it purports to address. Since constructs cannot be directly measured, construct validity depends upon how much people can agree upon a definition of the construct and how truly the indicators represent that definition.

Core Competencies: A term formerly used for a group of student learning outcomes assessed by all degree programs. When the core competencies were dropped from the assessment process, they were more-or-less replaced by the embedded outcomes of critical thinking and life skills.

Correlation: A process of quantifying a relationship between two variables in which an increase in one is associated with an increase in the other (a positive correlation) or a decrease in the other (a negative correlation). Correlations do not necessarily demonstrate causal relationships between variables. For example, preparedness for college-level work and number of visits to hospital emergency rooms may be positively correlated, but it would be a logical fallacy to conclude that one causes the other.

Criterion-Referenced: Looks at individual student performance in reference to specific performance criteria.

Cross-Sectional Analysis: Studies comparing, at a single point in time, people representing different stages of development, for the purpose of drawing inferences about how people change/develop over time. E.g., comparing outcomes of students completing a program to those of students just entering the program. Cross-sectional analyses assume that the less advanced group is essentially the same as the more advanced group was at that earlier stage.

Dean's Council: A weekly meeting of deans and other members of academic leadership. Once a year SAAC gives an annual report to this group.

Discipline Area: Usually refers to a non-degree, non-certificate, non-Gen-Ed subject (mostly housed within SAGE).

Direct Assessment: Looks at products of student work (work output) that demonstrate learning.

Embedded Outcomes: The core competencies of critical thinking and life skills. Only degree and certificate programs are charged with demonstrating how these outcomes are embedded within their program SLOs.

Empirical Evidence: Usually refers to what has been systematically observed. May or may not be obtained through scientific experimentation.

Exit Competencies: A term formerly used at CNM for program SLOs.

External Assessment: May be conducted outside of CNM (e.g., licensing exams) or outside of the course or program (e.g., next-level assessment).

Formative Assessment: Assessment taking place during a developmental phase of learning.

HLC: Higher Learning Commission, the accrediting body for CNM.

HWPS: The School of Health, Wellness & Public Safety.

Indirect Assessment: Looks at indicators of learning other than the students’ work output (e.g., surveys or interviews).

Inter-Rater Reliability: Refers to consistency in assessment ratings among different evaluators. For example, if five judges score a performance with nearly identical ratings, the inter-rater reliability is said to be high.

Ipsative Assessment: Comparisons within an individual’s responses or performance. E.g., changes over time or different preferences revealed through forced-choice responses.

Longitudinal Analysis: A comparison of outcomes over time within a given group, based on changes at an individual level (versus differences in group averages). E.g., pre-test and post-test scores.

Meta-Assessment Matrix: A method developed by SAAC to track a wide variety of assessment practices for evidence of how the college is performing as a whole.

MSE: The School of Math, Science and Engineering.

Next-Level Outcomes: Performance in more advanced courses, employment, internship, transfer, etc., for which a course or group of courses prepares students.

Norm-Referenced Assessment: Analysis of individual or group performance to estimate position relative to a normative (usually peer) group.

Objective: Not influenced by personal perceptions; impartial, unbiased.

Program: Usually refers to a degree (AA, AS, or AAS) or certificate discipline (but may occasionally be used more broadly to include areas and disciplines).

Program Assessment: An annual process of examining student learning dynamics for strengths and gaps, interpreting the implications of findings, and developing plans to improve outcomes.

Program Review: An annual administrative study of programs, based on quality indicators such as enrollment and completion rates, to determine the viability of program continuation.

Qualitative Evidence: Evidence that cannot be directly measured, such as people’s perceptions, valuations, opinions, effort levels, behaviors, etc. Many people mistakenly refer to qualitative evidence that has been assigned numerical representations (e.g., Likert-scale ratings) as quantitative data. The difference is that qualitative evidence is inherently non-numerical.

Quantitative Evidence: Evidence that occurs in traditionally established, measurable units, such as counts of students, grade point averages, percentages of questions correctly answered, speed of task completion, etc.

Reliability: Consistency in outcomes over time or under differing circumstances. For example, if a student challenges a placement exam repeatedly without doing any study in between and keeps getting essentially the same result, the test has a high level of reliability.

SAAC: Student Academic Assessment Committee – a faculty-driven team that facilitates CNM program assessment. See http://www.cnm.edu/depts/academicaffairs/saac.

SAAC Annual Report to Deans Council: SAAC provides an annual report to the Deans Council on the state of assessment at CNM. This report includes a meta-assessment rubric to show the progress of programs toward comprehensive implementation of assessment procedures.

SAAC Report: This usually refers to the annual assessment report completed by each program, Gen Ed area, and discipline. The report is usually prepared by the program chair/director and submitted to the school's SAAC representatives. Loosely, the "SAAC Report" can also refer to the SAAC Annual Report to Deans Council.

SAGE: The School of Adult & General Education.

SLO: Student Learning Outcome (competency) – This term carries a dual meaning, as it can refer to a student learning outcome statement or the outcome itself.

Subjective: Based on or influenced by personal perceptions.

Summative Assessment: Assessment that takes place at the end of the learning process.

Validity: The ability of an assessment approach to measure what it is intended to measure. An aptitude test in which cultural expectations or socio-economic level significantly influence outcomes may have low validity.

Value-Added Assessment: Assessment that looks for confirmation of gains, presumably attributable to an educational (or other) intervention, typically using longitudinal or cross-sectional analyses.
REFERENCES
Adams County School District. Marzano’s taxonomy: Useful verbs [PDF document]. Adams County School
District 50 Competency-Based System Wiki. Retrieved from
http://wiki.adams50.org/mediawiki/images/f/f9/Bprtc_Marzano_taxonomy_verbs.pdf.
Association of American Colleges and Universities. VALUE rubrics [PDF document]. Retrieved from
https://secure2.aacu.org/store/SearchResults.aspx?searchterm=value+rubrics&searchoption=ALL
Anderson, L. W., & Krathwohl, D. R. (2000). A taxonomy for learning, teaching, and assessing: A revision of
Bloom’s taxonomy of educational objectives. London, England: Longman.
Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers (2nd
ed.). San Francisco, CA: Jossey-Bass.
Bloom, B. S. (1956). Taxonomy of educational objectives: The classification of educational goals. New York,
NY: McKay.
Higher Learning Commission. Criteria for Accreditation. Retrieved from
http://policy.ncahlc.org/Policies/criteria-for-accreditation.html
Higher Learning Commission (2012). Federal compliance requirements for institutions [web publication].
Retrieved from http://policy.ncahlc.org/Federal-Regulation/review-of-student-outcome-data.html
Institutional Research Office. (2006). Action verbs for creating learning outcomes [Word document]. Oregon
State University. Retrieved from http://ir.library.oregonstate.edu/xmlui/handle/1957/1816.
Marzano, R. J., & Kendall, J. S. (2008). Designing & assessing educational objectives: Applying the new
taxonomy. Thousand Oaks, CA: Corwin Press.
NCTM. (July 2013). Formative assessment: A position of the National Council of Teachers of Mathematics
[PDF document]. NCTM web site. Retrieved from
http://www.nctm.org/uploadedFiles/About_NCTM/Position_Statements/Formative%20Assessment1.pdf#search=%22position%20formative%20assessment%22
SUNY Council on Assessment. (2014). SCOA institutional effectiveness rubric [PDF document]. State
University of New York. Retrieved from
http://www.sunyassess.org/uploads/1/0/4/0/10408119/scoa_institutional_effectiveness_rubric_2.0_.pdf
Teaching, Learning and Assessment Center. (2009). Assessment terms and definitions. “Assessment Brown
Bag.” West Chester University. Retrieved from http://www.wcupa.edu/tlac
Te@chthought. (2013). 6 alternatives to Bloom’s taxonomy for teachers [Blog]. Retrieved from
http://www.teachthought.com/learning/5-alternatives-to-blooms-taxonomy/
Tomei, L. J. Designing effective standards/competencies-aligned rubrics [PDF document]. LaGrange, IL:
LiveText. Retrieved from http://cdn2.hubspot.net/hub/254524/file-461027415pdf/RubricDesign_LanceTomei.pdf
U.S. Department of Education (2009). 34 CFR part 602: The Secretary’s recognition of accrediting agencies.
EDGAR [Word document]. Retrieved from
http://www2.ed.gov/policy/highered/reg/hearulemaking/hea08/34cfr602.pdf
Webb, N. L. (2002). Depth-of-knowledge levels for four content areas [PDF document]. Wisconsin Center for
Education Research. Retrieved from
http://providenceschools.org/media/55488/depth%20of%20knowledge%20guide%20for%20all%20subject%20areas.pdf.
APPENDIX 1: CNM ASSESSMENT CYCLE PLAN FORM
https://www.cnm.edu/depts/academic-affairs/saac/SAAC_process_forms
APPENDIX 2: GUIDE FOR COMPLETION OF CYCLE PLAN
https://www.cnm.edu/depts/academic-affairs/saac/documents/process-and-forms/guide-for-completion-of-assessment-cycle-plan
APPENDIX 3: CNM ANNUAL ASSESSMENT REPORT FORM
https://www.cnm.edu/depts/academic-affairs/saac/SAAC_process_forms
APPENDIX 4: GUIDE FOR COMPLETION OF ASSESSMENT REPORT
https://www.cnm.edu/depts/academic-affairs/saac/documents/process-and-forms/guide-for-completion-of-assessment-report-1
APPENDIX 5: RUBRIC FOR ASSESSING LEARNING OUTCOMES ASSESSMENT PROCESSES
Rating scale: 0-1 = The Basics; 2-3 = For Additional Effectiveness; 4-5 = For Greatest Effectiveness.

Identification of Student Learning Outcomes
• The Basics (0-1): The Student Learning Outcomes (SLOs) reflect the institution’s mission and core values and are directly aligned to any overarching goals or regulations.
• For Additional Effectiveness (2-3): The SLOs represent knowledge and/or capacities that are relevant to the field and meaningful to the involved faculty.
• For Greatest Effectiveness (4-5): The SLOs are consequential to the field and/or to society as a whole.

Articulation of SLOs
• The Basics (0-1): The SLOs are clearly stated, realistic, and measurable descriptions of learning expectations.
• For Additional Effectiveness (2-3): The SLOs are designed to clarify for students how they will be expected to demonstrate learning.
• For Greatest Effectiveness (4-5): The SLOs focus on the development of higher-level skills. (See Benjamin Bloom’s taxonomies of the cognitive, affective, and psychomotor domains.)

Assessment Processes
• The Basics (0-1): Assessments are created, implemented, and interpreted by the discipline/program faculty.
• For Additional Effectiveness (2-3): The assessment process is an integral and prominent component of the teaching/learning process rather than a final adjunct to it.
• For Greatest Effectiveness (4-5): The assessment process is a closed loop in which valid and reliable measures of student learning inform strategic changes.

Relevance of Assessment Measures
• The Basics (0-1): Assessment measures are authentic, arising from actual assignments and learning experiences and reflecting what the faculty think is worth learning, not just what is easy to measure.
• For Additional Effectiveness (2-3): Assessment measures sample student learning at formative and summative stages, and results are weighted and/or interpreted accordingly (e.g., low-weighted formative assessments used to provide early feedback/intervention).
• For Greatest Effectiveness (4-5): Assessment measures promote insight into conditions under which students learn best.

Alignment of Assessment Measures
• The Basics (0-1): Assessment measures focus on experiences that lead to the identified SLOs.
• For Additional Effectiveness (2-3): Assessment measures reflect progression in complexity and demands as students advance through courses/programs.
• For Greatest Effectiveness (4-5): Assessment measures are flexible, allowing for adaptation to specific learning experiences, while maintaining continuity with identified SLOs.

Validity of Assessment Measures
• The Basics (0-1): Assessment measures focus on student learning, not course outcome statistics, and provide evidence upon which to evaluate students’ progress toward identified SLOs.
• For Additional Effectiveness (2-3): Direct and/or indirect, qualitative and/or quantitative, formative and/or summative measures are carefully selected based upon the nature of the learning being assessed and the ability of the measure to support valid inferences about student learning.
• For Greatest Effectiveness (4-5): Multiple measures are used to assess multidimensional, integrated learning processes, as revealed in performance over time.

Reliability of Assessment Measures
• The Basics (0-1): Assessment measures used across multiple courses or course sections are normed for consistent scoring and/or interpretation.
• For Additional Effectiveness (2-3): Assessment measures are regularly examined for biases that may disadvantage particular student groups.
• For Greatest Effectiveness (4-5): When assessment measures are modified/improved, historical comparisons are cautiously interpreted.

Reporting of Assessment Findings
• The Basics (0-1): Reporting of assessment results honors the privacy and dignity of all involved.
• For Additional Effectiveness (2-3): Reporting provides a thorough, accurate description of the assessment measures implemented and the results obtained.
• For Greatest Effectiveness (4-5): Reporting includes reflection upon the effectiveness of the assessment measures in obtaining the desired information.

Interpretation of Assessment Findings
• The Basics (0-1): Interpretation of assessment findings acknowledges limitations and possible misinterpretations.
• For Additional Effectiveness (2-3): Interpretation focuses on actionable findings that are representative of the teaching and learning processes.
• For Greatest Effectiveness (4-5): Interpretation draws inferences, applies informed judgment as to the meaning and utility of the evidence, and identifies implications for improvement.

Action Planning Based on Assessment
• The Basics (0-1): Assessment findings and interpretation are applied toward development of a written action plan.
• For Additional Effectiveness (2-3): The action plan proposes specific steps to improve student learning and may also identify future assessment approaches.
• For Greatest Effectiveness (4-5): The action plan includes a critical analysis of the obstacles to student learning and seeks genuine solutions, whether curricular or co-curricular in nature.
https://www.cnm.edu/depts/academic-affairs/saac/documents/process-and-forms/rubric%20for%20evaluating%20outcomes%20assessment.docx
APPENDIX 6: NEW MEXICO HIGHER EDUCATION DEPARTMENT GENERAL EDUCATION
CORE COURSE TRANSFER MODULE COMPETENCIES
The competencies required for courses to be included in each of the seven areas of the New Mexico General
Education Core Course Transfer Module are presented in the document linked below.
http://hed.state.nm.us/uploads/files/Policy%20and%20Programs/HED%20Gen%20Ed%20Competencies-All%20Areas.pdf