Completed Sample LADOR

[Department]
Assessment Report: [Program]
[Years]
TABLE OF CONTENTS
Executive Summary
Introduction
    Target Population
    Theory, Research, and Needs Informing the Program
    Rationale and Context of Outcomes Assessment
    Assessment Cycle Timeline
Specifying Student Learning Outcomes
    Program Goals and Learning Outcomes
Creating and Mapping Program to Outcomes
    Overview of Program
    Map of Program Components to Outcomes
Selecting/Designing Instruments
    Map of Outcomes to Instruments
    Description of Instruments
    Additional Information to Collect
    Location of Information about Selecting/Designing Instrumentation
Examining Implementation Fidelity
    Process to Examine Implementation Fidelity
    Measurement and Results of Implementation Fidelity
Collecting Outcomes Information
    Planned Data Collection: Timeline and Responsibility for Collecting
    Actual Data Collection
Analyzing Data, Reporting Results, and Maintaining Information
    Data Analysis and Statistical Results
    Interpretation of Results
    Data Location
Using Results for Program-Related Decisions
    Using Results for Program Improvement or Continuation Decisions
    Using Results to Make Changes to the Assessment Process
    Using Results for Program Administration
Additional Final Thoughts
EXECUTIVE SUMMARY
The executive summary provides a brief, 2-page summary of the assessment cycle of the program.
Overview of Program
The “Know How to Graduate!” program is intended for students with over 200 credit hours who have not yet completed
their graduation requirements. Justification for this program resulted from a report from the Office of Institutional
Research indicating that approximately 7% of JMU students fell within this category. After completing the program,
students are expected to be able to identify the general graduation requirements at JMU.
The program currently consists of a workshop lasting approximately 1 hour. This workshop entails both a consultation
experience and a motivational speaker. Within the consultation experience students review their current transcript
and are presented with information about the general graduation requirements at JMU. The motivational speaker
reiterates this information and also provides information about other issues, such as time management, that are
relevant to completing graduation requirements.
Purpose of Current Assessment Cycle
Assessment results from the 2010-11 implementation of the program indicated that, on average, undergraduate
students with more than 200 credit hours failed to identify their graduation requirements after completing the
program. In Fall 2011 the Know How to Graduate! program was re-designed. The newly designed program was
implemented during the 2012-13 academic year.
The new version of the program consists of one workshop combining three elements: 1) a review of academic work
completed, 2) identification of key academic work still needing to be completed, and 3) a motivational speaker
presentation.
The purpose of this year’s assessment is to investigate whether the redesigned program is successful.
Method
Implementation fidelity assessment was conducted by training two graduate students to observe the workshop as it
was occurring. This information was collected in order to assess the occurrence and quality of essential features of the
program that should result in changes in our student learning outcome. Results from this procedure are discussed in
conjunction with other assessment data.
Ten students enrolled in the program. All students volunteered to participate in the current study. Data were collected
by program coordinators. Each student was administered the Academic Requirements Knowledge Scale prior to
attending the workshop and once again after the workshop was completed.
The Academic Requirements Knowledge Scale consists of seven multiple choice items. Total scores are calculated by
summing the number of correct responses. The internal consistency estimate was .80.
Results
Results from the implementation fidelity assessment indicated that various aspects of the program failed to be
implemented as intended. These results specifically indicated that coverage of graduation requirements during the
consultation experience either failed to occur or that the information was not presented clearly.
Prior to the workshop, students were able to answer on average 2.00 items correctly (SD = 0.82). After the workshop,
this value increased to 2.50 (SD = 0.71). A dependent-samples t-test indicated that this difference was not
statistically significant, t(9) = 1.46, p = .18, d = .46. In other words, the results indicate that the difference was
neither statistically nor practically significant.
Implications
Since implementation fidelity was low, conclusions about the effectiveness of the program cannot be drawn. It is
recommended that program facilitators receive additional training prior to implementing the program next semester.
The program will be re-assessed the following semester after alterations have been made in the training of program
facilitators.
INTRODUCTION
Provide an overview of the program and a plan for the assessment cycle timeline. See completed examples of this
section in the Online Resources.
Department: Career & Academic Planning
Program: Know How to Graduate!
Program Coordinators: Michael McCleve & John Hathcoat
Assessment Report Authors: Michael McCleve & John Hathcoat
Years: Fall 2011 - Spring 2014
Target Population
Define the target population for this program.
Undergraduate students who do not yet have a degree, have completed more than 200 credit hours, and have not yet applied for graduation.
Theory, Research, and Needs Informing the Program
Document and reference the relevant theory and research supporting the program for the target population.
Describe student needs that are driving the program development or delivery.
In Spring 2009 Institutional Research identified that 7% of all undergraduate students did not have a degree but had
completed more than 200 credit hours. The “Know How to Graduate!” program was developed in response to this
need. This program was created using Malkowitzky’s Model of Timely Academic Completion Related to Life Success
(Malkowitzky, 2009). The program was first implemented in Fall 2010 and has been implemented each semester since
this date.
Rationale and Context of Outcomes Assessment
Provide rationale and context for the current assessment plan. Identify the stakeholders involved in the process to
specify student learning outcomes. If the program has gone through the assessment cycle in the past, provide a
context for the current assessment activities based on previous activities.
Assessment results from the 2010-11 implementation of the program indicated that, on average, undergraduate
students with more than 200 credit hours failed to identify their graduation requirements after completing the
program. In Fall 2011 the Know How to Graduate! program was re-designed. The newly designed program was
implemented during the 2012-13 academic year.
The new version of the program consists of one workshop combining three elements: 1) a review of academic work
completed, 2) identification of key academic work still needing to be completed, and 3) a motivational speaker
presentation.
The purpose of this year’s assessment is to investigate whether the redesigned program is successful.
Assessment Cycle Timeline
For each component of the assessment cycle, indicate the date that the component was completed (in a previous
cycle or the current cycle), or the date that the component is planned to be completed in the current assessment
cycle. Please note this may be a multi-year process.

Component                                                        Date Completed/Planned Completion
Specifying Student Learning Outcomes                             Spring 2010
Creating and Mapping Programming to Outcomes                     Spring 2012
Selecting/Designing Instruments                                  Spring 2010
Examining Implementation Fidelity                                2012-13
Collecting Outcomes Information                                  2012-13
Analyzing Data, Reporting Results, and Maintaining Information   Summer 2013
Using Results for Program-Related Decisions                      Fall 2013
SPECIFYING STUDENT LEARNING OUTCOMES
Student learning outcomes refer to what students should know, think, or do as a result of participating in the
program. The longest amount of time in the assessment cycle should be dedicated to establishing realistic learning
outcomes, because all other aspects of the assessment cycle are tied to these outcomes; they are the foundation.
Learn about specifying student learning outcomes and see completed examples of this section in the
Online Resources for Specifying Learning Outcomes.
Program Goals and Learning Outcomes
Specify the measurable student learning outcomes of the program (and overarching goals if applicable). Identify
how the program’s learning outcomes map to department, divisional, and JMU goals.
As a result of participating in the Know How to Graduate! program students with more than 200 credit hours will
increase in the number of correctly identified graduation requirements at JMU.
CREATING AND MAPPING PROGRAM TO OUTCOMES
Mapping the program to the outcomes refers to specifically identifying how the program components (e.g. activities,
curriculum) relate to each learning outcome. Learn about creating and mapping program to outcomes and see
completed examples of this section in the Online Resources for Creating and Mapping Program to Outcomes.
Overview of Program
Provide a brief description of the program – this may be the description of the program on the Department website.
The Know How to Graduate! program consists of a single workshop combining two elements: 1) Consultation
Experience and 2) Motivational Speaker. Within the consultation experience students are provided with a presentation
that reviews the general academic requirements to graduate at JMU. They also review the academic work that they
have actually completed and identify key academic work that has not yet been completed.
Map of Program Components to Outcomes
Identify program components that directly relate to individual learning outcomes. For each learning outcome,
specifically identify the program components, delivery method, duration, and the stakeholder responsible. You may
want to utilize a table to help illustrate the connections between the program components and the outcomes. If the
program has been assessed in the past, describe the planned changes based on previous assessment results and if
those changes were actually implemented in the current assessment cycle.
Objective: As a result of participating in the Know How to Graduate! program students with more than 200 credit
hours will increase in the number of correctly identified graduation requirements at JMU.
Program Components: 1) Consultation Experience; 2) Motivational Speaker
SELECTING/DESIGNING INSTRUMENTS
To measure the program’s learning outcomes, instruments need to be identified by selecting existing instruments or
developing new instruments. CARS can help with this section unless otherwise indicated. Learn about
selecting/designing instruments and see completed examples of this section in the Online Resources for
Selecting/Designing Instruments.
Map of Outcomes to Instruments
Identify each learning outcome and the specific measures that will be used to assess the outcome. You may want to
utilize a table to help illustrate the connections. Attach instruments in the appendices. If changes were made to an
instrument, provide an appendix charting the items that have changed and the rationale.
Objective: Participants will be able to identify the specific requirements they need to complete to graduate.
Instrument: Academic Requirements Knowledge Scale
Number of Items: 7
Scoring: All items are multiple choice. A total score is calculated by summing the number of correct responses.
Reliability: In previous years internal consistency was estimated at approximately .80.
Validity Evidence: Prior research has indicated that there is a relationship between total scores and graduation (see
McCleve & Hathcoat, 2011).
See Appendix A for a list of actual items.
Description of Instruments
Include a description of the instruments selected or designed, including the name of instruments; what they are
measuring; reliability and validity scores (if known); scoring instructions; and the number of items. You may want to
utilize a table to help provide this information.
See above table.
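Because every ARK item is scored dichotomously (correct/incorrect), the internal consistency estimate of approximately .80 reported above would typically be computed with KR-20, which is equivalent to Cronbach’s alpha for 0/1 items. The report does not state which software or formula was used, so the sketch below is only an illustration of the calculation; the response matrix is hypothetical, not the actual participant data.

import numpy as np

def kr20(responses):
    """KR-20 internal consistency for dichotomously scored (0/1) items.
    responses: 2-D array with rows = examinees and columns = items.
    Equivalent to Cronbach's alpha when every item is scored 0 or 1."""
    responses = np.asarray(responses, dtype=float)
    n_items = responses.shape[1]
    p = responses.mean(axis=0)            # proportion answering each item correctly
    total_scores = responses.sum(axis=1)  # ARK total = number of correct responses
    var_total = total_scores.var(ddof=1)  # variance of total scores
    return (n_items / (n_items - 1)) * (1 - (p * (1 - p)).sum() / var_total)

# Hypothetical 0/1 response matrix for 10 examinees on the 7 ARK items
# (illustrative only; not the actual program data).
demo = np.array([
    [1, 1, 1, 1, 0, 1, 1],
    [1, 1, 1, 0, 1, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
    [1, 0, 1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0, 0, 0],
    [1, 0, 0, 0, 0, 1, 0],
    [0, 0, 1, 0, 0, 0, 0],
    [0, 0, 0, 1, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 1, 1],
])
print(f"KR-20 = {kr20(demo):.2f}")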
Additional Information to Collect
Identify information to collect that will help determine if the program affects groups differently (e.g. gender,
students’ interest in participating); CARS can help with this. Identify information to collect that may be of interest
to program administrators (e.g. how students learned about the program); members of the SAUP Assessment
Advisory Council can help with this, because it does not address the assessment of learning outcomes but may help
with other aspects of program evaluation. With any additional information, identify the purpose for collection.
No additional information is collected.
Location of Information about Selecting/Designing Instrumentation
It is strongly encouraged that departments document the process for examining, selecting, and designing
instruments, including their pros/cons, reliability and validity scores, and stakeholders involved. In this section,
identify the specific location (e.g. department server, physical location) where this information is stored.
In Spring 2011 a review of existing instruments was conducted. A file containing a critical review of existing
instruments is located at the following location: N:\AA\PROGRAM\GRADUATION\INSTRUMENTS
EXAMINING IMPLEMENTATION FIDELITY
Implementation fidelity refers to the alignment of the planned program and the implemented program. Therefore,
this section documents the program that was actually delivered. Learn about examining implementation fidelity and
see completed examples of this section in the Online Resources for Examining Implementation Fidelity.
Process to Examine Implementation Fidelity
Describe the process used to examine implementation fidelity (e.g. who conducted the study; when, where, how).
Two graduate students were recruited and trained in order to conduct the implementation fidelity assessment. These
students observed the workshop and completed the checklist as the program was actually implemented.
Measurement and Results of Implementation Fidelity
For each learning outcome and corresponding program component, document the following aspects of
implementation fidelity: adherence, quality, exposure, and responsiveness. You may want to utilize a table to
illustrate these aspects. Provide a summary of the implementation fidelity results found and potential impacts on
the outcomes assessment results.
Aggregated results of the implementation fidelity assessment (i.e. across two trained graduate students) are reported
in the Table below:
Objective: Participants will be able to identify the specific requirements they need to complete to graduate.

Program Component: Consultation Experience
Duration: Planned 30 min; Actual 30 min
Responsiveness (1 = Low/unengaged, 3 = Medium, 5 = High/engaged): 4
Specific Features (Adherence: Yes/No; Quality: 1 = Low/confusing, 3 = Medium, 5 = High/clear):
- Each student reviews what academic work they have actually completed. Adherence: Yes; Quality: 4
- Each student identifies what academic work needs to be completed in order to graduate. Adherence: Yes; Quality: 1
Comments: Each student had their transcript, which was thoroughly reviewed. However, the coordinators didn’t have
the academic requirements that were needed for many program participants. A lot of time was spent searching the
website in order to identify requirements for graduation for specific academic programs.
- 000 credit on transcript intended for major, go to major department head (RKM5). Adherence: Yes; Quality: 2.
  Comment: I didn’t hear the presenters say this, but it was on the screen.
- The minimum GPA required to graduate is 2.0 (ARK8). Adherence: Yes; Quality: 2. Comment: Mentioned quickly and
  wasn’t clear.
- 000 credit on transcript intended for major, go to major department head (RKM5). Adherence: Yes; Quality: 2.
  Comment: Shown on screen but not explained verbally.
Program Component: Motivational Speaker
Duration: Planned 20 min; Actual 20 min
Responsiveness (1 = Low/unengaged, 3 = Medium, 5 = High/engaged): 4
Specific Features (Adherence: Yes/No; Quality: 1 = Low/confusing, 3 = Medium, 5 = High/clear):
- Importance of Graduation. Adherence: Yes; Quality: 3
- Persistence in Difficult Times. Adherence: Yes; Quality: 5
- Time Management Skills. Adherence: Yes; Quality: 5
- Advisor Consultation Resources. Adherence: Yes; Quality: 4
Results of the implementation fidelity assessment for the program objective indicate that important aspects of the
program failed to be implemented as intended. Specifically, the results indicate that by the end of the program
coordinators were unable to assist many students with the identification of graduation requirements. In addition to
this, various general degree requirements were either not mentioned or not mentioned clearly. This implies that
program coordinators may need additional training prior to implementing the program in subsequent years.
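The report notes that the checklist results were aggregated across the two trained graduate student observers but does not specify the aggregation rule. The sketch below shows one plausible approach, assuming that adherence is credited only when both observers marked a feature as present and that the two quality ratings are averaged and rounded; the feature labels and per-observer ratings are illustrative placeholders, not the observers’ actual records.

# One plausible way to aggregate two observers' fidelity checklists (illustrative only).
# Assumes adherence counts only if both observers marked the feature as present and
# quality is the rounded mean of the two 1-5 ratings.
checklists = {
    "Each student reviews completed academic work": [
        {"adherence": True, "quality": 4},   # observer 1
        {"adherence": True, "quality": 4},   # observer 2
    ],
    "Each student identifies remaining academic work": [
        {"adherence": True, "quality": 1},
        {"adherence": True, "quality": 2},
    ],
}

for feature, (obs1, obs2) in checklists.items():
    adherence = "Yes" if obs1["adherence"] and obs2["adherence"] else "No"
    quality = round((obs1["quality"] + obs2["quality"]) / 2)
    print(f"{feature}: adherence = {adherence}, quality = {quality}")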
COLLECTING OUTCOMES INFORMATION
Collecting information refers to the actual data collection process. Learn about collecting outcomes information and
see completed examples of this section in the Online Resources for Collecting Outcomes Information.
Planned Data Collection: Timeline and Responsibility for Collecting
Describe the timeline for when and how data is collected and by whom. You may want to utilize a table to help
provide this information.
Timepoint: Pretest
When is this data to be collected? April 1
How is this data to be collected? The pre-test will be administered to students prior to implementing the program.
This data is collected in paper and pencil format.
What data will be collected? Academic Requirements Knowledge Scale
Who will collect this data? A program facilitator will collect the data.

Timepoint: Posttest
When is this data to be collected? April 1
How is this data to be collected? The post-test will be administered to students after implementing the program.
This data is collected in paper and pencil format.
What data will be collected? Academic Requirements Knowledge Scale
Who will collect this data? A program facilitator will collect the data.
Actual Data Collection
Describe the method for collecting data, including instrument administration and any training provided for
administrators; methods utilized to have students take measures (e.g. mandatory, incentives); and the number of
times data was collected in this assessment cycle. Also, describe control groups (if applicable) and identify how
information was collected from these students. Describe any differences between the original data collection plan
and what actually occurred. You may want to utilize a table to help provide this information.
Data collection for both the pre-test and post-test occurred as intended. All program participants volunteered to
participate in the study (n = 10).
ANALYZING DATA, REPORTING RESULTS, AND MAINTAINING INFORMATION
In order to determine the effectiveness of a program, data analysis is necessary. CARS can help with this section
unless otherwise indicated. Learn about analyzing data, reporting results, and maintaining information; see
completed examples of this section in the Online Resources for Analyzing Data, Reporting Results, and Maintaining
Information.
Data Analysis and Statistical Results
Thoroughly describe data analysis and statistical results by outcome. Identify the techniques used to analyze the
data. Typical quantitative analysis would include descriptive statistics, results of practical and significance tests,
and tables/graphics that describe the findings. Typical qualitative analysis would include number of responses,
emergent themes, and tables/graphics that describe the findings.
Objective 1 Results: Academic Requirements Knowledge Scale. Students’ average performance on the seven-item
Academic Requirements Knowledge Scale (ARK) can be found in Table 1. Scores on the ARK could range from 0 to 7.
At pretest, students answered an average of 2.00 items correctly (28.57%). A standard deviation (SD) of 0.82 indicates
that approximately 95% of students answered between 0.36 and 3.64 items correctly at pretest. Scores increased at
posttest to an average of 2.50 items answered correctly (35.71%), with an SD of 0.71 indicating that approximately
95% of students answered between 1.08 and 3.92 items correctly. A paired-samples t-test was conducted to
determine whether the change from pretest to posttest was statistically significant. A t-value of 1.46 indicates that
this difference failed to be statistically significant (p = .18). Additionally, the increase represents a small-to-moderate
effect, as indicated by Cohen’s d (Cohen, 1988). This effect size was estimated at .46, which indicates that at posttest
students increased by nearly half a standard deviation. Nevertheless, an average increase of 0.50 points fails to be
practically significant.
Table 1. Overall Descriptive and Inferential Statistics for the Academic Requirements Knowledge Scale

            % of items answered correctly    Mean    SD      Min    Max    t       d
Pretest     28.57%                           2.00    0.82    1      7      1.46    .46
Posttest    35.71%                           2.50    0.71    1      7

Note. n = 10. Scores could range from 0 to 7. The t-test was not statistically significant (p = .18). To calculate d, the
pretest mean was subtracted from the posttest mean and the result was divided by the SD of the difference scores
(SD = 1.08).
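As an illustration of how these statistics could be reproduced, the sketch below runs a paired-samples t-test and computes Cohen’s d as the mean difference divided by the SD of the difference scores, as described in the note to Table 1. The scores are hypothetical values constructed so that the summary statistics approximately match those reported above; they are not the actual participant data, which are stored at the location listed under Data Location. The code assumes NumPy and SciPy are available.

import numpy as np
from scipy import stats

# Hypothetical matched ARK total scores (0-7) for 10 participants, constructed to
# approximately reproduce the reported summary statistics; NOT the actual data.
pretest  = np.array([1, 1, 1, 2, 2, 2, 2, 3, 3, 3])
posttest = np.array([2, 3, 3, 3, 3, 2, 1, 3, 3, 2])

diff = posttest - pretest
t_stat, p_value = stats.ttest_rel(posttest, pretest)  # paired-samples (dependent) t-test
cohens_d = diff.mean() / diff.std(ddof=1)             # d = mean difference / SD of differences

print(f"Pretest:  M = {pretest.mean():.2f}, SD = {pretest.std(ddof=1):.2f}")
print(f"Posttest: M = {posttest.mean():.2f}, SD = {posttest.std(ddof=1):.2f}")
print(f"t({len(diff) - 1}) = {t_stat:.2f}, p = {p_value:.2f}, d = {cohens_d:.2f}")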
Interpretation of Results
Interpret the data analysis and statistical results in context of the program and previous assessment results. The
interpretation of results is primarily the responsibility of program coordinators.
Objective 1: Academic Requirements Knowledge Conclusions. Overall, after completing the program the average
score on the post-test increased by 0.50 points. The failure to find a statistically significant increase at post-test
indicates that an observed difference of 0.50 points would occur frequently even if students were unchanged after
the program had been implemented; in other words, results like these occur frequently even when students fail to
change after the program. Though the effect size, as indicated by Cohen’s d, was small-to-moderate in size, this value
fails to be practically significant: to demonstrate a practical impact, students should be answering more items
correctly after completing the program. These findings should be interpreted in conjunction with the implementation
fidelity assessment data. The failure to find statistically significant increases in post-test scores is likely the result of
the program not being implemented as intended. As previously addressed, additional training of program
coordinators is necessary prior to implementing the program in subsequent years.
Data Location
Identify the specific location (e.g. department server, physical location) where the data is stored.
All data is stored at the following location: N:\AA\PROGRAM\GRADUATION\ANALYSIS\DATA
USING RESULTS FOR PROGRAM-RELATED DECISIONS
It is critical to determine how to utilize the information obtained through data analysis, statistical results, and
interpretation of the results. Prior to completing this section, a meeting with assessment stakeholders (e.g. CARS,
program coordinator) is strongly encouraged to inform any program -related decisions. This section should be
completed by the program’s Department. Learn about using results and see completed examples of this section in
the Online Resources for Using Results for Program-Related Decisions.
Using Results for Program Improvement or Continuation Decisions
Restate the learning outcome and identify if the outcome was met or not. The program may not need to be changed
at this point. If there are plans to change the program, describe the plan for reworking the program. If this program
has been assessed in the past, put these plans in context, including a recommendation to discontinue the program.
Objective 1. Outcome: After completing the program, participants will be able to identify general graduation
requirements at JMU.
Implementation Fidelity: Low
Outcome Met: No
Decision: Implement program as intended and collect additional data.
No decisions can be made about the quality of the program, and changes to the program are not justified, until the
program is implemented as intended.
Using Results to Make Changes to the Assessment Process
If applicable, describe a plan for improving aspects of the assessment cycle (e.g. revising instruments, changing data
collection timeline).
No changes are necessary to the assessment process, which will be maintained in subsequent years. The only
modification currently needed concerns program delivery rather than the assessment process: additional training will
be provided to the program facilitators who conduct the consultation experience.
Using Results for Program Administration
Describe other ways that the assessment results can be utilized in addition to improving student learning outcomes.
For example, describe how this information will be utilized to obtain additional financial or human resources, help
market the program to students, or recruit facilitators.
The data cannot be used for program administration purposes since the results are ambiguous.
ADDITIONAL FINAL THOUGHTS
Please feel free to add any other information that is not already included in this document.
[complete here…]
Appendix A
Academic Requirements Knowledge Scale
1. What is the minimum number of credit hours all students must complete in order to receive a degree from JMU?
a. 120*
b. 125
c. 130
d. 135
2. Of the total number of credits needed to graduate from JMU, what is the minimum number of credits that must be
granted by any 4-year institution?
a. 30
b. 60*
c. 90
d. 120
3. Jimmy arrives from Blue Ridge Community College, a 2-year institution, with 75 credits. After completing the number of
credit hours needed from a 4-year institution, what is the minimum number of credit hours Jimmy will have when he
graduates from JMU?
a. 90
b. 120
c. 135*
d. 155
4. Dolly transfers from George Mason University, a 4-year institution, with 100 credits. What is the minimum number of
credit hours she needs to take at JMU to graduate from JMU?
a. 20
b. 30*
c. 60
d. 90
5. Upon arriving at JMU for the first semester, which test below should all students take as soon as possible?
a. Honor Code*
b. M-REST
c. IRB Approval
d. Math Placement
6. Jimmy’s AP scores will only be evaluated by JMU if he:
a. Provides his transcript from his previous institution
b. Requests that College Board submits his scores to JMU*
c. Personally mails a copy of his scores to JMU
7. What is the minimum cumulative grade point average all students must have in order to graduate from JMU?
a. 2.0*
b. 2.25
c. 2.5
d. 2.75