SFASU Report of Best Practices

Richard and Lucille DeWitt School of Nursing
Stephen F. Austin State University
Nursing Education Performance Initiative
Recognized Best Practice – Dissemination Plan
September 2009
I.
Summary Information
In the space provided below, provide a summary of the process for establishing the
best practice. With that summary, include totals for start-up and on-going costs.
Summary: Active learning has been demonstrated as an effective teaching strategy
to increase the retention and recall of information. Active learning involves activities such as role playing, group assignments, collaborative testing, and individual testing. In active learning, the student cognitively manipulates information, as opposed to passive learning, in which students simply read and highlight material. Research in education, psychology, and nursing (Cranney, Ahn, McKinnon, & Watts, 2009; Karpicke & Roediger, 2007; Roediger & Karpicke, 2006) has demonstrated that testing is an effective active learning strategy that increases the retention of information. According to Cranney, Ahn, McKinnon,
and Watts (2009), collaborative testing was the most effective strategy in increasing
memory recall of information. Individual testing was also found to be effective. The
term “testing effect” is used in educational and psychological research to denote the finding that testing is more effective than simply rereading and highlighting material in increasing memory recall of information. The theoretical frameworks that form the foundation for the testing effect are Transfer Appropriate Processing (TAP) and retrieval-induced facilitation. The best practices described in this report
support the use of active learning, specifically collaborative testing and resource
cards, to improve memory recall as measured by test scores. In fall 2007, faculty in
the Fundamentals of Nursing course implemented collaborative testing as a teaching
strategy. During the first Fundamentals of Nursing test, students were divided into
groups of approximately five students each. The first 25 questions of a 50-item test
were collaboratively answered by the team; the second set of questions was
answered individually by each student. In the four semesters prior to implementation, there were 37 failures out of 239 students, for a 15.5% failure rate in the course. After implementation, there were 11 failures out of 236 students, for a failure rate of 4.6%.

In spring 2008, in the Medical-Surgical course, the faculty implemented resource cards. Students were allowed to bring a 5” x 7” card to the first three tests in Medical-Surgical nursing. The students were allowed to place anything on the card which they believed would be helpful during the test. In the three semesters prior to implementation of the resource cards, the attrition rate for the Medical-Surgical course was 9.5% (n=147). In the three semesters after implementation of the resource cards, attrition was 3.9% (n=151).

Total Start-up Costs: -0-
Total On-going Costs: -0-
For more information about this best practice, contact:
Contact Person: Dr. Glenda C. Walker, DSN RN
Institution: Richard and Lucille DeWitt School of Nursing at SFASU
Email Address: gwalker@sfasu.edu
Telephone Number: (936) 468-3604
Title: Director

II.
Background
Describe the institution’s student population, including class size, student
demographics, admission criteria, and other information that will provide context for
the best practice. Describe previous student success practices.
Fall 2006 Student Population/Demographics: SFASU Freshmen with a Nursing
Major
Stephen F. Austin State University (SFASU) is a four-year public university which is
located in rural East Texas. The SON admits 60 students per semester (fall and
spring). In fall 2006, the freshman entering class at SFASU, who declared nursing
as a major, had the following demographics:
Of the 213 females, 26% were African American; 1% Asian; 9% Hispanic; 64%
White.
Of the 16 males, 19% were African American; 12% Asian; 6% Hispanic; 63% White.
The average ACT score for females was 9. The average SAT for females without
writing was 786. For males, the average ACT was 6 and the average SAT without
writing was 683.
Fall 2008 Student Population/Demographics: Students Admitted into SON
Program
Of the group of students admitted into the nursing program in fall 2008, the
demographics, as identified by the Higher Education Coordinating Board (HECB)
Intervention Grant, “A Research Model for Identifying and Intervening with At-Risk
Nursing Students”, included the following:
Of the 108 students in the HECB data file for SFASU, the average age was 21.9
years, average NET reading, 65.7; average NET math, 78.5; average NET
composite, 71.7. 6% had NET reading comprehension scores in the 30’s; 6% in the
40’s; 13% in the 50’s; 29% in the 60’s; 34% in the 70’s; and, 11% in the 80’s. In
regards to NET math, 1% had a NET math in the 30’s; 6% in the 40’s; 6% in the 50’s;
13% in the 60’s; 19% in the 70’s; 22% in the 80’s; and 34% in the 90’s. For NET
composite, 5% were in the 40’s; 10% in the 50’s; 22% in the 60’s; 37% in the 70’s;
22% in the 80’s; and, 4% in the 90’s.
Of this group, 89% were female; 11% were male; 93% were single; 2% divorced;
and, 5% married. The average number of hours worked per week was 6; 90%
identified English as their first language; 24% repeated Anatomy and Physiology I;
13% repeated A&P II; 6% repeated both A&P I and II; average Grade Point Average
(GPA), 3.18; and, 22% took a developmental course.
The HECB innovation study, “A Research Model for Identifying and Intervening with
At-Risk Nursing Students“, used the following variables to identify at-risk students:
NET comprehension score below 65; NET reading score below 65; A&P I & II GPA
below 2.75; working more than 16 hours per week; limited peer or family support.
Using these criteria, 74% of the students admitted into the nursing program during
the fall and spring were identified at-risk.
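The at-risk criteria above amount to a simple rule-based screen. The following is an illustrative sketch only; the field names and function name are hypothetical and are not taken from the HECB grant's actual data file:

```python
# Illustrative sketch of the at-risk screen described above.
# Field names are hypothetical, not from the actual HECB data file.
def is_at_risk(student: dict) -> bool:
    """Flag a student as at-risk per the report's criteria."""
    return any([
        student["net_comprehension"] < 65,   # NET comprehension below 65
        student["net_reading"] < 65,         # NET reading below 65
        student["anp_gpa"] < 2.75,           # A&P I & II GPA below 2.75
        student["work_hours"] > 16,          # works more than 16 hours/week
        not student["support"],              # limited peer or family support
    ])

example = {"net_comprehension": 71, "net_reading": 66, "anp_gpa": 3.1,
           "work_hours": 20, "support": True}
print(is_at_risk(example))  # flagged because of work hours over 16
```

Because the criteria are joined with a logical "any", a student meeting even one criterion is flagged, which is consistent with the 74% at-risk figure reported above.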
Admission Criteria
The admission criteria for the SON at SFASU include the following:
Completion of the following prerequisites: Anatomy and Physiology I and II (BIO
238, BIO 239), Introduction to Chemistry (CHE 111), Microbiology (BIO 308 or 309),
Pathophysiology (NUR 304), Child Development (PSY 376 or HMS 336, HMS
236/HMS 236L), Nutrition (HMS 239 or 339), Introduction to Nursing (NUR 305),
Computer Science (CSC 101, 102, 121, or 201), Culture class (SOC 139 or ANT
231), English (ENG 131 and ENG 132), Prescribed Elective I (ART 280, 281, 282;
MUS 140; MHL 245; THR 161, 370; DAN 140, 341), Prescribed Elective II (ENG
200-235, 300), Prescribed Elective III (BCM 247; COM 111, 170; ENG 273; SPH
172, 272 (Sign Language); FRE 131, 132; ILA 111,112 (Ind. Language); SPA 131,
132), Introduction to Psychology (PSY 133), and Statistics (MTH 220).
Students must have a GPA of 2.5 overall; a 2.75 in the prerequisite sciences (A&P I
and II, Chemistry, Microbiology, and Pathophysiology). They must submit their NET
scores to the School of Nursing. In the past, the composite NET score was used in a
formula calculation for prioritizing applicant ranking. Based on the initial results from
the HECB grant, the SON for fall 2009 established an acceptable range for
admission on the reading comprehension score of above 60.
Previous Best Practices
One of the most significant best practices implemented by the SON, in terms of its impact upon NCLEX pass rates, has been computer-based NCLEX-style testing throughout the curriculum. In fall 2004, the SON was an alpha
testing site for the Health Education Systems Incorporated (HESI) computer based
testing software. This software utilizes an NCLEX style format and requires all
questions to provide rationale for correct answers. Two courses alpha tested the
software. By spring 2006, 85% of courses in the SON were using the HESI
computer based software or WebCT computer testing which follows NCLEX format.
As the following data indicate, once computerized NCLEX-formatted testing was implemented throughout the curriculum, NCLEX pass rates increased significantly. While various strategies also contributed to the increased NCLEX rates, the strongest correlation is with the implementation of computer-based testing.
The following indicates the increase in NCLEX pass rates after implementation of computer-based testing:

2004: 81.82%
2005: 82.76%
2006: 94.37%
2007: 97.47%
2008: 92.77%
2009: 100%

III.
Project Need
Present the problem statement and provide a rationale for the proposed best
practice. Include references to any previous educational research that supports the
best practice.
Students who enter the School of Nursing (SON) at SFASU are admitted with low
reading comprehension scores on the nursing entrance test. In the past, we have
used the NET test; we are currently using the TEAS and are trying to find the critical
cutoff point. In fact, the SON has admitted students with reading comprehension
scores on the NET in the 30’s, 40’s and 50’s. Even when students have high reading
comprehension scores, faculty in the SON have found that students have difficulty
reading, integrating, synthesizing, and applying the information necessary to be
effective in their courses. Students have been traditionally educated to regurgitate
facts, thereby focusing on knowledge-based competencies. Nursing curricula
require students to integrate knowledge and be able to apply that knowledge to
clinical situations. In order to address students’ difficulty with application based
questions and tests, the SON Testing Committee made recommendations about the
percentage of application questions which should be given at each level. The
recommendations were as follows:
Beginning of 1st semester: 25% application level or above exam questions
End of 1st semester: 75% application level or above exam questions
Beginning of 2nd semester: 50% application level or above exam questions
End of 2nd semester: 90% application level or above exam questions
Beginning of 3rd and 4th semesters: 75% application level or above exam
questions
End of 3rd and 4th semesters: 90% application level or above exam questions
As the above guidelines indicate, at the beginning of each level the recommended
percentage of application questions decreases since the test covers all new content.
In addition, students need the opportunity to familiarize themselves with the teaching
and testing styles of new instructors. Furthermore, students need the opportunity to
develop a certain level of confidence in new content areas without the negative
consequence of initial poor test scores.
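The committee's blueprint lends itself to a simple lookup check. The sketch below is purely illustrative (the dictionary layout and function name are assumptions, not part of the SON's actual process); the percentages are those recommended above:

```python
# Recommended minimum share of application-level (or higher) exam questions,
# keyed by (semester level, point in semester); values from the report.
BLUEPRINT = {
    ("1st", "beginning"): 0.25,
    ("1st", "end"): 0.75,
    ("2nd", "beginning"): 0.50,
    ("2nd", "end"): 0.90,
    ("3rd-4th", "beginning"): 0.75,
    ("3rd-4th", "end"): 0.90,
}

def meets_blueprint(level, point, application_items, total_items):
    """True if the exam's share of application-level items meets the target."""
    return application_items / total_items >= BLUEPRINT[(level, point)]

# e.g., a 50-item early second-semester exam with 27 application-level items
print(meets_blueprint("2nd", "beginning", 27, 50))
```

A course coordinator could run such a check against each drafted exam; 27 of 50 items (54%) meets the 50% target for the beginning of the second semester.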
Historically, students entering first and second semester have higher failure rates in
those courses because of a lack of skill in taking application based tests. In addition,
students have difficulty reading and identifying the most critical information
necessary for application questions. The SON implemented two new teaching strategies to address these issues in the Fundamentals of Nursing course (1st
semester) and the Medical-Surgical course (2nd semester). The first intervention to
be used in this first semester course was collaborative testing. Resource cards were
used for the first three tests in the second semester Medical-Surgical course.
Rationale and References
Students entering nursing programs have been exposed to years of the educational
process which focuses on the acquisition of facts with little emphasis on the
application of knowledge/critical thinking. In fact, students often memorize facts only
for the short term benefit of passing a test and then immediately forget the
information. Nursing education has not embraced learning theories from other
disciplines which could provide guidance in regards to helping students obtain and
apply knowledge. Two of these disciplines are education and psychology.
According to the cognitive learning literature, repeated testing leads to greater
retention of information than does repeated reading of the same information. The
term, “testing effect”, was coined to designate this finding (Glover, 1989). Research
has found that repeated testing is a more effective strategy for increasing retention
than simple rereading of material (Glover, 1989; Karpicke & Roediger, 2007).
Several theories form the foundation for the “testing effect” construct. Transfer
Appropriate Processing (TAP) and retrieval induced facilitation are core to research
findings related to the “testing effect”. According to Cranney, Ahn, McKinnon, Morris, and
Watts (2009), the level of retrieval effect can impact upon memory retention, such
that elaboration during the retrieval process can increase the strength of a memory
trace and increase the number of retrieval routes (p. 920). Therefore, a test is more
effective than rereading because it involves the active manipulation of information.
One article which is extremely germane to the evaluation of collaborative testing as a
teaching strategy is the study by Cranney, Ahn, McKinnon, and Watts (2009). This
article describes two studies which evaluated the effectiveness of collaborative
testing on memory recall. The two studies used psychology students as subjects at
the University of New South Wales. In the first study, 75 students viewed a video
called Discovering Psychology: The Behaving Brain. After viewing the video,
subjects were randomly assigned to one of four independent variable interventions:
group collaborative quiz with 4 to 5 on a team; individual quiz; restudy where
students read through a transcript of the video for eight minutes and were instructed
to highlight or underline all important information; and, a no-activity condition group
that did not re-engage with the video information. One-way ANOVA analysis for
both old and new test items demonstrated a significant difference in test performance
between groups. The collaborative testing group performed better than all groups on
both old and new items. The individual testing group performed better than the no-activity group; however, there was no difference between the individual quiz and restudy conditions. Consequently, the hypothesis related to the testing effect, i.e., that
testing produced greater recall than restudy was not confirmed. The authors
identified several methodological factors which could have contributed to the lack of
significance related to individual testing vs. restudy, such as number of subjects in
each treatment group, number of old items, and level of difficulty of items on the final
quiz scores. Therefore, a second, more controlled, study was implemented to
examine these possible confounding variables.
Two hundred eighteen first-year psychology students enrolled in psychology classes at the University of New South Wales participated in this study. The subjects
again viewed a video. Immediately after the video, subjects received their specific
review task depending upon which independent intervention group they were
randomly assigned: individual quiz, collaborative quiz, restudy, or no-activity. One
week later, a memory test (dependent variable) was given. It was hypothesized that
the quiz group (both individual and collaborative) would perform better than both the
restudy and no-activity group on the target material. The quiz group (individual and
collaborative) performed better than both the no-activity and restudy groups.
However, there was no significant main or interaction effect for quiz type, meaning the collaborative group did not perform better than the individual group.
In a simulated learning experience, Roediger and Karpicke (2006) found that repeated testing was more effective for long-term retention than
restudying information. In another study by Butler and Roediger (2007), repeated
testing up to one month after the initial material was presented was more effective in
retaining the information than rereading the information.
To summarize the above findings, the studies yielded strong evidence for the effect
of testing upon memory recall in a classroom setting. In the Cranney, Ahn,
McKinnon, and Watts (2009) study, collaborative testing produced better performance than individual testing or restudy of the material. However, when the sample size was
increased and possible confounding variables were controlled, testing regardless of
type produced better recall.
These findings support the retrieval theory of cognitive learning. This theory proposes that repeated elaborative retrieval of material (testing) produces better retention. The more effortful or elaborative the retrieval processes engaged during the review phase, the better the information will be remembered at a later time. A test administered during a review phase is a more effortful form of learning than rereading (e.g., passive rereading or highlighting of the information).
Nursing faculty consistently emphasize the importance of reading and rereading and
highlighting key information from textbooks. Many nursing faculty persist in the belief that a test is not an effective learning tool but rather a method of assessing mastery of knowledge. In addition, most faculty limit the number of tests given in a course to 3 to 5, with each test covering a large amount of new material.
However, retrieval theory and research demonstrates that this may not be the most
effective strategy to produce memory recall.
Review of Nursing Literature Related to Collaborative Testing
Sandahl’s (2009) article reviews the theoretical framework and literature related to
collaborative testing in nursing. Accordingly, collaborative learning is an active
process that involves students working in groups to explore information and engage
in consensus building while practicing assertive communication skills to convey
thoughts and feelings. Consequently, this type of learning addresses The Pew
Health Professions Commission (2000) competency related to working in
interdisciplinary teams and the Institute of Medicine (2001) competency related to
clinicians working with each other to share information and coordinate patient care.
Collaborative testing involves students working in groups to develop knowledge and
provide opportunities to practice skills in collaboration while taking a test.
The theoretical frameworks cited for collaborative testing include cognitive
development, behavioral learning, and social interdependence. Social interaction
focuses on the interpersonal interaction between two or more individuals to enhance
verbal discussion and enhance in-depth processing of information. Behavioral
learning theory supports the concepts of reward and punishment. In collaborative
groups, there is a positive incentive for students to process information and share
this information with others. According to social interdependence, students facilitate
and encourage each other toward a shared goal, i.e., a good grade on the test.
Sandahl (2009) identified nine nursing studies that investigated the effectiveness of collaborative testing. These studies found collaborative testing to be an effective tool for increasing exam scores, i.e., short-term memory recall. Its effectiveness for long-term memory recall, i.e., final grades, has not been consistently demonstrated. However, students report that the experience improves interpersonal skills and decreases test anxiety.
IV.
Strategy Description
Include a brief narrative for the best practice followed by a matrix for each of the four
major stages of its development.
Students who enter the SON at SFASU are admitted with low reading
comprehension scores on the Nursing Entrance Test (NET). In fact, the SON has
admitted students with reading comprehension scores on the NET in the 30’s, 40’s
and 50’s. Even when students have high reading comprehension scores, faculty in
the SON have found that students have difficulty reading, integrating, synthesizing,
and applying the information necessary to be effective in their courses. Students
have been traditionally educated to regurgitate facts, thereby focusing on knowledge-based competencies. Nursing curricula require that students integrate knowledge and be able to apply it to clinical situations. As noted
previously, the SON Testing Committee made recommendations about the
percentage of application questions which should be given at each level.
Historically, students entering first and second semester have higher failure rates in
those courses because of a lack of skill in taking application based tests. In addition,
students have difficulty reading and identifying the most critical information
necessary for application questions. The SON implemented two new teaching
strategies to address these issues in the Fundamentals of Nursing course (first
semester) and the Medical-Surgical Nursing course (second semester).
Best Practice No. 1: First Semester Fundamentals of Nursing (FON) Course
Students entering first semester in FON also face the same issues related to reading
comprehension and ability to successfully navigate a critical thinking application test.
When students take their first test in FON, the use of application questions, with
which they are unfamiliar, usually results in low test scores. This places the students
in a position of catch up for the remaining semester and increases their stress level.
In FON, faculty implemented a different teaching strategy to address the issue. In
this course, faculty allowed students to take their first test in collaborative teams. In
the first test, 25 questions of a 50-item test were collaboratively answered by the
team. The second 25 questions were answered independently by each student.
This strategy allowed the students to collaborate and learn from each other and
began the socialization process towards application based questions. In addition,
the collaborative approach decreased the stress level on students for the first test in
FON. Test scores for the first test improved and allowed the students the time to
adjust to application questions without having their first score condemn them to a
catch up approach. This led to a decrease in students’ anxiety and resulted in a
decrease in the failure rate in the FON course. The collaborative testing strategy
was implemented fall 2007. In the four semesters prior to implementation, there
were 37 failures out of 239 students for a 15.5% failure rate for the course. After
implementation, there were 11 failures out of 236 students, for a failure rate of 4.6%.
Best Practice No. 2: Second Semester Medical-Surgical Nursing (MSN) Course
In spring 2008 in the MSN course, the faculty developed an educational strategy to
address the same issues. Faculty in the MSN course allowed students to develop
resource cards. The resource cards were approximately 5” x 7”. These cards could
be taken into the first three tests in the MSN course. The cards required students to
read the textbooks and review lecture content in order to identify critical information.
This information was then recorded on the resource card. In the process of doing
this, the students were forced to read and critically analyze the information. While
the students believed the information on the cards facilitated their test-taking skills, it
was truly the reading and critical analysis of information that was the driving force in
increasing grades. Reflective feedback by the students regarding the effectiveness
of this teaching strategy generally indicated that the resource cards were not used
when taking the test, since they knew the information. The students also perceived
the resource cards as a security blanket which they quickly identified as not
necessary. By the time the resource cards were no longer used for tests, the
students’ self-confidence and their sense of self-efficacy had increased. Pre-post
analysis of grades indicated that this teaching strategy constituted a best practice in
this course. In the three semesters prior to implementation of the resource cards,
the attrition rate for the MSN course was 9.5% (n=147). In the three semesters after
implementation of the resource cards, the attrition rate was 3.9% (n=151).
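The pre/post comparisons above are simple proportions. As a supplementary sketch, not part of the report's original analysis, the Fundamentals of Nursing failure counts can be checked with a standard two-proportion z-test using only the Python standard library; the function name is an assumption, and the counts are those stated above:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z-test; returns (z, two-sided p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# FON failures: 37 of 239 students before collaborative testing, 11 of 236 after
z, p = two_proportion_z(37, 239, 11, 236)
print(f"z = {z:.2f}, p = {p:.5f}")
```

Under this test, the drop from a 15.5% to a 4.6% failure rate is statistically significant, though, as with any pre/post design, cohort differences across semesters could also contribute.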
In summary, the reason first semester FON and second semester MSN courses
were chosen as critical points for intervention was related to the following facts: 1)
FON first semester and MSN second semester courses are foundational courses in
which students learn the critical thinking necessary to be successful in the nursing
program; 2) students who successfully complete first semester FON and second
semester MSN have an extremely high probability of completing the nursing
program. The two educational strategies described involve active learning and the
“testing effect” factor. Collaborative testing and resource cards are active learning
strategies. Resource cards are active learning because of the active manipulation of information involved in developing the cards. In addition, the
development of the resource cards immediately prior to the test served to reinforce
the testing effect.
A. Initiation – Describe the resources and activities needed prior to implementing the best practice.

Collaborative Testing

1. Obtain faculty support for collaborative testing
Expected Outcome(s)/Targets: Faculty will identify course(s) where collaborative testing will be used
Position Responsible: Course coordinator
Target Completion Date: Prior to beginning of semester
Cost ($): -0-

2. Identify the dependent variable, i.e., outcome criteria, for evaluating effectiveness (unit quizzes, midterm grade, final grade)
Expected Outcome(s)/Targets: Outcome variables identified
Position Responsible: Course coordinator
Target Completion Date: Prior to beginning of semester
Cost ($): -0-

3. Identify methodology for collaborative testing, i.e., first test, all tests, or combination of collaborative and individual testing
Expected Outcome(s)/Targets: Faculty consensus on methodology
Position Responsible: Course coordinator
Target Completion Date: Prior to beginning of semester
Cost ($): -0-

Resource Cards

1. Obtain faculty support for resource cards
Expected Outcome(s)/Targets: Faculty will identify course(s) where resource cards will be used
Position Responsible: Course coordinator
Target Completion Date: Prior to beginning of semester
Cost ($): -0-

2. Identify the dependent variable, i.e., outcome criteria, for evaluating effectiveness (unit quizzes, midterm grade, final grade)
Expected Outcome(s)/Targets: Outcome variables identified
Position Responsible: Course coordinator
Target Completion Date: Prior to beginning of semester
Cost ($): -0-

3. Identify methodology for resource cards, i.e., type of cards, what can be placed on cards, time allotted to use resource cards during testing
Expected Outcome(s)/Targets: Faculty consensus on methodology
Position Responsible: Course coordinator
Target Completion Date: Prior to beginning of semester
Cost ($): -0-
B. Implementation – Describe the steps needed to implement the best practice.

Collaborative Testing

1. Socialize students to philosophy and process for collaborative testing
Expected Outcome(s)/Targets: Students will identify benefits of collaborative testing
Position Responsible: Course faculty
Target Completion Date: During semester
Cost ($): -0-

2. Students will be placed in collaborative teams, either by faculty or by self-selection, as per identified methodology
Expected Outcome(s)/Targets: Teams will be identified which will facilitate development of collaborative and interpersonal skills
Position Responsible: Course faculty
Target Completion Date: During semester
Cost ($): -0-

3. Students will take collaborative tests as per identified methodology
Expected Outcome(s)/Targets: Test scores will demonstrate increased recall of information; collaborative and interpersonal skills will be increased, and anxiety decreased
Position Responsible: Course faculty
Target Completion Date: During semester
Cost ($): -0-

Resource Cards

1. Socialize students to philosophy and process for resource cards
Expected Outcome(s)/Targets: Students will identify benefits of resource cards
Position Responsible: Course faculty
Target Completion Date: During semester
Cost ($): -0-

2. Students will read test material, identify critical information, and write it on the resource card
Expected Outcome(s)/Targets: Students will develop ability to identify critical information from assigned readings
Position Responsible: Course faculty
Target Completion Date: During semester
Cost ($): -0-

3. Students will use resource cards while taking assigned tests
Expected Outcome(s)/Targets: Increased test scores, ability to critically assess assigned material, increased feeling of confidence, decreased anxiety
Position Responsible: Course faculty
Target Completion Date: During semester
Cost ($): -0-
C. Evaluation – Describe the activities needed to evaluate the best practice.

Collaborative Testing

1. Obtain baseline data for the dependent variable used to evaluate effectiveness of the intervention, i.e., past semester scores on unit quizzes, midterm exams, final exams, and failure of course
Expected Outcome(s)/Targets: Significant difference pre-/post-implementation of collaborative testing on dependent variables
Position Responsible: Course coordinator
Target Completion Date: End of semester
Cost ($): -0-

2. Obtain qualitative data from students regarding the effectiveness of collaborative testing upon other significant variables, such as anxiety and self-efficacy
Expected Outcome(s)/Targets: Students will evaluate collaborative testing as facilitating a positive learning environment
Position Responsible: Course coordinator
Target Completion Date: End of semester
Cost ($): -0-

Resource Cards

1. Obtain baseline data for the dependent variable used to evaluate effectiveness of the intervention, i.e., past semester scores on unit quizzes, midterm exams, final exams, and failure of course
Expected Outcome(s)/Targets: Significant difference pre-/post-implementation of resource cards on dependent variables
Position Responsible: Course coordinator
Target Completion Date: End of semester
Cost ($): -0-

2. Obtain qualitative data from students regarding the effectiveness of resource cards upon other significant variables, such as anxiety and self-efficacy
Expected Outcome(s)/Targets: Students will evaluate resource cards as facilitating a positive learning environment
Position Responsible: Course coordinator
Target Completion Date: End of semester
Cost ($): -0-
D. Refinement and Modification – Describe the activities to refine and modify the best practice once initially introduced.

Collaborative Testing

1. Review quantitative and qualitative data regarding the effectiveness of collaborative testing to ensure that course objectives are met
Expected Outcome(s)/Targets: Determination of effectiveness of intervention based upon qualitative and quantitative data
Position Responsible: Course faculty
Target Completion Date: End of semester
Cost ($): -0-

2. Identify problems, such as too many collaborative tests, which could lead to grade inflation and to students passing who have not mastered the material
Expected Outcome(s)/Targets: Problems will be identified
Position Responsible: Course faculty
Target Completion Date: End of semester
Cost ($): -0-

3. Identify external standardized methods of assessment which would ensure mastery of material to prevent progression of students who have not mastered the material, i.e., use of course HESIs as a quality control measure
Expected Outcome(s)/Targets: Measures will be put in place to ensure mastery of material related to progression
Position Responsible: Course faculty
Target Completion Date: End of semester
Cost ($): -0-

4. Modify methodology to address problems
Expected Outcome(s)/Targets: Collaborative testing will continue as originally designed, be modified, or be discontinued based upon data analysis
Position Responsible: Course faculty
Target Completion Date: End of semester
Cost ($): -0-

Resource Cards

1. Review quantitative and qualitative data regarding the effectiveness of resource cards to ensure that course objectives are met
Expected Outcome(s)/Targets: Determination of effectiveness of intervention based upon qualitative and quantitative data
Position Responsible: Course faculty
Target Completion Date: End of semester
Cost ($): -0-

2. Identify problems, such as resource cards used for too many tests, which could lead to grade inflation and to students passing who have not mastered the material
Expected Outcome(s)/Targets: Problems will be identified
Position Responsible: Course faculty
Target Completion Date: End of semester
Cost ($): -0-

3. Identify external standardized methods of assessment which would ensure mastery of material to prevent progression of students who have not mastered the material, i.e., use of course HESIs as a quality control measure
Expected Outcome(s)/Targets: Measures will be put in place to ensure mastery of material related to progression
Position Responsible: Course faculty
Target Completion Date: End of semester
Cost ($): -0-

4. Modify methodology to address problems
Expected Outcome(s)/Targets: Resource cards will continue as originally designed, be modified, or be discontinued based upon data analysis
Position Responsible: Course faculty
Target Completion Date: End of semester
Cost ($): -0-
V.
Unanticipated Challenges and Benefits
Present these observations (and any of the institution’s possible responses) in bullet
format.
Challenges:
- Resistance of faculty to change
- Difficulty identifying the best methodology, i.e., whether all test items are taken in the collaborative team or the test is split so that one half is collaborative and one half individual
- Faculty concern about grade inflation

Benefits:
- Increased test scores and decreased failure rates
- Decreased student anxiety
- Development of students’ interpersonal and collaborative skills when working in teams
- Students motivated to read assigned material in order to develop their resource cards
- Development of students’ sense of self-efficacy
References
Butler, A. C., & Roediger, H. L., III. (2007). Testing improves long-term retention in a simulated classroom setting. European Journal of Cognitive Psychology, 19, 514-527.

Cranney, J., Ahn, M., McKinnon, R., Morris, S., & Watts, K. (2009). The testing effect, collaborative learning and retrieval-induced facilitation in a classroom setting. European Journal of Cognitive Psychology, 21(6), 919-940.

Glover, J. A. (1989). The “testing” phenomenon: Not gone but nearly forgotten. Journal of Educational Psychology, 81, 392-399.

Karpicke, J. D., & Roediger, H. L., III. (2007). Repeated retrieval during learning is the key to long-term retention. Journal of Memory and Language, 57, 151-162.

Roediger, H. L., III, & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17, 249-255.

Sandahl, S. S. (2009). Collaborative testing as a learning strategy in nursing education: A review of the literature. Nursing Education Perspectives, 30(3), 171-175.