Research Briefs
Evolving Licensure Examination: Assessing Student
Confidence and Accuracy With Next Generation NCLEX
Ashley Helvig Coombe, PhD, RN, CNE; LisaMarie Wands, PhD, RN, CHSE-A, CNE;
Shannon Stevenson, EdD, MSN, RNC-OB, RNC-MNN, CNE; and Rowena W. Elliott, PhD,
RN, CNN, AGNP-C, GS-C, CNE, LNC, ANEF, FAAN
ABSTRACT
Background: The Next Generation NCLEX (NGN) includes new item types. Little is known about nursing students’ confidence and accuracy in answering these questions. Method: A descriptive comparative study examined
prelicensure nursing students’ confidence and accuracy in
answering NGN-style items versus multiple-choice questions (MCQs) of the same content via a 12-item quiz.
Results: Fewer than one third of students (n = 194; 32.1%)
reported feeling confident in answering NGN questions.
Students' confidence levels had no relationship to their scores
on NGN items. When comparing NGN-style items with
MCQs, students' (n = 221) scores on NGN-style items were
lower for bowtie or select-all-that-apply questions but
higher for highlight table or matrix multiple-choice questions. Conclusion: Students' lack of confidence with certain
item types suggests faculty should incorporate these item
types into classroom activities or course assignments. NGN
test-taking strategies also should be incorporated and frequently reinforced throughout the curriculum. [J Nurs Educ.
2024;63(4):252-255.]
The licensure examination for nurses has evolved significantly since the early 1900s. Although the goal of determining
competence to practice as a professional nurse has remained
consistent, the different format types and number of questions on
the examination have changed over time. The newest iteration of
the examination is a response to the increasing complexity in patient care needs and identified gaps in new nurses’ ability to use
clinical judgment (CJ) to meet those needs (Kavanagh & Szweda,
2017). To assess whether entry-level nurses have the knowledge and skills necessary to integrate sound CJ while providing nursing care, the National Council of State Boards of Nursing (NCSBN) developed test items that more explicitly measure CJ. As with any major change to the licensure examination, there can be a perception that the new test items may be more challenging to answer; however, little is known about nursing students' confidence or perception of difficulty in answering these new items.

Ashley Helvig Coombe, PhD, RN, CNE, is an Associate Professor. LisaMarie Wands, PhD, RN, CHSE-A, CNE, is an Associate Professor. Shannon Stevenson, EdD, MSN, RNC-OB, RNC-MNN, CNE, is an Assistant Professor. Rowena W. Elliott, PhD, RN, CNN, AGNP-C, GS-C, CNE, LNC, ANEF, FAAN, is a Professor. All contributors are affiliated with the Nell Hodgson Woodruff School of Nursing, Emory University.

Address correspondence to Ashley Helvig Coombe, PhD, RN, CNE, Nell Hodgson Woodruff School of Nursing, Emory University, 1520 Clifton Road NE, Atlanta, GA 30322; email: ashley.coombe@emory.edu.

Disclosure: The authors have disclosed no potential conflicts of interest, financial or otherwise.

Acknowledgement: The authors thank Dr. Laura Kimble for assistance with survey item construction.

Received: June 2, 2023; Accepted: July 23, 2023

doi:10.3928/01484834-20240207-10
Background
Early nursing licensure testing was not mandatory and included essay-type questions as well as practical skills tests (Benefiel,
2011). In the 1930s, the licensure examination moved to a more
objective test. In 1942, the National League for Nursing Education
(NLNE) National Committee on Nursing Tests began to coordinate
a test pool for state boards of nursing to use (National League for
Nursing [NLN], 2022). This set of 13 tests, later reduced to six tests in 1949, was called the State Board Test Pool Examination
(SBTPE) (Benefiel, 2011). After the American Nurses Association
(ANA) started managing the SBTPE in 1955, the NLNE continued
to administer the examination (Matassarin-Jacobs, 1989), which
contained 720 questions; 600 questions were scored, and 120 questions were pilot items (Benefiel, 2011). The items on the SBTPE
evaluated principles of basic science, nursing skills, and nursing
care in various clinical situations, and the NLNE stated that “the
questions [were] designed to test the candidate’s ability to relate
and evaluate information which he or she has gained through classroom and clinical experience” (NLNE Department of Measurement and Guidance, 1952, p. 614).
The NCSBN (2024a) began overseeing the licensing examination in 1978 for better regulation. In 1981, the newly named
National Council Licensure Examination for Registered Nurses (NCLEX-RN®) changed from norm-referenced to criterion-referenced scoring and was first administered in
1982. The number of items was reduced to 480 (including 75 pilot
items), which was later reduced again to 370 (including 70 pilot
items) in 1983 (Matassarin-Jacobs, 1989). Updated formats of the
examination focused on the nursing process and were organized by
systems of decision making or locus of control (i.e., client-independent, client-dependent, and nurse/client shared), as well as areas of human functioning, such as elimination, nutrition,
and fluid and gas transport (Matassarin-Jacobs, 1989).
The NCSBN started performing regular practice analyses of new nursing graduates in 1984 to ensure the test plan reflects current practice. This practice analysis currently is conducted every 3
years (NCSBN, 2024b). Based on a job analysis from 1986, the
test plan was updated to incorporate client needs rather than including locus of control and areas of human functioning; however, the
nursing process was maintained as the focus (Matassarin-Jacobs,
1989), and the examination mainly consisted of single-response,
multiple-choice questions (MCQs; Wendt et al., 2007). Computer
adaptive testing, which adapts to the test-taker’s ability, began
in 1994, and examination takers could answer anywhere from
75 to 265 MCQs, including 15 pilot items (Wendt et al., 2007;
NCSBN, 2024a). Six alternate format items were introduced in
2003 and consisted of “(a) multiple response items, (b) hot-spot
items, (c) fill-in-the-blank, (d) chart/exhibit format, (e) audio item
format, and (f) graphic options” (Benefiel, 2011, p. 18).
For nearly 20 years, the item types on the NCLEX remained largely unchanged. However, an extensive literature review initiated by the NCSBN in 2012 examined decision making
in novice nurses and the factors influencing that process (Muntean, 2012); this started the journey toward development of the Next
Generation NCLEX® (NGN), which explicitly evaluates CJ cognitive processes. Muntean stated, “The literature reviewed made
it clear that nursing students are inadequately trained in critical
thinking and decision-making—at least decision-making found in
real life settings” (p. 20). The NCSBN (2017) practice analysis for
2013–2014 also found that CJ was an important skill needed by
nurses entering the field. The NCSBN developed a CJ measurement model (CJMM) to provide a practical method for measuring these higher-level cognitive processes. Dickison and colleagues (2016) reported that the higher-level cognitive processes needed for CJ are difficult to measure with MCQs, which heralded the creation of NGN test items.
Betts and colleagues (2019) provided nurse educators with one
of the first descriptions of the development of NGN items, how
these items measure the cognitive functions of the CJMM, and
new item types along with a template for faculty to use in item
construction. With each practice analysis and updated examination blueprint, educators are provided information to keep up with current practice expectations of entry-level nurses as they are reflected on the licensure examination. The newest update is quite robust and prompted our interest in understanding how students perceive the new test items so that we can best guide them to success. The NCSBN has been regularly distributing information
about the changes coming to NCLEX since 2017; however, it has
taken some time to introduce students to the new NGN item types,
which are in formats that neither faculty nor students have ever
seen before. Because we were unable to find literature addressing student confidence and perceived difficulty related to changes in the licensure examination over time or to the new NGN items, we set out
to: (1) explore student confidence levels in answering NGN item
types; (2) describe student accuracy in answering NGN items versus MCQs covering similar concepts; and (3) compare confidence
and accuracy for any correlative relationship.
Method
Study Design and Participants
This study used a descriptive comparative design to examine NGN confidence in prelicensure students via a survey and to
compare accuracy in answering NGN-style items versus MCQs
via a 12-item quiz. The study was conducted at a large private
university in the southeastern United States during the Summer
2022 through Spring 2023 semesters. This initiative was deemed
exempt from full review by the university’s institutional review
board.
All of the participants were prelicensure and included first-degree Bachelor of Science in Nursing (BSN) students, second-degree Master of Nursing (MN) students, and second-degree students enrolled in a Distance-based Accelerated Bachelor of Science
in Nursing (DABSN) program. Participants were introduced to
NGN either through an in-person presentation or a video; both presentations were created by faculty members with expertise in NGN
test items. The presentation included a discussion of the purpose of
the licensure examination, a description of the NCSBN CJMM, an
overview of the NGN case study, and all NGN test item types with
examples. Following the presentation, students completed the NGN
Confidence Survey and the Item-Type Quiz (ITQ) on Canvas, the
learning management system used by the school of nursing.
Instruments
NGN Confidence Survey. The NGN Confidence Survey consisted of 10 items. First, students rated their overall confidence in
correctly answering NGN items on a 4-point Likert-type scale ranging from 1 = strongly disagree to 4 = strongly agree. Subsequent
items asked students to rate how difficult they perceived each NGN
item type to answer using a 5-point Likert-type scale ranging from
1 = much harder to answer to 5 = much easier to answer. The focus
of these questions was to have students evaluate perceived difficulty
based on the format of the item itself, rather than focusing on the
content of the question. NGN item types included on the survey
were: (a) bowtie; (b) drag-and-drop cloze; (c) drag-and-drop rationale; (d) drop-down cloze; (e) drop-down rationale; (f) highlight
text; (g) matrix multiple choice; (h) matrix multiple response; and
(i) multiple response.
Item-Type Quiz. The ITQ consisted of six NGN items from the six main NGN item type categories (bowtie, drag-and-drop, drop-down, matrix, multiple response, and highlight) and six MCQs.
Content focused on fundamentals-level concepts. For each of the
NGN items, one MCQ covering the same content was included to compare
student performance between the two item types and control for
content mastery. Figure 1 provides an example of a matched content dyad question, with both assessing complications of enteral
feedings. Both the NGN Confidence Survey and the ITQ were created for this study and reviewed by expert nurse educators for face
validity (Taherdoost, 2016).
Data Analysis
Student responses were downloaded from Canvas as Excel®
spreadsheets and imported into SPSS®. Descriptive statistics included frequencies for ordinal NGN Confidence Survey data and means and standard deviations for continuous quiz data. Pearson
chi-square analysis compared student confidence across groups.
ITQ data were analyzed as total mean score in percentage as well
as subtotal mean in percentage of NGN items and subtotal mean
in percentage of MCQs. A one-way analysis of variance was used
to compare group means of NGN item scores, MCQ item scores,
and total ITQ scores. Independent-samples Kruskal-Wallis tests
were used to compare distribution of ITQ scores by the corresponding survey item; for example, scores for the quiz item formatted as
a bowtie were compared with student confidence in answering a
bowtie-formatted item.
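To make this pipeline concrete, the following is a minimal sketch of comparable computations in Python with pandas and SciPy. The study used SPSS; the file name, column names, and grouping variables here are illustrative assumptions, not the authors' actual code.

```python
import pandas as pd
from scipy import stats

# One row per student; all columns are assumed names for illustration.
df = pd.read_excel("itq_responses.xlsx")

# Descriptive statistics: frequencies for the ordinal confidence
# ratings, means and standard deviations for the percentage scores.
print(df["overall_confidence"].value_counts(normalize=True))
print(df[["ngn_score_pct", "mcq_score_pct", "total_score_pct"]]
      .agg(["mean", "std"]))

# Pearson chi-square: overall confidence compared across cohorts.
table = pd.crosstab(df["cohort"], df["overall_confidence"])
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)

# One-way ANOVA: cohort means for total ITQ score (repeated for the
# NGN-item and MCQ subscores).
groups = [g["total_score_pct"].to_numpy() for _, g in df.groupby("cohort")]
f_stat, p_anova = stats.f_oneway(*groups)

# Independent-samples Kruskal-Wallis: distribution of one quiz item's
# score by the matching confidence-survey rating (e.g., the bowtie
# quiz item vs. confidence in answering bowtie-formatted items).
by_rating = [g["bowtie_item_score"].to_numpy()
             for _, g in df.groupby("bowtie_confidence")]
h_stat, p_kw = stats.kruskal(*by_rating)
```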
Results
A total of 221 students from five different cohorts completed the ITQ; 194 students also completed the NGN Confidence Survey. Students in Cohort 1 (n = 72 BSN students) and Cohort 2 (n = 69 MN students) were in the fourth and final semester of their programs. Students in Cohort 3 (n = 26 DABSN students), Cohort 4 (n = 27 DABSN students), and Cohort 5 (n = 27 DABSN students) were in the first semester of their program.

NGN Confidence Survey

Overall student confidence was significantly different between groups at the p = .016 level. Cohort 4 had the lowest confidence, with 53.8% of the students responding that they somewhat or strongly agreed they could correctly answer NGN test items. Cohort 5 had the highest confidence, with 96.3% of the students responding that they somewhat or strongly agreed they could correctly answer NGN test items.

Students identified drag-and-drop cloze (53.7%), drag-and-drop rationale (47.9%), drop-down rationale (47.9%), and drop-down cloze (39.7%) items as being somewhat or much easier to answer. Matrix multiple response (86.6%), matrix multiple-choice (72.7%), bowtie (53.4%), and highlight text (51.5%) items were identified as being somewhat or much harder to answer. The cohorts differed in their perception of difficulty in answering each NGN item type, except for select-all-that-apply items. Cohort 5 perceived the highest number of NGN items (six of 10) as being somewhat or much easier to answer, and Cohort 1 perceived the highest number of NGN items (four of 10) as being somewhat or much harder to answer.

Item-Type Quiz

Mean total ITQ scores were lowest for two of the three first-semester groups, Cohort 3 (M = 74.8, SD = 12.5) and Cohort 4 (M = 74.1, SD = 13.7). Differences in the means of each of these groups compared with the means for the other three groups (Cohort 1, M = 85.3, SD = 9.6; Cohort 2, M = 88.9, SD = 10.2; Cohort 5, M = 86.6, SD = 8.3) were statistically significant at the p < .001 level. Cohorts 3 and 4 were not statistically significantly different from each other, and Cohorts 1, 2, and 5 were not statistically significantly different from each other.

Mean NGN item scores also were lowest for the same two first-semester groups, Cohort 3 (M = 74.8, SD = 13.5) and Cohort 4 (M = 69.5, SD = 16.4). Differences in mean scores for each of these two groups compared with the mean scores for each of the other three groups (Cohort 1, M = 86.1, SD = 10.3; Cohort 2, M = 89.4, SD = 11.0; Cohort 5, M = 88.2, SD = 7.7) were statistically significant (p < .001). Cohorts 3 and 4 were not statistically significantly different from each other, and Cohorts 1, 2, and 5 were not statistically significantly different from each other.

Cohort 3 had the lowest mean score for MCQs (M = 75, SD = 18.4), and Cohort 4 had the highest mean score for MCQs (M = 86.4, SD = 16.7). Differences among the cohorts were not statistically significant.

[Figure 1. Example of a matched content dyad question. Note. NG = nasogastric; BP = blood pressure.]

Comparing NGN Confidence Survey and ITQ Results
There were no statistically significant differences in scores on specific NGN quiz items based on students' perceived difficulty in answering each item type. Students performed better on the matrix multiple-choice items,
t(220) = 5.42, p < .001, and highlight text items, t(220) = 2.44,
p = .008, than on the MCQ counterparts for those items. Students
performed worse on the select-all-that-apply, t(220) = -7.85,
p < .001, and bowtie, t(220) = -7.79, p < .001, items than on
the MCQ counterparts for those items. There were no statistically significant differences between students' scores on drag-and-drop or drop-down items and their scores on the MCQ counterparts.
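The dyad contrasts above are paired comparisons within students. As a minimal, hypothetical sketch (the article reports SPSS results; the file and column names below are assumptions), one such comparison could be computed as:

```python
# Paired comparison for one matched content dyad: each student's score
# on the NGN-style item vs. the MCQ covering the same content.
# File and column names are assumptions for illustration.
import pandas as pd
from scipy import stats

df = pd.read_excel("itq_responses.xlsx")
t_stat, p_value = stats.ttest_rel(df["bowtie_item_score"],
                                  df["bowtie_mcq_score"])
# With 221 students, the paired test has 220 degrees of freedom,
# consistent with the t(220) statistics reported above.
```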
Discussion and Recommendations
With any change comes uncertainty, and overall, the findings
of this study demonstrate that students have limited self-confidence on NGN items, with only 32.1% of participants reporting that they felt somewhat or strongly confident in answering NGN questions. However, despite feeling less confident on item types such as bowtie or matrix multiple-choice and response items,
students performed similarly on those item types compared
with other NGN item types. The group mean scores on the ITQ
had no relationship with the group confidence levels for each
item type, meaning that regardless of whether they felt the item
type was harder or easier to answer, the mean scores on the ITQ
for that item were similar. This finding is worth reflection, and
students should be reminded that even if they feel uncertainty
surrounding item types that move beyond MCQs, they have the
capacity to score well on those items.
To improve students’ confidence, intentional and early exposure to NGN item types is crucial. At the onset of their program, students should be introduced to NGN items through the
lens of CJ and shown how these item types reflect nurses'
ability to recognize, analyze, prioritize, plan, take action, and
evaluate outcomes. Hensel and Billings (2020) emphasized the
importance of using active learning strategies that incorporate CJ
to give students opportunities to apply CJ in classroom and clinical settings. Students’ lack of confidence with certain item types
presents faculty with the opportunity to incorporate such items
into their classroom activities or course assignments as a part of
those active learning strategies. By boosting engagement with
items in a bowtie or matrix multiple-choice format, students
can gain confidence in their ability to answer such items.
Students in Cohort 5, despite being first-semester students,
felt the most confident in answering NGN item type questions.
To speculate on this finding, it is important to recognize that
the students in this cohort started nursing school in the Spring
2023 semester, following years of faculty development and
perhaps increased faculty comfort with describing the NGN
and the CJMM. These developments were no longer "new" to faculty and possibly were not presented as new to students, thereby avoiding a negative effect on students' confidence.
When comparing an MCQ with an NGN item covering the same content, students' scores on the NGN item were lower when the question was a bowtie or select-all-that-apply item but higher when the item was a highlight table or matrix multiple-choice item. This finding not only gives faculty a reason to include these item types in their courses but also demonstrates the need to incorporate test-taking strategies into the curriculum from the onset of the program. If students
are tested with items covering the same content but answer certain item types correctly more frequently than other types of
items, this is noteworthy and can be addressed by integrating
test-taking workshops throughout the nursing program as students cover more complex content and encounter more NGN
item types.
Limitations and Strengths
Several limitations were identified in this study. Of the five
cohorts, four were in accelerated programs for second-degree students, and the fifth was in a first-degree BSN program. The level of perceived difficulty could have been influenced by life experiences and having earned a previous college degree. Cohort sizes also were uneven, ranging from 26 to 72 students. Although students in their first semester and students in their final semester
were compared, data were not collected on students who were
in the middle of the program. All of the students received an
introduction to NGN; however, some students received verbal
instruction and other students received video instruction, which
precluded the opportunity to ask questions in real time. Lastly,
the questionnaires were administered in an environment that
was not proctored, which could have influenced students’ perceptions of the NGN questions.
Several strengths also were identified. The in-person and
video instructions were provided by the same faculty; therefore,
the message was not altered. Students were informed that they were gauging the difficulty of the question types, not of the content, which covered material they previously had been taught.
Although the actual scores were important, comparing students' perceptions of test item difficulty provided additional information for identifying positive implications and addressing areas of concern.
Conclusion
Nursing practice and the NCLEX examination will continue
to evolve as health care changes. Therefore, nurse educators
should be proactive and take steps to be fully informed of these
current and future changes. Based on this study, students had
limited confidence when answering NGN test items, regardless
of the actual test score. These results can serve as a starting point
in addressing their confidence levels. Although it is imperative
for students to be aware of the NGN test items, it is more important for them to have self-confidence when answering questions
that test CJ. As nurse educators, we should be intentional and
incorporate teaching strategies in class and clinical experiences
that will increase students’ confidence and subsequently increase
their success.
References
Benefiel, D. (2011). The story of nurse licensure. Nurse Educator, 36(1), 16–20. https://doi.org/10.1097/NNE.0b013e3182001e82 PMID:21135678

Betts, J., Muntean, W., Kim, D., Jorion, N., & Dickison, P. (2019). Building a method for writing clinical judgment items for entry-level nursing exams. Journal of Applied Testing Technology, 20(2), 21–36. https://www.ncsbn.org/public-files/Building_a_Method_for_Writing_Clinical_Judgment_It.pdf

Dickison, P., Luo, X., Kim, D., Woo, A., Muntean, W., & Bergstrom, B. (2016). Assessing higher-order cognitive constructs by using an information-processing framework. Journal of Applied Testing Technology, 17(1), 1–19. https://jattjournal.net/index.php/atp/article/view/89187/67797

Hensel, D., & Billings, D. M. (2020). Strategies to teach the National Council of State Boards of Nursing Clinical Judgment Model. Nurse Educator, 45(3), 128–132. https://doi.org/10.1097/NNE.0000000000000773 PMID:31856142

Kavanagh, J. M., & Szweda, C. (2017). A crisis in competency: The strategic and ethical imperative to assessing new graduate nurses' clinical reasoning. Nursing Education Perspectives, 38(2), 57–62. https://doi.org/10.1097/01.NEP.0000000000000112

Matassarin-Jacobs, E. (1989). The nursing licensure process and the NCLEX-RN. Nurse Educator, 14(6), 32–35. https://doi.org/10.1097/00006223-198911000-00008 PMID:2594231

Muntean, W. J. (2012). Nursing clinical decision-making: A literature review. National Council of State Boards of Nursing. https://www.ncsbn.org/public-files/Clinical_Judgment_Lit_Review_Executive_Summary.pdf

National Council of State Boards of Nursing. (2017, Fall). Next generation NCLEX® news. https://www.ncsbn.org/publications/ngn-news-fall2017

National Council of State Boards of Nursing. (2024a). Explore NCSBN through the years. https://timeline.ncsbn.org/

National Council of State Boards of Nursing. (2024b). Practice analyses. https://www.ncsbn.org/exams/exam-statistics-and-publications/practice-analyses.page

National League for Nursing. (2022). History of the National League for Nursing 1893–2018. https://www.nln.org/about/history/abouthistory-ofnln/nln-historical-timeline

NLNE Department of Measurement and Guidance. (1952). State Board Test Pool Examination. The American Journal of Nursing, 52(5), 613–615. PMID:14933472

Taherdoost, H. (2016). Validity and reliability of the research instrument: How to test the validation of a questionnaire/survey in research. International Journal of Academic Research in Management, 5(3), 28–36. https://doi.org/10.2139/ssrn.3205040

Wendt, A., Kenny, L. E., & Marks, C. (2007, Winter). Assessing critical thinking using a talk-aloud protocol. CLEAR Exam Review, 28(1), 18–27. https://www.ncsbn.org/public-files/Assessing_Critical_Thinking_Talk_Aloud_Protocol.pdf