The Jimmy A Young Memorial Lecture
July 17, 2014, 7:00 to 8:30 AM
Marco Island, FL

Jimmy Albert Young, MS, RRT (1935–1975)
The NBRC has honored Jimmy's memory and the contributions he made to respiratory care through this program since 1978.

Jimmy Albert Young, MS, RRT was one of the profession's most outstanding and dedicated leaders. In a 15-year career, he achieved the RRT, directed an education program, directed a hospital department, served as AARC President, and served as an NBRC trustee.
– 1935 – born in South Carolina
– 1960–1966 – served as Chief Inhalation Therapist at the Peter Bent Brigham Hospital in Boston
– 1965 – earned the RRT credential, Registry #263
– 1966–1970 – served in several roles, including director of the education program at Northeastern University in Boston
– 1970 – became director of the Respiratory Therapy Department at Massachusetts General Hospital
– 1973 – became the 22nd President of the American Association for Respiratory Care
– 1975 – was serving as an NBRC Trustee and member of the Executive Committee when he passed away unexpectedly

The Clinical Simulation Examination: Then and Now

Presenter
• Robert C Shaw Jr, PhD, RRT, FAARC – NBRC Assistant Executive Director and Psychometrician

Conflict of Interest
I have no real or perceived conflicts of interest that relate to this presentation. Any use of brand names is not meant to endorse a specific product, but merely to illustrate a point of emphasis.
Learning Objectives
• Compare elements of the current RRT credentialing system to elements of the system that is planned for January 2015
• Compare the value of information that has been provided by results from the Clinical Simulation Examination to other elements of the RRT credentialing system
• Describe features of the 20-problem Clinical Simulation Examination for which candidates should be prepared by January 2015

OBJECTIVE 1
Compare elements of the current RRT credentialing system to elements of the system that is planned for January 2015

Compare Current to Future RRT Program Elements

Program element | Current | January 2015
Hours of testing time | 9 | 7
Examinations | 3 | 2
Testing sessions for a candidate who passes on first attempts | 3 | 2
Sets of test scores | 4 | 2
Passing points | 4 | 3
Multiple-choice items to assess competencies broadly | 240 | 140
Patient management problems to assess competencies deeply | 10 | 20
Typical number of sections in a patient management problem | 10 (8-12) | 5 (4-6)

OBJECTIVE 2
Compare the value of information that has been provided by results from the Clinical Simulation Examination to other elements of the RRT credentialing system

QUESTION
Is there a measurement reason for the Clinical Simulation Examination to exist?

RESEARCH HYPOTHESIS 1
Scores from the Clinical Simulation Examination added information beyond the information from multiple-choice examination scores when predicting membership in three groups for candidates who sought the RRT credential.

What defined the three groups of candidates?
Examination Outcome | | Credential Status | Label for group
Written Registry | Clinical Simulation | |
fail | fail | CRT | certification
pass | fail | CRT | certification+1
fail | pass | CRT | certification+1
pass | pass | RRT | registration

Defining the Population
• Date range for examination attempts
– October 22, 2009 through February 27, 2012
• A subset of 9,081 candidates had achieved CRT and made a first attempt at the remaining examinations for RRT (and were not outlying cases)
– Written Registry
– Clinical Simulation
• Information gathering (IG)
• Decision making (DM)

Statistical Model and Method
• Step-wise discriminant analysis with automatic variable selection
– Predict group membership from multiple variables, each of which is continuously distributed
– Dependent variable: certification, certification+1, and registration groups
– Independent variables:
• First run included four sets of scores – CRT, Written Registry, Clin Sim IG, and Clin Sim DM
• Second run included two scores – CRT and Written Registry

Standardizing Examination Scores
• Raw score ranges
– CRT = 0 to 140
– Written Registry = 0 to 100
– Clinical Simulation, varied by test form
• IG = a variable minimum to a maximum in the range of 200–300
• DM = a variable minimum to a maximum in the range of 140–170
• Each raw score was converted to a z-score, where z = (x – mean) / S

Results from Run 1

Step | Test score entered | Wilks' lambda (proportion of unexplained variance) | F test for entry | p
1 | Clinical Simulation DM | .421 | 6251.4 | <.0001
2 | Written Registry | .296 | 3803.5 | <.0001
3 | Clinical Simulation IG | .295 | 2540.7 | <.0001
4 | CRT | .295 | 1907.8 | <.0001

Predictions about memberships in the registration group were accurate for 92.4% of the cases.

Discriminant Score Equation
• Discriminant score = 1.026 (Clin Sim DM z-score) + 0.975 (Written Registry z-score) + 0.091 (CRT z-score) – 0.010 (Clin Sim IG z-score) – 0.689
• Clin Sim DM and Written Registry scores were nearly equal and the dominant contributors to predictions about group memberships

Results from Run 2

Step | Test score entered | Wilks' lambda | Proportion of unexplained variance | F test for entry | p
1 | Written Registry | .447 | .42 | 5607.3 | <.0001
2 | CRT | .433 | .30 | 2361.6 | <.0001

Predictions about memberships in the registration group were accurate for 85.4% of the cases, versus 92.4% in Run 1.

Conclusions
• The research hypothesis was accepted
– Scores from the Clinical Simulation Examination add information about RRT achievement beyond what is available from multiple-choice examination scores
• If the Clinical Simulation Examination were removed from the system, there would be a 7% loss of accurate RRT classifications
– Incompetent candidates would become RRTs
– Competent candidates would be denied the RRT

RESEARCH HYPOTHESIS 2
Although there were four sets of test scores, three tests, and two types of tests, RRT competencies were based on only one cognitive construct.

Examination Type Characteristics

Characteristic | Multiple-Choice | Clinical Simulation
Option-response scoring | dichotomous (0 or 1) | polytomous (–3 to 3)
Linkages between stimulus-response elements | independent items | independent problems, dependent sections
Potential for branching units to which a subset of candidates are directed | no | yes
Cut point determination method | external to test development | integrated with test development
Cost to produce | $ | $$

[Figure: Risks from Using Multiple Examinations with Different Characteristics – a grid crossing type of examination (multiple-choice vs. simulation) with level of examination (entry vs. advanced)]

Statistical Model and Method
• Principal components analysis with cross-validation
– Explore the underlying variance structure within four sets of test scores: CRT, Written Registry, Clinical Simulation IG, and Clinical Simulation DM
– Is useful for confirming a hypothesis, in this case the assertion that there is a common characteristic expressed by the four test scores

Preliminary Result 1
As indicators of sampling adequacy:
– KMO should be at least .50
– Sig. value should indicate statistical significance
As indicators of positive cross-validation:
– KMO values should be about the same
Measure | whole | random split 1 | random split 2
Sample size | 9,081 | 4,557 | 4,224
Kaiser-Meyer-Olkin Measure of Sampling Adequacy | .775 | .777 | .772
Bartlett's Test of Sphericity: Chi-Square | 18259.56 | 9358.25 | 8913.45
Bartlett's Test of Sphericity: df | 6 | 6 | 6
Bartlett's Test of Sphericity: Sig. | .000 | .000 | .000

Preliminary Result 2
As indicators of making a sufficient contribution to the principal component solution:
– Communality values should be at least .50; otherwise a variable should be removed
As indicators of positive cross-validation:
– Values across each row should be similar

Communality Values (Extraction)

Scores | whole | random split 1 | random split 2
CRT | .768 | .765 | .773
Written Registry | .765 | .771 | .759
Clin Sim IG | .589 | .595 | .583
Clin Sim DM | .701 | .713 | .690

Primary Result
[Scree plot of eigenvalues] The threshold for a consequential eigenvalue is 1.0; components at the inflection point and beyond lack consequence.

Conclusions
– The research hypothesis was accepted: there was only one principal component, to which all four sets of test scores were linked
– Potential risks associated with using a multiple-examination system were avoided

Summary from Both Studies
• Within the population of new RRTs each year, accurate classifications occur more often because there are multiple examinations
• Risks associated with a credentialing system based on multiple examinations were avoided

Study Limitations
• These were population studies involving a recent period of more than 2 years
• Unless characteristics of candidates or examinations change, I expect these results will generalize into the future
– Candidates: program admission criteria, program duration, program intensity
– Examinations: number of instruments, types of measurements

OBJECTIVE 3
Describe features of the 20-problem Clinical Simulation Examination for which candidates should be prepared by January 2015

Rationale for Changing the Simulation Examination
• Instant scoring demands selection of problems for each new test form that have not changed
– After a decade, keeping examination content current became an increasing
challenge

Solution
• Give the examination committee smaller content elements from which test forms are assembled
– Halve the number of sections in problems
– Double the number of problems
• Hold testing time the same at 4 hours

As long as other changes will be made . . .
ENHANCE PSYCHOMETRIC PROPERTIES

Standardize Test Forms More Thoroughly

Type of Problem | Current 10-Problem | Future 20-Problem
A1. COPD Conservative Care | 1 or 2 | 2
A2. COPD Critical Care | 1 or 2 | 2
B. Trauma | 1 or 2 | 3
C. Cardiovascular | 1 or 2 | 3
D. Neurological / Neurosurgical | 1 or 2 | 2
E. Pediatric | 1 | 2
F. Neonatal | 1 | 2
G. General Medical / Surgical | optional | 4

Problems Each Candidate Will See
• 4 about COPD
• 4 about children
• 4 about general medical / surgical
• 3 about trauma
• 3 about cardiovascular
• 2 about neuro
– Likely one neuromuscular and one neurologic

Simulation Examination Scores
Advantages of a one-score, one-cut system:
• A test with more items and more points than its predecessors will yield more accurate scores as indicators of candidates' abilities
– Pass and fail decisions become more accurate
• Accuracy is gained without an increase in test administration time
– The fee for the Clinical Simulation Examination stays the same

A Potential Disadvantage of a Combined Score
• Compensation can occur unless the cut score policy is changed
– Someone within a few points of passing based on decision-making performance could pass by acquiring a higher percentage of the available information-gathering points

New Cut Score Policy
The cut score for a test form must be the sum of MPLs from the two types of sections, such that those section MPLs fall within the two ranges shown in the table. Implementation has mandated the addition of options labeled as required among positively-scored options in IG sections.

Cut Score Range
Section Type | Current | New
DM | 60% to 70% | 60% to 70%
IG | 60% to 70% | 77% to 81%

Conforming to the Policy: One IG Section

Option | Current | January 2015
1 | -2 | -2
2 | 1 | 1R
3 | 2R | 2R
4 | -1 | -1
5 | 2R | 2R
6 | -1 | -1
7 | -2 | -2
8 | 1 | 1
9 | 2R | 2R
10 | -2 | -2
11 | 1 | 1
12 | -1 | -1
MPL, Max, % | 6, 9, 67% | 7, 9, 78%

WHY THE CUT SCORE POLICY CHANGE MATTERS
Illustrations that follow came from one test form.

[Histogram: DM % score distribution – MPL range remains 60%–70%; Mean = 60, Std. Dev. = 11.42, N = 2331]

[Histogram: IG % score distribution – MPL range has been 60%–70%; Mean = 80, Std. Dev. = 5.76, N = 2331]

[Scatterplot of IG and DM scores with reference lines at the IG and DM MPLs; the passing region is marked, and no case falls in one quadrant]

[Scatterplot of IG and DM scores with reference lines at the –0.05 z MPLs; a marked quadrant contains people who would pass under the current system]

Highlights for Students
• The numbers of problems by patient type will be constant for each candidate
• Testing time remains 4 hours
– 22 problems will be presented
– Results will be based on responses to 20 problems
• As a result of a problem-splitting procedure
– Some problems will not offer IG sections
– Candidates will see the same number of IG sections across the whole examination as they currently see

Highlights for Students (cont.)
• Responses will be summed across the IG and DM sections that a candidate enters to produce one score, to which a cut score will be compared
– The cut will equal the sum of MPL values across the sections along the critical path
• Compared to the current examination, responses in IG sections will be more consequential
– Reduced tolerance for errors

QUESTIONS
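To make the combined-score rule in the highlights above concrete, here is a minimal sketch. The section point values are hypothetical (not from an actual test form), though the IG MPL of 7 out of a 9-point maximum (78%) mirrors the example IG section shown earlier. The candidate's summed points across the IG and DM sections along the critical path are compared to a cut equal to the sum of the section MPLs.

```python
# Each tuple: (section type, candidate points, section MPL, section max).
# Values are hypothetical; a real test form has many more sections.
sections = [
    ("IG", 8, 7, 9),     # above its MPL
    ("DM", 12, 14, 20),  # below its MPL
    ("IG", 9, 7, 9),     # at the section maximum
    ("DM", 15, 14, 20),  # slightly above its MPL
]

candidate_total = sum(pts for _, pts, _, _ in sections)
cut = sum(mpl for _, _, mpl, _ in sections)

# Aggregate DM performance, to show the compensation effect.
dm_total = sum(pts for typ, pts, _, _ in sections if typ == "DM")
dm_mpl = sum(mpl for typ, _, mpl, _ in sections if typ == "DM")

print(f"candidate={candidate_total}, cut={cut}, "
      f"pass={candidate_total >= cut}")        # 44 vs. 42 -> pass
print(f"DM points={dm_total} vs. DM MPL sum={dm_mpl}")  # 27 vs. 28
```

This candidate passes (44 ≥ 42) even though the DM sections alone fall one point short of their summed MPLs; the IG surplus compensates. Raising IG MPLs into the 77%–81% range shrinks the available IG surplus, which is how the new cut score policy limits such compensation.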