University of Massachusetts-Dartmouth College of Nursing
Interim Report: Project Implementation
October 25, 2014

Project Title: Establishing preliminary psychometric analysis of a new instrument, the Nurse Competency Assessment Tool (NCAT), based on the MA Nurse of the Future Nursing Core Competencies Model.

Partnership Members and Support Staff: UMASS-Dartmouth (UMD), Saint Anne's Hospital-Steward (SAH), and Saint Vincent Hospital (SVH). Support staff: Eileen O'Neill, PhD, RN, Nurse Researcher, and Amruta Meshram, computer science graduate student.

Report Completed by: Kerry Fater, PhD, RN, CNE, Project Coordinator and Professor of Nursing, UMASS-Dartmouth College of Nursing.

This report is submitted in compliance with the guidelines identified in the 2014 request for proposals (RFP). Data collection for the study was completed on August 28, 2014, in advance of the projected timeline. Data analysis is currently underway. The requested interim report information is described below.

1. Progress made toward stated goals (as stated on page 5 of the proposal):

A. Determine content validity of the NCAT items based on expert reviewer evaluation of the tool.

Several steps contributed to determining the content validity of the NCAT. Seven experts on the MA Nurse of the Future Nursing Core Competencies served as expert reviewers of the 50-item NCAT. They evaluated each test item and its response options for clarity and relevance to the competencies. The percentage of agreement to retain each question was determined, and the item-level content validity index (I-CVI) was calculated for each item based on the expert reviewers' evaluations. The mean I-CVI was .9416, indicating that the tool is valid (.90 is acceptable). The overall scale content validity index (S-CVI) was 72%, indicating universal agreement on 72% of the items. This is acceptable but pointed out some areas in need of revision.
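For readers unfamiliar with these indices, the calculations can be sketched as follows. The ratings below are hypothetical illustrations only (the actual expert data are not reproduced in this report), and the 4-point relevance scale is the conventional one for CVI work.

```python
# Sketch of the I-CVI and S-CVI (universal agreement) calculations.
# Each inner list holds one item's ratings from the seven expert
# reviewers on a 4-point relevance scale (3 or 4 = "relevant").
# All ratings here are hypothetical, for illustration only.

def i_cvi(ratings):
    """Item-level CVI: share of experts rating the item 3 or 4."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def s_cvi_ua(all_ratings):
    """Scale-level CVI, universal-agreement method: share of items
    on which every expert agreed (I-CVI == 1.0)."""
    cvis = [i_cvi(item) for item in all_ratings]
    return sum(1 for c in cvis if c == 1.0) / len(cvis)

# Hypothetical ratings for three items from seven experts:
items = [
    [4, 4, 3, 4, 4, 3, 4],  # I-CVI = 7/7 = 1.0
    [4, 3, 4, 2, 4, 4, 3],  # I-CVI = 6/7 ≈ .857
    [4, 4, 4, 4, 3, 4, 4],  # I-CVI = 7/7 = 1.0
]
print([round(i_cvi(it), 3) for it in items])  # [1.0, 0.857, 1.0]
print(round(s_cvi_ua(items), 3))              # 0.667
```

In this sketch, the mean of the three I-CVI values corresponds to the reported mean I-CVI of .9416, and the universal-agreement proportion corresponds to the reported S-CVI of 72%.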
The above findings, along with previous reviews, contributed to reducing the NCAT from 50 items to 30 to maximize its usefulness and ease of administration. Relevancy scores (RS) were then calculated for each question: twenty-eight test items had 100% relevancy, and two had 71% relevancy. Based on these findings, items were revised, some extensively. All references to age, gender, and time were also removed.

In addition to reducing the number of items in the NCAT, the research team considered other approaches to improve ease of administration and economy of time without sacrificing the validity of the tool. To this end, the nurse researcher reviewed several authoritative sources on test construction (e.g., Haladyna & Rodriguez, 2013), and based on this review, the decision was made to delete one distractor (response option) from each question. Expert panel reviewers' ratings and comments were used to determine the deletions.

B. Create a program for electronic access to the NCAT, including demographic data for study participants. The program will include reporting data such as automatic scoring and test analysis data. The NCAT is then loaded onto the electronic platform.

A computer science graduate student was hired to achieve this objective. She wrote the program for the online NCAT and the associated demographic data, loaded it onto the platform, configured it to automatically score each test item, and gathered the data on the study variables. Toward the end of each phase of the study, the graduate student manually entered the data into Statistical Package for the Social Sciences (SPSS) files for data analysis.

C. Determine reliability of the NCAT through test-retest.

The next step in establishing the NCAT's psychometrics was to determine reliability. The Principal Investigators at Saint Vincent Hospital and Saint Anne's Hospital were contacted to recruit about 10 nurses from each site. Data collection began on May 15th and ended on July 8th.
The nurses recruited were described as a "fresh sample," meaning that they had not participated in the 2012 NCAT study. This step was Phase 1 of the study. Participants were staff nurses of any educational level who were willing to take the NCAT twice, approximately two weeks apart. After each participant completed the tool once, the computer science graduate student sent an email reminder about the need to complete the NCAT a second time. The test-retest reliability was r = .76, indicating that the tool is stable over repeated testing (DeVellis, 2011).

D. Establish internal consistency (Cronbach's alpha).

While the number of study participants in Phase 1 was small (N = 20), the Cronbach's alpha was .63, respectable for a newly created tool. On July 20, after a thorough analysis of the NCAT findings and minor revisions, Phase 2 of data collection began. In this phase, a target of 150 staff nurses was set for recruitment, once again reflecting a "fresh sample." On August 28th, the study was closed after 160 participants had completed the instrument. While the data analysis was strong on some indicators (the score range was 30%, an outlier, to 97%, with a mean of 80%), the Cronbach's alpha was low at .21. This may reflect conceptual differences among the competencies, with test items loading within the competency domains. To determine whether this is the case, a factor analysis of the NCAT is the next step. Although this procedure was not anticipated in the proposal, the factor analysis will provide further data on the NCAT's measurement consistency. A statistician with expertise in this method will assist in completing the necessary statistical testing.

E. Contribute the NCAT to the NOFNCC Tool Box once validity and reliability are established.

Data analysis on the NCAT continues. Once the factor analysis is completed, the research team will prepare a publication-worthy manuscript.
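The two reliability statistics reported under goals C and D can be sketched as follows. The score data below are hypothetical (the actual Phase 1 and Phase 2 responses are not reproduced here); the formulas are the standard Pearson correlation and Cronbach's alpha.

```python
# Sketch of test-retest reliability (Pearson r) and internal
# consistency (Cronbach's alpha) on hypothetical NCAT score data.

def pearson_r(x, y):
    """Test-retest reliability: Pearson correlation between two
    administrations of the same instrument."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def cronbach_alpha(item_scores):
    """Internal consistency: item_scores is one list per test item,
    each holding that item's scores across all participants."""
    k = len(item_scores)
    n = len(item_scores[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var = sum(variance(item) for item in item_scores)
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Hypothetical first and second NCAT scores for five nurses:
test1 = [78, 85, 90, 70, 82]
test2 = [80, 83, 92, 72, 80]
print(round(pearson_r(test1, test2), 2))  # 0.96

# Three hypothetical item-score lists across the same five nurses:
item_scores = [[3, 4, 4, 2, 3], [2, 4, 3, 2, 3], [3, 3, 4, 1, 3]]
print(round(cronbach_alpha(item_scores), 2))  # 0.87
```

A low alpha computed this way across all 30 items, as observed in Phase 2, is consistent with the interpretation above: items measuring conceptually distinct competency domains need not correlate highly with one another, which is what the planned factor analysis will examine.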
This step is critically important so that researchers and educators have the necessary psychometric data to make informed decisions about using the NCAT in their research and projects. To date, I have received five communications from nurse educators and graduate nursing students who read about the NCAT in the August publication in the Journal of Continuing Education in Nursing and would like to use the tool (Fater, Weatherford, Ready, Finn, & Tangney, 2014).

2. Update on coordination of activities per the proposed timeline.

The delay in recruiting staff nurses for the test-retest stemmed from the time needed to create and test the online platform for the NCAT. Recruitment began on May 15th rather than during April. Recruiting 20 nurses who completed the test on two occasions about two weeks apart was another cause of delay: about 6 nurses completed the NCAT only once and, despite reminders, did not complete it again. Their data were then removed from the database. Since the graduate student entered data into the SPSS files on an ongoing basis during Phase 1, we were able to analyze the statistical files almost immediately. The data provided no indication that changes to the NCAT were needed. As a result, Phase 2 data collection began about two weeks later, which was consistent with the timeline. From that point on, we were able to complete data collection more quickly than noted on the timeline, with Phase 2 closing on August 28th, nearly a month earlier than expected.

3. Changes in the scope/impact of the project.

The principal change in the scope of the project was the need to engage more healthcare organizations (from 3 to 6) for data collection. The requirement of recruiting "fresh samples" of nurses lowered the numbers available in one agency in particular (SAH), and recruitment was slow in the other two healthcare organizations (SVH and Metro West Medical Center [MWMC]). A decision was made to invite a healthcare system consisting of three hospitals to participate in the study.
As a result, the response rate from recruiting efforts became brisk and the target number was reached.

4. Status of relationship/support with practice partners.

The relationship with the practice partners (SAH and SVH) and our additional partnering agencies, Metro West Medical Center and Southcoast Hospitals Group (SHG), is solid and supportive. They were invested in this project and delighted to have their nursing staff participate in a nursing research study on the MA NOFNCC. I would be comfortable approaching all of them to participate in future work on this topic or any other.

5. Budget issues.

There were no unexpected budget issues. While I would have liked to offer the same financial payment to my colleague at SHG (a latecomer to the project) for his recruitment efforts ($720), the budget did not provide for that. In addition, the total number of subjects in Phase 2 was 10 more than targeted; this was necessary to ensure that all participants in this phase had not taken the NCAT previously. The 10 additional payments came from the Project Coordinator's funds.

6. Unexpected/major issues or potential barriers to success.

One difficulty was encountered with the UMASS-Dartmouth Grant Office related to payment of study subjects. As explained to me, each participant would have been required to fill out a one- to two-page form as proof of participation and for verification purposes. I expressed my concern that this requirement would be an unrealistic burden on subjects and would likely impede recruitment of willing subjects. As an alternative, I proposed using a separate business checking account, established specifically for the NCAT study, for the purpose of payments. After completing the NCAT, each participant was asked to provide a name and mailing address to which the payment was mailed.
The study participants' endorsement of the checks and the monthly statements, which include photos of the checks, provide the evidence of payment that the Grant Office requires. This process was used to pay study participants.

As a final note, we are pleased with the progress we have made and feel that our work to date will further competency-based nursing education and practice.

References

DeVellis, R. F. (2011). Scale development: Theory and applications. Thousand Oaks, CA: Sage.

Fater, K., Weatherford, B., Ready, R. W., Finn, K., & Tangney, B. (2014). Expanding Nurse of the Future Nursing Core Competencies across the academic-practice transition: A pilot study. Journal of Continuing Education in Nursing, 45(8), 366-371.

Haladyna, T. M., & Rodriguez, M. C. (2013). Developing and validating test items. New York, NY: Routledge.