Using PROMIS Tools in Clinical and Health Services Research
Kevin Weinfurt, Ph.D.
Duke University Medical Center
Presented at the Academy Health Annual Meeting, Orlando, Florida, June 3, 2007

Overview
• PROMIS products
  – Difference between standard measures and item banks
• How PROMIS can affect typical research practices
• Future challenges

PROMIS Products
• Standard protocols used by PROMIS
• Data
  – Focus groups
  – Cognitive interviews
  – De-identified response data from various populations
• Item pedigrees and revision documentation
• Item banks
  – Item wording and response categories
  – Each item's IRT model parameters
• Standardized norm scores for various disease populations and the general population
• Software for constructing and administering PROs

Anthropology and PRO Measures
• Standard measure: ready to go, limited adaptability
• Item bank: longer socialization, excellent adaptability

Item Banks
From an item bank you can build a PRO measure that is either static or dynamic:
• Static measure
  – Pick-a-PRO: general short forms for some or all PROMIS domains
  – Build-a-PRO: you create short forms tailored to your patient population
• Dynamic measure
  – Computerized Adaptive Test (CAT)
  – Set maximum number of items or desired precision
  – CAT selects items based on previous responses to arrive at a precise estimate quickly
• All done via public-domain PROMIS software

How PROMIS Will Affect Typical Research Practices (not exhaustive)

Identifying Candidate Measures
Standard:
• Do literature review and compare across published studies in specific populations
  – Comparisons challenging because of different metrics
+ PROMIS:
• Use PROMIS software and stored data to query properties of sets of items (including legacy measures) for specific populations
  – Comparison enhanced by common metrics
  – Quickly compare precision of different options

Piloting Measures
Standard:
• Would like to include multiple measures for comparison
  – Seldom done because of burden and expense
  – Begin main study with measure that might lack precision and have floor/ceiling effects
+ PROMIS:
• Determine population distribution on construct of interest
  – Administer general short form or CAT
• Identify items that are most informative
  – Actual CAT in pilot
  – Simulated CAT using PROMIS software (a minimal simulation sketch follows the Main Study table below)

Main Study
Degree of Adaptation | Pilot              | Time 0             | Time 1             | Time 2
None                 | General Short Form | General Short Form | General Short Form | General Short Form
Moderate             | General Short Form | Custom Short Form  | Custom Short Form  | Custom Short Form
High                 | CAT                | Custom Short Form  | Custom Short Form  | Custom Short Form
Extreme              | CAT                | CAT                | CAT                | CAT
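To make the CAT idea above concrete, here is a minimal sketch of adaptive item selection under a graded response model, with EAP scoring and a maximum-information selection rule. This is not the PROMIS software or its actual algorithm; the item bank, parameters, stopping rule, and respondent are made-up assumptions for illustration only.

```python
"""
Minimal illustration of a computerized adaptive test (CAT) under a graded
response model (GRM).  NOT the PROMIS software: the item parameters, stopping
rule, and scoring choices below are illustrative assumptions only.
"""
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical item bank: each item has a discrimination (a) and ordered
# category thresholds (b).  A real bank would publish calibrated parameters.
BANK = [
    {"a": 2.1, "b": [-1.5, -0.3, 0.8]},
    {"a": 1.4, "b": [-0.8, 0.4, 1.6]},
    {"a": 2.8, "b": [-0.2, 0.9, 1.9]},
    {"a": 1.9, "b": [-2.0, -1.0, 0.1]},
    {"a": 2.4, "b": [-0.5, 0.5, 1.4]},
]

THETA_GRID = np.linspace(-4, 4, 161)      # quadrature grid for EAP scoring
PRIOR = np.exp(-0.5 * THETA_GRID**2)      # standard-normal prior (unnormalized)


def category_probs(item, theta):
    """P(response = k | theta) for k = 0..m under the GRM."""
    theta = np.atleast_1d(theta)
    a, b = item["a"], np.array(item["b"])
    # Cumulative P(X >= k): 1 for k = 0, logistic curves for k = 1..m-1, 0 for k = m.
    cum = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b[None, :])))
    cum = np.hstack([np.ones((len(theta), 1)), cum, np.zeros((len(theta), 1))])
    return cum[:, :-1] - cum[:, 1:]       # adjacent differences of the cumulative curves


def item_information(item, theta):
    """Fisher information of one item at theta under the GRM."""
    theta = np.atleast_1d(theta)
    a, b = item["a"], np.array(item["b"])
    cum = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b[None, :])))
    cum = np.hstack([np.ones((len(theta), 1)), cum, np.zeros((len(theta), 1))])
    probs = cum[:, :-1] - cum[:, 1:]
    dcum = a * cum * (1.0 - cum)           # derivative of each cumulative curve
    dprob = dcum[:, :-1] - dcum[:, 1:]
    return np.sum(dprob**2 / np.clip(probs, 1e-10, None), axis=1)


def eap(responses):
    """Expected-a-posteriori theta estimate and posterior SD."""
    post = PRIOR.copy()
    for item, k in responses:
        post *= category_probs(item, THETA_GRID)[:, k]
    post /= post.sum()
    mean = np.sum(THETA_GRID * post)
    sd = np.sqrt(np.sum((THETA_GRID - mean) ** 2 * post))
    return mean, sd


def run_cat(true_theta, max_items=5, target_se=0.3):
    """Administer items adaptively against a simulated respondent."""
    remaining, responses = list(range(len(BANK))), []
    theta_hat, se = 0.0, np.inf
    while remaining and len(responses) < max_items and se > target_se:
        # Pick the unused item with the most information at the current estimate.
        best = max(remaining, key=lambda i: item_information(BANK[i], theta_hat)[0])
        remaining.remove(best)
        # Simulate the respondent's answer from the "true" theta.
        p = category_probs(BANK[best], true_theta)[0]
        k = rng.choice(len(p), p=p)
        responses.append((BANK[best], k))
        theta_hat, se = eap(responses)
    return theta_hat, se, len(responses)


if __name__ == "__main__":
    est, se, n = run_cat(true_theta=1.0)
    print(f"Estimated theta = {est:.2f} (SE = {se:.2f}) after {n} items")
```

The point of the sketch is only to show why a CAT can reach a target standard error with fewer items than a fixed short form; in practice the PROMIS software handles calibration, item selection, and stopping rules.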
Varying Length Measures in Longitudinal Studies
Standard:
• Use brief measures more frequently, longer measures less frequently
  – Scores on brief and longer measures are on different metrics
  – Cannot be combined for more powerful longitudinal analyses
+ PROMIS:
• Use brief measures more frequently, longer measures less frequently
  – Scores on brief and longer measures are on the same metric
  – Maximum, efficient use of information collected over time

Single Item Measures of PROs
• Frequently used
  – Large population studies (little room for more than one item)
  – CRFs (case report forms) in clinical trials
• Item banks
  – Identify the single item best suited to your population (from previous studies, pilot work, etc.)
  – Link to the score used by a multi-item measure from the same bank
  – Example: could combine an e-diary with data from an assessment completed at a clinic visit

Improving Meta-Analysis of Primary Datasets
• Item banks can contain multiple well-accepted PROs (e.g., SF-36, FACT)
  – Co-calibration means a cross-walk is possible between different measures
• Primary data from different studies using different PROs can be combined using the common item bank metric

Practical Challenges to Proposing Use of Item Banks in Grants
• The IRT assumption
  – Non-overlapping subsets of items are equally valid measures of the same construct (a property of a well-fitting IRT model; see the closing sketch below)
• Not all items in the bank will have equal amounts of validity data
  – Need to keep track of validity data at the item level
  – Initially, short forms will probably be the most defensible for grant applications

PROMIS Website
http://www.nihPROMIS.org/

NIH Program Contact for PROMIS:
William (Bill) Riley, PhD
Acting Program Director, PROMIS
National Institute of Mental Health
wiriley@mail.nih.gov
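Closing sketch, referenced from the "Practical Challenges" list above: a small simulation suggesting why non-overlapping subsets of a co-calibrated bank can be treated as measures of the same construct on the same metric, which is also the logic behind the cross-walk point in the meta-analysis slide. Everything here is an assumption made for the demo: the bank, its parameters, the sample, and the rescaling to a T-score-like metric (50 + 10 × theta). It is not PROMIS data, the PROMIS calibration, or the PROMIS software.

```python
"""
Sketch of the co-calibration / common-metric idea: two NON-OVERLAPPING subsets
of items from one calibrated bank are scored separately, yet both land on the
same latent metric (re-expressed here on a T-score-like scale, mean 50 / SD 10).
All parameters and data are simulated for illustration only.
"""
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical co-calibrated bank of 8 polytomous items (graded response model).
A = np.array([2.2, 1.6, 2.7, 1.9, 2.4, 1.5, 2.0, 2.6])             # discriminations
B = np.array([[-1.4, -0.2, 0.9], [-0.9, 0.3, 1.5], [-0.3, 0.8, 1.8],
              [-1.8, -0.8, 0.2], [-0.6, 0.4, 1.3], [-1.1, 0.0, 1.1],
              [-1.6, -0.5, 0.6], [-0.1, 1.0, 2.0]])                 # thresholds
GRID = np.linspace(-4, 4, 161)
PRIOR = np.exp(-0.5 * GRID**2)


def grm_probs(a, b, theta):
    """Category probabilities P(X = 0..m | theta) for one item."""
    cum = 1 / (1 + np.exp(-a * (np.atleast_1d(theta)[:, None] - b[None, :])))
    cum = np.hstack([np.ones((cum.shape[0], 1)), cum, np.zeros((cum.shape[0], 1))])
    return cum[:, :-1] - cum[:, 1:]


def eap_t_score(item_idx, resp_row):
    """EAP theta from a subset of items, reported on a T-score-like metric."""
    post = PRIOR.copy()
    for i in item_idx:
        post *= grm_probs(A[i], B[i], GRID)[:, resp_row[i]]
    post /= post.sum()
    return 50 + 10 * np.sum(GRID * post)


# Simulate 500 respondents answering every item in the bank.
N = 500
theta_true = rng.normal(size=N)
responses = np.empty((N, len(A)), dtype=int)
for i in range(len(A)):
    p = grm_probs(A[i], B[i], theta_true)                 # N x categories
    responses[:, i] = (rng.random(N)[:, None] > np.cumsum(p, axis=1)).sum(axis=1)

# Score each respondent twice, with non-overlapping halves of the bank.
half1, half2 = [0, 2, 4, 6], [1, 3, 5, 7]
t1 = np.array([eap_t_score(half1, responses[n]) for n in range(N)])
t2 = np.array([eap_t_score(half2, responses[n]) for n in range(N)])

print(f"Correlation between the two half-bank scores: {np.corrcoef(t1, t2)[0, 1]:.2f}")
print(f"Mean score, subset 1: {t1.mean():.1f}   subset 2: {t2.mean():.1f}")
```

With a well-fitting model and enough items per subset, the two half-bank scores should track each other closely and sit on the same scale; in a grant application, the same logic is what justifies comparing a custom short form, the general short form, or a CAT drawn from the same bank.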