Quick Start: How to Get a Clinical Research Project Started at UT-Houston Medical School
Kathleen A. Kennedy, MD, MPH; Jon E. Tyson, MD, MPH; Robert E. Lasky, PhD

This PowerPoint presentation, provided by the Center for Clinical Research and Evidence-Based Medicine, is a brief outline of the steps involved in getting a clinical research project off the ground at UT-Houston. More detailed information is available in the following textbooks:
- Hulley SB et al. Designing Clinical Research, 3rd ed, Lippincott Williams & Wilkins, 2006.
- Fletcher RH et al. Clinical Epidemiology – The Essentials, 4th ed, Williams & Wilkins, 2005.
- Friedman LM et al. Fundamentals of Clinical Trials, 3rd ed, Springer-Verlag, 1998.

Investigators without prior training in clinical research methods should at least read the Hulley text before embarking on a clinical research project. Additional training is available in the Clinical Research Curriculum (CRCA) courses:
- Introduction to Epidemiology Research (Observational Studies) – offered in the fall of odd years
- Clinical Trial Design (Interventional Studies) – offered in the winter at the end of odd years
- Clinical Research Ethics – offered in the spring of odd years

Additional references for specific components of this outline are listed with each section. Videotapes of CRCA lectures can be borrowed by contacting Claudette Ocampo at (713) 500-6708.

Steps in Launching a Research Project
- Idea
- Detailed literature review
- Study question
- Detailed study protocol
- Funding (if needed)
- Approval from physicians, IRB, and hospital
- Implementation
This process is almost always iterative, with multiple revisions and changes in the study question. To avoid unnecessary delays, seek advice about the sample size requirements, costs, and acceptance by clinicians early in the process.

Stating the Question/Hypothesis
A research idea (for an observational or intervention study) should be structured into a well-built clinical research question or hypothesis with the following PICO components:
- Population of interest
- Intervention to be tested
- Comparison strategy
- Outcome(s)
References: Richardson WS et al. The well-built clinical question: a key to evidence-based decisions. ACP Journal Club 1995;123(3):A12-3. CRCA Lectures: 11/7/07 (Interventional Studies); 9/3/03 (Observational Studies)
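The overview above advises working out sample size requirements early. Once a question is framed in PICO terms with a primary outcome, a rough calculation is possible even before detailed planning. The following is a minimal sketch, not part of the original presentation, assuming a two-group comparison of proportions with hypothetical event rates; a statistician should confirm any real calculation.

```python
# Minimal sketch (not from the presentation): approximate per-group sample
# size for comparing two proportions, using the standard normal-approximation
# formula. The event rates below are hypothetical placeholders.
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided comparison of two proportions."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for two-sided alpha
    z_beta = norm.ppf(power)            # value corresponding to desired power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p1 - p2) ** 2

# Example: detecting a fall in a complication rate from 30% to 20%
print(round(n_per_group(0.30, 0.20)))   # about 293 per group
```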
Study Questions
The type of question determines the study design that should be used to answer it. The sections that follow give the definition and the important design features for each of the most common types of study questions. These features increase the methodologic quality of a study; designs lacking them are more subject to bias.
References: Hulley textbook, Chapter 2 (Question)

Types of Study Questions
- Therapy/Prevention/Intervention
- Diagnosis/Diagnostic Tests
- Etiology/Harm/Risk Factors/Mechanism of Disease
- Prognosis
- Descriptive/Prevalence
- Systematic Reviews
- Economic Evaluations

Therapy/Intervention/Prevention Study
- A study intended to evaluate the safety, efficacy, or effectiveness of an intervention, including educational or behavioral interventions
- Could include healthy people, patients, or health care providers as subjects
- Includes studies labeled as pilot, phase 2, preliminary, or feasibility studies
References: Hulley textbook, Chapters 10 & 11 (Clinical Trials). Friedman LM et al. Fundamentals of Clinical Trials, 3rd ed, Springer, 1998. Jadad A. Randomized Controlled Trials: A User's Guide, BMJ Books, 1998. CRCA Lectures: 11/28/07, 12/5/07, 12/12/07 (from Clinical Trial Design Course)

Therapy/Intervention/Prevention Study Design Features
- Prospective cohort* study
- Randomized allocation
- If randomized, concealed allocation (see the sketch at the end of this section)
- If not randomized, steps taken to make sure the groups are as similar as feasible with respect to important prognostic variables at the start of the study
- Patients, clinicians, and investigators masked to treatment to the extent feasible
- Outcome evaluators masked to treatment to the extent feasible
- Groups treated equally, apart from the treatment under investigation
- Inter-observer reliability evaluated for any test or assessment being used
- Follow-up of patients after treatment sufficiently long and complete to identify important benefits and hazards
- Unbiased stopping rules for ending the study and a power calculation for the planned sample size
- Intention-to-treat analysis (analysis of all patients in the group to which they were randomized) for management trials
- For pilot studies, a clear and credible plan for a definitive study to address the study question
- For drug or device studies, subjects in both groups provided all care that is considered to be routine and of proven benefit
- For drug or device studies, evaluation in an appropriate spectrum of patients (like those in whom the intervention would be used in practice)
* A cohort study assembles subjects on the basis of eligibility criteria and the presence or absence of exposure status; subjects are then evaluated, usually prospectively, for the presence or absence of the outcome of interest. A clinical trial is a cohort study in which the investigator, usually by random assignment, determines the exposure or treatment status of the subjects.
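To illustrate the randomized, concealed allocation features listed above, here is a minimal sketch, not from the presentation, of a permuted-block randomization list. The arm labels, block size, and seed are arbitrary; in practice the list is generated and held by someone independent of enrollment (for example, a research pharmacy or a central service) so that the next assignment stays concealed from the enrolling clinician.

```python
# Minimal sketch (not from the presentation): a permuted-block randomization
# list for a two-arm trial. Arm labels, block size, and seed are arbitrary.
import random

def randomization_list(n_subjects, block_size=4, arms=("A", "B"), seed=20240101):
    rng = random.Random(seed)                 # fixed seed -> reproducible list
    assignments = []
    while len(assignments) < n_subjects:
        block = list(arms) * (block_size // len(arms))   # balanced block
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_subjects]

# The list would be sealed in numbered envelopes or loaded into a database
# held away from enrolling staff before enrollment begins.
print(randomization_list(12))
```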
Study of Diagnosis/Diagnostic Test
- A study intended to evaluate whether a diagnostic test provides reliable information about whether the subject has the disease or condition of interest
- When the utility or benefit of a diagnostic test is evaluated as a management strategy, the study should be categorized as a therapy/intervention/prevention study.
References: Fletcher textbook, Chapter 3 (Diagnosis). Hulley textbook, Chapter 12 (Medical Tests). Users' Guides to the Medical Literature IIIA&B. Diagnostic Tests. JAMA 271:389-391, 703-707, 1994; XVII. Screening. JAMA 281:2029-2034, 1999. CRCA Lectures: 10/17/07 (Diagnosis), 10/24/07 (Screening)

Study of Diagnosis/Diagnostic Test Design Features
- An independent, masked comparison to a reference ("gold") standard of diagnosis
- An appropriate spectrum of patients (like those in whom the test would be used in practice)
- The reference standard applied to all patients regardless of the diagnostic test results
- Inter-observer reliability evaluated for any test or assessment being used
- The test (or cluster of tests) validated in a second, independent group of patients (preferable but not critical)

Study of Etiology/Harm/Risk Factors/Mechanism of Disease
A study intended to evaluate the etiology (cause), predictors, or risk factors for a disease or condition.
References: Fletcher textbook, Chapter 11 (Cause). Hulley textbook, Chapters 7 (Cohort Studies), 8 (Cross-Sectional and Case-Control Studies), and 9 (Causal Inference). Users' Guides to the Medical Literature IV. Harm. JAMA 271:1615-1619, 1994. CRCA Lectures: 9/12/07 (Prospective studies); 9/19/07 (Case-control studies)

Study of Etiology/Harm/Risk Factors/Mechanism of Disease – Design Features
- Cohort, case-control*, or cross-sectional* design
- Clearly defined groups of patients, with measures to ensure that they are similar in all important ways other than the exposure/treatment/risk factor under investigation
- Exposures/treatments/risk factors and outcomes measured in the same ways in both groups
- Assessment of outcomes objective and masked to the exposure/treatment/risk factor to the extent feasible (for cohort and cross-sectional studies)
- Assessment of the exposure/treatment/risk factor objective and masked to the outcome, to the extent feasible (for case-control and cross-sectional studies)
- Follow-up complete and long enough to answer the study question
* A case-control study assembles subjects on the basis of the presence or absence of the outcome of interest; subjects are then evaluated, usually retrospectively, for the presence or absence of an exposure or exposures. A cross-sectional study evaluates subjects at a single time point for exposure and outcome status.
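As an illustration of the case-control design just described, the sketch below computes an odds ratio and an approximate 95% confidence interval from a 2x2 table. The counts are hypothetical and not drawn from the presentation.

```python
# Minimal sketch (not from the presentation): odds ratio and approximate 95%
# confidence interval from a case-control 2x2 table. Counts are hypothetical.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of ln(OR)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Example: 40/100 cases exposed vs 25/100 controls exposed
print(odds_ratio_ci(40, 60, 25, 75))   # OR = 2.0, 95% CI roughly 1.1 to 3.7
```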
Prognosis Study
- A study intended to evaluate the expected outcome for subjects with a particular condition or treatment, including studies that evaluate the risks associated with a particular test
- There should be no intention of drawing conclusions about a causal relationship between the condition and the outcomes. If causal inferences are to be made, the study should be categorized as an etiology study.
References: Fletcher textbook, Chapters 5 (Risk) and 6 (Prognosis). Hulley textbook, Chapter 7 (Cohort Studies). Users' Guides to the Medical Literature V. Prognosis. JAMA 272:234-237, 1994. CRCA Lectures: 9/26/07 (Causation); 10/3/07 (Risk and Prognosis)

Prognosis Study Design Features
- Cohort study
- A defined, representative sample of patients assembled at a common (usually early) point in the course of their disease
- Follow-up of patients complete and long enough to answer the study question
- Objective outcome criteria applied in a masked fashion (as feasible) with respect to prognostic factors
- Adjustment for important prognostic factors if subgroups are identified
- Validation of the predictors of outcome in an independent group ("test set") of patients, not the group in which the predictors were defined (preferable but not critical)

Descriptive/Prevalence Study
A study that describes the characteristics of a population without testing a hypothesis about the population or its subgroups.
References: Fletcher textbook, Chapter 4 (Frequency). CRCA Lecture: 9/5/07

Descriptive/Prevalence Study Design Features
- Population described in sufficient detail for the study to be replicated
- Subgroups (if used) described in sufficient detail for the study to be replicated
- Delineation of the disease status or condition described in sufficient detail for the study to be replicated
- Inter-observer reliability evaluated for any test or assessment being used

Systematic Reviews
- A review of the literature using explicitly stated search methods and criteria for evaluating methodologic quality
- May or may not include summary statistical analyses (meta-analysis)
References: Hulley textbook, Chapter 13 (Secondary Studies and Systematic Reviews). Users' Guides to the Medical Literature VI. Overviews. JAMA 272:1367-1371, 1994. CRCA Lectures: 1/16/08

Systematic Reviews Design Features
- Focused clinical question
- Explicit and thorough methods for searching the literature
- Explicit and appropriate criteria for including and excluding studies in the review
- All clinically important outcomes considered
- Subgroups considered when appropriate

Economic Evaluations
A study that includes an assessment of the costs of alternative health care strategies.
References: Drummond MF et al. Methods for the Economic Evaluation of Health Care Programmes, 2nd ed, Oxford, 1997. Users' Guides to the Medical Literature XIIIA. Economic Analysis. JAMA 277:1552-1557, 1997; XIIIB. Economic Analysis. JAMA 277:1802-1806, 1997. CRCA Lectures: 2/13/02 (Quality of Life); 2/20/02 (Economic Evaluations)

Economic Evaluations Design Features
- Comparison of well-defined alternative courses of action
- Specified point of view
- Clinically important outcomes considered
- Valid evidence for the efficacy or accuracy of the alternatives
- Identification and valid measurement of all relevant costs
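One common summary in economic evaluations is the incremental cost-effectiveness ratio (ICER): the extra cost per extra unit of benefit of one strategy over another. The sketch below uses hypothetical figures and is illustrative only, not part of the original presentation.

```python
# Minimal sketch (not from the presentation): an incremental cost-effectiveness
# ratio (ICER). All figures are hypothetical.
def icer(cost_new, cost_std, effect_new, effect_std):
    """Incremental cost per additional unit of effect (e.g., per case
    prevented or per QALY gained). Not meaningful if the new strategy is
    both cheaper and more effective (dominant)."""
    return (cost_new - cost_std) / (effect_new - effect_std)

# Example: a new strategy costs $12,000 vs $9,000 and prevents 0.15 vs 0.10
# events per patient -> $60,000 per additional event prevented
print(icer(12_000, 9_000, 0.15, 0.10))
```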
Protocol Writing

Components of a Clinical Research Protocol
- Background/Significance, including a systematic review of the literature
- Question or hypothesis (review the PICO format above)
- Methods
  - Population (inclusion/exclusion criteria)
  - Recruitment methods
  - Tracking of eligible non-enrolled subjects (for management trials)
  - Procedures for group assignment
  - Study and control interventions, if applicable
  - Management of co-interventions and/or confounding variables
  - Masking
  - Procedures for monitoring recruitment and protocol compliance
  - Outcomes (primary and secondary) with methods of assessment
  - Analysis plan (including sample size)
  - Procedures for safety monitoring and early termination
- Limitations
- References
References: Hulley textbook, Chapters 1 (Getting Started) and 3 (Study Subjects). Moher D et al. The CONSORT Statement: Revised Recommendations for Improving the Quality of Reports of Parallel-Group Randomized Trials. JAMA 285:1987-1991, 2001. Schulz KF et al. Empirical evidence of bias. JAMA 273:408-412, 1995. Fletcher textbook, Chapter 2 (Abnormality). Hulley textbook, Chapters 4 (Measurements) and 5 & 6 (Sample Size and Power). CRCA Lectures: 1/30/02 (Practical Aspects); 2/6/02 (Data Management); 10/10/07 (Measurement); 9/6/06-10/4/06 (Biostatistics for Clinical Investigators Course)

Final Preparation
- Present the proposal to the division/department for practical advice and for approval of procedures that affect patient management; do this earlier in the process (before finalizing the protocol) if problems are anticipated.
- Present the proposal to the nursing staff for practical advice and for suggestions about flyers, reminders, and preprinted order sheets.

IRB/CPHS Approval
- Approval is required before enrollment begins.
- Approval is required for all research (retrospective or prospective, observational or interventional).
- Common (albeit controversial) definition of research: any activity involving human subjects that is designed to yield generalizable knowledge.
References: Hulley textbook, Chapter 14 (Ethical Issues). CRCA Lectures: 2/21/07-4/4/07 (Ethical Issues in Clinical Research Course)

IRB/CPHS Approval Process
- Expedited approval is possible if the project does not affect patient management, poses no risk to patient confidentiality, and requires no informed consent.
- Full committee review (with or without informed consent) is required for all other projects.
- See the CPHS web page for official policies, application instructions, and the web page for electronic submission.
- There is no longer a submission deadline. Proposals may undergo "pre-review" before submission to the committee.
- Committee meetings are held three times a month. Notification of the decision is usually available the following week.
- Approval with modifications is usually given at first review for proposals that are well designed and well written with no (or easily solvable) research ethics problems. Others are likely to be deferred until the following month.
- Major considerations for the committee:
  - Risks to participants vs. benefits for participants and society
  - Consent process (should present risk/benefit information adequately and fairly and minimize the potential for coercion)
- The consent form must be translated into Spanish, if applicable (after final approval).
- All investigators must provide a certificate of Education on the Protection of Human Subjects (available online) before beginning research.
IRB/CPHS Application Components
- CPHS application (electronic)
- Study protocol
- Consent form
- Letters of approval/cooperation (if applicable)
- Recruitment materials
- Survey/questionnaire forms
- HIPAA form(s) (part of the electronic application)
- Pediatric Risk Assessment form (if applicable; part of the electronic application)

Hospital/Facility Approval Process
- Approval must be received before research can begin.
- Review focuses primarily on costs to the facility and potential public relations problems.
- For Memorial Hermann: the electronic application is submitted with the CPHS forms. Approval takes weeks to months longer than CPHS approval. (Contact Marianna Riggs [713 704-4256] at Memorial Hermann if there is a prolonged delay.)
- For LBJ: the form is submitted directly to the Harris County Hospital District; the form, instructions, and contact information are available online. It cannot be submitted until CPHS approval has been obtained, with approved consent forms in English and Spanish. Approval takes weeks to months longer than CPHS approval.

Clinical Research Unit (CRU) [formerly General Clinical Research Center (GCRC)]
- NIH funds that can be used to support unfunded investigator-initiated studies or to supplement funded studies
- See the CRU web page for details regarding eligibility and application procedures.
- The submission deadline is the last Friday of the month. Scientific Advisory Committee meetings are held on the fourth Thursday of the next month. Notification of the decision is usually available the following week.
- Applications are now submitted online in conjunction with the IRB submission.
- Pilot grants have recently become available through the CCTS to support pilot clinical research projects conducted by junior faculty and fellows.

CRU Approval Process
- Approval with modifications is usually given at the first review for proposals that are scientifically meritorious, well designed, and well written. Others are likely to be deferred until the following month.
- Major considerations for the committee: scientific merit and appropriate use of the CRU

Getting the Study Started

Planning for Study Procedures
- Study procedures should be detailed in writing to ensure consistency among personnel and over time, and should deal with all plausible contingencies.
- The level of detail should be greatest, and the tolerance for error lowest, for the study and control interventions and for the determination of the primary outcome variable.
- Modify the written procedures as needed when problems arise during the study.
References: Hulley textbook, Chapter 17 (Implementing the Study)

Preparation for Data Collection
- Data analysis and data collection should be driven by the hypotheses. Use the analysis plan to design data collection.
- Data definitions should be detailed in writing to ensure consistency among personnel and over time (a minimal data-dictionary sketch follows below).
- The level of detail should be greatest, and the tolerance for error lowest, for key data items, especially outcome variables.
References: Hulley textbook, Chapter 16 (Data Management)

Data Collection
- Collect the data items needed for important baseline characteristics (population description) and for analysis of the primary and secondary outcomes.
- Select a limited number of predictor/risk-adjustment variables that can be accurately determined on all or most subjects.
- Avoid the temptation to collect more data than you need or more than you can carefully collect. Put more effort into the accuracy and completeness of a limited set of data items.
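The written data definitions and limited variable set described above can also be captured in machine-readable form, which later supports the data entry checks discussed in the next section. The following is a minimal sketch with hypothetical variable names and limits, not study items from the presentation.

```python
# Minimal sketch (not from the presentation): written data definitions
# captured as a small machine-readable data dictionary that can drive
# range and missing-value checks. Variable names and limits are hypothetical.
DATA_DICTIONARY = {
    "birth_weight_g":  {"min": 300, "max": 6000, "required": True},
    "gest_age_weeks":  {"min": 22,  "max": 44,   "required": True},
    "primary_outcome": {"allowed": {"yes", "no", "permanently missing"},
                        "required": True},
}

def check_record(record):
    """Return a list of problems found in one subject's record."""
    problems = []
    for name, rules in DATA_DICTIONARY.items():
        value = record.get(name)
        if value is None:
            if rules.get("required"):
                problems.append(f"{name}: missing")
            continue
        if "allowed" in rules and value not in rules["allowed"]:
            problems.append(f"{name}: {value!r} is not an allowed code")
        if "min" in rules and value < rules["min"]:
            problems.append(f"{name}: {value} below plausible range")
        if "max" in rules and value > rules["max"]:
            problems.append(f"{name}: {value} above plausible range")
    return problems

# Example: one implausible value and one missing required item are flagged
print(check_record({"birth_weight_g": 250, "gest_age_weeks": 29}))
```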
Data Entry Tips
- Every data item should have an answer (include "other", "not applicable", and "permanently missing") so that no item is left blank.
- Use procedures to check for missing data on an ongoing basis.
- Use procedures to identify or prevent implausible responses as data are entered.
References: Hulley textbook, Chapter 15 (Questionnaires and Data Instruments)

Spreadsheets
- Often used as substitutes for databases, particularly for small studies
- Simpler to set up than databases
- Easy to do relatively simple calculations
- Data entry errors are easy to make, especially with a large number of subjects.
- Sorting is possible but can be risky.
- Can be exported into statistical programs
- Online module on the use of Excel: CRCA Computer Course

Databases
- More complex to set up
- Allow data entry forms that resemble paper forms, making data entry more convenient
- Multiple options for controlling data entry: validation rules, required entry, look-up tables, radio buttons, checkboxes, etc.
- Include definitions and instructions on the form as feasible.
- Test forms before using them on study subjects.
- Easier sorting/selecting, even with complex sorting criteria
- Automatic record saving
- Can be exported into statistical programs
- Online module on the use of Access: CRCA Computer Course

Quality Control for Outcome Measures
Error/variance might be reduced by:
- Written procedures
- Training sessions
- Testing and certification of examiners
- A centralized reader

Monitoring Adherence
- Poor protocol adherence results in a bias toward the null.
- Monitor protocol adherence on an ongoing basis.
- Goals for adherence: very high in an efficacy/explanatory trial; "real world" in a management trial

Dealing with Missing Data and Loss to Follow-up
- Nonadherent participants generally have worse outcomes than adherent participants, even if the treatment is a placebo.
- Survival analysis is useful only if survival is the outcome of interest and if the reason for withdrawal/censoring is unrelated to the intervention.
- Plan procedures to minimize loss to follow-up.
- Plan for competing events – e.g., death before the outcome evaluation.

Early Termination
- Reasons for early termination:
  - For small/medium trials: none (an exception is a clearly demonstrated mortality difference, which is unlikely with small sample sizes)
  - For large trials: therapy more effective than projected; adverse effects outweigh potential benefits; no realistic expectation of a difference
- Statistical adjustments must be made for each interim analysis (this will increase the sample size or decrease power); see the sketch at the end of this outline.
- Decisions for early termination should be made with great caution. Consider the impact on the credibility and acceptance of the results, and the impact on usual practice.

Individual help is also available for CRCA participants through Research Support Services in the Center for Clinical Research and Evidence-Based Medicine.
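To illustrate the interim-analysis adjustment noted under Early Termination above: with repeated unadjusted looks at the data, the overall chance of a false-positive "stop" exceeds the nominal alpha. The sketch below, not from the presentation, shows the inflation under a simplifying independence assumption and a simple Bonferroni-style per-look threshold; real trials use formal group-sequential boundaries (e.g., O'Brien-Fleming or Pocock) chosen with a statistician.

```python
# Minimal sketch (not from the presentation): why interim analyses need
# statistical adjustment.
from scipy.stats import norm

alpha, looks = 0.05, 4

# Upper bound on the overall false-positive rate with unadjusted looks
# (assumes, unrealistically, that the looks are independent)
print(1 - (1 - alpha) ** looks)              # about 0.185, not 0.05

# Bonferroni-adjusted per-look alpha and its two-sided z critical value
per_look = alpha / looks
print(per_look, norm.ppf(1 - per_look / 2))  # 0.0125 and about 2.50
```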