Down to Earth
Educational Research Questions and Study Design

Catherine Horn, Ph.D., Blanca Plazas Snyder, Ed.D., John H. Coverdale, M.D., M.Ed., F.R.A.N.Z.C.P., Alan K. Louie, M.D., Laura Weiss Roberts, M.D., M.A.

Received August 18, 2008; accepted August 21, 2008. Dr. Horn is affiliated with the College of Education at the University of Houston; Dr. Plazas Snyder is affiliated with the University of Houston; Dr. Coverdale is affiliated with the Department of Psychiatry and Behavioral Sciences at Baylor College of Medicine in Houston; Dr. Louie is affiliated with Psychiatry Residency Training at San Mateo County Mental Health Services and with the Department of Psychiatry at UCSF; Dr. Roberts is affiliated with the Department of Psychiatry and Behavioral Medicine at the Medical College of Wisconsin in Milwaukee. Address correspondence to Catherine Horn, College of Education, Department of Educational Psychology, University of Houston, 4800 Calhoun Rd., Farish Hall 437, Houston, TX 77204-5038; clhorn2@uh.edu (e-mail).

Educational research is the systematic inquiry into a research question of interest. What generally differentiates such research from other types of rigorous inquiry is, most typically, the problem on which the work is centered. Although the field is broad, leaving much room for rigorous analysis of a wide variety of questions under one umbrella, ultimately all such studies carry with them an assumption of benefiting education. In the case of medical education, educational research has the potential to ultimately benefit patients.

One of the first steps in educational research design is to determine the adequacy of the design for answering the research question. This column reviews several of the main quantitative and qualitative educational research designs and their utility. It follows from an earlier editorial (1) that emphasized the responsibility of authors to provide a relatively comprehensive description of the study design to enable critical appraisal and assessment of the validity of the findings. One of our goals is to educate the reader and prospective researcher about the range of educational research designs, their applicability, and their strengths and weaknesses. In understanding the value of each of the methodological approaches discussed in this column, the prospective researcher must first recognize the importance of choosing a design that best matches the question of interest. Although various approaches to design have cycled in and out of vogue in the academic community, fundamentally sound empirical pursuit matches the optimal methodology with the research question, goals, or hypotheses. The design helps the investigator focus on the research question(s) and plan a disciplined approach to the collection, analysis, and interpretation of the related data (2). Design issues are fundamental to reducing the possibility of bias; they are decided in advance and are not readily rectified once the study has begun.

Study design and conduct are important considerations in reviewers' and editors' deliberations concerning the worthiness of a manuscript for publication. For example, Bordage (3) found that timeliness of the problem studied, excellence of writing, and soundness of study design were important strengths noted in accepted manuscripts. Conversely, some of the reasons for rejecting medical education research manuscripts included insufficient problem statements, limitations in the choice of sample and sampling methods, and inadequacy of research design (3).
This column begins, then, with a discussion of key steps in designing a study and follows with a description of several of the most prominent methodological approaches, both quantitative and qualitative, to analyzing and reaching conclusions from collected data. However, it is not our intention to provide a comprehensive discussion of all possible research designs or to describe appropriate statistical methodologies for each design option. Instead, we will address research questions and sampling methods, as well as experimental, quasi-experimental, meta-analysis, nonexperimental, and qualitative research methods in education. Some examples of published studies and designs are provided from Academic Psychiatry. When applicable, we will use Campbell and Stanley's (4) model for depicting the different forms of educational research designs.

Before turning to this discussion, the reader should note that although this column presents quantitative and qualitative methods separately, the two may be used together. A mixed-methods approach allows for the intersection of quantitative and qualitative paradigms such that one can foster or extend the work of the other toward answering a question. Indeed, each approach is useful in its own right and is often most productive when complementary (5).

Research Questions

To construct a well-designed and well-executed study, the researcher must first establish clear, well-focused research questions, goals, or hypotheses (6). Such efforts are important for several reasons. Primarily, the research question guides the choice of methodological approach(es) and serves to clearly delineate the parameters of the study (7). Additionally, a well-crafted research question allows the study to be embedded in the context of a broader field of complementary research. In turn, this enables a discussion of how the proposed design replicates, or alternatively intends to improve on, earlier methodologies and designs. Finally, such efforts help ensure that appropriate variables of interest are both identified and operationalized in such a way that data collection yields useful information for analysis. However, it should be appreciated that limitations in funding opportunities for educational research (8) may prohibit optimization of design. The goal then becomes finding the right balance between what is optimal and what is achievable given financial and practical limitations and given the research question. No study is perfect, after all; accordingly, reviewers and readers should guard against a particular form of bias that unduly focuses on perceived weaknesses as opposed to strengths (9).

Sampling

In defining the parameters of the study, the researcher must clearly define the population of interest. Establishing these boundaries allows the researcher (and consumers of research) to delineate both what is included in and what is excluded from the study at hand. Once that population has been defined, an appropriate sampling procedure is identified and applied to select a determined number of participants. Several important considerations drive sampling in educational (in fact, in all) research. Sampling provides the advantages of saving time and money, particularly in the context of large populations of interest. Both quantitative and qualitative educational research use sampling procedures.
For most quantitative work, researchers seek a representative sample, that is, one that does not vary in any important way from the full population. The most efficient way to achieve such a sample is through simple random sampling, which capitalizes on known probability theory to establish a relatively smaller group that accurately represents a larger population of interest. If the approach adheres to the principle of equal opportunity for selection, this sampling technique allows generalizations to be made within a small margin of error. More complicated sampling strategies (e.g., cluster sampling, stratified random sampling) build on the foundation of simple random sampling to provide representativeness when circumstances do not allow for selection simply at random. When convenience samples are used, participants are included simply because they are easily accessible; attending to the match between the subjects and the target population facilitates understanding of the generalizability of the results (10).

A power analysis allows a determination of the sample size needed to generalize findings with statistical confidence to the population of interest. This undertaking uses known or estimated characteristics of the extent to which the outcome of interest manifests in the population, typically represented through an effect size. A researcher may use effect size to determine the magnitude or strength of a relationship between two groups; it is important to note that effect size is independent of sample size (11). Additionally, the analysis incorporates the acceptable probability of rejecting a true null hypothesis (referred to as a type I error) and, alternatively, of retaining a false null hypothesis (a type II error). Ultimately, choosing a sample size becomes a balance between the need for statistical sensitivity and more practical considerations.

Where power analysis attends to statistical significance, it is important to differentiate such efforts from an understanding of substantive significance. Two examples serve to make this point. As indicated in the paragraph above, statistical significance is influenced by the number of participants included in the study: if the researcher increases the sample size, he or she will have a greater probability of finding statistical significance, so increasing sample size alone may result in findings achieving statistical significance. Conversely, the most common reason for a lack of statistical significance is a lack of power to identify differences. When interpreting statistical significance, the researcher therefore needs to consider the substantive implications of the findings. Where very small differences may be statistically significant with large samples, the relevance or importance of those findings to educational settings can be tenuous. Having a clear understanding of the contribution that sample size itself makes to statistical significance is therefore important in attributing meaning to findings.
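To make the interplay among effect size, sample size, and statistical power concrete, the sketch below runs an a priori power analysis for a two-group comparison. It is a minimal illustration only, assuming Python with the statsmodels library; the effect sizes and thresholds are hypothetical values chosen for demonstration, not figures from this column.

```python
# A minimal sketch of an a priori power analysis, assuming Python with
# statsmodels; effect sizes and alpha below are illustrative choices.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group needed to detect a medium effect (Cohen's d = 0.5)
# with alpha = .05 (type I error risk) and power = .80 (1 - type II risk).
n_medium = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"n per group for d = 0.5:  {n_medium:.0f}")   # roughly 64

# Statistical vs. substantive significance: even a trivial effect
# (d = 0.05) becomes detectable if the sample is made large enough.
n_trivial = analysis.solve_power(effect_size=0.05, alpha=0.05, power=0.8)
print(f"n per group for d = 0.05: {n_trivial:.0f}")  # several thousand
```

Note how the required sample balloons as the effect shrinks; this is the flip side of the point above that, with a large enough sample, even substantively trivial differences reach statistical significance.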
For qualitative educational research, sampling is purposive (12) in that it aims to cover a range of potentially relevant social phenomena and perspectives from an appropriate array of data sources. Although convenience sampling is the simplest and least subject to complexity of interpretation, the participant selection strategy should be described and justified (13). Qualitative research is especially valuable when the intent of the study is to delve deeply into a particular group to garner perspective from either its unique or its typical characteristics relative to a broader population. An important limitation of such an approach, however, is the lack of generalizability of findings.

Quantitative Methods

Experimental Design

Randomized Controlled Trials. Heralded as the gold standard of experimental research, randomized controlled trials function as the primary mechanism for drawing causal conclusions; their key features are randomization and control. Randomization comes in the form of random placement into a treatment or control group so that each participant has an equal and independent chance of being placed in either group. When randomization is conducted to acceptable standards (13), the experiment maximizes the opportunity to attribute changes to the treatment because, by chance, the two (or more) groups are likely equivalent in every other respect. Direct manipulation of a single treatment variable allows the researcher to evaluate the effects of that treatment on the participants when compared with the control group.

The validity of randomized controlled trials is determined in terms of a range of considerations. These include whether the assignment of subjects to treatments was appropriately randomized, whether groups were similar at the start of treatment, and whether all the subjects who entered the trial were properly accounted for at its conclusion. In addition, the assessment of validity includes whether groups were treated equally (aside from the intervention) and whether the outcomes were measured without bias. Outcomes should be clearly defined and replicable; the measurement of outcomes should be similar between groups, and follow-up should be sufficiently long.

Several types of experimental designs are used in educational research. The pretest-posttest randomized control group design and the Solomon four group design are the most rigorous. The classic pretest-posttest randomized control group design formulates the experiment as represented in Figure 1.

FIGURE 1. Pretest-Posttest Randomized Control Group Design

  R   O   X   O
  R   O       O

In Figure 1, R denotes randomization, and O represents observational measures for both the control and treatment groups. In educational research, observational measures often come in the form of assessments such as standardized tests. The X in the first series represents the treatment (educational intervention); its absence in the second represents the absence of treatment and designates the control group. One example of a pretest-posttest randomized control group design was an exploration of the effects on attitudes toward and knowledge of ECT of learning by live observation compared with learning by an instructional videotape (14).

The Solomon four group design is a variation on this original design that tries to moderate Hawthorne effects, that is, improvement that occurs as an artifact of participation rather than as a result of actual changes. As represented in Figure 2, the Solomon design includes four groups, two of which receive treatment and two of which do not. By combining varying observations with the presence or absence of treatment, the Solomon design "deservedly has higher prestige and represents the first explicit consideration of" (4) factors that may threaten validity.

FIGURE 2. Solomon Four Group Research Design

  R   O   X   O
  R   O       O
  R       X   O
  R           O
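The Campbell and Stanley notation can also be read as an analysis plan. The simulation below is a hypothetical sketch of the Figure 1 design, assuming Python with numpy and scipy (neither of which the column itself prescribes): random assignment, a pretest, an intervention, a posttest, and a comparison of gains between arms. All numbers are invented for illustration.

```python
# A minimal simulation of the pretest-posttest randomized control group
# design in Figure 1; n, the score scale, and the treatment effect are
# hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 60  # participants, randomized 1:1 (the "R" in Figure 1)

treat = rng.permutation(n) < n // 2            # random assignment mask

pre = rng.normal(70, 10, n)                    # first "O": pretest scores
effect = np.where(treat, 5.0, 0.0)             # "X": intervention adds 5 points
post = pre + rng.normal(2, 5, n) + effect      # second "O": posttest scores

# Compare gain scores between arms; randomization is what justifies
# attributing any systematic difference in gains to the intervention.
t, p = stats.ttest_ind(post[treat] - pre[treat], post[~treat] - pre[~treat])
print(f"t = {t:.2f}, p = {p:.4f}")
```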
Sometimes pretesting is not feasible. The posttest-only control group design is exactly the same as the pretest-posttest design except that the observations are completed only after the educational intervention (4). Another potential advantage of a posttest-only design is that the absence of pretesting enhances the generalizability of study results, because with the pretest-posttest design, results may only be generalizable when participants are primed by pretesting (15, 16).

Randomized controlled trials in educational research are not easy to perform and have limitations. For one, there are sometimes ethical concerns about selecting students for a treatment while denying others that same opportunity. Second, blinding of participants and educators can be challenging. For example, educational researchers have historically struggled to evaluate problem-based learning because a myriad of factors, including educational facilities and teacher and student characteristics, are not within the control of the researcher (17). Third, educational settings that might be logically suited for experimental design, such as individual residency training programs, often do not contain sufficient numbers to power a randomized study. Although experimental designs have much to offer in attributing causality to interventions of interest, they have not historically been the most appropriate or efficacious choice for educational research. In fact, because of the natural threats created by educational settings, educational research has advanced other research methodologies in an effort to limit those threats while not substantially compromising rigor.

Quasi-experimental Design. Quasi-experimental design differs from experimental design in that participants are not randomized to treatments or conditions, although they are assigned to an intervention and a comparison group (Figure 3). Quasi-experimental design is a natural extension of the classic experimental design approach that attends to limitations in educational settings. The distinct disadvantage of quasi-experimental designs is that randomization is not explicitly attended to, and thus the potential for significant differences between groups and for confounding (when factors other than the experimental intervention affect outcomes) is increased.

FIGURE 3. Quasi-Experimental Design

  O   X   O
  O       O

One example of a quasi-experimental design is the nonrandomized, pretest-posttest control group research design. Here, the independent variable is manipulated, although participants are not randomly selected and assigned to the experimental and control groups. A pretest is administered, the intervention is administered to the experimental group, and no intervention or an alternative intervention is administered to the control group, followed by posttesting. Relevant baseline differences should therefore be recorded to allow for a determination of the degree to which confounding might occur. One example of a nonrandomized control group design examined the effect of personality characteristics of medical students on clinical judgment and whether brief cognitive behavior interventions can modify this process (18).
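Because groups in a quasi-experimental design may differ at baseline, analysts often adjust for the pretest statistically. The column itself does not prescribe an analysis method; the sketch below shows one common option, an ANCOVA-style regression, assuming Python with pandas and statsmodels and using an invented data frame and column names. Such adjustment reduces, but never eliminates, confounding from nonequivalent groups.

```python
# A minimal sketch of analyzing a nonrandomized pretest-posttest design
# (Figure 3) while adjusting for baseline differences; data and column
# names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# One row per participant: posttest score, pretest score, and a 0/1
# indicator for the (nonrandomized) intervention group.
df = pd.DataFrame({
    "post":  [78, 82, 75, 88, 91, 70, 74, 69, 80, 73],
    "pre":   [70, 75, 68, 80, 85, 69, 72, 66, 78, 71],
    "group": [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
})

# The group coefficient estimates the intervention effect holding pretest
# performance constant, partially addressing nonequivalence at baseline.
model = smf.ols("post ~ pre + group", data=df).fit()
print(model.params["group"], model.pvalues["group"])
```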
Nonexperimental Designs. Correlational studies, also known by educational researchers as associational studies, seek to identify the extent to which relationships may exist between variables. This can occur by assessing the relationship between variables at one point in time or by assessing how scores on one variable predict scores on another (19, 20). As opposed to experimental and quasi-experimental studies, variables are not manipulated in correlational research, and random assignment is not possible. This means that the variables must be observed as they occur naturalistically (21). Ultimately, the goal is to determine whether a relationship among variables exists or can be predicted and what the strength of that relationship is. It is important to note that causal inferences cannot be made from the findings. When interpreting correlational data, a researcher will look to the scores to determine the relationship. As depicted in Figure 4, the relationship is multidirectional and can be positive or negative. One example concerned the assessment of attitudes toward psychiatry by Kenyan medical students (22).

FIGURE 4. Nonexperimental Design

  Oa ↔ Ob
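As a concrete illustration of quantifying such a relationship, the snippet below computes a Pearson correlation coefficient for two hypothetical sets of paired scores, assuming Python with scipy; the variable names and values are invented for demonstration.

```python
# A minimal correlational analysis; the paired measurements are invented.
from scipy.stats import pearsonr

attitude_scores = [32, 41, 28, 45, 38, 30, 44, 36]   # e.g., attitude scale
exam_scores     = [61, 74, 55, 80, 70, 58, 77, 66]   # e.g., clerkship exam

r, p = pearsonr(attitude_scores, exam_scores)
print(f"r = {r:.2f}, p = {p:.4f}")
# r quantifies the strength and direction of the association; it says
# nothing about whether one variable causes the other.
```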
Other forms of nonexperimental designs that include a comparison group are cohort and case-control studies. Cohort studies proceed in a logical fashion from exposure to outcome, whereas case-control studies work backward (23). Cohort studies start with an exposure of interest and follow the exposed group and another group without the exposure to determine which group develops a higher risk of a particular outcome (23). Alternatively, case-control studies start with the outcome and look backward to discover particular exposures that might be associated with the outcome (23). Case-control studies are most useful when the outcome of interest is unusual or rare. As an example of a case-control study, current methods of evaluating residency applicants were assessed as to whether they enabled later prediction of impairment: impaired physicians who had previously been evaluated as residents were compared with a matched nonimpaired control group for whom records of their earlier residency applications were available (24).

Survey research provides another form of nonexperimental research (25). Information can be collected from a group at one time, which is called a cross-sectional survey (19). For example, one study examined associations of personality and other characteristics with clinical evaluations of third-year medical students in a psychiatry clerkship (26). Another option is for a researcher to obtain data over a period of time, which is called a longitudinal survey. Types of longitudinal studies include the trend study and the panel study (20). In a trend study, a survey is given to different samples of a particular population at different times. Within that specific population, membership may change, such as when a researcher is interested in studying patient satisfaction with physician bedside manner over a 2-year period: although the specific patients will change throughout the course of the study, the population being studied (i.e., patients) remains the same. The goal of a panel study is to survey a sample of the same individuals at different points in time (19).

Researchers must be aware that if nonexperimental designs are chosen, the findings are only applicable to the specific group or groups from which the data were extracted. Nonexperimental designs can attempt to infer causality among variables or to predict outcomes but are limited in their ability to do so compared with experimental designs.

Qualitative Methods

Action Research

Derived from multiple traditions (e.g., psychology, anthropology, sociology), action research has become a practical approach to encouraging the use of research in creating positive social changes (27). Requiring high levels of rigor, action research is highly reflective and interpretive, focuses on experience, and empowers and emancipates both the participants and the researcher to improve social conditions. Action research is best applied to situations in which a problem needs to be solved or gathering the data will inform practice (19, 27). Ultimately, the use of action research is intended to address an immediate issue; it assumes that those involved in the particular situation are able to identify and respond appropriately to problems that arise, and it requires that the information gathered be applied only to that specific situation (19).

Historical Research

Historical studies are similar to biographies in that a biography requires historical context. A key difference is that when conducting a historiography, the researcher looks to records, accounts, and archives to study an event or series of events that have already occurred (27). Despite the fact that historical research as a method has existed for a long time, only over the last 10 years or so has it become appreciated as a serious qualitative method (27). Researchers have realized that examining a historical event or set of events is advantageous in that they can examine the meaning of symbols used in the past and understand past behaviors and thoughts (27). For example, one study determined the degree to which medical humanities have been integrated into various medical fields by analyzing articles in selected psychiatry and internal medicine journals from 1950 to 2000 (28).

Case Study

Here, instead of looking at subjects or participants, the researcher is focused on a case. An individual, a residency program, or a medical school can all be considered a case. Conducting a case study is most appropriate "when 'how' or 'why' questions are being posed, when the investigator has little control over events, and when the focus is on a contemporary phenomenon within some real-life context" (29). When conducting a case study, having control over behavioral events is not required or necessarily desired (29). There are three different types of case studies: intrinsic, instrumental, and multiple/collective (19). Often applied for explanatory research purposes, an intrinsic case study examines one specific person or situation to understand every aspect of the case, both internally and externally (19), for instance, when studying one medical student's journey. The focus of an instrumental case study is much broader than that of the intrinsic study in that the goal is more global: a researcher is interested in examining a specific case only insofar as the data can be applied to a larger goal. Finally, a multiple or collective case study involves looking at more than one case at the same time. One example of a case study concerned the evaluation of a mentoring model for junior faculty (30).
Participant Observation

Most appropriately applied in theoretical, descriptive, and exploratory studies, participant observation is an excellent method "for studying processes, relationships among people and events, the organization of people and events, continuities over time, and patterns, as well as the immediate sociocultural contexts in which human existence unfolds" (31). At the root of this method is the insider's or participant's perspective on his or her everyday life or world and the reality in which he or she lives. A researcher takes an unobtrusive role and attempts to avoid disrupting or intruding on the individual's world (31). The role of a participant observer can range from overt, where the individual or group is aware of the researcher's presence, to covert, in which the group or individual is not aware that the "new" person is the researcher (31). A fundamental aspect of this method is forming and maintaining relationships between the researcher and the insiders to build trust, especially when participating overtly. Gaining trust is key to ensuring that the insiders do not alter their actions because of the researcher's presence. Given the time required to gain this trust, participant observation can be extremely time consuming and exhausting (31). This method is similar to ethnography, which allows a researcher to examine and observe individuals in their natural environment in an attempt to uncover what "truly" goes on in a particular situation (19).

Grounded Theory

One of the most commonly used methods in qualitative research is grounded theory. The term theory is misleading in that this method is not based on an already existing theory (20). To the contrary, grounded theory is an inductive approach that analyzes and considers data as a way to form a theory that explains the observed findings (20). In other words, the researcher does not begin the study with a theoretical perspective or framework in mind; rather, he or she lets the theory grow out of, and be grounded in, the data collected. When grounded theory is applied, researchers use a constant comparative method in which there is continuous interaction among the data, the researcher, and the developing theory (19). As new data are collected, the researcher compares them with older data to analyze any possible similarities or differences (20). This inductive method is very beneficial when a researcher is interested in letting the data dictate the theory. One example is a study in which researchers interviewed psychiatric residents to uncover key themes in their daily experiences with educators (32).

Systematic Reviews (Meta-Analysis)

In cases where much work has been done to explore the same or similar questions of interest, researchers may employ a systematic approach to empirically synthesizing that work. A systematic review provides an advantage over a single study for two important reasons. First, when studies are combined, the resultant statistical power is greater than that of any single study. Second, understanding an intervention or phenomenon across multiple studies allows for a better opportunity to identify and understand the influence of moderating variables (33-36). Although there are many empirical approaches to systematic reviews, a meta-analysis typically utilizes effect size as a statistical standardization to understand the relationship between outcomes and predictor variables of interest (34).
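To illustrate that effect-size logic, the sketch below computes a standardized mean difference (Cohen's d) for several hypothetical studies and pools them with inverse-variance weights, assuming Python with numpy. A real meta-analysis would add refinements such as small-sample bias correction (Hedges' g) and random-effects models; this shows only the core idea.

```python
# A minimal sketch of pooling standardized effect sizes across studies;
# the study summaries below are invented for illustration.
import numpy as np

# (mean_treatment, mean_control, pooled_sd, n_treatment, n_control)
studies = [(78, 72, 10, 40, 40),
           (81, 76, 12, 25, 30),
           (74, 73,  9, 60, 55)]

ds, weights = [], []
for mt, mc, sd, nt, nc in studies:
    d = (mt - mc) / sd                       # standardized mean difference
    var_d = (nt + nc) / (nt * nc) + d**2 / (2 * (nt + nc))  # approx. variance
    ds.append(d)
    weights.append(1 / var_d)                # more precise studies weigh more

pooled = np.average(ds, weights=weights)
print(f"pooled effect size: {pooled:.2f}")
```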
Conclusion

Current and prospective researchers must fundamentally grapple with the link between the research question and the design. To do this well, researchers need a down-to-earth appreciation of the fundamentals of key design approaches and issues. On one hand, because research is time consuming, it is paramount that researchers carefully consider the design in advance so as to protect against insurmountable errors. On the other hand, no study can be perfect; the goal is to minimize defects, not to eliminate them entirely. Educational research design is critical to rigorous, meaningful scholarship and should be assiduously attended to by all researchers.

At the time of submission, Drs. Horn and Plazas Snyder disclosed no competing interests. Disclosures of Academic Psychiatry editors are published in each January issue.

References

1. Coverdale J, Roberts L, Louie A, et al: Writing the methods. Acad Psychiatry 2006; 30:361–364
2. McGaghie WG, Bordage G, Crandall S, et al: Research design. Acad Med 2001; 76:929–930
3. Bordage G: Reasons reviewers reject and accept manuscripts: the strengths and weaknesses in medical education reports. Acad Med 2001; 76:889–893
4. Campbell DT, Stanley JC: Experimental and Quasi-Experimental Designs for Research. Chicago, Rand McNally, 1963
5. Bordage G: Moving the field forward: going beyond quantitative-qualitative. Acad Med 2007; 82(suppl):S126–S128
6. McGaghie WC, Bordage G, Shea JA: Problem statement, conceptual framework, and research questions. Acad Med 2001; 76:923–924
7. Sackett DL, Wennberg JE: Choosing the best research design for each question: it's time to stop squabbling over the "best" methods. BMJ 1997; 315:1636
8. Reed DA, Kern DE, Levine RB, et al: Costs and funding for published medical education research. JAMA 2005; 294:1052–1057
9. Owen R: Reader bias. JAMA 1982; 247:2533–2534
10. McGaghie WC, Crandall S: Population and sample. Acad Med 2001; 76:934–935
11. Gay LR, Airasian P: Educational Research: Competencies for Analysis and Applications. Columbus, Ohio, Merrill Prentice Hall, 2003
12. Giacomini MK, Cook DJ, Evidence-Based Medicine Working Group: Users' guides to the medical literature: XXIII. Qualitative research in health care. B. What are the results and how do they help me care for my patients? JAMA 2000; 284:478–482
13. Ogundipe LO, Boardman AP, Masterson A: Randomization in clinical trials. Br J Psychiatry 1999; 175:581–584
14. Warnell RL, Duk AD, Christison GW, et al: Teaching electroconvulsive therapy to medical students: effects of instructional method on knowledge and attitudes. Acad Psychiatry 2005; 29:433–436
15. Gall MD, Gall JP, Borg WR: Educational Research: An Introduction. White Plains, NY, Longman, 2003
16. Regehr G: The experimental tradition, in International Handbook of Research in Medical Education. Edited by Norman GR, Van der Vleuten CPM, Newble DI. Boston, Kluwer Academic, 2002
17. Prideaux D: Researching the outcomes of educational interventions: a matter of design: RCTs have important limitations in evaluating educational interventions. BMJ 2002; 324:126–127
18. Campo AE, Williams V, Williams RB, et al: Effects of LifeSkills training on medical students' performance in dealing with complex clinical cases. Acad Psychiatry 2008; 32:188–193
19. Fraenkel JR, Wallen NE: How to Design and Evaluate Research in Education. Boston, McGraw-Hill, 2006
20. Patten ML: Understanding Research Methods: An Overview of the Essentials. Glendale, Pyrczak Publishing, 2005
21. Johnson B: Toward a new classification of nonexperimental quantitative research. Educational Researcher 2001; 30:3–13
22. Ndetei DM, Khasakhala L, Ongecha-Owuor F, et al: Attitudes toward psychiatry: a survey of medical students at the University of Nairobi, Kenya. Acad Psychiatry 2008; 32:154–159
23. Grimes DA, Schulz KF: An overview of clinical research: the lay of the land. Lancet 2002; 359:57–61
24. Dubovsky SL, Gendel M: Do data obtained from admissions interviews and resident evaluations predict later personal and practice problems? Acad Psychiatry 2005; 29:443–447
25. Sierles FS: How to do research with self-administered surveys. Acad Psychiatry 2003; 27:104–113
26. Chibnall JT, Blaskiewicz RJ: Do clinical evaluations in a psychiatry clerkship favor students with positive personality characteristics? Acad Psychiatry 2008; 32:199–205
27. Berg BL: Qualitative Research Methods for the Social Sciences. Boston, Allyn & Bacon, 2001
28. Rutherford BR, Hellerstein DJ: Divergent fates of medical humanities in psychiatry and internal medicine: should psychiatry be rehumanized? Acad Psychiatry 2008; 32:206–213
29. Yin RK: Case Study Research: Design and Methods. London, Sage Publications, 1994
30. Moss J, Teshima J, Leszcz M: Peer group mentoring of junior faculty. Acad Psychiatry 2008; 32:230–235
31. Jorgensen DL: Participant Observation: A Methodology for Human Studies. London, Sage Publications, 1989
32. Hilty DM, Maynes SM, Kellner M, et al: A day in the life of a psychiatry resident: a pilot qualitative analysis. Acad Psychiatry 2005; 29:405–407
33. Glass GV, McGaw B, Smith ML: Meta-Analysis in Social Research. Beverly Hills, Sage Publications, 1981
34. Krathwohl DR: Methods of Educational and Social Science Research: An Integrated Approach. London, Longman Publishing Group, 1993
35. Reeves S, Koppel I, Barr H, et al: Twelve tips for undertaking a systematic review. Med Teach 2002; 24:358–363
36. Reed D, Price EG, Windish DM, et al: Challenges in systematic reviews of educational intervention studies. Ann Intern Med 2005; 142:1080–1089