Investigative Studies (Slide 8-1)
• One type of investigative study is the observational study, a study based on data in which no manipulation of factors has been employed.
• Observational studies cannot show cause-and-effect relationships, but experiments can.

Observational Studies (Slide 8-2)
• A prospective study is an observational study in which units are followed to observe future outcomes.
• A retrospective study is an observational study in which units are selected and then their previous conditions or behaviors are determined.
• Remember, though, neither prospective nor retrospective studies can show cause-and-effect relationships.

Randomized, Comparative Experiments (Slide 8-3)
• An experiment:
  – Manipulates factor levels to create treatments.
  – Randomly assigns subjects to these treatment levels.
  – Compares the responses of the subject groups across treatment levels.
• In an experiment, the experimenter must identify at least one explanatory variable, called a factor, to manipulate and at least one response variable to measure.

Randomized, Comparative Experiments (cont.) (Slide 8-4)
• In general, the individuals on whom or which we experiment are called experimental units.
• The specific values that the experimenter chooses for a factor are called the levels of the factor.
• The combination of specific levels from all the factors that an experimental unit receives is known as its treatment.

The Four Principles of Experimental Design (Slide 8-5)
1. Control:
  – We control sources of variation other than the factors we are testing by making conditions as similar as possible for all treatment groups.
2. Randomize:
  – Randomization allows us to equalize the effects of unknown or uncontrollable sources of variation.
  – If experimental units are not assigned to treatments at random, you do not have a valid experiment and will not be able to use statistical methods to draw conclusions from your study.

The Four Principles of Experimental Design (cont.) (Slide 8-6)
3. Replicate:
  – Different individuals are likely to give different responses.
  – “The outcome of an experiment on a single subject is an anecdote, not data.”
  – Replication increases the reliability of results, e.g. of parameter estimates.
  – Replicates must be independent.
  – Replicates must not form part of a time series and must not be grouped together.

The Four Principles of Experimental Design (cont.) (Slide 8-7)
4. Block:
  – Sometimes, attributes of the experimental units that we are not studying and that we can’t control may nevertheless affect the outcomes of an experiment.
  – If we group similar individuals together and then randomize within each of these blocks, we can remove much of the variability due to the differences among the blocks (a randomised block design; a code sketch of within-block randomization appears at the end of these notes).
  – Note: Blocking is an important compromise between randomization and control, but, unlike the first three principles, it is not required in an experimental design.

Examples of randomised block designs (Slide 8-8)

  Experimental Units    | Blocks          | Treatments       | Response Variable
  ----------------------|-----------------|------------------|----------------------
  Cattle                | Herds           | Food Additives   | Weight Gain
  Mice                  | Litters         | Cancer Therapies | Survival Time
  Golfers               | Handicap Groups | Practice Methods | Length of Drive
  Osteoporotic Patients | Body Fat Groups | Exercise Regimes | Bone Density Changes

Does the Difference Make a Difference? (Slide 8-9)
• How large do the differences need to be before we can say that the treatments differ?
• Differences that are larger than we’d get just from the randomization alone are called statistically significant.
• It is also important to assess whether differences are scientifically significant.
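To make “larger than we’d get just from the randomization alone” concrete, here is a minimal randomization (permutation) test sketch. The response values, group sizes, seed and number of re-randomizations are hypothetical choices for illustration; only the underlying idea of reshuffling the treatment labels to see how big a difference chance alone produces comes from the slides.

```python
import random

# Hypothetical responses for a treatment group and a control group.
treatment = [5.2, 6.1, 4.8, 7.0, 6.4, 5.9]
control   = [4.1, 5.0, 4.6, 5.3, 4.4, 4.9]

def mean(xs):
    return sum(xs) / len(xs)

observed_diff = mean(treatment) - mean(control)

pooled = treatment + control
n_treat = len(treatment)
rng = random.Random(42)          # fixed seed so the sketch is reproducible
n_reps = 10_000
count_as_extreme = 0

for _ in range(n_reps):
    rng.shuffle(pooled)          # re-randomize the group labels
    sham_diff = mean(pooled[:n_treat]) - mean(pooled[n_treat:])
    if abs(sham_diff) >= abs(observed_diff):
        count_as_extreme += 1

p_value = count_as_extreme / n_reps
print(f"observed difference: {observed_diff:.2f}, randomization p-value: {p_value:.3f}")
# A small p-value means the observed difference is larger than what the
# randomization alone would typically produce, i.e. statistically significant.
```

Reshuffling the labels mimics a world in which the treatment has no effect, so the collection of sham differences shows how much variation the random assignment alone can generate.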
Examples of Treatment Comparisons (Slide 8-10)
[Two plots, not reproduced here:
  – Comparison of relaxation methods on runners’ resting heart rates: reduction in resting heart rate (beats/min) for three treatments (1 no treatment, 2 meditation, 3 progressive muscle relaxation).
  – Randomised block – sprout example: residue of sprout suppressant plotted against batch of potatoes (blocks 1–5) for airing methods A–D.]

Diagrams of Experiments (Slide 8-11)
• It’s often helpful to diagram the procedure of an experiment.
• Such a diagram emphasizes the random allocation of subjects to treatment groups, the separate treatments applied to these groups, and the ultimate comparison of results.

Experiments and Samples (Slide 8-12)
• Both experiments and sample surveys use randomization to get unbiased data, but they do so in different ways and for different purposes.
• Sample surveys try to estimate population parameters, so the sample needs to be as representative of the population as possible.
• Experiments try to assess the effects of treatments, and experimental units are not always drawn randomly from a population.

Control Treatments (Slide 8-13)
• Often, we want to compare a situation involving a specific treatment to the status quo situation.
• This baseline group is called the control group, and its treatment is called a control treatment.

Adding More Factors (Slide 8-14)
• We can examine more than one factor in an experiment. In fact, it is often important to include multiple factors in the same experiment in order to examine what happens when the factor levels are applied in different combinations.

Confounding (Slide 8-15)
• When the levels of one factor are associated with the levels of another factor, we say that these two factors are confounded.
• When we have confounded factors, we cannot separate out the effects of one factor from the effects of the other.

What Can Go Wrong? (Slide 8-16)
• Beware of confounding.
• Bad things can happen even to good experiments: record other potentially relevant information (e.g. climate).
• Don’t spend your entire budget on the first run; try a pilot experiment before running the full-scale experiment.
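As promised above, here is a minimal sketch of within-block random assignment, loosely based on the golfers / handicap groups / practice methods row of the randomised block design table. The golfer IDs, block sizes and treatment names are hypothetical; only the idea of shuffling and assigning units separately within each block comes from the slides.

```python
import random

# Hypothetical treatments and blocks (IDs and group sizes are made up).
treatments = ["practice_method_1", "practice_method_2", "practice_method_3"]
blocks = {
    "low_handicap":  ["G01", "G02", "G03", "G04", "G05", "G06"],
    "mid_handicap":  ["G07", "G08", "G09", "G10", "G11", "G12"],
    "high_handicap": ["G13", "G14", "G15", "G16", "G17", "G18"],
}

def randomized_block_assignment(blocks, treatments, seed=None):
    """Shuffle the units within each block, then deal them out to the
    treatments round-robin so every treatment is equally represented
    within every block."""
    rng = random.Random(seed)
    assignment = {}
    for block_name, units in blocks.items():
        shuffled = list(units)       # copy so the caller's lists are untouched
        rng.shuffle(shuffled)
        for i, unit in enumerate(shuffled):
            assignment[unit] = (block_name, treatments[i % len(treatments)])
    return assignment

if __name__ == "__main__":
    plan = randomized_block_assignment(blocks, treatments, seed=1)
    for unit, (block, treatment) in sorted(plan.items()):
        print(f"{unit}: {block:<14} -> {treatment}")
```

Because each handicap group is randomized separately, differences between blocks (good vs. weaker golfers) do not get mixed up with differences between practice methods, which is exactly the variability that blocking is meant to remove.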