Chapter 7: Using Nonexperimental Research
© 2005 The McGraw-Hill Companies, Inc., All Rights Reserved.

Developing Behavioral Categories
- A behavioral category includes the general and specific classes of behavior to be observed
- Categories must be operationally defined
- Developing behavioral categories may be easy or challenging
- Behavioral categories must be clearly defined to avoid confusion
- Begin with clear goals for the research
- Clearly define all hypotheses
- Keep categories as simple as possible
- Avoid the temptation to accomplish too much in one study

Quantifying Behavior in Observational Research
- Frequency Method: Record the frequency with which a behavior occurs within a time period
- Duration Method: Record how long a behavior lasts
- Intervals Method: Divide the observation period into several discrete time intervals (e.g., ten 2-minute intervals), and record whether a behavior occurs within each interval

Coping With Complexity in Observational Research
- Time Sampling: Scan subjects for a specific period (e.g., 30 seconds), and then record your observations during the next period
- Individual Sampling: Select a subject and observe its behavior for a given period (e.g., 30 seconds), and then shift to another subject and repeat the observations
- Event Sampling: Select one behavior for observation and record all instances of that behavior; it is best if one behavior can be specified as more important than the others
- Recording: Use a recording device to make a record of behavior for later review

Evaluating Interrater Reliability
- When observations come from multiple observers, you must establish their reliability (interrater reliability)
- Methods for evaluating interrater reliability:
  - Percent agreement: The simplest method; agreement should be around 70% or higher, but percent agreement may underestimate agreement
  - Cohen's Kappa: A popular method that lets you determine how much of the observed agreement exceeds what would be expected by chance
  - Pearson product-moment correlation: Correlate the ratings of multiple observers with Pearson r; simple and easy to use, but two sets of scores may correlate highly and still differ markedly
  - Intraclass correlation (ICC): Extends Analysis of Variance logic to interrater reliability; a powerful and flexible tool

Interrater Reliability: Using Cohen's Kappa
- Tabulate the frequencies of interrater agreement and disagreement in a CONFUSION MATRIX
- Determine the proportion of actual agreement (P_o) by summing the values along the diagonal of the confusion matrix and dividing by the total number of observations
- Find the proportion of expected agreement (P_e) by multiplying corresponding row and column totals, summing the products, and dividing by the number of observations squared
- Enter the resulting numbers in the formula for Cohen's Kappa: Kappa = (P_o - P_e) / (1 - P_e)
- A Cohen's Kappa of .70 or more indicates acceptable interrater reliability
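The steps above can be illustrated with a minimal sketch in Python (not part of the original slides); the 2 x 2 confusion matrix of counts, the function name, and the rater labels are hypothetical.

```python
def cohens_kappa(confusion):
    """Compute Cohen's Kappa from a square confusion matrix (list of lists)."""
    n = sum(sum(row) for row in confusion)  # total number of observations
    k = len(confusion)

    # Proportion of actual agreement: sum of the diagonal divided by N.
    p_o = sum(confusion[i][i] for i in range(k)) / n

    # Proportion of expected agreement: multiply corresponding row and
    # column totals, sum the products, and divide by N squared.
    row_totals = [sum(row) for row in confusion]
    col_totals = [sum(confusion[i][j] for i in range(k)) for j in range(k)]
    p_e = sum(r * c for r, c in zip(row_totals, col_totals)) / n ** 2

    # Kappa = (P_o - P_e) / (1 - P_e)
    return (p_o - p_e) / (1 - p_e)


# Hypothetical counts: rows are Rater A's two categories, columns are Rater B's.
matrix = [[45, 5],
          [10, 40]]
print(round(cohens_kappa(matrix), 2))  # 0.7, which meets the .70 guideline above
```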
Nonexperimental Approaches to Data Collection
- Naturalistic Observation: Unobtrusive observations are made of subjects' naturally occurring behavior
- Ethnography: The researcher becomes immersed in the behavioral or social system being studied; may be conducted as a participant or non-participant observation study
- Sociometry: You identify and measure interpersonal relationships within a group
- Case History: You observe and report on a single case
- Archival Research: You use existing records (e.g., police records) as your source of data
- Content Analysis: You analyze spoken or written records for the occurrence of specific categories of events (e.g., a word or phrase); both RECORDING UNITS and CONTEXT UNITS are evaluated

Issues to Be Considered in Ethnography
- Observing as a participant or non-participant
- Gaining access to a field setting
- Gaining entry into the group
- Becoming invisible
- Making observations and recording data
- Analyzing ethnographic data

Content Analysis: Defining Characteristics
- Used to analyze a written or spoken record for the occurrence of specific behaviors or events
- Archival sources are often used as sources of data
- Appears simple, but may be complex
- Should be used within a clearly developed study, including hypotheses to be tested
- Response categories must be clearly defined
- A method for quantifying behavior must be defined

Performing a Content Analysis
- Clearly defined response categories are essential
- Two units of analysis:
  - Recording unit: The element of the material you are going to record (e.g., instances of a certain word)
  - Context unit: The context within which the analyzed material appears
- Observers doing content analysis must be blind so that bias does not enter the analysis
- Materials to be analyzed should be chosen carefully to increase generality
- Content analysis cannot be used to establish causal connections among variables

Factors to Include When Meta-Analyzing Literature
- Full reference citation
- Names and addresses of the authors
- Sex of the experimenter
- Sex of the subjects used in each experiment
- Characteristics of the subject sample (e.g., how obtained, number)
- Task required of subjects and other details about the dependent variable
- Design of the study (including any unusual features)
- Control groups and procedures included to reduce confounding
- Results from statistical tests that bear directly on the issue being considered in the meta-analysis (effect sizes, values of inferential statistics, p values)
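As a minimal sketch only (not from the original slides), the factors above could be coded as one structured record per study; the Python class, field names, and example values below are hypothetical placeholders.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StudyRecord:
    citation: str                        # full reference citation
    authors: str                         # names (and addresses) of the authors
    experimenter_sex: Optional[str] = None
    subject_sex: Optional[str] = None
    sample: str = ""                     # how the sample was obtained, number of subjects
    task_and_dv: str = ""                # task required of subjects, dependent-variable details
    design: str = ""                     # design of the study, including unusual features
    controls: str = ""                   # control groups / procedures to reduce confounding
    effect_size: Optional[float] = None  # statistics bearing on the meta-analytic question
    test_statistic: Optional[float] = None
    p_value: Optional[float] = None

# Illustrative placeholder entry (not real data):
record = StudyRecord(
    citation="Author, A. (Year). Title. Journal, volume, pages.",
    authors="A. Author",
    subject_sex="female",
    sample="60 undergraduate volunteers",
    effect_size=0.45,
    test_statistic=2.21,
    p_value=0.03,
)
```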