FF8 Fortnight Analysis of Discrete Choice Data
Sheffield, September 2007

Can we use RUM and don't get DRUNK?

Jorge E. Araña, University of Las Palmas de Gran Canaria
Collaborators: Carmelo J. León (ULPGC), W. Michael Hanemann (UC Berkeley)

Outline
1. RUM and DC experiments
2. Sources of mistakes in citizens' choices
3. An extended frame: Bayesian modelling
4. Example: heuristics and DCE
   4.1. STUDY 1: Is it really a practical problem? A verbal protocol analysis.
   4.2. STUDY 2: A Bayesian finite mixture model in the WTP space. The effects of complexity and emotional load on the use of heuristics.
   4.3. STUDY 3: Heuristics heterogeneity and preference reversals in choice-ranking: an alternative explanation.
   4.4. STUDY 4: Can we use RUM and don't get DRUNK? A Monte Carlo study.
5. Discussion and further research

DCE and Non-Market Valuation
Valuation of health requires appropriate methods.
DCE are increasingly used and accepted.
Decision making process -> individual preferences -> coherent results for CBA or CEA.

The Underlying Economic Theory
• Morishima (Metroeconomica, 1959), Lancaster (JPE, 1966) – value from characteristics:
  B = f(P, E)
  B = observed/stated choices
  P = preferences (fundamental value)
  E = random term (context)

THE TWO MAIN ISSUES
1. MEASURING PREFERENCES (defining P)
   i) Experienced vs. choice utility
   ii) Absolute vs. relative utility (prospect theory)
   iii) …
2. LINKING CHOICES AND PREFERENCES: f(.)

The Departing Point
From the economic theory point of view:
• Lancaster (1966) – value from characteristics, B = f(P, E)
MAIN ISSUES
1. MEASURING PREFERENCES (defining P)
   i) Experienced vs. choice utility
   ii) Direct utility
   iii) Absolute vs. relative utility (prospect theory)
   iv) Happiness vs. utility
   …
2. LINKING CHOICES AND PREFERENCES: f(.)

How can we link Choices and Preferences? f(.)
Traditional answer (RUM): "Individuals have a single set of well-defined goals, and their behavior is driven by the choice of the best way to achieve those goals."
  choose i* iff V(i*) ≥ V(i) ∀ i ≠ i*, where V(i) = x_i'β
General – simple – intuitive – an accurate explanation of agents' choices in a wide range of situations.

However…
Strong and large evidence that citizens don't choose what makes them happy. Why?
Failing to predict future experiences
- Projection bias, distinction bias, memory bias, belief bias, impact bias
Failing to follow predictions
- Procrastination, self-control bias, overconfidence, anchoring effects, simplifying decision rules, …

However…
- Preference reversals (Slovic and Lichtenstein, 1971, 1973)
- Framing effects (Tversky and Kahneman, 1981, 1986)
- …
Does f(.) exist? Or just B = ε?
Our belief: YES, f(.) does exist.
The challenge: defining f(.) in a way that can accommodate these deviations.
Research strategy: thinking in a hyper-rationality concept.
Context matters… but fundamental values do too (McFadden, 2001; Grether and Plott, 1979; Slovic, 2002; …).

Solutions NEED to be … Multidisciplinary
- Economic theory
- Social psychology
- Statistics
- Cognitive psychology
- Neurology
- Political science, …
We need an extended frame that integrates contributions from these different areas.

Why not Bayesian?
One elegant and robust way of integrating multidisciplinary contributions to DC theory and data analysis: Bayesian econometrics.

Potential Bayesian Contributions to DCE
- Can use prior information (there is a lot of prior info available: previous research, experts, benefit transfer, optimal designs, …); a minimal illustration follows this list.
- Able to tackle more complex/sophisticated models.
- More accurate results (e.g. exact theory in finite samples).
- More informational results (reports full posterior distributions instead of just one or two moments). Sample means are inefficient and sensitive to outliers; this is especially important when studying heterogeneity in behaviour, where the role of the tails has long been ignored.
- Bayesian methods can quantify and account for several kinds of components of uncertainty.
- More interpretable inferences (probabilities, confidence?, …).
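To illustrate the first point (use of prior information), a minimal sketch of a conjugate normal–normal update of mean WTP. The prior, the "data", and the function posterior_mean_wtp are hypothetical illustrations only, not part of the models used in the studies below:

```python
# Minimal sketch: combining prior information on mean WTP with new survey data
# via a conjugate normal-normal update (sampling variance treated as known).
# All numbers are hypothetical, for illustration only.
import numpy as np

def posterior_mean_wtp(prior_mean, prior_sd, data, sampling_sd):
    """Normal prior N(prior_mean, prior_sd^2) + normal likelihood -> normal posterior."""
    n = len(data)
    prior_prec = 1.0 / prior_sd**2            # precision of the prior
    data_prec = n / sampling_sd**2            # precision contributed by the data
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * np.mean(data))
    return post_mean, np.sqrt(post_var)

rng = np.random.default_rng(0)
wtp_responses = rng.normal(45.0, 20.0, size=50)   # hypothetical elicited WTP values (EUR)
# Prior taken, say, from earlier studies / benefit transfer: mean 40, sd 10 (hypothetical)
mean, sd = posterior_mean_wtp(40.0, 10.0, wtp_responses, 20.0)
print(f"posterior mean WTP: {mean:.2f}, posterior sd: {sd:.2f}")
```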
EXAMPLE: Heterogeneous Decision Rules and DC

The Heterogeneity in Decision Rules Argument
- Decision making requires an information process: Simon (1956); Kahneman and Tversky (1974).
- Individuals have a set of decision strategies h1, h2, …, hH at their disposal that vary in terms of:
  - Effort = EC (how much cognitive work is necessary to make the decision using that strategy)
  - Accuracy = EU (the ability of that strategy to produce a good outcome).

Literature on Heuristics
The Adaptive Decision Maker (Payne, Bettman and Johnson, 1993)
• Toolbox of possible choice heuristics in multi-attribute choice:
• WADD: weighted additive rule
• EQW: equal weight heuristic
• SAT: satisficing rule (Simon, 1955)
• LEX: lexicographic heuristics
• EBA: elimination by aspects (Tversky, 1972)
• ANC: anchoring heuristic (Tversky and Kahneman)
• MCD: majority of confirming dimensions (Russo and Dosher, 1983)
• ADDIF: additive difference model (Tversky, 1969)
• FRQ: frequency of good and bad features (Alba and Marmorstein, 1987)
• AH: affect heuristic (Slovic, 2002)
• Combined strategies

Choosing How to Choose (CHTC)
TWO-STEP PROCESS
STEP 1. Choosing how to choose (choice of the decision rule):
  choose D* iff EU(D*) − EC(D*) ≥ EU(Dj) − EC(Dj) ∀ Dj ≠ D*
STEP 2. Applying the decision rule:
  choose i* iff D*(i*) ≥ D*(i) ∀ i ≠ i*
Applications: Manski (1977), Gensch (1987), Chiang et al. (1999), Gilbride and Allenby (2004), Beach and Potter (1992), Swait and Adamowicz (2001), Amaya and Ryan (2004), Araña, Hanemann and León (2005).

The Theoretical Model
For a well-behaved preference map, a general indirect utility function of individual i, given an alternative j:
  V_ij = X_j β_i + ε_ij,   i = 1, …, n;  j = 1, …, k
If the individual faces a multi-attribute discrete choice problem, the researcher will observe that individual i chooses alternative j* if
  V_ij* ≥ V_ij   ∀ j ≠ j* such that I_ij(.) = 1
Different specifications of I(.) make the model collapse to alternative decision rules.

Different Heuristics: Model Specification
Decision rule to choose alternative j:
M1: Full compensatory rule   V_j ≥ V_l  ∀ l ≠ j
M2: Complete ignorance       V_j ≥ V_l  ∀ l ≠ j  and  β_m = 0  ∀ m
M3: Conjunctive rule         V_j ≥ V_l  ∀ l ≠ j  such that ∏_{m=1}^{M} I(X_ijm, γ_im) = 1
M4: Satisfaction rule        V_j ≥ V_l  ∀ l ≠ j  such that ∏_{m=1}^{M} I(X_ijm, γ_im) = 1  and  β_m = 0  ∀ m

Non-regularity
Problem 1: The likelihood surface for a heuristic is discontinuous, and therefore global concavity cannot be guaranteed.
Solution: rewrite the probability as the product of the second step of the choice process and a marginal heuristic probability, that is,
  Prob(Y_ij = 1, h) = Prob(Y_ij = 1 | h) · Prob(h)
Adding the likelihood contributions over the different decision rules results in a globally concave likelihood surface:
  Prob(Y_ij = 1) = Σ_{h=1}^{H} Prob(Y_ij = 1 | h) · Prob(h)
f(.) is a mixture distribution (a minimal sketch of this mixture probability follows).
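A minimal sketch of the mixture probability above for a single choice task. The rule-conditional probabilities (logit for the compensatory rule, uniform for complete ignorance, cut-off screening for the conjunctive rule) and all numbers are simplified stand-ins, not the estimation code of the studies:

```python
# Minimal sketch of the mixture probability P(choice) = sum_h P(choice | rule h) * P(rule h).
# Rule-conditional probabilities below are simple hypothetical stand-ins.
import numpy as np

def p_compensatory(X, beta, chosen):
    v = X @ beta                               # deterministic utilities
    p = np.exp(v - v.max()); p /= p.sum()      # logit choice probabilities
    return p[chosen]

def p_ignorance(X, chosen):
    return 1.0 / X.shape[0]                    # any alternative equally likely

def p_conjunctive(X, beta, cutoffs, chosen):
    ok = (X >= cutoffs).all(axis=1)            # alternatives passing every attribute cut-off
    if not ok.any():
        return 1.0 / X.shape[0]                # nothing passes the screen: random choice
    v = np.where(ok, X @ beta, -np.inf)
    p = np.exp(v - v[ok].max()); p /= p.sum()
    return p[chosen]

X = np.array([[3.0, 1.0, -10.0],               # hypothetical attributes (incl. cost) of 3 alternatives
              [2.0, 2.0, -20.0],
              [0.0, 0.0,   0.0]])
beta = np.array([0.8, 0.5, 0.05])
rule_probs = {"compensatory": 0.45, "ignorance": 0.10, "conjunctive": 0.45}  # hypothetical P(h)
chosen = 0
p_mix = (rule_probs["compensatory"] * p_compensatory(X, beta, chosen)
         + rule_probs["ignorance"]  * p_ignorance(X, chosen)
         + rule_probs["conjunctive"] * p_conjunctive(X, beta, np.array([1.0, 0.5, -15.0]), chosen))
print(f"mixture choice probability for alternative {chosen}: {p_mix:.3f}")
```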
Evaluate an Intractable Function
From Bayes' theorem,
  π(θ | Y) ∝ L(Y | θ) · π(θ)
Problem 2: The posterior distribution is intractable and difficult to evaluate.
Solution: Here we deal with that complication by employing MCMC methods, as proposed for discrete choice by Albert and Chib (1993), combining:
- GS algorithm (Geman and Geman, 1984)
- DA technique (Tanner and Wong, 1987)

Prior Distributions

MCMC Algorithm – Model 1: Linear compensatory rule
Draw sequentially: i) WTP_ij from equation (A2.1); ii)–iii) the individual-level parameters from equations (A2.2)–(A2.3); iv)–v) the remaining parameters from equations (A2.4)–(A2.5).

MCMC Algorithm – Model 3: Elimination by aspects
Draw sequentially: i) WTP_ij from equation (A2.6); ii) the individual attribute cut-offs (subscript im) from equation (A2.7); iii)–iv) the individual-level parameters from equations (A2.2)–(A2.3); v)–vi) the remaining parameters from equations (A2.4)–(A2.5); vii) the attribute-level parameters (subscript m) from equation (A2.8); viii)–ix) the remaining parameters from equations (A2.9)–(A2.10).

MCMC Algorithm – Model 4: Satisfaction rule
Draw sequentially: i) WTP_ij from equation (A2.6); ii) the individual attribute cut-offs (subscript im) from equation (A2.7); iii) parameters from equation (A2.5); iv) the attribute-level parameters (subscript m) from equation (A2.8); v)–vi) the remaining parameters from equations (A2.9)–(A2.10).
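For intuition, a minimal sketch of the Albert and Chib (1993) data-augmentation Gibbs sampler for a plain binary probit, the simplest building block behind samplers of this kind. Data, priors and tuning values are hypothetical, not the samplers A2.1–A2.10 above:

```python
# Minimal sketch: Albert & Chib (1993) data augmentation for binary probit.
# z_i | beta ~ N(x_i'beta, 1), truncated to (0, inf) if y_i = 1 and to (-inf, 0) if y_i = 0;
# beta | z ~ Normal, via the conjugate update under a N(b0, B0) prior.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(1)

# hypothetical data
n, k = 500, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([0.5, 1.0, -0.8])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

# prior and precomputed quantities
b0 = np.zeros(k)
B0_inv = np.eye(k) / 100.0                      # diffuse N(0, 100 I) prior
V = np.linalg.inv(B0_inv + X.T @ X)             # posterior covariance of beta given z
L = np.linalg.cholesky(V)

beta = np.zeros(k)
draws = []
for it in range(2000):
    # 1) draw latent utilities z from truncated normals
    mu = X @ beta
    lo = np.where(y == 1, -mu, -np.inf)         # standardized truncation bounds
    hi = np.where(y == 1, np.inf, -mu)
    z = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
    # 2) draw beta from its conditional normal posterior
    m = V @ (B0_inv @ b0 + X.T @ z)
    beta = m + L @ rng.normal(size=k)
    if it >= 500:                               # discard burn-in
        draws.append(beta.copy())

print("posterior mean of beta:", np.round(np.mean(draws, axis=0), 2))
```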
Different Studies that have been discussed during FF8
Study 1: Determinants of choosing decision rules (task complexity, emotional load, …)
Study 2: Heuristics and preference reversals in ranking vs. choice
Study 3: Testing the validity of the model to screen out heuristics
Study 4: Monte Carlo simulation study
Study 5: Verbal protocol and emotional load

STUDY 1: The Data
Good to be valued: valuation of a set of programs designed to improve health care conditions for the elderly on the island of Gran Canaria.
Survey process (from June 2004 to April 2005):
- 2 focus groups
- 3 pre-test questionnaires
- final questionnaire
Sample size: 550 individuals.
Survey design:
• D-optimal design method (Huber and Zwerina, 1996)
• Elicitation technique: choice experiment
• The scenario was successfully tested in prior research.

Testing complexity effects on CHTC – two split samples:
SAMPLE I: 2 pairs of alternatives + status quo
SAMPLE II: 4 pairs of alternatives + status quo

Testing emotional-load effects on CHTC – measuring emotions:
- Content (what we remember)
- Process (how we reason)
Emotional Intensity Scale (EIS): emotional intensity — mood experience — individual decision making.
Def. Emotion: "Stable individual differences in the strength with which individuals experience their emotions" (Larsen and Diener, 1987).
EIS-R (Geuens and De Pelsmacker, 2002).

Results & Discussion

TEST I: COMPLEXITY AND VALUATION RESULTS
Table 3. Welfare estimation results for M1 (€); intervals in parentheses
Program     2 alt. + SQ             4 alt. + SQ
DRUGS       43.45 (32.45, 54.44)    38.34 (31.65, 45.02)
DAY CARE    19.51 (11.02, 27.99)     9.54 (3.24, 15.83)
HOSPITAL    51.28 (39.10, 63.45)    67.88 (61.56, 74.19)

RESULT 1: Complexity seems to affect the absolute values of the welfare estimates, but does NOT affect the ranking of the programs.
RESULT 2: Complexity makes people focus on the most appreciated attributes, which leads to higher valuations for the most valued program (HOSPITAL) and lower valuations for the less valued program (DAY CARE).

TEST II: Complexity and Choosing How to Choose
Proportion of individuals assigned to each decision rule (%)
Decision rule          2 alt. + SQ    4 alt. + SQ
Full compensatory      44.36          28.33
Complete ignorance      6.21          11.19
EBA (conjunctive)      31.13          36.11
Satisfaction           14.63          19.45
Disjunctive             3.66           4.92

RESULT 3: The proportion of people responding in a totally random way is low.
RESULT 4: Deviations from M1 are widespread in the sample (about 55%), although M1 still has the largest single share.
RESULT 5: Complexity does increase the likelihood that individuals follow non-compensatory decision rules.

TEST III: Emotional Intensity and Choosing How to Choose
Table 5. Individuals assigned to non-compensatory rules, by degree of EIS (%)
Emotional level    2 alt. + SQ    4 alt. + SQ
Low EIS            58.32          59.30
Avg. EIS           42.38          35.70
High EIS           71.15          77.45

RESULT 6: Emotional sensitivity does affect the use of alternative decision rules.
RESULT 7: Extreme EIS (high or low) induces a larger departure from M1 than average EIS.

STUDY 2: RK–Choice Preference Reversals
Summary of results:
- Decision rules are different in choice and in ranking.
- When responses to ranking that are worse than the status quo are taken out of the sample, decision rules and mean WTP are very similar (although variances are lower in ranking, since it uses more information).

The Data
Good to be valued: valuation of a set of environmental actions in a vast rural park on the island of Gran Canaria, "The Guiniguada valley".
Population: Gran Canaria island population.
Survey process:
- 3 focus groups (14 months in total)
- pre-test questionnaire
- 1 focus group
- final questionnaire
Sample size: 540 individuals.
Survey design:
• D-optimal design method (Huber and Zwerina, 1996)
• Elicitation techniques: choice and ranking
• The scenario (verbal and photos) was tested in prior research.
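The rule-assignment percentages reported in the tables above and below come from classifying each respondent to his or her most probable decision rule. A minimal sketch of that tabulation, with random placeholder probabilities standing in for posterior output:

```python
# Minimal sketch: classify each respondent to the decision rule with the highest
# posterior membership probability and tabulate the shares.
# The "posterior" probabilities here are random placeholders, not estimates.
import numpy as np

rules = ["Full compensatory", "Complete ignorance", "Conjunctive (EBA)",
         "Satisfaction", "Disjunctive"]
rng = np.random.default_rng(2)
post = rng.dirichlet(np.ones(len(rules)), size=540)   # placeholder P(rule | respondent)

assigned = post.argmax(axis=1)                        # most probable rule per respondent
shares = np.bincount(assigned, minlength=len(rules)) / len(assigned) * 100
for rule, share in zip(rules, shares):
    print(f"{rule:20s} {share:5.1f} %")
```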
Results
Table 3. Welfare estimations from M1 (RUM) for choice and ranking, E[WTP]; intervals in brackets
Attribute     Choice                  Ranking
PATHS         44.29 [42.11, 46.47]    21.43 [19.29, 23.57]
BOTGARDEN     49.60 [47.31, 51.89]    25.26 [23.12, 27.39]
SUSTPARK      38.56 [36.40, 40.72]    34.93 [32.84, 37.02]
PAINT         74.33 [71.94, 76.71]    35.74 [35.59, 37.89]
CAGES          8.36 [6.02, 10.69]     18.13 [16.00, 20.26]
RURALANDS     56.75 [54.34, 59.17]    41.07 [38.89, 32.26]
ENDFORESTS    72.07 [71.61, 72.53]    39.52 [36.98, 42.06]

Table 4. Proportion of individuals assigned to each decision rule in each model (%)
Decision rule               Choice    Ranking    Ranking (better)    Ranking (worse)
M1: Full compensatory rule  26.68     22.96      27.80               16.46
M2: Complete ignorance       9.49     18.57       9.36               30.93
M3: Conjunctive rule        33.57     30.68      42.29               15.10
M4: Satisfaction rule       19.17     19.66      16.63               23.73
M5: Disjunctive rule        11.09      8.13       3.92               13.78

Results
Table 5. Welfare estimations from the aggregated model for choice and ranking, E[WTP]; intervals in brackets
Attribute     Choice                  Ranking
PATHS         49.17 [47.28, 51.05]    48.17 [44.91, 51.43]
BOTGARDEN     53.83 [51.88, 55.78]    48.25 [44.16, 52.53]
SUSTPARK      42.00 [40.13, 43.88]    38.08 [35.65, 40.51]
PAINT         83.63 [81.64, 85.62]    64.20 [57.24, 75.15]
CAGES          3.62 [1.64, 5.59]       7.79 [4.13, 8.41]
RURALANDS     61.45 [59.44, 63.46]    61.08 [38.80, 82.99]
ENDFORESTS    83.46 [81.44, 85.49]    71.90 [60.14, 83.65]

Conclusions
- In this application, the EBA is the most predominant heuristic (over the FLC).
- A small % of subjects follows the completely random heuristic.
- Heuristics heterogeneity is different between choice and ranking (in particular for ranking responses below the SQ).
- When the heuristics heterogeneity is incorporated in the model, the gap between choice and ranking is drastically reduced.

GENERAL DISCUSSION AND FURTHER RESEARCH
1. The model seems to do a good job detecting people that use these heuristics (average efficiency 85% in the Monte Carlo study).
2. It can be used as a test to further explore whether the results of a specific DCE are good enough to be used in PUBLIC POLICY (friendly code will be available very soon).
3. Results from these studies can also help to decide several aspects of the DCE design (number of attributes, levels, …).
4. A first line of further research would be to use this information in the DCE design within a Bayesian approach, so we can improve the accuracy of the results (respondent efficiency vs. statistical efficiency).
5. Results also have implications for benefit transfer: it is possible to reduce the cost of these studies by transferring results from previous studies to new ones. The Bayesian framework seems to be the most adequate approach to do so.

Thanks !!!!!
STUDY 3: Testing the Validity of the Model to Screen Out Heuristics
I. Five different treatments were assigned to the samples:
Treatment 1: All the simulated respondents follow the FLC rule.
Treatment 2: All the simulated respondents follow the EBA rule.
Treatment 3: All the simulated respondents follow the Complete Ignorance rule.
Treatment 4: All the simulated respondents follow the Satisfaction rule.
Treatment 5: 25% of the simulated respondents follow the FLC rule, another 25% follow the EBA rule, another 25% follow the Satisfaction rule, and another 25% follow the Complete Ignorance rule.
The true utility function was defined with values as close as possible to the ones estimated in the current application, that is, β(DRUGS) = 3; β(COST) = −0.01; β(HOSPITAL) = 3.5; β(DAY CARE) = 1.5. For Treatments 2, 4 and 5 we randomly assigned the cut-off values for each split sample.

II. In order to simulate responses to the "Monte Carlo survey", we employed the same experimental designs that were used in the field data experiment. Then, 100 samples were simulated for each treatment and for each condition (Condition A: 2 options + SQ; Condition B: 4 options + SQ). In total 1,000 samples were simulated (100 samples for 5 treatments in the 2 conditions); a minimal sketch of this simulation step is given after Study 4 below.

III. After the final responses were collected for each sample, the proposed Bayesian mixture model was estimated for each one of them, yielding the probability that each individual follows each decision rule. Results on the average proportion of individuals correctly assigned to each decision rule across samples are presented in Table R1 below.

Table R1. Proportion of individuals correctly assigned to their decision rule by the Bayesian mixture model
Treatment                          Condition A (2 options + SQ)    Condition B (4 options + SQ)
Treatment 1: FLC                   92%                             95%
Treatment 2: EBA                   69%                             74%
Treatment 3: Complete Ignorance    58%                             64%
Treatment 4: Satisfaction          70%                             76%
Treatment 5: Mixture               82%                             85%
Average efficiency: 85%
Notes: No prior info and no respondent-efficient design have been applied.

STUDY 4: Monte Carlo Study
People follow alternative heuristics… so what are the consequences?
• A conventional conditional logit model and a Hierarchical Bayes model are estimated on 900 samples, following the same simulation design as above.
• Samples differ in terms of the % of citizens following each decision rule (10, 20, 30, 40, 50, 60, 70, 80, 90%).

[Figure: Bias in E(WTP) as a function of the % of people following each heuristic (EBA, Complete Ignorance, Satisfaction).]

• It is found that, for the most predominant heuristics (EBA, satisficing), the share of individuals needed to generate a significant bias (10%) in the welfare results is 70% or higher (which is unusual in practice).
• However, 20% of people following the COMPLETE IGNORANCE heuristic is enough to seriously bias the results.
• When we use a Hierarchical Bayes model, we obtain a smaller bias for any % of people following alternative heuristics.
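A minimal sketch of how simulated respondents following the different rules could be generated under the stated true parameters. The attribute levels, cost values, cut-off scheme and error term are hypothetical simplifications of the actual experimental design:

```python
# Minimal sketch: simulating respondents who follow different decision rules, using the
# stated true parameters (beta_DRUGS=3, beta_HOSPITAL=3.5, beta_DAYCARE=1.5, beta_COST=-0.01).
# Attribute levels, cut-offs and the error term are hypothetical simplifications.
import numpy as np

rng = np.random.default_rng(3)
beta = np.array([3.0, 3.5, 1.5, -0.01])          # DRUGS, HOSPITAL, DAY CARE, COST

def choice_set(n_alt):
    """Hypothetical choice task: binary programme attributes plus a cost level; last row = SQ."""
    X = np.column_stack([rng.integers(0, 2, size=(n_alt, 3)),
                         rng.choice([10, 30, 60, 90], size=n_alt)]).astype(float)
    return np.vstack([X, np.zeros(4)])            # status quo: no programmes, zero cost

def simulate_choice(X, rule, cutoffs=None):
    u = X @ beta + rng.gumbel(size=len(X))        # utility with a random component
    if rule == "FLC":                             # full linear compensatory
        return int(u.argmax())
    if rule == "CI":                              # complete ignorance: pure random choice
        return int(rng.integers(len(X)))
    passes = (X[:, :3] >= cutoffs).all(axis=1)    # which alternatives pass every cut-off
    if not passes.any():                          # nothing passes the screen: keep the SQ
        return len(X) - 1
    if rule == "EBA":                             # screen on cut-offs, then compensate
        return int(np.where(passes, u, -np.inf).argmax())
    if rule == "SAT":                             # satisfaction: any screened alternative will do
        return int(rng.choice(np.flatnonzero(passes)))
    raise ValueError(rule)

cutoffs = rng.integers(0, 2, size=3)              # randomly assigned attribute cut-offs
for rule in ["FLC", "EBA", "CI", "SAT"]:
    picks = [simulate_choice(choice_set(4), rule, cutoffs) for _ in range(1000)]
    print(rule, "share choosing the status quo:", np.mean(np.array(picks) == 4))
```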
STUDY 5 – EXPERIMENT I: Valuation of Externalities
Good to be valued: valuation of a set of policy proposals to ameliorate the externalities of a stone mining facility in the suburbs of Las Palmas de Gran Canaria (Gran Canaria).
Population: 8,000 individuals (total surrounding population).
Survey process:
- 2 focus groups
- 2 pre-test questionnaires
- final questionnaire
Sample size: 288 individuals (very familiar with the externalities).
Survey design:
• D-optimal design method (Huber and Zwerina, 1996)
• Elicitation technique: choice experiment
• The scenario (verbal and photos) was tested in prior research.

MEASURING HEURISTICS
Verbal protocol (Ericsson and Simon, 1980); DC-CV applications (Hanemann, 1992; Schkade and Payne, 1993).
Concurrent protocol approach: "Respondents are asked to verbalize their thoughts and explain how they arrive at the final choice while they are completing the task."
Evaluation process: responses were recorded, transcribed and evaluated by 2 judges who were unaware of our hypotheses (a 3rd judge resolved disagreements).

MEASURING EMOTIONS
- Content (what we remember)
- Process (how we reason)
Emotional Intensity Scale (EIS): emotional intensity — mood experience — individual decision making.
Def. Emotion: "Stable individual differences in the strength with which individuals experience their emotions" (Larsen and Diener, 1987).
EIS-R (Geuens and De Pelsmacker, 2002).

Attribute Negative Emotional Load Scale (ANEL)
This scale indicates the amount of affect involved in making trade-offs between a specific attribute and money. The ANEL scale is generated as a confirmatory analysis of the following measures, adapted from Lazarus (1991):
1. Severity of the worst potential consequence (scale 0 to 100)
2. Likelihood of negative outcomes (scale 0 to 100)
3. Degree of threat (scale 0 to 100)

Results
TEST I: Effects of the verbal protocol approach (Swait and Louviere, 1993)
EQUAL PARAMETER TEST: −2 · [−312.8172 − (−148.5683 − 160.4279)] = 7.642  [χ²(8)]
EQUAL SCALE TEST: −2 · [−312.8172 − (−312.5553)] = 0.5238  [χ²(1)]
RESULT 1: The use of the verbal protocol in this context does not seem to affect individuals' behaviour.

TEST II: Explaining the use of compensatory decision rules
Table 3. Results of the probit model
Covariate    Coefficient (s.e.)    p-value
Constant     −0.1623 (0.2455)      0.5084
Income        0.0297 (0.0284)      0.2960
Age           0.1489 (0.0375)      0.0875
Gender        0.0392 (0.0421)      0.3514
Education    −0.0703 (0.0137)      0.0000
EIS           0.5291 (0.1094)      0.0000
EIS^2        −0.1791 (0.0040)      0.0000
ANEL         −0.6491 (0.1094)      0.0000
Log-likelihood: −2554.651

RESULT 2: Educated people are more likely to use non-compensatory decision rules (which raises doubts about the cognitive-ability explanation; Swait and Adamowicz, 2001).
RESULT 3: Extreme levels of EIS (high or low) make the choice of compensatory decision rules less likely (related to the evidence on the effect of emotional intensity on task performance – the "Yerkes–Dodson Law", 1908).
RESULT 4: Individuals are more likely to avoid trade-offs when the negative emotional load of the task attributes is high (exploring levels of trade-offs and ANEL levels).
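A minimal sketch of a probit like the one in Table 3 (dependent variable: classification as a compensatory-rule user), estimated by maximum likelihood. The covariates kept here (EIS, EIS², ANEL) and the simulated data are placeholders, not the study data or its exact specification:

```python
# Minimal sketch: probit MLE for P(compensatory rule) as a function of EIS, EIS^2 and ANEL,
# in the spirit of Table 3. Data below are simulated placeholders.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 288
eis = rng.normal(0, 1, n)
anel = rng.uniform(0, 1, n)
X = np.column_stack([np.ones(n), eis, eis**2, anel])
true_b = np.array([0.2, 0.5, -0.2, -0.6])         # inverted-U in EIS, negative ANEL effect
y = (rng.uniform(size=n) < norm.cdf(X @ true_b)).astype(float)

def neg_loglik(b):
    p = np.clip(norm.cdf(X @ b), 1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

res = minimize(neg_loglik, np.zeros(X.shape[1]), method="BFGS")
se = np.sqrt(np.diag(res.hess_inv))               # approximate standard errors
for name, b, s in zip(["const", "EIS", "EIS^2", "ANEL"], res.x, se):
    print(f"{name:6s} {b:8.4f} ({s:.4f})")
```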
Table 4. Valuation functions for compensatory and non-compensatory heuristics
Covariate         Compensatory heuristic    Non-compensatory heuristics    Pooled
Explosions         0.8084*** (0.1527)        0.3077*   (0.1699)            0.4744*** (0.1068)
Noise              1.1555*** (0.1153)        0.0822    (0.1219)            0.5874*** (0.0747)
Airdust            1.3352*** (0.1354)        0.5138*** (0.1247)            0.7871*** (0.0825)
Smokes             0.5775*** (0.1137)        0.2987**  (0.1227)            0.2911*** (0.0767)
Odours             1.2385*** (0.1252)        0.4775*** (0.1134)            0.7327*** (0.0752)
Cost              −0.0135*** (0.0022)       −0.0006    (0.0027)           −0.0066*** (0.0016)
Log-likelihood    −558.4755                 −394.9577                     −997.5038
% of individuals   68                        32                            100

Welfare estimates for compensatory and non-compensatory heuristics (mean WTP)
Attribute      Compensatory    Non-compensatory    Pooled
Explosions      59.4739         512.83              71.2448
Noise           85.0077         137.00              88.214
Airdust         98.2298         856.33             118.189
Smokes          42.4851         497.83              43.7179
Odours          91.1173         795.83             110.02

RESULT 5: The validity of SPM results for guiding public policy is affected by the proportion of individuals using non-compensatory decision rules (and is therefore affected by the levels of EIS and ANEL).
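As a consistency check, the compensatory mean WTP figures above can be approximately reproduced as the ratio of each attribute coefficient to minus the cost coefficient from Table 4 (small differences reflect rounding of the reported coefficients):

```python
# Quick check: mean WTP = beta_attribute / (-beta_cost) for the compensatory column of Table 4.
# Approximately reproduces the reported compensatory welfare estimates.
betas = {"Explosions": 0.8084, "Noise": 1.1555, "Airdust": 1.3352,
         "Smokes": 0.5775, "Odours": 1.2385}
beta_cost = -0.0135

for attr, b in betas.items():
    print(f"{attr:10s} WTP ≈ {b / -beta_cost:7.2f}")
```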
EXPERIMENT II: EMOTIONS MANIPULATION
Why a 2nd experiment?
- To check the results in a more controlled setting.
- To test the effects of alternative emotional states.
TREATMENTS (Lerner, Small and Loewenstein, 2004; Psychological Science):
- Sadness
- Disgust
- Neutral
Sample size: 129 participants randomly assigned to treatments.
Overall experiment details: 2 unrelated studies with 2 different researchers.
STUDY 1: "imagination study", run by a psychologist.
STUDY 2: "externalities valuation study", run by an economist.

PROCEDURE
1. Welcome and introduction by the researcher in psychology.
2. Signing the consent form for STUDY 1.
3. Asking the EIS questions.
4. Watching a film clip (Lerner et al., 2004): SAD – "The Champ"; DISGUST – "Trainspotting"; NEUTRAL – "National Geographic".
5. Writing down how they would feel in the clip situation.
6. Collecting materials and moving to another room.
7. Welcome by the researcher in economics.
8. Signing the consent form for STUDY 2.
9. Replicating Experiment I.
10. Emotion manipulation check (10 affective states).
11. "What do you think is the aim of the study?"
12. Subjects get paid (≈15 € for ≈45–50 minutes).

[Figure 3. Self-reported emotion (z-scores of disgust and sadness) in the three emotion conditions: DISGUST, NEUTRAL, SAD.]

Choice decision rules under the alternative emotion inductions (%)
Decision rule       Neutral    Sadness    Disgust
Compensatory        63.98      58.23      74.17
Non-compensatory    36.02      41.77      25.83