MTO Research Sandwich
Tuesday, March 17, PZ 45, 12:45h

12:45h Chris Hartgerink (Dept. Methodology & Statistics, Tilburg University)
‘Too good to be false: Nonsignificant results revisited’

Statistically significant research results in psychology have sometimes been considered "too good to be true", reflecting false positive findings. In this paper, we examine the other kind of false finding, the false negative, which is a threat to scientific progress as well. We inspected eight flagship psychological journals over 1985-2013 (30,710 papers). With the R package statcheck, 54,595 nonsignificant test results were automatically extracted. Evidence for false negatives was inspected at both the journal and the paper level. At the paper level we applied the Fisher test, which has high power to detect false negatives when a paper reports at least three nonsignificant results and the population effect size is medium. All eight journals and 66.7% of papers reporting nonsignificant results show evidence of possible false negatives. Although more nonsignificant results are being reported over the years, evidence for false negatives shows a decreasing trend from 1985 to 2013. We also manually investigated results for one specific effect. We conclude that concern about statistical power and false negatives in psychological science is still warranted.

13:15h Dino Dittrich (Dept. Methodology & Statistics, Tilburg University)
‘Bayesian estimation of the network autocorrelation model’

This talk will discuss Bayesian estimation techniques for the network autocorrelation model (NAM). Originally developed by geographers, the NAM has since been used to identify and estimate network influence on individual behavior. After a brief introduction to the NAM, we will move on to Bayesian estimation of the model. First, several choices of prior distribution for the model parameters will be motivated and their most important properties presented.
Next, we develop efficient computational techniques for posterior sampling based on Metropolis-Hastings and Gibbs sampling schemes. We evaluate the performance of various Bayesian estimators in a simulation study, focusing on estimation of the model's key parameter, the network autocorrelation. We conclude by showing that Bayesian methods can outperform maximum likelihood estimation in terms of bias and coverage probability for the network autocorrelation parameter.
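The Fisher test mentioned in the first abstract combines k independent p-values into one chi-square statistic, X = -2 Σ ln p_i, which has 2k degrees of freedom under the joint null. Below is a minimal self-contained sketch in Python; the `rescale_nonsignificant` helper, which maps nonsignificant p-values from (α, 1] back onto (0, 1] before combining them, is an assumption about how the method is adapted to nonsignificant results, and the talk's exact procedure may differ.

```python
import math

def fisher_combined(pvals):
    """Fisher's method: X = -2 * sum(ln p_i) follows a chi-square
    distribution with 2k degrees of freedom under the joint null."""
    x = -2.0 * sum(math.log(p) for p in pvals)
    k = len(pvals)
    # For even df = 2k the chi-square survival function has a closed form:
    # P(X > x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    half = x / 2.0
    p_comb = math.exp(-half) * sum(half**i / math.factorial(i) for i in range(k))
    return x, p_comb

def rescale_nonsignificant(pvals, alpha=0.05):
    """Map nonsignificant p-values from (alpha, 1] onto (0, 1] so that
    Fisher's method can be applied to them. NOTE: this particular
    rescaling is an illustrative assumption, not necessarily the exact
    transformation used in the paper."""
    return [(p - alpha) / (1.0 - alpha) for p in pvals]
```

For example, three nonsignificant p-values just above .05 (say, three results of p = .06) rescale to very small values and yield a small combined p, which is the kind of pattern that flags a possible false negative.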
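The second abstract's network autocorrelation model is typically written y = ρWy + Xβ + ε with ε ~ N(0, σ²I), so the log-likelihood in ρ involves log|det(I - ρW)|. As a rough illustration of the Metropolis-Hastings step for ρ, here is a minimal random-walk sampler in Python. It assumes β and σ² are known and uses a flat prior on (-1, 1) (a simplified stability region for a row-standardized W); the talk's actual scheme samples all parameters jointly via MH-within-Gibbs and uses the prior choices discussed there, so this is a sketch, not their method.

```python
import numpy as np

def log_lik_rho(rho, y, X, beta, sigma2, W):
    # NAM: y = rho*W*y + X*beta + eps, eps ~ N(0, sigma2 * I)
    # Log-likelihood in rho: log|det(A)| - ||A y - X beta||^2 / (2 sigma2)
    n = len(y)
    A = np.eye(n) - rho * W
    sign, logdet = np.linalg.slogdet(A)
    if sign <= 0:
        return -np.inf
    resid = A @ y - X @ beta
    return logdet - resid @ resid / (2.0 * sigma2)

def mh_rho(y, X, beta, sigma2, W, n_iter=2000, step=0.05, seed=0):
    """Random-walk Metropolis for rho, flat prior on (-1, 1).
    Assumes beta and sigma2 known; purely illustrative."""
    rng = np.random.default_rng(seed)
    rho = 0.0
    cur = log_lik_rho(rho, y, X, beta, sigma2, W)
    draws = []
    for _ in range(n_iter):
        prop = rho + step * rng.standard_normal()
        if abs(prop) < 1.0:  # stay inside the assumed stability region
            new = log_lik_rho(prop, y, X, beta, sigma2, W)
            if np.log(rng.uniform()) < new - cur:  # MH accept/reject
                rho, cur = prop, new
        draws.append(rho)
    return np.array(draws)
```

In a full Gibbs scheme, β and σ² would be drawn from their conjugate conditionals (normal and inverse-gamma) between these ρ updates; only ρ needs a Metropolis step because of the log-determinant term.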