Listen

Description

Episode 4 - Reproducibility Now

This week we dive into the Open Science Collaboration’s (2015) paper “Estimating the reproducibility of psychological science”
http://science.sciencemag.org/content/349/6251/aac4716

Highlights:

[1:00] This paper has all of the authors
[1:30] Direct vs conceptual replications
[4:30] PhD students running replications as the basis of extending a paradigm
[6:00] The 100 studies paper methods in brief
[8:00] Everything’s available for this collaborative effort, and that is awesome
(https://osf.io/ezcuj)
[9:00] Reproducibility vs replicability - what are we actually talking about?
[9:30] Oxford summer school in reproducibility
(https://www.eventbrite.co.uk/e/oxford-reproducibility-school-tickets-48405892327)
[11:00] A paper discussing the computational reproducibility of papers
[15:00] Replication is not only about the p value folks!
[17:30] Sam brings up Bayes purely to be a douchebag
[19:30] A Bayesian approach - Sophia gives us the paper (http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0149794) and we move on
[20:00] Replications as a method to diagnose problems in science. Are replications a viable problem solver?
[24:00] Psychology is only a teenager really
[26:00] If the original paper is trash, there’s probably no need to replicate it. Maybe just burn it down?
[27:00] Figure 1 - the average effect size halved in the replication attempt and most effects did not replicate.
[31:30] Do the results hint at more than publication bias? Are other QRPs involved?
[33:00] Comparing reproducibility across subfields of psychology. But are these studies representative of an entire subfield?
[35:30] Does journal impact factor mean anything?
[39:30] Are we actually being critical of previous research in general?
[41:00] “Our foundations have as many holes as a Swiss cheese”

Music credit: Kevin MacLeod - Funkeriffic
freepd.com/misc.php