1.3 The credibility crisis

One of the most cited papers of the last decade (7,836 citations as of August 2023), Open Science Collaboration (2015), reported a large-scale study in which 100 published psychological experiments were put to the test using an international, multi-lab approach. The main goal of the study was to estimate the reproducibility of psychological science. Unfortunately, the results showed that most of the classic psychological studies that psychology students learn about in college, and that many researchers cite as settled science, could not be replicated. Similar meta-scientific endeavors have since been conducted, with comparable results, in other fields such as the behavioral, cognitive, economic, health, and medical sciences.

The most telling findings were that:

  • psychologists tend to run under-powered studies, and when only the significant results see print, those studies systematically overestimate effect sizes and yield low reproducibility (see the simulation sketch after this list)
  • there is a lack of access to full methods
  • there is a lack of access to analytic procedures and code
  • the file-drawer problem is pervasive, as editors and journals are keen to "find results" (i.e., rejections of the null hypothesis) rather than to publish reports of null effects
  • there is a lack of access to publications
  • there is widespread discontent with how standard statistics are reported and used
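
To make the first point concrete, here is a minimal simulation sketch in Python (assuming NumPy and SciPy are available; the true effect size, sample sizes, and number of simulated studies are illustrative choices, not values from the Open Science Collaboration study). It runs many small two-group experiments with a modest true effect, then keeps only the "significant" ones, the way a publication filter would.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

true_d = 0.2        # modest true standardized effect (Cohen's d)
n_per_group = 20    # small, under-powered sample per group
n_studies = 10_000  # number of simulated experiments

observed_d, significant = [], []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_d, 1.0, n_per_group)
    t, p = stats.ttest_ind(treatment, control)
    # Cohen's d with the pooled SD of two equal-sized groups
    pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
    observed_d.append((treatment.mean() - control.mean()) / pooled_sd)
    significant.append(p < 0.05)

observed_d = np.array(observed_d)
significant = np.array(significant)

print(f"power (share significant): {significant.mean():.2f}")
print(f"mean d, all studies:       {observed_d.mean():.2f}")
print(f"mean d, 'published' only:  {observed_d[significant].mean():.2f}")
```

With these illustrative settings the simulated power is under 10%, and the average effect among the "significant" studies comes out several times larger than the true effect of 0.2: low power combined with a significance filter inflates published effect sizes, which is exactly the pattern described in the first bullet.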