The reproducibility crisis in science has raised concerns about the reliability of research findings. Biases, questionable research practices, and insufficient transparency have led to doubts about published results. Scientists are now grappling with ways to improve research quality and restore confidence in published findings.
Open science practices offer solutions to these challenges. Preregistration, data sharing, and replication aim to increase transparency and reliability. These approaches help researchers detect and correct errors, fostering a more robust scientific process.
Reproducibility Issues
Biases and Questionable Research Practices
Publication bias occurs when studies with positive or significant results are more likely to be published than studies with negative or non-significant results, leading to an overrepresentation of positive findings in the literature
P-hacking involves manipulating data or analysis methods until a statistically significant result is obtained, often by running multiple tests or selectively reporting results, inflating the likelihood of false positives
(Hypothesizing After Results are Known) is the practice of presenting a post-hoc hypothesis as if it were an a priori hypothesis, which can make results appear more convincing than they actually are
Researchers may modify their hypotheses to fit the observed data, leading to a false sense of confirmation
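The inflation of false positives described above can be demonstrated with a short simulation. The sketch below (a toy illustration, not a method from the text) draws two groups from the same null distribution, so any "significant" difference is a false positive, and compares the error rate of one planned test against a p-hacked strategy of trying five outcome measures and reporting the best one. The p-value uses a normal approximation to Welch's t, which is adequate at this sample size.

```python
import math
import random

random.seed(42)

def welch_t_p(a, b):
    """Two-sided p-value for Welch's t-statistic, using a normal
    approximation to the reference distribution (illustration only)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    t = (ma - mb) / math.sqrt(va / len(a) + vb / len(b))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

def false_positive_rate(n_tests, n_sims=2000, n=30, alpha=0.05):
    """Fraction of simulated null studies (no true effect anywhere) that
    report at least one 'significant' result after trying n_tests outcomes."""
    hits = 0
    for _ in range(n_sims):
        for _ in range(n_tests):
            a = [random.gauss(0, 1) for _ in range(n)]
            b = [random.gauss(0, 1) for _ in range(n)]
            if welch_t_p(a, b) < alpha:
                hits += 1
                break  # stop at the first 'significant' outcome, as a p-hacker would
    return hits / n_sims

r1 = false_positive_rate(1)   # one planned test: close to the nominal 5%
r5 = false_positive_rate(5)   # try five outcomes, report the best: far above 5%
print(f"1 test:  {r1:.3f}")
print(f"5 tests: {r5:.3f}")
```

With five chances to find significance, the expected false-positive rate rises from 5% to roughly 1 − 0.95⁵ ≈ 23%, which is what the simulation shows.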
Effect Sizes and Power
Effect size reporting is crucial for understanding the magnitude and practical significance of a study's findings, but it is often neglected in favor of focusing solely on statistical significance
Reporting effect sizes (Cohen's d, correlation coefficients) allows readers to assess the strength of the relationship between variables
Power analysis involves determining the sample size needed to detect an effect of a specific size with a desired level of statistical power
Conducting a power analysis before collecting data helps ensure that a study has sufficient statistical power to detect meaningful effects, reducing the risk of false negatives (Type II errors)
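An a priori power analysis of the kind described above can be sketched with the standard normal-approximation formula for a two-sample t-test, n per group ≈ 2·((z₁₋α/₂ + z₁₋β)/d)². This is a simplified sketch (dedicated tools add a small t-distribution correction), with conventional benchmarks for d assumed for illustration.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(d, alpha=0.05, power=0.80):
    """Approximate n per group to detect a standardized mean difference d
    in a two-sided, two-sample t-test (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = z.inv_cdf(power)           # quantile corresponding to desired power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# Cohen's conventional small / medium / large effect sizes
for d in (0.2, 0.5, 0.8):
    print(f"d = {d}: n = {sample_size_per_group(d)} per group")
```

Note how sharply the required sample size grows as the expected effect shrinks: detecting a small effect (d = 0.2) with 80% power needs roughly 25 times as many participants per group as a large effect (d = 0.8).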
Open Science Solutions
Open Science Practices
Open science is a movement that aims to make scientific research more transparent, accessible, and reproducible by promoting practices such as open access publishing, data sharing, and preregistration
Preregistration involves specifying a study's hypotheses, methods, and analysis plan before data collection begins, which helps prevent p-hacking and HARKing by committing researchers to a specific course of action
Preregistration platforms (the Open Science Framework, AsPredicted) allow researchers to create time-stamped, publicly available study protocols
Data Sharing and Transparency
Data sharing involves making a study's raw data and analysis code publicly available, allowing other researchers to verify results, conduct alternative analyses, and build upon the original work
Data repositories (the Open Science Framework, Figshare) provide a platform for researchers to store and share their data
Transparency in methods requires providing detailed descriptions of a study's procedures, materials, and analysis techniques, enabling other researchers to understand and potentially replicate the work
Sharing study materials (stimuli, questionnaires) and analysis scripts (R, Python) facilitates reproducibility
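A minimal sketch of the data-sharing workflow above might deposit the raw data in a tidy CSV alongside a checksum manifest, so that other researchers can verify the file they download is the one the authors deposited. All filenames, fields, and values here are hypothetical placeholders.

```python
import csv
import hashlib
import json
from pathlib import Path

# Hypothetical raw data; in a real study these would be the actual measurements
rows = [
    {"participant_id": 1, "condition": "control", "score": 12},
    {"participant_id": 2, "condition": "treatment", "score": 17},
]

# Write the raw data as a plain CSV, one row per observation
Path("data").mkdir(exist_ok=True)
with open("data/raw_scores.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["participant_id", "condition", "score"])
    writer.writeheader()
    writer.writerows(rows)

# Record a SHA-256 checksum so downloaders can confirm the file is unaltered
digest = hashlib.sha256(Path("data/raw_scores.csv").read_bytes()).hexdigest()
with open("data/manifest.json", "w") as f:
    json.dump({"file": "raw_scores.csv", "sha256": digest}, f, indent=2)

print(digest)
```

Depositing the analysis script itself (rather than only describing the analysis) is what makes the pipeline from raw data to reported results fully checkable.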
Replication
Replication Studies
Replication studies involve repeating a previous study's methods as closely as possible to determine whether the original findings can be reproduced
Direct replications aim to duplicate the original study's methods exactly, while conceptual replications test the same hypothesis using different methods
Successful replications increase confidence in the original findings, while failed replications suggest that the original results may have been false positives or influenced by contextual factors
Large-scale replication projects (the Reproducibility Project: Psychology, Many Labs) have attempted to replicate multiple studies simultaneously, with mixed results
Encouraging replication studies helps identify robust findings and contributes to the self-correcting nature of science, but incentives for conducting replications are often lacking
Registered Reports, a publication format in which the methods and analysis plan are peer-reviewed before data collection, can incentivize replication studies by guaranteeing publication regardless of the results
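One common way to interpret a replication outcome, as discussed above, is to ask whether the replication's confidence interval is consistent with zero and with the original effect. The sketch below uses the standard large-sample approximation for the standard error of Cohen's d; the effect sizes and sample sizes are hypothetical numbers chosen for illustration.

```python
from statistics import NormalDist

def effect_ci(d, n1, n2, level=0.95):
    """Approximate confidence interval for a standardized mean difference
    (Cohen's d), using the large-sample normal approximation to its SE."""
    se = ((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))) ** 0.5
    z = NormalDist().inv_cdf(0.5 + level / 2)
    return d - z * se, d + z * se

# Hypothetical numbers: the original study reported d = 0.60;
# a larger replication observed d = 0.15 with 100 participants per group
d_original = 0.60
lo, hi = effect_ci(d=0.15, n1=100, n2=100)

consistent_with_zero = lo <= 0 <= hi            # cannot rule out "no effect"
consistent_with_original = lo <= d_original <= hi  # can rule out the original d?

print(f"replication 95% CI: ({lo:.2f}, {hi:.2f})")
print(f"consistent with zero: {consistent_with_zero}")
print(f"consistent with original effect: {consistent_with_original}")
```

In this hypothetical case the replication interval includes zero but excludes the original effect, the pattern typically reported as a failed replication: the original result may have been a false positive or inflated by the biases discussed earlier.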