This course aims to help you ask better statistical questions when performing empirical research. We will discuss how to design informative studies, both when your predictions are correct and when they are wrong. We will question norms, and reflect on how we can improve research practices to ask more interesting questions. In practical, hands-on assignments you will learn techniques and tools that can be immediately implemented in your own research, such as thinking about the smallest effect size you are interested in, justifying your sample size, evaluating findings in the literature while taking publication bias into account, performing a meta-analysis, and making your analyses computationally reproducible.
If you have the time, it is recommended that you complete my course 'Improving Your Statistical Inferences' before enrolling, although this course is completely self-contained.
Module 1: Improving Your Statistical Questions
One of the biggest improvements most researchers can make is to more clearly specify their statistical questions. When you perform a study, what is it you really want to know?
What are different types of questions we can ask? Which question does a hypothesis test really answer, and is this answer actually what you are interested in, or is the question you are asking more about exploration, description, or prediction? How can we make riskier predictions than those evaluated in traditional null-hypothesis tests, and why is this useful?
Module 2: Falsifying Predictions
There is little use in making predictions if they can never be wrong, so how do we make sure your predictions are falsifiable? We discuss why falsifiable predictions are important, and how to make your predictions falsifiable in practice. One important aspect of making predictions falsifiable is to specify a range of values that is not predicted, and we will examine different approaches to specifying a smallest effect size of interest.
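One way a smallest effect size of interest is used in practice is in an equivalence test, where two one-sided tests (TOST) check whether an observed effect falls inside a range of values deemed too small to matter. The sketch below illustrates the idea in Python; the data, the ±0.5 equivalence bounds, and the sample sizes are all made-up assumptions for illustration, not part of the course materials.

```python
# Sketch: two one-sided tests (TOST) against illustrative equivalence
# bounds of +/-0.5 raw units (an assumed smallest effect size of interest).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
a = rng.normal(0.0, 1.0, 50)  # simulated group A
b = rng.normal(0.1, 1.0, 50)  # simulated group B (tiny true difference)

low, high = -0.5, 0.5  # hypothetical SESOI bounds
diff = a.mean() - b.mean()
se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
df = len(a) + len(b) - 2  # simple pooled degrees-of-freedom approximation

# TOST: the effect must be significantly above the lower bound AND
# significantly below the upper bound to conclude equivalence.
t_low = (diff - low) / se
t_high = (diff - high) / se
p_low = 1 - stats.t.cdf(t_low, df)  # H0: diff <= low
p_high = stats.t.cdf(t_high, df)    # H0: diff >= high
p_tost = max(p_low, p_high)
print(f"diff = {diff:.3f}, TOST p = {p_tost:.4f}")
```

If `p_tost` is below the alpha level, the observed effect is statistically equivalent to zero within the chosen bounds, which is how specifying a range of values that is *not* predicted makes a prediction falsifiable.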
Module 3: Designing Informative Studies
If studies are designed to answer a question, you should make sure the answer you get after collecting data is informative. Instead of mindlessly setting Type 1 and Type 2 error rates, we will learn why it is important to be able to justify error rates, and some approaches for doing so. We discuss the benefits of using your smallest effect size of interest in power analyses, and why learning to simulate data is a useful skill. Simulations can help you improve your understanding of statistics, enable you to design informative studies, and even ask novel questions.
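To give a flavor of simulation-based design, the sketch below estimates statistical power by repeatedly simulating an experiment in which the true effect equals an assumed smallest effect size of interest (d = 0.5, two groups of 64); the effect size, sample size, and alpha level are illustrative choices, not prescriptions from the course.

```python
# Sketch: estimating power by simulation, assuming the true effect equals
# an illustrative smallest effect size of interest of d = 0.5.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
d, n, alpha, n_sims = 0.5, 64, 0.05, 2000
hits = 0
for _ in range(n_sims):
    x = rng.normal(0, 1, n)   # control group
    y = rng.normal(d, 1, n)   # treatment group, shifted by d
    _, p = stats.ttest_ind(x, y)
    hits += p < alpha
power = hits / n_sims
print(f"Simulated power: {power:.2f}")
```

For these settings the simulated power should land close to the textbook analytic value of roughly 0.80, and the same loop can be adapted to designs for which no analytic power formula exists.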
Module 4: Meta-Analysis and Bias Detection
Regrettably, we work in a scientific enterprise where the published literature does not accurately reflect the research that is actually performed. Publication bias and selection biases lead to a scientific literature that can't be interpreted without taking these biases into account. We will discuss what real research lines look like, and how to meta-analytically evaluate the literature while keeping bias in mind.
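At its core, a meta-analysis pools effect sizes weighted by their precision. The sketch below shows a fixed-effect, inverse-variance meta-analysis in Python; all effect sizes and standard errors are fabricated for illustration, and in a real analysis a dedicated package and a bias-detection step would follow.

```python
# Sketch: fixed-effect inverse-variance meta-analysis of hypothetical
# (made-up) study effect sizes.
import numpy as np

effects = np.array([0.42, 0.31, 0.55, 0.12, 0.48])  # hypothetical effects
se = np.array([0.20, 0.15, 0.25, 0.10, 0.22])       # hypothetical SEs

w = 1 / se**2                                # inverse-variance weights
pooled = np.sum(w * effects) / np.sum(w)     # precision-weighted mean
pooled_se = np.sqrt(1 / np.sum(w))
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(f"Pooled effect: {pooled:.3f}, 95% CI [{ci[0]:.3f}, {ci[1]:.3f}]")
```

Because precise studies get larger weights, a literature distorted by publication bias will distort this pooled estimate too, which is why the course pairs meta-analysis with bias detection.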
Module 5: Computational Reproducibility, Philosophy of Science, and Scientific Integrity
We discuss three final topics. First, we will make sure other people can use your data to ask new questions, by making your data analysis computationally reproducible. Then, we will reflect on how your philosophy of science influences the types of questions you ask, and what you value as you do research. Finally, we discuss scientific integrity, and reflect on why our research practices are not always aligned with the best possible ways to provide reliable answers to scientific questions.
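One small but concrete ingredient of computational reproducibility is making every stochastic step deterministic, for example by fixing random seeds, so that rerunning a scripted analysis yields identical numbers. The sketch below illustrates the idea; the analysis function and seed value are hypothetical examples, not code from the course.

```python
# Sketch: fixing the random seed so a rerun of a scripted analysis
# produces identical results (a hypothetical toy analysis).
import numpy as np

def analysis(seed=2024):
    rng = np.random.default_rng(seed)  # seeded generator, not global state
    data = rng.normal(0.3, 1.0, 100)   # simulated stand-in for real data
    return data.mean()

# Two runs with the same seed give exactly the same answer.
assert analysis() == analysis()
print(f"Mean: {analysis():.4f}")
```

In practice this is combined with scripted (rather than point-and-click) analyses and pinned software versions, so that anyone with your data and code can reproduce every number in the paper.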
Module 6: Final Exam
This module contains a graded exam covering content from the entire course. We recommend taking this exam only after you have worked through all the other modules.