There are many situations in which we may be interested in making “multiple comparisons.” For example, after an ANOVA, we may want to compare individual groups to one another (post-hoc pairwise comparisons), or we may be running a variety of models within the same experiment. Each additional comparison, however, increases the chance of a false positive, also called a Type I error.
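As a quick illustration of why this matters, the sketch below (base R, no extra packages) computes the family-wise error rate, the probability of at least one false positive across m independent tests, each run at α = 0.05.

```r
# Family-wise error rate: probability of at least one false positive
# across m independent tests, each at alpha = 0.05
alpha <- 0.05
m <- c(1, 5, 10, 20)
data.frame(tests = m, fwer = 1 - (1 - alpha)^m)
# With 20 independent tests, the chance of at least one Type I error
# is 1 - 0.95^20, or roughly 64%.
```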
To counter this increased chance of a false positive, various p-value corrections have been developed. In this workshop, we will discuss how methods such as Tukey’s test, Fisher’s Least Significant Difference, the Bonferroni correction, and the false discovery rate (FDR) address this concern.
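As a preview, base R’s `p.adjust()` applies several of these corrections directly to a vector of p-values; the p-values below are made up purely for illustration.

```r
# Hypothetical unadjusted p-values from five comparisons
p_raw <- c(0.003, 0.012, 0.049, 0.072, 0.210)

# Bonferroni: multiply each p-value by the number of tests (capped at 1)
p.adjust(p_raw, method = "bonferroni")

# Benjamini-Hochberg false discovery rate (also available as method = "fdr")
p.adjust(p_raw, method = "BH")
```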
We will discuss when each approach is appropriate, as well as the implications of skipping these corrections, so that you can be confident in the reliability of your own statistical results. Examples of implementation will be shown in R, but the ideas generalize to other statistical software.
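For instance, a post-hoc Tukey comparison after a one-way ANOVA can be run in base R as sketched below, here using the built-in `PlantGrowth` data set as a stand-in for your own data.

```r
# One-way ANOVA followed by Tukey's HSD post-hoc pairwise comparisons
fit <- aov(weight ~ group, data = PlantGrowth)
summary(fit)    # overall ANOVA table
TukeyHSD(fit)   # all pairwise group differences with adjusted p-values
```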