Measurements of Agreement and Reliability

In many fields of research, it is common to have several individuals rate a common set of study participants or objects. These measurements are almost always prone to various sources of error. Agreement statistics gauge how close repeated measurements are to one another by estimating the measurement error. Reliability statistics assess how well study participants/objects can be distinguished from one another despite measurement error. In this workshop, we will cover the following concepts:

  • Test-retest reliability
  • Inter-rater and Intra-rater reliability
  • Intraclass correlation coefficient
  • Cohen’s kappa coefficient
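
To make the last concept concrete, here is a minimal sketch of Cohen's kappa for two raters assigning categorical labels to the same items. Kappa corrects the observed agreement for the agreement expected by chance from each rater's marginal category frequencies. The function name and the example ratings are illustrative, not from the workshop materials.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of items on which the raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal proportions,
    # summed over all categories used by either rater.
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    categories = set(counts_a) | set(counts_b)
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical data: two raters classifying 10 cases as "yes"/"no".
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
b = ["yes", "no", "no", "yes", "no", "yes", "yes", "yes", "no", "yes"]
print(round(cohens_kappa(a, b), 3))  # observed 0.8, chance 0.52 -> 0.583
```

A kappa of 1 indicates perfect agreement, 0 indicates agreement no better than chance, and negative values indicate agreement worse than chance.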