
Inter-Rater Agreement, Data Reliability, and the Crisis of Confidence in Psychological Research

DOI: 10.20982/tqmp.16.5.p467

Button, Cathryn M., Snook, Brent, & Grant, Malcolm J.
Keywords: reliability; inter-rater agreement; kappa; percent agreement; research methods; confidence intervals

In response to the crisis of confidence in psychology, a plethora of solutions has been proposed to improve the way research is conducted (e.g., increasing statistical power, focusing on confidence intervals, enhancing the disclosure of methods). One area that has received little attention is the reliability of data. We note that, although it is well understood that the reliability of measures is essential to replicability, researchers often fail to apply any measure of data reliability consistently, or to correct for chance when assessing agreement. We discuss the problem of relying on Percent Agreement between observers as a measure of reliability and describe a dilemma that researchers encounter when faced with contradictory indicators of reliability. We conclude with some pedagogical strategies that might make the need for reliability measures and chance correction more likely to be understood and implemented. In so doing, researchers can contribute to solving some aspects of the crisis of confidence in psychological research.
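The contrast between Percent Agreement and a chance-corrected index such as Cohen's kappa can be illustrated with a small sketch. The two-rater data below are invented for illustration (not from the article): with highly imbalanced categories, raw agreement looks high while the chance-corrected index collapses.

```python
# Sketch: percent agreement vs. Cohen's kappa for two raters labelling
# the same items. The ratings below are hypothetical illustration data.
from collections import Counter

def percent_agreement(r1, r2):
    """Proportion of items on which the two raters give the same label."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e),
    where p_e is the agreement expected from each rater's marginal
    label frequencies alone."""
    n = len(r1)
    p_o = percent_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    p_e = sum((c1[k] / n) * (c2[k] / n) for k in set(r1) | set(r2))
    return (p_o - p_e) / (1 - p_e)

# Imbalanced labels: 19 of 20 items are "yes" for each rater, and the
# two disagreements fall on different items.
rater1 = ["yes"] * 18 + ["no", "yes"]
rater2 = ["yes"] * 18 + ["yes", "no"]
print(percent_agreement(rater1, rater2))  # → 0.9
print(cohens_kappa(rater1, rater2))       # slightly negative
```

Here Percent Agreement is 90%, yet kappa is below zero, because nearly all of the observed agreement is what two raters who both say "yes" almost every time would produce by chance alone. This is the kind of contradictory-indicator dilemma the abstract refers to.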

© TQMP