Computing Inter-Rater Reliability for Observational Data: An Overview and Tutorial

DOI: 10.20982/tqmp.08.1.p023

Hallgren, Kevin A.
Pages: 23-34
Keywords: inter-rater reliability, Cohen's kappa

An erratum to this article was published in volume 9(2), p. 95.

Many research designs require the assessment of inter-rater reliability (IRR) to demonstrate consistency among observational ratings provided by multiple coders. However, many studies use incorrect statistical procedures, fail to fully report the information necessary to interpret their results, or do not address how IRR affects the power of their subsequent analyses for hypothesis testing. This paper provides an overview of methodological issues related to the assessment of IRR with a focus on study design, selection of appropriate statistics, and the computation, interpretation, and reporting of some commonly used IRR statistics. Computational examples include SPSS and R syntax for computing Cohen's kappa and intra-class correlations to assess IRR.

