

How prevalent is overfitting of regression models? A survey of recent articles in three psychology journals

DOI: 10.20982/tqmp.17.1.p001

Dalicandro, Lauren, Harder, Jane A., Mazmanian, Dwight, & Weaver, Bruce
Pages: 1-6
Keywords: overfitting, replication crisis, regression, statistical best practices
No sample data; no appendix.

Since 2011, there has been much discussion of and concern about a "replication crisis" in psychology. An inability to reproduce findings in new samples can undermine even basic tenets of psychology. Much attention has been paid to the following practices, which \textcite{b19} described as "the four horsemen of the reproducibility apocalypse": publication bias, low statistical power, p-hacking \parencite{sns11}, and HARKing \parencite[i.e., hypothesizing after the results are known;][]{k98}. Another practice that has received less attention is overfitting of regression models. \textcite{b04} described overfitting as "capitalizing on the idiosyncratic characteristics of the sample at hand", and argued that it results in findings that "don't really exist in the population and hence will not replicate." The following common data-analytic practices increase the likelihood of model overfitting: having too few observations (or events) per explanatory variable (OPV/EPV); automated algorithmic selection of variables; univariable pretesting of candidate predictor variables; categorization of quantitative variables; and sequential testing of multiple confounders. We reviewed 170 recent articles from three major psychology journals and found that 96 of them included at least one of the types of regression models \textcite{b04} discussed. We reviewed those 96 articles more fully and found that they reported 286 regression models. Regarding OPV/EPV, Babyak recommended 10-15 as the minimum number needed to reduce the likelihood of overfitting. When we used the 10 OPV/EPV cut-off, 97 of the 286 models (33.9%) used at least one practice that leads to overfitting; when we used the 15 OPV/EPV cut-off, that number rose to 109 models (38.1%). The most frequently occurring practice that yields overfitted models was univariable pretesting of candidate predictor variables: it was found in 61 of the 286 models (21.3%).
These findings suggest that overfitting of regression models remains a problem in psychology research, and that we must increase our efforts to educate researchers and students about this important issue.
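The mechanism behind the OPV/EPV recommendation can be illustrated with a small simulation. The sketch below is not from the article; it is a minimal demonstration, using only NumPy, of how an OLS regression with a low observations-per-variable ratio (here OPV = 3, well below Babyak's recommended 10-15) can show a sizeable in-sample R² on pure noise while failing to predict new data. All variable names and the chosen sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def r_squared(y, y_hat):
    """Proportion of variance in y explained by the fitted values y_hat."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# 30 observations, 10 candidate predictors -> OPV = 3 (illustrative choice).
# Outcome and predictors are all independent noise, so the true R^2 is zero.
n_train, n_test, p = 30, 1000, 10
X_train = rng.standard_normal((n_train, p))
y_train = rng.standard_normal(n_train)
X_test = rng.standard_normal((n_test, p))
y_test = rng.standard_normal(n_test)

# Ordinary least squares with an intercept column.
A_train = np.column_stack([np.ones(n_train), X_train])
beta, *_ = np.linalg.lstsq(A_train, y_train, rcond=None)

r2_in = r_squared(y_train, A_train @ beta)
A_test = np.column_stack([np.ones(n_test), X_test])
r2_out = r_squared(y_test, A_test @ beta)

# In-sample R^2 is substantial despite pure noise (overfitting);
# out-of-sample R^2 is near zero or negative.
print(f"in-sample R^2:     {r2_in:.3f}")
print(f"out-of-sample R^2: {r2_out:.3f}")
```

With only noise in the data, the in-sample fit "capitalizes on the idiosyncratic characteristics of the sample at hand", exactly the pattern the article surveys; repeating the simulation with 10-15 observations per predictor shrinks the gap between the two R² values considerably.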


© TQMP. Website last modified: 2021-06-22.