Meta-Analysis and the Myth of Generalizability

Robert P. Tett, Nathan A. Hundley, Neil D. Christiansen

Research output: Contribution to journal › Review article › peer-review


Abstract

Rejecting situational specificity (SS) in meta-analysis requires assuming that residual variance in observed correlations is due to uncorrected artifacts (e.g., calculation errors). To test that assumption, 741 aggregations from 24 meta-analytic articles representing seven industrial and organizational (I-O) psychology domains (e.g., cognitive ability, job interviews) were coded for moderator subgroup specificity. In support of SS, increasing subgroup specificity yields lower mean residual variance per domain, averaging a 73.1% drop. Precision in mean rho (i.e., low SD(rho)) adequate to permit generalizability is typically reached at SS levels high enough to challenge generalizability inferences (hence, the myth of generalizability). Further, and somewhat paradoxically, decreasing K with increasing precision undermines certainty in mean r and Var(r) as meta-analytic starting points. In support of the noted concerns, only 4.6% of the 741 aggregations met defensibly rigorous generalizability standards. Four key questions guiding generalizability inferences are identified in advancing meta-analysis as a knowledge source.
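The residual-variance logic at issue can be sketched with a bare-bones Hunter-Schmidt style computation: observed variance in correlations minus expected sampling-error variance leaves the residual variance whose size drives SS versus generalizability inferences. The data below are hypothetical, and the sketch omits the artifact corrections (e.g., range restriction, unreliability) the article's source meta-analyses apply:

```python
# Bare-bones residual-variance sketch (hypothetical data, no artifact
# corrections): residual variance = observed Var(r) - sampling-error variance.

def bare_bones_meta(rs, ns):
    """Return sample-size-weighted mean r, observed Var(r), residual
    variance, and residual SD for one moderator subgroup of studies."""
    total_n = sum(ns)
    mean_r = sum(r * n for r, n in zip(rs, ns)) / total_n
    var_r = sum(n * (r - mean_r) ** 2 for r, n in zip(rs, ns)) / total_n
    mean_n = total_n / len(ns)
    # Expected sampling-error variance for correlations (Hunter & Schmidt)
    var_e = (1 - mean_r ** 2) ** 2 / (mean_n - 1)
    var_res = max(var_r - var_e, 0.0)  # residual variance (floored at 0)
    return mean_r, var_r, var_res, var_res ** 0.5

# Hypothetical subgroup of k = 5 studies
rs = [0.10, 0.25, 0.30, 0.18, 0.22]
ns = [80, 120, 150, 100, 90]
mean_r, var_r, var_res, sd_res = bare_bones_meta(rs, ns)
```

Under these made-up inputs the observed variance is smaller than the expected sampling-error variance, so the residual variance floors at zero; with real subgroup codings the abstract's point is that nonzero residuals shrink as subgroups grow more specific, while k per subgroup falls.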

Original language: English
Pages (from-to): 421-456
Number of pages: 36
Journal: Industrial and Organizational Psychology
Volume: 10
Issue number: 3
DOIs
State: Published - Sep 1 2017

Keywords

  • meta-analysis
  • moderator subgroup
  • quantitative literature review
  • situational specificity
  • validity generalization

