Despite being an effective predictor of job performance, empirically keyed biodata assessments have been criticized as black-box empiricism unlikely to generalize to new contexts. This paper introduces a model that challenges this perspective, explicating how biodata content, job demands, and criterion variables collectively influence the construct validity and generalizability of empirically scored biodata. Across two field studies, expected changes in scale correlations with external measures coincided with changes in the contextual similarity between calibration and holdout contexts, the criteria used, and the content validity of biodata items. Collectively, this paper offers a framework for understanding and optimizing empirical biodata keying in practice, furthering confidence in the use of such assessments in applied settings.
Number of pages: 17
Journal: International Journal of Selection and Assessment
State: Published - Mar 1 2020
- big data
- empirical keying
- trait activation theory