TY - JOUR
T1 - Normative scoring of multidimensional pairwise preference personality scales using IRT
T2 - Empirical comparisons with other formats
AU - Chernyshenko, Oleksandr S.
AU - Stark, Stephen
AU - Prewett, Matthew S.
AU - Gray, Ashley A.
AU - Stilson, Frederick R.
AU - Tuttle, Matthew D.
PY - 2009/4
Y1 - 2009/4
AB - In this article, we offer some suggestions as to why tetrads and pentads have become the dominant formats for administering multidimensional forced choice (MFC) items but, in turn, raise questions regarding the underlying psychometric model and the means of addressing item quality and scoring accuracy. We then focus our attention on multidimensional pairwise preference (MDPP) items and present an item response theory-based approach for constructing and modeling MDPP responses directly, assessing information at the item and scale levels, computing standard errors for trait scores, and estimating scale reliability. To demonstrate the viability of this method for applied use, we examine the correspondence between MDPP scores derived from direct modeling and those obtained using single-statement and unidimensional pairwise preference measures administered in a laboratory setting. Trait score correlations and criterion-related validities are compared across testing formats and rating sources (i.e., self and other), and the usefulness of our model-based approach is further demonstrated by some illustrative results involving computerized adaptive tests (CAT).
UR - http://www.scopus.com/inward/record.url?scp=70449553320&partnerID=8YFLogxK
U2 - 10.1080/08959280902743303
DO - 10.1080/08959280902743303
M3 - Article
AN - SCOPUS:70449553320
VL - 22
SP - 105
EP - 127
JO - Human Performance
JF - Human Performance
SN - 0895-9285
IS - 2
ER -