<b>Problem</b>: There is an unmet need for economically feasible, valid, reliable, and contextually relevant assessments of interprofessional collaborative knowledge and skills, particularly at the early stages of health professions education. This study sought to develop an Interprofessional Situational Judgement Test (IPSJT), a tool for measuring students’ interprofessional collaborative intentions during the early stages of their professional development, and to gather content and internal structure validity evidence for it.<br><br><b>Approach</b>: After an item development and refinement process (January–June 2018), an 18-question IPSJT was administered to 953 first-year students enrolled in 10 different health professions degree programs at the University of Florida Health Science Center in October 2018. The IPSJT’s performance was evaluated using item-level analyses, item difficulty, test-retest reliability, and exploratory factor analysis.<br><br><b>Outcomes</b>: Seven hundred thirty-seven students (77.3%) consented to the use of their data. Student IPSJT scores ranged from 0 to 69, averaging 42.68 (standard deviation = 12.28), with some statistically significant differences in performance by health professions degree program. IPSJT item difficulties ranged from .13 to .91. After one item with poor psychometric properties was excluded from analysis, the IPSJT demonstrated an overall reliability of .63. Students were slightly more successful at identifying the least effective responses than the most effective ones. Test-retest reliability provided evidence of consistency (r = .50, <i>P</i> < .001) and similar item difficulty across administrations. An exploratory factor analysis indicated a 3-factor model with multiple cross-factor loadings.<br><br><b>Next Steps</b>: This work represents the first step toward the development of a valid, reliable IPSJT for early learners. 
The emergent 3-factor model provides evidence that multiple competencies can be assessed in early learners with this tool. Additional research is needed to build a more robust question bank, explore different scoring and response methods, and gather additional sources of validity evidence, including relations to other variables.
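Two of the psychometric quantities reported above, item difficulty (the proportion of examinees answering an item correctly) and test-retest reliability (the Pearson correlation between scores from two administrations), can be sketched in a few lines of code. The following is a minimal illustration with invented data; the function names and sample scores are assumptions for demonstration, not the study's actual analysis or results.

```python
# Minimal sketch of two psychometric checks: item difficulty and
# test-retest reliability. All data below are invented for illustration.
from math import sqrt

def item_difficulty(responses):
    """Proportion of examinees answering the item correctly (0/1 coded).
    Values near 0 indicate a hard item; values near 1 an easy one."""
    return sum(responses) / len(responses)

def pearson_r(x, y):
    """Pearson correlation between paired scores from two test administrations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical item responses (1 = correct) and total scores from
# two administrations of the same test to five examinees.
item = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]
first_admin = [42, 55, 38, 61, 47]
second_admin = [45, 50, 40, 58, 49]

print(item_difficulty(item))               # proportion correct for this item
print(pearson_r(first_admin, second_admin))  # test-retest correlation
```

In the study itself, difficulties between roughly .13 and .91 and a test-retest r of .50 were observed; values in those ranges would come out of computations of this shape.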
<b>State</b>: Published - Apr 6, 2021