Professional Poster

Use of an Observer-based Assessment Measuring Individual Student Interprofessional Competency

Some experience with IPE assessment and evaluation

Poster Description: Evaluations of interprofessional competencies among students in didactic activities often rely on self-assessment measures or focus on the team as a whole rather than on individual abilities. Methods that measure interprofessional competencies through objective observers who evaluate individual students are lacking. This study tested a newly developed evaluation tool, the IPEC-Competency Assessment Tool for Individual Students (I-CATIS), used by trained third-party observers to evaluate pharmacy, dental, and dental hygiene students collaborating on a patient case in a didactic course.

The aims of the study were to:
1. Train the observers to use the instrument effectively and evaluate each participating student accurately within an assigned group.
2. Provide the I-CATIS evaluation results to each participant and gather feedback on the value of the observers' ratings.

The third-party observers were five residents and graduate students from the pharmacy and dental schools. During the course, each of the five observers reviewed videos of four student groups, each consisting of 9-10 pharmacy, dental, and dental hygiene students, as they collaborated on an interprofessional patient case. Observers rated each participating student on 13 pre-selected IPEC competencies using the following scale: minimal, developing, competent, or not observable. Students completed a self-evaluation on the same IPEC competencies after the activity. Evaluation results were sent to the students, who were then surveyed for their perspectives on the value of the observer ratings. Evaluators were surveyed on the ease and utility of the I-CATIS process.

A total of 115 students were assigned to 20 interprofessional groups from the following disciplines: 65 pharmacy, 35 dental, and 15 dental hygiene. The most frequent rating given was "competent" (38%), followed by "developing" (32%), then "minimal" (9%). "Not observable" was used for 22% of competencies across all students. Analysis of individual competency ratings was completed, and comparisons to student self-assessments were made to identify consistencies and disparities. Intra- and inter-rater reliability was measured by having select evaluators re-evaluate one of their own groups as well as a different group. Study findings demonstrate the ability of the I-CATIS to provide efficient and valuable observer-based evaluations of individual student interprofessional competency across multiple health professions education programs using trained observers. Future investigations should include validation of the I-CATIS instrument and study of its use in other student-based activities.