With the growing importance of professionalism in medical education, it is imperative to develop professionalism assessments that demonstrate robust validity evidence. The Professionalism Mini-Evaluation Exercise (P-MEX) is an assessment that has demonstrated validity evidence in the authentic clinical setting. Identifying the factorial structure of professionalism assessments determines professionalism constructs that can be used to provide diagnostic and actionable feedback. This study examines validity evidence for the P-MEX, a focused and standardized assessment of professionalism, in a simulated patient setting.
The P-MEX was administered to 275 pediatric residency applicants as part of a 3-station standardized patient encounter, pooling data over an 8-year period (2012 to 2019 residency admission years). Reliability and construct validity for the P-MEX were evaluated using Cronbach's alpha, exploratory factor analysis (EFA), and confirmatory factor analysis (CFA).
Cronbach's alpha for the P-MEX was 0.91. The EFA yielded 4 factors: doctor-patient relationship skills, interprofessional skills, professional demeanor, and reflective skills. The CFA demonstrated good model fit with a root-mean-square error of approximation of 0.058 and a comparative fit index of 0.92, confirming the reproducibility of the 4-factor structure of professionalism.
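Cronbach's alpha for a multi-item scale such as the P-MEX can be computed directly from a ratings-by-items score matrix. A minimal sketch in Python, using invented 5-point ratings on 4 items for 6 encounters (the real P-MEX has more items and 275 applicants; these numbers are purely illustrative):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_observations x n_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # sample variance per item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point ratings: 6 encounters x 4 items
scores = np.array([
    [5, 4, 5, 4],
    [4, 4, 4, 3],
    [3, 2, 3, 3],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
])
print(round(cronbach_alpha(scores), 2))  # 0.93
```

Values above roughly 0.9, as reported for the P-MEX, indicate high internal consistency across items.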
The P-MEX demonstrates construct validity as an assessment of professionalism, with 4 underlying subdomains in doctor-patient relationship skills, interprofessional skills, professional demeanor, and reflective skills. These results yield new confidence in providing diagnostic and actionable subscores within the P-MEX assessment. Educators may wish to integrate the P-MEX assessment into their professionalism curricula.
Competency-based medical education (CBME) is being implemented worldwide. In CBME, residency training is designed around the competencies required for unsupervised practice and uses entrustable professional activities (EPAs) as workplace “units of assessment”. Well-designed workplace-based assessment (WBA) tools are required to document the competence of trainees in authentic clinical environments. In this study, we developed a WBA instrument to assess residents’ performance of intra-operative pathology consultations and conducted a validity investigation. The entrustment-aligned pathology assessment instrument for intra-operative consultations (EPA-IC) was developed through a national iterative consultation, and clinical supervisors used it to assess residents’ performance in an anatomical pathology program. Psychometric analyses and focus groups were conducted to explore the sources of evidence described by modern validity theory: content, response process, internal structure, relations to other variables, and consequences of assessment. The content was considered appropriate; the assessment was feasible and acceptable to residents and supervisors, and it had a positive educational impact by improving the performance of intra-operative consultations and feedback to learners. The results had low reliability, which appeared to be related to assessment biases, and supervisors were reluctant to fully entrust trainees due to cultural issues. With CBME implementation, new workplace-based assessment tools are needed in pathology. In this study, we showcased the development of the first instrument for assessing residents’ performance of a prototypical entrustable professional activity in pathology using modern education principles and validity theory.
Health care errors are a national concern. Although considerable attention has been placed on reducing errors since a 2000 Institute of Medicine report, adverse events persist. The purpose of this pilot study was to evaluate the effect of mindfulness training, employing the standardized approach of an eight-week mindfulness-based stress reduction program, on reduction of nurse errors in simulated clinical scenarios. An experimental, pre- and post-test control group design was employed with 20 staff nurses and senior nursing students. Although not statistically significant, there were numerical differences in clinical performance scores from baseline when comparing mindfulness and control groups immediately following mindfulness training and after three months. A number of benefits of mindfulness training, such as improved listening skills, were identified. This pilot study supports the benefits of mindfulness training in improving nurse clinical performance and illustrates a novel approach to employ in future research.
•Performance in communication and interpersonal skills (CIS) is case specific.
•Our CIS scale aligns with content assessed in medical licensure tests.
•Standard setting should be used to establish defensible passing levels.
•Passing standards for CIS may require using the Angoff method.
Communication and interpersonal skills (CIS) are essential elements of competency-based education. We examined defensible CIS passing levels for medical students completing basic sciences (second-year students) and clinical training (fourth-year students), using five standard setting methods.
A 14-item CIS scale was used. Data from second-year (n = 190) and fourth-year (n = 170) students were analyzed using descriptive statistics and generalizability studies. Fifteen judges defined borderline CIS performance. Cut scores and fail rates from five standard setting methods (Angoff, Borderline-Group, Borderline-Regression, Contrasting-Groups, and Normative methods) were examined.
CIS performance was similar for second-year (mean = 74%, SD = 6%) and fourth-year (mean = 72%, SD = 5%) students. Judges using the Angoff method expected greater competence at the fourth-year level, as reflected in the Angoff cut scores (second-year = 53% with 0% fail, fourth-year = 66% with 10% fail). Cut scores from the remaining methods did not differentiate between training levels. We found evidence of case specificity.
Performance on CIS may be case specific. Passing standards for communication skills may require employing approaches such as the Angoff method that are sensitive to expectations of learner performance for different levels of training, competencies, and milestone levels.
Institutions that want to encourage continued growth in CIS should apply appropriate standard setting methods.
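Among the methods compared above, the Borderline-Regression method derives a cut score by regressing examinees' checklist scores on examiners' global ratings and reading off the predicted score at the "borderline" rating. A minimal sketch with invented data (the scores, rating scale, and sample size are illustrative, not the study's):

```python
import numpy as np

# Hypothetical data: per-student CIS percentage scores and examiner
# global ratings (1 = fail, 2 = borderline, 3 = pass, 4 = good).
cis_scores = np.array([55, 62, 68, 71, 74, 78, 82, 88])
global_ratings = np.array([1, 2, 2, 3, 3, 3, 4, 4])

# Borderline-Regression: fit checklist score as a linear function of
# the global rating, then predict the score at the borderline level (2).
slope, intercept = np.polyfit(global_ratings, cis_scores, 1)
cut_score = slope * 2 + intercept
print(round(cut_score, 1))  # 64.8
```

Students scoring below the predicted borderline score fail; because the regression uses all examinees' data rather than judges' item-level estimates, its cut score tracks cohort performance, which is one reason it may not differentiate training levels the way the Angoff method did here.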
The unexpected discontinuation of the United States Medical Licensing Examination Step 2 Clinical Skills (CS) exam in January 2021 carries both risks and opportunities for medical education in the United States. Step 2 CS had far-reaching effects on medical school curricula and school-based clinical skills assessments. Absent the need to prepare students for this high-stakes exam, will the rigor of foundational clinical skills instruction and assessment remain a priority at medical schools? In this article, the authors consider the potential losses and gains from the elimination of Step 2 CS and explore opportunities to expand local summative assessments beyond the narrow bounds of Step 2 CS. The responsibility for implementing a rigorous and credible summative assessment of clinical skills that are critical for patient safety as medical students transition to residency now lies squarely with medical schools. Robust human simulation (standardized patient) programs, including regional and virtual simulation consortia, can provide infrastructure and expertise for innovative and creative local assessments to meet this need. Novel applications of human simulation and traditional formative assessment methods, such as workplace-based assessments and virtual patients, can contribute to defensible summative decisions about medical students' clinical skills. The need to establish validity evidence for decisions based on these novel assessment methods comprises a timely and relevant focus for medical education research.
Context Diagnostic accuracy is maximised by having clinical signs and diagnostic hypotheses in mind during the physical examination (PE). This diagnostic reasoning approach contrasts with the rote, hypothesis‐free screening PE learned by many medical students. A hypothesis‐driven PE (HDPE) learning and assessment procedure was developed to provide targeted practice and assessment in anticipating, eliciting and interpreting critical aspects of the PE in the context of diagnostic challenges.
Objectives This study was designed to obtain initial content validity evidence, performance and reliability estimates, and impact data for the HDPE procedure.
Methods Nineteen clinical scenarios were developed, covering 160 PE manoeuvres. A total of 66 Year 3 medical students prepared for and encountered three clinical scenarios during required formative assessments. For each case, students listed anticipated positive PE findings for two plausible diagnoses before examining the patient; examined a standardised patient (SP) simulating one of the diagnoses; received immediate feedback from the SP, and documented their findings and working diagnosis. The same students later encountered some of the scenarios during their Year 4 clinical skills examination.
Results On average, Year 3 students anticipated 65% of the positive findings, correctly performed 88% of the PE manoeuvres and documented 61% of the findings. Year 4 students anticipated and elicited fewer findings overall, but achieved proportionally more discriminating findings, thereby more efficiently achieving a diagnostic accuracy equivalent to that of students in Year 3. Year 4 students performed better on cases on which they had received feedback as Year 3 students. Twelve cases would provide a reliability of 0.80, based on discriminating checklist items only.
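The projection that twelve cases would yield a reliability of 0.80 is the kind of estimate the Spearman-Brown prophecy formula provides, extrapolating from the reliability of a shorter test. A sketch, assuming a single-case reliability of 0.25 (a value consistent with, but not reported in, the abstract):

```python
def spearman_brown(rel_one: float, n: float) -> float:
    """Projected reliability of a test lengthened to n parallel cases.

    rel_n = n * r1 / (1 + (n - 1) * r1)
    """
    return n * rel_one / (1 + (n - 1) * rel_one)

# Assumed single-case reliability of 0.25; twelve cases then project
# to the 0.80 threshold commonly required for high-stakes decisions.
rel_12 = spearman_brown(0.25, 12)
print(round(rel_12, 2))  # 0.8
```

The formula also runs in reverse: given a target reliability and an observed single-case value, it tells you how many cases an examination blueprint needs.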
Conclusions The HDPE provided medical students with a thoughtful, deliberate approach to learning and assessing PE skills in a valid and reliable manner.
Sensor technology in assessments of clinical skill. Laufer S, Cohen ER, Kwan C, et al. The New England Journal of Medicine. 2015;372(8). [Journal Article]
Background Detection of melanoma by physicians via opportunistic surveillance during focused physical examinations may reduce mortality. Medical students may not encounter a clinical case of melanoma during a dermatology clerkship. Objective This study examined the proficiency of fourth-year University of Illinois at Chicago medical students at detecting melanomas. Methods Melanoma moulages were applied to the second digit of the left hand of standardized patients (SPs) participating in a wrist pain scenario during a required clinical skills examination. An observer reviewed videotapes of the examination, written SP checklists, and student notes for evidence that the student noticed the moulage, obtained a history, or provided counseling. Results Among the 190 fourth-year medical students, 56 students were observed noticing the lesion; however, 13 failed to write it in their notes or advise the patient. The detection rate was 22.6% (43 of 190 students). Students who detected the probable melanoma consistently inquired about changes in the lesion and symptoms, but did not examine the rest of the skin or regularly palpate for adenopathy. Limitations Testing one class of students from a single medical school with a time-restricted SP encounter while focusing the students’ attention toward a different presenting symptom may hinder exploration of medical issues. Conclusion The low detection rate and the failure of students who noticed the moulage to identify the lesion as atypical represent a lost opportunity to provide a patient intervention. Use of SP examinations may help physicians in training build confidence and competence in cutaneous malignancy screening.
Internists are required to perform a number of procedures that require mastery of technical and non-technical skills; however, formal assessment of these skills is often lacking. The purpose of this study was to develop, implement, and gather validity evidence for a procedural skills objective structured clinical examination (PS-OSCE) for internal medicine (IM) residents to assess their technical and non-technical skills when performing procedures. Thirty-five first- to third-year IM residents participated in a 5-station PS-OSCE, which combined partial task models, standardized patients, and allied health professionals. Formal blueprinting was performed and content experts were used to develop the cases and rating instruments. Examiners underwent a frame-of-reference training session to prepare them for their rater role. Scores were compared by levels of training, experience, and to evaluation data from a non-procedural OSCE (IM-OSCE). Reliability was calculated using Generalizability analyses. Reliabilities for the technical and non-technical scores were 0.68 and 0.76, respectively. Third-year residents scored significantly higher than first-year residents on the technical (73.5 vs. 62.2 %) and non-technical (83.2 vs. 75.1 %) components of the PS-OSCE (p < 0.05). Residents who had performed the procedures more frequently scored higher on three of the five stations (p < 0.05). There was a moderate disattenuated correlation (r = 0.77) between the IM-OSCE and the technical component of the PS-OSCE scores. The PS-OSCE is a feasible method for assessing multiple competencies related to performing procedures and this study provides validity evidence to support its use as an in-training examination.
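A disattenuated correlation corrects the observed correlation between two scores for the unreliability of each measure: r_true = r_obs / sqrt(r_xx * r_yy). A sketch using the reliabilities reported in this abstract (0.68 and 0.76) with an assumed observed correlation of 0.55 — the abstract reports neither the observed correlation nor the IM-OSCE reliability, so this pairing is purely illustrative:

```python
import math

def disattenuate(r_obs: float, rel_x: float, rel_y: float) -> float:
    """Correct an observed correlation for unreliability in both measures."""
    return r_obs / math.sqrt(rel_x * rel_y)

# Reliabilities 0.68 and 0.76 come from the abstract; the observed
# correlation of 0.55 is an assumed value for illustration only.
r_true = disattenuate(0.55, 0.68, 0.76)
print(round(r_true, 2))  # 0.77
```

Because measurement error attenuates observed correlations, the corrected value estimates how strongly the two underlying constructs would correlate if both were measured without error.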