Serious Games are increasingly being used in undergraduate medical education. They are usually intended to enhance learning with a focus on knowledge acquisition and skills development. According to the current literature, few studies have assessed their effectiveness regarding clinical reasoning (CR). The aim of this prospective study was to compare a Serious Game, the virtual Accident & Emergency department 'EMERGE', to small-group problem-based learning (PBL) regarding short-term student learning outcome in clinical reasoning.
A total of 112 final-year medical students self-selected to participate in ten 90-minute sessions of either small-group PBL or playing EMERGE. CR was assessed in a formative examination consisting of six key feature cases and a final 45-minute EMERGE session.
Overall, the EMERGE group (n = 78) scored significantly higher than the PBL group (n = 34) in the key feature examination (62.5 (IQR: 17.7)% vs. 54.2 (IQR: 21.9)%; p = 0.015). There was no significant difference in performance levels between groups regarding those cases which had been discussed in both instructional formats during the training phase. In the final EMERGE session, the EMERGE group achieved significantly better results than the PBL group in all four cases regarding the total score as well as in three of four cases regarding the final diagnosis and the correct therapeutic interventions.
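The abstract reports medians with interquartile ranges, which points to a nonparametric group comparison; the test actually used is not named. A minimal sketch of such a comparison on simulated data (group sizes match the study, but the samples themselves and the chosen test are assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical percent scores; the real exam data are not public.
emerge = rng.normal(62.5, 13.0, 78)  # n = 78, EMERGE group
pbl = rng.normal(54.2, 16.0, 34)     # n = 34, PBL group

# Mann-Whitney U test: a common nonparametric two-group comparison.
u, p = stats.mannwhitneyu(emerge, pbl, alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.4f}")

# Report medians with interquartile ranges, as in the abstract.
for name, x in [("EMERGE", emerge), ("PBL", pbl)]:
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    print(f"{name}: median {med:.1f}% (IQR {q3 - q1:.1f})")
```
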
EMERGE can be used effectively for CR training in undergraduate medical education. The difference in key feature exam scores was driven by additional exposure to more cases in EMERGE compared to PBL despite identical learning time in both instructional formats. EMERGE is a potential alternative to intensive small-group teaching. Further work is needed to establish how Serious Games enhance CR most effectively.
The coronavirus pandemic has led to increased use of digital teaching formats in medical education. A number of studies have assessed student satisfaction with these resources. However, there is a lack of studies investigating changes in student performance following the switch from contact to virtual teaching. Specifically, there are no studies linking student use of digital resources to learning outcome and examining predictors of failure.
Student performance before (winter term 2019/20: contact teaching) and during (summer term 2020: no contact teaching) the pandemic was compared prospectively in a cohort of 162 medical students enrolled in the clinical phase of a five-year undergraduate curriculum. Use of and performance in various digital resources (case-based teaching in a modified flipped classroom approach; formative key feature examinations of clinical reasoning; daily multiple choice quizzes) was recorded in summer 2020. Student scores in summative examinations were compared to examination scores in the previous term. Associations between student characteristics, resource use and summative examination results were used to identify predictors of performance.
Not all students made complete use of the digital learning resources provided. Timely completion of tasks was associated with superior performance compared to delayed completion. Female students scored significantly fewer points in formative key feature examinations and digital quizzes. Overall, higher rankings within the student cohort (according to summative exams) in winter term 2019/20 as well as male gender predicted summative exam performance in summer 2020. Scores achieved in the first formative key feature examination predicted summative end-of-module exam scores.
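Predictor analyses of this kind are commonly run as multiple linear regressions of summative exam scores on prior rank, gender and formative scores. A minimal numpy-only sketch on simulated data (the predictor names mirror the abstract, but the coefficients, noise level and all values are assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 162  # cohort size from the study; the data themselves are simulated

# Hypothetical predictors: prior-term cohort rank (0-1), gender (1 = male),
# first formative key feature exam score.
rank = rng.uniform(0, 1, n)
gender = rng.integers(0, 2, n)
kf_score = rng.normal(60, 10, n)
exam = 40 + 25 * rank + 3 * gender + 0.2 * kf_score + rng.normal(0, 5, n)

# Ordinary least squares via numpy's least-squares solver.
X = np.column_stack([np.ones(n), rank, gender, kf_score])
beta, *_ = np.linalg.lstsq(X, exam, rcond=None)
print(np.round(beta, 2))  # intercept and three coefficients
```
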
The association between timely completion of tasks as well as early performance in a module and summative exam results might help to identify students at risk and to offer them help early on. The unexpected gender difference requires further study to determine whether the shift to a digital-only curriculum disadvantages female students.
The Choosing Wisely campaign highlights the importance of clinical reasoning abilities for competent and reflective physicians. The principles of this campaign should be addressed in undergraduate medical education. Recent research suggests that answering questions on important steps in patient management promotes knowledge retention. It is less clear whether increasing the authenticity of educational material by the inclusion of videos further enhances learning outcome.
In a prospective randomised controlled cross-over study, we assessed whether repeated video-based testing is more effective than repeated text-based testing in training students to choose appropriate diagnostic tests, arrive at correct diagnoses and identify advisable therapies. Following an entry exam, fourth-year undergraduate medical students attended 10 weekly computer-based seminars during which they studied patient case histories. Each case contained five key feature questions (items) on the diagnosis and treatment of the presented patient. Students were randomly allocated to read text cases (control condition) or watch videos (intervention), and assignment to either text or video was switched between groups every week. Using a within-subjects design, student performance on video-based and text-based items was assessed 13 weeks (exit exam) and 9 months (retention test) after the first day of term. The primary outcome was the within-subject difference in performance on video-based and text-based items in the exit exam.
Of 125 eligible students, 93 provided data for all three exams (response rate 74.4%). Percent scores were significantly higher for video-based than for text-based items in the exit exam (76.2 ± 19.4% vs. 72.4 ± 19.1%, p = 0.026) but not the retention test (69.2 ± 20.2% vs. 66.4 ± 20.3%, p = 0.108). An additional Bayesian analysis of this retention test suggested that video-based training is marginally more effective than text-based training in the long term (Bayes factor 2.36). Regardless of presentation format, student responses revealed a high prevalence of erroneous beliefs that, if applied to the clinical context, could place patients at risk.
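The abstract does not state how its Bayes factor was computed; a common default for t-tests is the JZS Bayes factor of Rouder et al. (2009). A sketch of that calculation (the t value passed in at the end is hypothetical, chosen only to illustrate the call):

```python
import numpy as np
from scipy import integrate

def jzs_bayes_factor(t, n, r=np.sqrt(2) / 2):
    """JZS Bayes factor (BF10) for a one-sample/paired t-test.
    t: observed t statistic, n: sample size, r: Cauchy prior scale."""
    nu = n - 1
    # Marginal likelihood under H0 (effect size delta = 0).
    null_like = (1 + t**2 / nu) ** (-(nu + 1) / 2)

    # Marginal likelihood under H1: integrate over the Cauchy prior on
    # effect size, expressed via an inverse-chi^2 prior on g.
    def integrand(g):
        a = (1 + n * g * r**2) ** (-0.5)
        b = (1 + t**2 / ((1 + n * g * r**2) * nu)) ** (-(nu + 1) / 2)
        prior = (2 * np.pi) ** (-0.5) * g ** (-1.5) * np.exp(-1 / (2 * g))
        return a * b * prior

    alt_like, _ = integrate.quad(integrand, 0, np.inf)
    return alt_like / null_like

# Hypothetical example with the study's n = 93; the actual t is not reported.
print(jzs_bayes_factor(t=2.26, n=93))
```

BF10 above 1 favours the alternative; the reported value of 2.36 would thus count as only weak evidence for a long-term video advantage.
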
Repeated video-based key feature testing produces superior short-term learning outcome compared to text-based testing. Given the high prevalence of misconceptions, efforts to improve clinical reasoning training in medical education are warranted. The Choosing Wisely campaign lends itself to being part of this process.
Artificial intelligence (AI) is becoming increasingly important in healthcare. It is therefore crucial that today's medical students have certain basic AI skills that enable them to use AI applications successfully. These basic skills are often referred to as "AI literacy". Previous research projects that aimed to investigate medical students' AI literacy and attitudes towards AI have not used reliable and validated assessment instruments.
We used two validated self-assessment scales to measure AI literacy (31 Likert-type items) and attitudes towards AI (5 Likert-type items) at two German medical schools. The scales were distributed to the medical students through an online questionnaire. The final sample consisted of a total of 377 medical students. We conducted a confirmatory factor analysis and calculated the internal consistency of the scales to check whether the scales were sufficiently reliable to be used in our sample. In addition, we calculated t-tests to determine group differences and Pearson's and Kendall's correlation coefficients to examine associations between individual variables.
The model fit and internal consistency of the scales were satisfactory. Within the concept of AI literacy, we found that medical students at both medical schools rated their technical understanding of AI significantly lower (M = 2.85 and M = 2.50) than their ability to critically appraise (M = 4.99 and M = 4.83) or practically use AI (M = 4.52 and M = 4.32), which reveals a discrepancy of skills. In addition, female medical students rated their overall AI literacy significantly lower than male medical students, t(217.96) = -3.65, p < .001. Students in both samples seemed to be more accepting of AI than fearful of the technology, t(745.42) = 11.72, p < .001. Furthermore, we discovered a strong positive correlation between AI literacy and positive attitudes towards AI and a weak negative correlation between AI literacy and negative attitudes. Finally, we found that prior AI education and interest in AI are positively correlated with medical students' AI literacy.
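The analyses named here (Pearson's and Kendall's correlations, and a t-test whose fractional degrees of freedom, t(217.96), indicate Welch's unequal-variance variant) can be sketched with scipy; all data below are simulated stand-ins, not the study's:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical Likert-scale self-ratings; the study data are not public.
ai_literacy = rng.normal(4.0, 0.8, 300)
pos_attitude = 0.6 * ai_literacy + rng.normal(0.0, 0.5, 300)

# Pearson's r for linear association, Kendall's tau for ordinal data.
r, p_r = stats.pearsonr(ai_literacy, pos_attitude)
tau, p_tau = stats.kendalltau(ai_literacy, pos_attitude)

# Welch's t-test (unequal variances) yields fractional degrees of
# freedom like the t(217.96) reported in the abstract.
group_a = rng.normal(3.8, 0.9, 160)
group_b = rng.normal(4.2, 0.7, 140)
t, p_t = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"r = {r:.2f}, tau = {tau:.2f}, t = {t:.2f}")
```
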
Courses to increase the AI literacy of medical students should focus more on technical aspects. There also appears to be a correlation between AI literacy and attitudes towards AI, which should be considered when planning AI courses.
Nicotine replacement therapy (NRT) bought over the counter (OTC) appears to be largely ineffective for smoking cessation, which may be partially explained by poor adherence. We developed and evaluated the NRT2Quit smartphone app (for iOS) designed to improve quit attempts with OTC NRT by improving adherence to the medications.
This study was a pragmatic double-blind randomised controlled trial with remote recruitment through leaflets distributed to over 300 UK-based community pharmacies. The study recruited adult daily smokers (≥10 cigarettes per day) who bought NRT, wanted to quit smoking, downloaded NRT2Quit and completed the registration process within the app. Participants were automatically randomly assigned within the app to the intervention (full) version of NRT2Quit or to its control (minimal) version. The primary outcome was biochemically verified 4-week abstinence assessed at 8-week follow-up using Russell Standard criteria and intention to treat. Bayes factors were calculated for the cessation outcome. Secondary outcomes were self-reported abstinence, NRT use, app use and satisfaction with the app.
The study under-recruited. Only 41 participants (3.5% of the target sample) were randomly assigned to the NRT2Quit (n = 16) or control (n = 25) app versions between March 2015 and September 2016. The follow-up rate was 51.2%. The intervention participants had numerically higher biochemically verified quit rates (25.0% versus 8.0%, P = 0.19, odds ratio = 3.83, 95% CI 0.61-24.02). The calculated Bayes factor of 1.92 indicated that the data were too insensitive to test the hypothesis that the intervention app version aided cessation. The intervention participants had a higher median number of logins (2.5 versus 0, P = 0.01) and were more likely to use NRT at follow-up (100.0% versus 28.6%, P = 0.03) and to recommend NRT2Quit to others (100.0% versus 28.6%, P = 0.01).
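The reported odds ratio and confidence interval can be reproduced from the quit counts implied by the percentages (25.0% of 16 → 4 quitters; 8.0% of 25 → 2 quitters). A minimal sketch using the standard Wald interval on the log odds ratio:

```python
import math

# Counts reconstructed from the reported rates.
a, b = 4, 16 - 4      # intervention: quit, not quit
c, d = 2, 25 - 2      # control: quit, not quit

odds_ratio = (a / b) / (c / d)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
# → OR = 3.83, 95% CI 0.61-24.02, matching the reported values.
```
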
Despite very low recruitment, there was preliminary but inconclusive evidence that NRT2Quit may improve short-term abstinence and adherence among smokers using NRT. Well-powered studies on NRT2Quit are needed, but different recruitment methods will be required to engage smokers through community pharmacies or other channels.
ISRCTN: ISRCTN33423896, prospectively registered on 22 March 2015.
Medical education has been transformed during the COVID-19 pandemic, creating challenges regarding adequate training in ultrasound (US). Due to the discontinuation of traditional classroom teaching, the need to expand digital learning opportunities is undeniable. The aim of our study is to develop a tele-guided US course for undergraduate medical students and test the feasibility and efficacy of this digital US teaching method.
A tele-guided US course was established for medical students. Students underwent seven US organ modules. Each module followed a flipped classroom concept via the Amboss platform, which provided optional supplementary e-learning material on each of the US modules. An objective structured assessment of US skills (OSAUS) was implemented as the final exam. US images from the course and exam were rated using the Brightness Mode Quality Ultrasound Imaging Examination Technique (B-QUIET). Points achieved in image rating were compared to the OSAUS exam.
A total of 15 medical students were enrolled. Students achieved an average score of 154.5 (SD ± 11.72) out of 175 points (88.29%) in the OSAUS, which corresponded to the image rating using B-QUIET. Interrater analysis of US images showed favorable agreement, with an ICC(2,1) of 0.895 (95% confidence interval 0.858-0.924).
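The interrater statistic reported here is ICC(2,1), the two-way random-effects, absolute-agreement, single-rater intraclass correlation of Shrout and Fleiss. A minimal sketch computed from ANOVA mean squares (the example ratings are hypothetical, not the study's images):

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    ratings has shape (n_subjects, k_raters)."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # between-subjects mean square
    msc = ss_cols / (k - 1)                 # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical image-quality ratings from two raters (rows = images).
scores = np.array([[4, 5], [3, 3], [5, 5], [2, 3], [4, 4], [1, 2]], float)
print(round(icc_2_1(scores), 3))
```
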
US training via teleguidance should be considered in medical education. Our pilot study demonstrates the feasibility of a concept that can be used in the future to improve US training of medical students even during a pandemic.
Artificial Intelligence competencies will become increasingly important in the near future. Therefore, it is essential that the AI literacy of individuals can be assessed in a valid and reliable way. This study presents the development of the “Scale for the assessment of non-experts' AI literacy” (SNAIL). An existing AI literacy item set was distributed as an online questionnaire to a heterogeneous group of non-experts (i.e., individuals without a formal AI or computer science education). Based on the data collected, an exploratory factor analysis was conducted to investigate the underlying latent factor structure. The results indicated that a three-factor model had the best model fit. The individual factors reflected AI competencies in the areas of “Technical Understanding”, “Critical Appraisal”, and “Practical Application”. In addition, eight items from the original questionnaire were deleted based on high intercorrelations and low communalities to reduce the length of the questionnaire. The final SNAIL questionnaire consists of 31 items that can be used to assess the AI literacy of individual non-experts or specific groups and is also designed to enable the evaluation of AI literacy courses' teaching effectiveness.
Patients presenting with acute shortness of breath and chest pain should be managed according to guideline recommendations. Serious games can be used to train clinical reasoning. However, only a few studies have used outcomes beyond student satisfaction, and most of the published evidence is based on short-term follow-up. This study investigated the effectiveness of a digital simulation of an emergency ward regarding appropriate clinical decision-making.
In this prospective trial, which ran from summer 2017 to winter 2018/19 at Göttingen Medical University Centre, a total of 178 students enrolled in either the fourth or the fifth year of undergraduate medical education completed six 90-minute sessions of playing a serious game ('training phase') in which they managed virtual patients presenting with various conditions. Learning outcome was assessed by analysing log files of in-game activity (including choice of diagnostic methods, differential diagnosis and treatment initiation) with regard to history taking and patient management in three virtual patient cases: non-ST segment elevation myocardial infarction (NSTEMI), pulmonary embolism (PE) and hypertensive crisis. Fourth-year students were followed up for 1.5 years, and their final performance was compared to that of students who had never been exposed to the game but had otherwise taken the same five-year undergraduate course.
During the training phase, overall performance scores increased from 57.6 ± 1.1% to 65.5 ± 1.2% (p < 0.001; effect size 0.656). Performance remained stable over 1.5 years, and the final assessment revealed a strong impact of ever-exposure to the game on management scores (72.6 ± 1.2% vs. 63.5 ± 2.1%, p < 0.001; effect size 0.811). Pre-exposed students were more than twice as likely to correctly diagnose NSTEMI and PE and showed significantly greater adherence to guideline recommendations (e.g., troponin measurement and D-dimer testing in suspected PE).
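The means here are reported with what appear to be standard errors (± around 1-2% despite large samples). One common way to derive an effect size such as Cohen's d from such summaries is to recover standard deviations as SD = SEM × √n and pool them; the group sizes below are assumptions, so the result only illustrates the calculation and is not expected to reproduce the reported 0.811 exactly:

```python
import math

def cohens_d(mean1, sem1, n1, mean2, sem2, n2):
    """Cohen's d with pooled SD, recovering SDs from standard errors."""
    sd1 = sem1 * math.sqrt(n1)
    sd2 = sem2 * math.sqrt(n2)
    pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                       / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled

# Final assessment: ever-exposed vs. never-exposed students
# (group sizes are hypothetical; the abstract does not report them here).
print(round(cohens_d(72.6, 1.2, 120, 63.5, 2.1, 58), 2))
```
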
The considerable difference observed between previously exposed and unexposed students suggests a long-term effect of using the game, although retention of specific virtual patient cases rather than general principles might partially account for this effect. Thus, the game may foster the implementation of guideline recommendations.