Arriving at a medical diagnosis is a highly complex process that is extremely error-prone. Missed or delayed diagnoses often lead to patient harm and missed opportunities for treatment. Since medical imaging is a major contributor to the overall diagnostic process, it is also a major potential source of diagnostic error. Although some diagnoses may be missed because of the technical or physical limitations of the imaging modality, including image resolution, intrinsic or extrinsic contrast, and signal-to-noise ratio, most missed radiologic diagnoses are attributable to image interpretation errors by radiologists. Radiologic interpretation cannot be mechanized or automated; it is a human enterprise based on complex psychophysiologic and cognitive processes and is itself subject to a wide variety of error types, including perceptual errors (those in which an important abnormality is simply not seen on the images) and cognitive errors (those in which the abnormality is visually detected but the meaning or importance of the finding is not correctly understood or appreciated). The overall prevalence of radiologists' errors in practice does not appear to have changed since it was first estimated in the 1960s. The authors review the epidemiology of errors in diagnostic radiology, including a recently proposed taxonomy of radiologists' errors, as well as research findings, in an attempt to elucidate possible underlying causes of these errors. The authors also propose strategies for error reduction in radiology. On the basis of current understanding, specific suggestions are offered as to how radiologists can improve their performance in practice.
Infections have become the major cause of morbidity and mortality among patients with chronic lymphocytic leukemia (CLL) due to immune dysfunction and cytotoxic CLL treatment. Yet, predictive models for infection are missing. In this work, we develop the CLL Treatment-Infection Model (CLL-TIM) that identifies patients at risk of infection or CLL treatment within 2 years of diagnosis as validated on both internal and external cohorts. CLL-TIM is an ensemble algorithm composed of 28 machine learning algorithms based on data from 4,149 patients with CLL. The model is capable of dealing with heterogeneous data, including the high rates of missing data to be expected in the real-world setting, with a precision of 72% and a recall of 75%. To address concerns regarding the use of complex machine learning algorithms in the clinic, for each patient with CLL, CLL-TIM provides explainable predictions through uncertainty estimates and personalized risk factors.
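The reported precision (72%) and recall (75%) can be illustrated with a minimal sketch of a voting ensemble; the function names and toy data below are hypothetical stand-ins, not part of the published CLL-TIM model.

```python
# Hypothetical sketch: majority-vote ensemble of binary base learners, plus
# precision/recall as reported for CLL-TIM. Data here are illustrative.

def majority_vote(predictions):
    """Combine binary predictions from several base learners; ties count as positive."""
    return 1 if 2 * sum(predictions) >= len(predictions) else 0

def precision_recall(y_true, y_pred):
    """Precision = TP/(TP+FP); recall = TP/(TP+FN)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Three base learners voting on three patients (toy example).
per_patient_votes = [[1, 1, 0], [0, 0, 1], [1, 0, 1]]
y_pred = [majority_vote(v) for v in per_patient_votes]   # → [1, 0, 1]
print(precision_recall([1, 0, 0], y_pred))               # → (0.5, 1.0)
```

The real model combines 28 heterogeneous learners and adds uncertainty estimates; this sketch only shows how the headline metrics are computed from binary predictions.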
Forecasting the risk of pathogen spillover from reservoir populations of wild or domestic animals is essential for the effective deployment of interventions such as wildlife vaccination or culling. Due to the sporadic nature of spillover events and limited availability of data, developing and validating robust, spatially explicit predictions is challenging. Recent efforts have begun to make progress in this direction by capitalizing on machine learning methodologies. An important weakness of existing approaches, however, is that they generally rely on combining human and reservoir infection data during the training process and thus conflate risk attributable to the prevalence of the pathogen in the reservoir population with the risk attributed to the realized rate of spillover into the human population. Because effective planning of interventions requires that these components of risk be disentangled, we developed a multi-layer machine learning framework that separates these processes. Our approach begins by training models to predict the geographic range of the primary reservoir and the subset of this range in which the pathogen occurs. The spillover risk predicted by the product of these reservoir-specific models is then fit to data on realized patterns of historical spillover into the human population. The result is a geographically specific spillover risk forecast that can be easily decomposed and used to guide effective intervention. Applying our method to Lassa virus, a zoonotic pathogen that regularly spills over into the human population across West Africa, results in a model that explains a modest but statistically significant portion of geographic variation in historical patterns of spillover. When combined with a mechanistic mathematical model of infection dynamics, our spillover risk model predicts that 897,700 humans are infected by Lassa virus each year across West Africa, with Nigeria accounting for more than half of these human infections.
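The layered structure described above can be sketched as follows; the function names, the simple rate-scaling calibration, and the toy numbers are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of the multi-layer idea: a reservoir-range layer and a
# pathogen-occurrence layer are multiplied per grid cell, and the resulting
# relative-risk surface is calibrated against observed human spillover counts.
# All values and the scaling rule are toy assumptions.

def spillover_risk(p_reservoir, p_pathogen_given_reservoir):
    """Per-cell relative risk as the product of the two reservoir-specific layers."""
    return [r * q for r, q in zip(p_reservoir, p_pathogen_given_reservoir)]

def calibrate_rate(risk, observed_cases):
    """Scale the relative-risk surface so expected cases match the observed total
    (a minimal stand-in for fitting to historical spillover data)."""
    total_risk = sum(risk)
    scale = sum(observed_cases) / total_risk if total_risk else 0.0
    return [scale * r for r in risk]

# Three hypothetical grid cells.
risk = spillover_risk([0.9, 0.5, 0.0], [0.8, 0.2, 0.7])  # ≈ [0.72, 0.10, 0.0]
expected = calibrate_rate(risk, [41, 0, 0])              # ≈ [36.0, 5.0, 0.0]
```

Note how the design keeps the two sources of risk separable: an intervention targeting the reservoir modifies the first layer, while one targeting human exposure modifies the calibration step.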
Research has not yet quantified the effects of workload or duty hours on the accuracy of radiologists. With the exception of a brief reduction in imaging studies during the 2020 peak of the COVID-19 pandemic, the workload of radiologists in the United States has seen relentless growth in recent years. One concern is that this increased demand could lead to reduced accuracy. Behavioral studies in species ranging from insects to humans have shown that decision speed is inversely correlated with decision accuracy. A potential solution is to institute workload and duty limits to optimize radiologist performance and patient safety. The concern, however, is that any prescribed mandated limits would be arbitrary and thus no more advantageous than allowing radiologists to self-regulate. Specific studies have been proposed to determine whether limits reduce error, and if so, to provide a principled basis for such limits. Such studies could determine the precise susceptibility of individual radiologists to medical error as a function of speed during image viewing, the maximum number of studies that could be read during a work shift, and the appropriate shift duration as a function of time of day. Before principled recommendations for restrictions are made, however, it is important to understand how radiologists function both optimally and at the margins of adequate performance. This study examines the relationship between interpretation speed and error rates in radiology, the potential influence of artificial intelligence on reading speed and error rates, and the possible outcomes of imposed limits on both caseload and duty hours. This review concludes that the scientific evidence needed to make meaningful rules is lacking and notes that regulating workloads without scientific principles can be more harmful than not regulating at all.
Visual categorization is integral for our interaction with the natural environment. In this process, similar selective responses are produced to a class of variable visual inputs. Whether categorization is supported by partial (graded) or absolute (all-or-none) neural responses in high-level human brain regions is largely unknown. We address this issue with a novel frequency-sweep paradigm probing the evolution of face categorization responses between the minimal and optimal stimulus presentation times. In a first experiment, natural images of variable non-face objects were progressively swept from 120 to 3 Hz (8.33–333 ms duration) in rapid serial visual presentation sequences. Widely variable face exemplars appeared every 1 s, enabling an implicit frequency-tagged face-categorization electroencephalographic (EEG) response at 1 Hz. Face-categorization activity emerged with stimulus durations as brief as 17 ms (17–83 ms across individual participants) but was significant with 33 ms durations at the group level. The face categorization response amplitude increased until 83 ms stimulus duration (12 Hz), implying graded categorization responses. In a second EEG experiment, faces appeared non-periodically throughout such sequences at fixed presentation rates, while participants explicitly categorized faces. A strong correlation between response amplitude and behavioral accuracy across frequency rates suggested that dilution from missed categorizations, rather than a decreased response to each face stimulus, accounted for the graded categorization responses as found in Experiment 1. This was supported by (1) the absence of neural responses to faces that participants failed to categorize explicitly in Experiment 2 and (2) equivalent amplitudes and spatio-temporal signatures of neural responses to behaviorally categorized faces across presentation rates.
Overall, these observations provide original evidence that high-level visual categorization of faces, starting at about 100 ms following stimulus onset in the human brain, is variable across observers tested under tight temporal constraints, but occurs in an all-or-none fashion.
• All-or-none neural face categorization responses, corresponding with behavior.
• All-or-none responses can explain apparent gradients of neural amplitudes.
• Face categorization may occur with 17 ms; it is optimal with 83 ms.
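The frequency-tagging logic of the paradigm (a response elicited every 1 s is isolated at its 1 Hz tagging frequency) can be sketched with a single-bin discrete Fourier transform; the simulated signal and its component amplitudes are illustrative, not real EEG.

```python
# Sketch of frequency-tagged amplitude extraction (illustrative, not the
# authors' analysis pipeline): project the signal onto one frequency bin.
import math

def dft_amplitude(signal, fs, freq):
    """Amplitude of a sinusoidal component at `freq` Hz via a single DFT bin.
    Exact when the window spans an integer number of cycles."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    return 2 * math.hypot(re, im) / n

# Simulated 10 s recording sampled at 120 Hz: a small 1 Hz "face" response
# embedded in a larger 12 Hz "stimulation" component.
fs = 120
t = [i / fs for i in range(10 * fs)]
eeg = [0.5 * math.sin(2 * math.pi * 1 * x) + 1.0 * math.sin(2 * math.pi * 12 * x)
       for x in t]

print(dft_amplitude(eeg, fs, 1.0))   # ≈ 0.5, the tagged face response
print(dft_amplitude(eeg, fs, 12.0))  # ≈ 1.0, the base stimulation response
```

Because the stimulation and tagging frequencies fall on orthogonal bins over an integer number of cycles, the small periodic response separates cleanly from the much larger base-rate activity, which is what makes the implicit tagging measure sensitive.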
Academic medical centers have long relied on radiology residents to provide after-hours coverage, during which residents essentially function autonomously. In this approach, attending radiologist review of resident interpretations occurs the following morning, often by subspecialist faculty. In recent years, however, this traditional coverage model in academic radiology departments has been challenged by an alternative model, 24-hour attending radiologist coverage. Proponents of this new model seek to improve patient care after hours by increasing report accuracy and the speed with which the report is finalized. In this article, we review the traditional and the 24-hour attending radiologist coverage models. We summarize previous studies indicating that resident overnight error rates are sufficiently low that changing to an overnight attending model may not necessarily provide a meaningful increase in report accuracy. Whereas some centers completely replaced overnight residents, we note that most centers use a hybrid model in which overnight residents work alongside supervising attending radiologists, much as they do during the day. Even in this hybrid model, universal double reading and subspecialist final review, typical features of the traditional autonomous resident coverage model, are generally sacrificed. Because of this, changing from resident coverage to 24-hour/day, 7-day/week attending radiologist coverage may actually have detrimental effects on patient safety and quality of care. Changing to an overnight attending radiologist model may also have negative effects on the quality of radiology resident training, and it significantly increases cost.