Digital histopathological images, high-resolution images of stained tissue samples, are a vital tool for clinicians to diagnose and stage cancers. The visual analysis of patient state based on these images is an important part of the oncology workflow. Although pathology workflows have historically been conducted in laboratories under a microscope, the increasing digitization of histopathological images has led to their analysis on computers in the clinic. The last decade has seen the emergence of machine learning, and deep learning in particular, as a powerful set of tools for the analysis of histopathological images. Machine learning models trained on large datasets of digitized histopathology slides have yielded automated models for prediction and stratification of patient risk. In this review, we provide context for the rise of such models in computational histopathology, highlight the clinical tasks they have found success in automating, discuss the various machine learning techniques that have been applied to this domain, and underscore open problems and opportunities.
We extend variational autoencoders (VAEs) to collaborative filtering for implicit feedback. This non-linear probabilistic model enables us to go beyond the limited modeling capacity of the linear factor models which still largely dominate collaborative filtering research. We introduce a generative model with multinomial likelihood and use Bayesian inference for parameter estimation. Despite widespread use in language modeling and economics, the multinomial likelihood receives less attention in the recommender systems literature. We introduce a different regularization parameter for the learning objective, which proves to be crucial for achieving competitive performance. Remarkably, there is an efficient way to tune the parameter using annealing. The resulting model and learning algorithm have information-theoretic connections to maximum entropy discrimination and the information bottleneck principle. Empirically, we show that the proposed approach significantly outperforms several state-of-the-art baselines, including two recently-proposed neural network approaches, on several real-world datasets. We also provide extended experiments comparing the multinomial likelihood with other commonly used likelihood functions in the latent factor collaborative filtering literature and show favorable results. Finally, we identify the pros and cons of employing a principled Bayesian inference approach and characterize settings where it provides the most significant improvements.
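The multinomial likelihood and the annealed regularization parameter described in this abstract can be illustrated with a minimal NumPy sketch. The function names, shapes, and the exact form of the annealed objective are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def multinomial_log_likelihood(logits, x):
    """Multinomial log-likelihood of a binary click matrix x under decoder logits.

    x: (batch, items) implicit-feedback matrix; logits: (batch, items).
    Uses a max-shift for numerical stability of the log-softmax.
    """
    m = logits.max(axis=1, keepdims=True)
    log_softmax = logits - m - np.log(np.exp(logits - m).sum(axis=1, keepdims=True))
    return np.sum(x * log_softmax, axis=1)

def kl_divergence(mu, logvar):
    """KL(q(z|x) || N(0, I)) for a diagonal Gaussian encoder."""
    return -0.5 * np.sum(1.0 + logvar - mu ** 2 - np.exp(logvar), axis=1)

def annealed_objective(logits, x, mu, logvar, beta):
    """Negative beta-weighted ELBO: beta scales the KL regularizer and is
    annealed from 0 upward during training, then held at the best value."""
    return -(multinomial_log_likelihood(logits, x) - beta * kl_divergence(mu, logvar))
```

With `beta < 1` the objective down-weights the KL term relative to the standard ELBO, which is the regularization trade-off the abstract describes as crucial for competitive performance.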
The purpose of this study was to conduct a systematic review for understanding the availability and limitations of artificial intelligence (AI) approaches that could automatically identify and quantify computed tomography (CT) findings in traumatic brain injury (TBI).
Systematic review, in accordance with PRISMA 2020 and SPIRIT-AI extension guidelines, with a search of 4 databases (Medline, Embase, IEEE Xplore, and Web of Science) was performed to find AI studies that automated the clinical tasks for identifying and quantifying CT findings of TBI-related abnormalities.
A total of 531 unique publications were reviewed, which resulted in 66 articles that met our inclusion criteria. The following components for identification and quantification regarding TBI were covered and automated by existing AI studies: identification of TBI-related abnormalities; classification of intracranial hemorrhage types; slice-, pixel-, and voxel-level localization of hemorrhage; measurement of midline shift; and measurement of hematoma volume. Automated identification of obliterated basal cisterns was not investigated in the existing AI studies. Most of the AI algorithms were based on deep neural networks that were trained on 2- or 3-dimensional CT imaging datasets.
We identified several important TBI-related CT findings that can be automatically identified and quantified with AI. A combination of these techniques may provide useful tools to enhance reproducibility of TBI identification and quantification by supporting radiologists and clinicians in their TBI assessments and reducing subjective human factors.
Background
Current artificial intelligence studies for supporting CT screening tasks depend on either supervised learning or anomaly detection. However, the former involves a heavy annotation workload, since it requires many slice-wise annotations (ground-truth labels); the latter is promising and reduces the annotation workload, but it often suffers from lower performance. This study presents a novel weakly supervised anomaly detection (WSAD) algorithm trained on scan-wise normal and anomalous annotations to provide better performance than conventional methods while reducing the annotation workload.
Methods
Based on surveillance video anomaly detection methodology, feature vectors representing each CT slice were trained on an AR-Net-based convolutional network using a dynamic multiple-instance learning loss and a center loss function. The following two publicly available CT datasets were retrospectively analyzed: the RSNA brain hemorrhage dataset (normal scans: 12,862; scans with intracranial hematoma: 8882) and COVID-CT set (normal scans: 282; scans with COVID-19: 95).
Results
Anomaly scores of each slice were successfully predicted despite inaccessibility to any slice-wise annotations. Slice-level area under the curve (AUC), sensitivity, specificity, and accuracy from the brain CT dataset were 0.89, 0.85, 0.78, and 0.79, respectively. The proposed method reduced the number of annotations in the brain dataset by 97.1% compared to an ordinary slice-level supervised learning method.
Conclusion
This study demonstrated a significant annotation reduction in identifying anomalous CT slices compared to a supervised learning approach. The effectiveness of the proposed WSAD algorithm was verified through higher AUC than existing anomaly detection techniques.
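The scan-level weak supervision described in this abstract can be illustrated with a simplified top-k multiple-instance ranking loss combined with a center loss. This is a hypothetical NumPy sketch in the spirit of the method; the actual AR-Net-based dynamic MIL loss differs in its details:

```python
import numpy as np

def mil_ranking_loss(scores_anomalous, scores_normal, k=3, margin=1.0):
    """Scan-level hinge loss computed from slice-level anomaly scores.

    Only scan-wise labels are used: the mean of the top-k slice scores in
    an anomalous scan should exceed the top-k mean of a normal scan by at
    least `margin`, so no slice-wise annotations are needed.
    """
    top_anom = np.mean(np.sort(scores_anomalous)[-k:])
    top_norm = np.mean(np.sort(scores_normal)[-k:])
    return max(0.0, margin - top_anom + top_norm)

def center_loss(features_normal, center):
    """Pull feature vectors of normal slices toward a learned center,
    tightening the normal cluster so anomalous slices stand out."""
    return np.mean(np.sum((features_normal - center) ** 2, axis=1))
```

In training, gradients of such a loss would flow back into the slice-level scoring network, which is how slice-level anomaly scores can be learned from scan-level labels alone.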
Background & objectives: In the present scenario, the most common sample for diagnosis of COVID-19 by reverse transcription polymerase chain reaction (RT-PCR) is nasal and throat swab (NTS). Other sampling options such as gargle lavage have found limited application in clinical use, mostly because of the unavailability of an appropriate gargling liquid. This study was conducted to assess the stability of SARS-CoV-2 RNA in normal saline at 4°C, which can serve as a gargling liquid as well as a transport medium. The study also looked at the agreement between NTS and gargle lavage/saliva for the detection of SARS-CoV-2.
Methods: In 29 consecutive real-time RT-PCR (rRT-PCR) positive COVID-19 patients, paired NTS, gargle and saliva samples were taken. Samples were processed by rRT-PCR for the detection of SARS-CoV-2 RNA. To assess the SARS-CoV-2 RNA stability in normal saline, gargle lavage specimens were divided into two aliquots; one subset of the specimen was run within 4-6 h along with the routine samples (NTS and saliva) and the other subset was stored at 4°C and processed after 24-30 h. Agreement between cycle threshold (Ct) values from both the runs was compared using Bland-Altman (BA) analysis.
Results: The positivity rates of rRT-PCR in NTS, saliva and gargle lavage samples were 82.7 per cent (24/29), 79.3 per cent (23/29) and 86.2 per cent (25/29), respectively. The BA plot showed good agreement between the Ct values of fresh and stored gargle samples, indicating that there were no significant differences in the approximate viral load levels between the fresh and stored gargle lavage samples (bias: E gene −0.64, N gene −0.51, ORF gene −0.19).
Interpretation & conclusions: Our study results show stability of SARS-CoV-2 RNA in the gargle samples collected using normal saline up to 24-30 h. Gargle lavage and saliva specimen collection are cost-effective and acceptable methods of sampling for the detection of SARS-CoV-2 RNA by rRT-PCR. These simplified, inexpensive and acceptable methods of specimen collection would reduce the cost and workload on healthcare workers for sample collection.
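The Bland-Altman agreement analysis used above to compare fresh and stored Ct values can be sketched as follows. This is illustrative NumPy code; the direction of subtraction and the exact procedure the study used are assumptions:

```python
import numpy as np

def bland_altman(ct_fresh, ct_stored):
    """Bland-Altman agreement statistics for paired Ct measurements.

    Returns the bias (mean of the paired differences) and the 95% limits
    of agreement (bias +/- 1.96 * SD of the differences). A bias near zero
    with narrow limits indicates good agreement between the two conditions.
    """
    diff = np.asarray(ct_stored, dtype=float) - np.asarray(ct_fresh, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A negative bias, as reported for the E, N, and ORF genes, would mean one condition yields slightly lower Ct values on average; differences well inside the limits of agreement support the study's conclusion of stability.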
Importance
Large language models (LLMs) have recently developed an unprecedented ability to answer questions. Studies of LLMs from other fields may not generalize to medical oncology, a high-stakes clinical setting requiring rapid integration of new information.
Objective
To evaluate the accuracy and safety of LLM answers on medical oncology examination questions.
Design, Setting, and Participants
This cross-sectional study was conducted between May 28 and October 11, 2023. The American Society of Clinical Oncology (ASCO) Oncology Self-Assessment Series on ASCO Connection, the European Society of Medical Oncology (ESMO) Examination Trial questions, and an original set of board-style medical oncology multiple-choice questions were presented to 8 LLMs.
Main Outcomes and Measures
The primary outcome was the percentage of correct answers. Medical oncologists evaluated the explanations provided by the best LLM for accuracy, classified the types of errors, and estimated the likelihood and extent of potential clinical harm.
Results
Proprietary LLM 2 correctly answered 125 of 147 questions (85.0%; 95% CI, 78.2%-90.4%; P < .001 vs random answering). Proprietary LLM 2 outperformed an earlier version, proprietary LLM 1, which correctly answered 89 of 147 questions (60.5%; 95% CI, 52.2%-68.5%; P < .001), and the best open-source LLM, Mixtral-8x7B-v0.1, which correctly answered 87 of 147 questions (59.2%; 95% CI, 50.0%-66.4%; P < .001). The explanations provided by proprietary LLM 2 contained no or minor errors for 138 of 147 questions (93.9%; 95% CI, 88.7%-97.2%). Incorrect responses were most commonly associated with errors in information retrieval, particularly with recent publications, followed by erroneous reasoning and reading comprehension. If acted upon in clinical practice, 18 of 22 incorrect answers (81.8%; 95% CI, 59.7%-94.8%) would have a medium or high likelihood of moderate to severe harm.
Conclusions and Relevance
In this cross-sectional study of the performance of LLMs on medical oncology examination questions, the best LLM answered questions with remarkable performance, although errors raised safety concerns. These results demonstrated an opportunity to develop and evaluate LLMs to improve health care clinician experiences and patient care, considering the potential impact on capabilities and safety.