An ever-growing number of predictive models used to inform clinical decision making have included quantitative, computer-extracted imaging biomarkers, or “radiomic features.” Broadly generalizable validity of radiomics-assisted models may be impeded by concerns about reproducibility. We offer a qualitative synthesis of 41 studies that specifically investigated the repeatability and reproducibility of radiomic features, derived from a systematic review of published peer-reviewed literature.
The PubMed electronic database was searched using combinations of the broad Haynes and Ingui filters along with a set of text words specific to cancer, radiomics (including texture analyses), reproducibility, and repeatability. This review has been reported in compliance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. From each full-text article, information was extracted regarding cancer type, class of radiomic feature examined, reporting quality of key processing steps, and statistical metric used to segregate stable features.
Among 624 unique records, 41 full-text articles were subjected to review. The studies primarily addressed non-small cell lung cancer and oropharyngeal cancer. Only 7 studies addressed in detail every methodologic aspect related to image acquisition, preprocessing, and feature extraction. The repeatability and reproducibility of radiomic features are sensitive, to varying degrees, to processing details such as image acquisition settings, image reconstruction algorithm, digital image preprocessing, and the software used to extract radiomic features. First-order features were overall more reproducible than shape metrics and textural features. Entropy was consistently reported as one of the most stable first-order features. There was no emergent consensus regarding either shape metrics or textural features; however, coarseness and contrast appeared among the least reproducible.
Investigations of feature repeatability and reproducibility are currently limited to a small number of cancer types. Reporting quality could be improved regarding details of feature extraction software, digital image manipulation (preprocessing), and the cutoff value used to distinguish stable features.
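In this literature, test–retest stability is commonly quantified by applying a cutoff to a statistic such as the concordance correlation coefficient (CCC). The sketch below uses hypothetical feature values and an illustrative 0.85 cutoff (not a value recommended by the review):

```python
import statistics

def ccc(x, y):
    """Lin's concordance correlation coefficient between two measurement series."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    vx, vy = statistics.pvariance(x), statistics.pvariance(y)
    cov = statistics.fmean((a - mx) * (b - my) for a, b in zip(x, y))
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical test-retest values of two features across 5 patients.
scan1 = {"entropy":    [4.1, 3.9, 4.5, 4.0, 4.3],
         "coarseness": [0.12, 0.30, 0.05, 0.40, 0.22]}
scan2 = {"entropy":    [4.0, 3.8, 4.6, 4.1, 4.2],
         "coarseness": [0.31, 0.10, 0.25, 0.12, 0.41]}

CUTOFF = 0.85  # illustrative stability threshold
stable = [f for f in scan1 if ccc(scan1[f], scan2[f]) >= CUTOFF]
print(stable)  # features deemed stable under this cutoff
```

Consistent with the synthesis, the stable-feature list depends directly on the chosen cutoff, which is why under-reporting of that value hampers comparison across studies.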
Contouring of organs at risk (OARs) is an important but time-consuming part of radiotherapy treatment planning. The aim of this study was to investigate whether software-generated contours save time when used as a starting point for manual OAR contouring for lung cancer patients.
Twenty CT scans of stage I–III NSCLC patients were used to compare user-adjusted contours, generated from atlas-based and deep learning autocontours, against manual delineation. The lungs, esophagus, spinal cord, heart, and mediastinum were contoured for this study. The time to perform the manual tasks was recorded.
With a median time of 20 min for manual contouring, the total median time saved was 7.8 min when using atlas-based contouring and 10 min for deep learning contouring. Adjustment times for both atlas-based and deep learning contours were significantly lower than manual contouring times for all OARs, except for the left lung and esophagus with atlas-based contouring.
User adjustment of software-generated contours is a viable strategy to reduce contouring time of OARs for lung radiotherapy while conforming to local clinical standards. In addition, deep learning contouring shows promising results compared to existing solutions.
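The reported savings reduce to a paired comparison of per-scan contouring times. A minimal sketch with made-up timings (not the study's measurements):

```python
import statistics

# Hypothetical per-scan contouring times in minutes (illustrative only).
manual = [22, 18, 25, 20, 19, 21, 23, 17, 20, 24]
dl_adjust = [11, 9, 13, 10, 10, 12, 11, 8, 10, 12]  # adjusting deep learning contours

median_manual = statistics.median(manual)
median_dl = statistics.median(dl_adjust)
# Median of the per-scan (paired) differences, i.e. the time saved per scan.
saved = statistics.median(m - d for m, d in zip(manual, dl_adjust))
print(median_manual, median_dl, saved)
```

Reporting the median of paired differences, rather than the difference of medians, keeps the comparison per-scan, which matters when contouring time varies strongly between patients.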
Purpose
One of the major hurdles in enabling personalized medicine is obtaining sufficient patient data to feed into predictive models. Combining data originating from multiple hospitals is difficult because of ethical, legal, political, and administrative barriers associated with data sharing. In order to avoid these issues, a distributed learning approach can be used. Distributed learning is defined as learning from data without the data leaving the hospital.
Patients and methods
Clinical data from 287 lung cancer patients, treated with curative intent with chemoradiation (CRT) or radiotherapy (RT) alone, were collected from and stored in 5 different medical institutes (123 patients at MAASTRO (Netherlands, Dutch), 24 at Jessa (Belgium, Dutch), 34 at Liege (Belgium, Dutch and French), 48 at Aachen (Germany, German) and 58 at Eindhoven (Netherlands, Dutch)). A Bayesian network model was adapted for distributed learning (watch the animation: http://youtu.be/nQpqMIuHyOk). The model predicts dyspnea, a common side effect after radiotherapy treatment of lung cancer.
Results
We show that it is possible to use the distributed learning approach to train a Bayesian network model on patient data originating from multiple hospitals without these data leaving the individual hospital. The AUC of the model is 0.61 (95% CI, 0.51–0.70) on 5-fold cross-validation and ranges from 0.59 to 0.71 on external validation sets.
Conclusion
Distributed learning makes it possible to learn predictive models on data originating from multiple hospitals while avoiding many of the data-sharing barriers. Furthermore, the distributed learning approach can be used to extract and employ knowledge from routine patient data from multiple hospitals while remaining compliant with the various national and European privacy laws.
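The study adapts a Bayesian network; as a generic illustration of the principle, the sketch below trains a logistic regression by exchanging only gradients between hypothetical sites, so patient-level records never leave a hospital:

```python
import math

def local_gradient(weights, rows):
    """Logistic-regression gradient computed inside one hospital on its own data."""
    grad = [0.0] * len(weights)
    for features, label in rows:
        z = sum(w * f for w, f in zip(weights, features))
        p = 1.0 / (1.0 + math.exp(-z))
        for j, f in enumerate(features):
            grad[j] += (p - label) * f
    return [g / len(rows) for g in grad]

# Hypothetical per-hospital datasets: (features, dyspnea yes/no).
hospitals = [
    [([1.0, 0.2], 0), ([1.0, 1.5], 1), ([1.0, 0.9], 1)],  # site A
    [([1.0, 0.1], 0), ([1.0, 1.2], 1)],                   # site B
]

weights = [0.0, 0.0]
for _ in range(200):
    # A central server receives only the local gradients, never the data.
    grads = [local_gradient(weights, rows) for rows in hospitals]
    avg = [sum(g[j] for g in grads) / len(grads) for j in range(len(weights))]
    weights = [w - 0.5 * g for w, g in zip(weights, avg)]

print(weights)  # model learned without pooling patient-level data
```

The same communication pattern (share model updates, keep data local) underlies the distributed Bayesian network learning described in the abstract.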
Our hypothesis was that pretreatment inflammation in the lung makes pulmonary tissue more susceptible to radiation damage. The relationship between pretreatment ¹⁸F-fluorodeoxyglucose (¹⁸F-FDG) uptake in the lungs (as a surrogate for inflammation), the delivered radiation dose, and radiation-induced lung toxicity (RILT) was investigated.
We retrospectively studied a prospectively obtained cohort of 101 non-small-cell lung cancer patients treated with (chemo)radiation therapy (RT). The ¹⁸F-FDG positron emission tomography-computed tomography (PET-CT) scans used for treatment planning were studied. Different parameters were used to describe ¹⁸F-FDG uptake patterns in the lungs, excluding clinical target volumes, and the interaction with radiation dose. An increase in dyspnea grade of 1 or more points (Common Terminology Criteria for Adverse Events version 3.0) compared to the pre-RT score was used as the endpoint for analysis of RILT. The effects of ¹⁸F-FDG- and CT-based variables, dose, and other patient or treatment characteristics on RILT were studied using logistic regression.
Increased lung density and pretreatment ¹⁸F-FDG uptake were related to RILT in univariable logistic regression. The 95th percentile of ¹⁸F-FDG uptake in the lungs remained significant in multivariable logistic regression (p = 0.016; odds ratio [OR] = 4.3), together with age (p = 0.029; OR = 1.06) and a pre-RT dyspnea score of ≥1 (p = 0.005; OR = 0.20). Significant interaction effects were demonstrated among the 80th, 90th, and 95th percentiles and the relative lung volumes receiving more than 2 and 5 Gy.
The risk of RILT increased with the 95th percentile of ¹⁸F-FDG uptake in the lungs, excluding the clinical tumor volume (OR = 4.3). The effect became more pronounced as the fraction of the 5%, 10%, and 20% highest standardized-uptake-value voxels receiving more than 2 to 5 Gy increased. Therefore, the risk of RILT may be decreased by applying sophisticated radiotherapy techniques that avoid areas in the lung with high ¹⁸F-FDG uptake.
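The key predictor reduces to a percentile of the SUV distribution over lung voxels outside the target volume. A sketch with hypothetical SUVs (linear-interpolation percentile, one common definition among several):

```python
def percentile(values, q):
    """Linear-interpolation percentile, q in [0, 100]."""
    s = sorted(values)
    k = (len(s) - 1) * q / 100.0
    lo, hi = int(k), min(int(k) + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

# Hypothetical SUVs for lung voxels, CTV voxels already excluded.
suv_lung = [0.4, 0.5, 0.6, 0.5, 0.7, 1.1, 0.8, 0.9, 2.3, 0.6]
suv95 = percentile(suv_lung, 95)  # the predictor retained in the model
print(suv95)
```

Using a high percentile rather than the mean makes the metric sensitive to localized hot spots of uptake, which matches the abstract's finding that the hottest 5–20% of voxels drive the interaction with dose.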
Maternal and child mortality remains high in developing regions such as Southern Ethiopia due to poor maternal and child health. Technologies such as mobile health (mHealth) applications may offer an opportunity to reduce maternal and child mortality because they can improve access to information. Therefore, the main aim of this study was to explore the role of mHealth in improving maternal and child health in Southern Ethiopia.
This study employed a qualitative design to explore the role of mHealth in improving maternal and child health among health professionals in Southern Ethiopia from December 2022 to March 2023. We conducted nine in-depth interviews, six key-informant interviews, and four focus group discussions with health professionals, followed by thematic analysis to synthesize the collected evidence.
The results are based on 226 quotations, 5 major themes, and 24 subthemes. The study participants discussed the possible acceptance of mHealth in terms of its fit within the existing health system, its support for health professionals, and its importance in improving maternal and child health. The participants ascertained the importance of awareness creation among women, families, communities, and providers before the implementation of mHealth. They reported the importance of mHealth for mothers and health professionals and the effectiveness of mHealth services. The participants stated that the main challenges were related to acceptance, awareness, negligence, readiness, and workload. However, they also suggested strategic solutions such as using family support, provider support, mothers' forums, and community forums.
The evidence generated in this analysis provides important information for program implementation and can inform policy-making. Planned interventions should introduce mHealth in Southern Ethiopia, and planners, decision-makers, and researchers can use these findings in mobile technology-related interventions. For the challenges identified, we recommend interventions based on the suggested solutions, as well as further high-quality studies.
This open access book comprehensively covers the fundamentals of clinical data science, focusing on data collection, modelling and clinical applications. Topics covered in the first section on data collection include: data sources, data at scale (big data), data stewardship (FAIR data) and related privacy concerns. Aspects of predictive modelling using techniques such as classification, regression or clustering, and prediction model validation are covered in the second section. The third section covers aspects of (mobile) clinical decision support systems, operational excellence and value-based healthcare. Fundamentals of Clinical Data Science is an essential resource for healthcare professionals and IT consultants intending to develop and refine their skills in personalized medicine, using solutions based on large datasets from electronic health records or telemonitoring programmes. The book’s promise is “no math, no code,” explaining the topics in a style that is optimized for a healthcare audience.
•Presented safeguards ensure productive progress of the radiomic field.
•Radiomic models and features should be tested to determine added prognostic and predictive accuracy compared to accepted clinical factors.
•Radiomic features are susceptible to underlying dependencies and multi-collinearity within models.
•Open-source software should be used in radiomic developments to increase development accountability and facilitate inter-institutional research.
Refinement of radiomic results and methodologies is required to ensure progression of the field. In this work, we establish a set of safeguards designed to improve and support current radiomic methodologies through detailed analysis of a radiomic signature.
A radiomic model (MW2018) was fitted and externally validated using features extracted from previously reported lung and head and neck (H&N) cancer datasets using gross-tumour-volume contours, as well as from images with randomly permuted voxel index values; i.e. images without meaningful texture. To determine MW2018’s added benefit, the prognostic accuracy of tumour volume alone was calculated as a baseline.
MW2018 had an external-validation concordance index (c-index) of 0.64. However, similar performance was achieved using features extracted from images with randomized signal intensities (c-index = 0.64 and 0.60 for H&N and lung, respectively). Tumour volume alone had a c-index of 0.64 and correlated strongly with three of the four model features. It was determined that the signature was a surrogate for tumour volume and that intensity and texture values were not pertinent for prognostication.
Our experiments reveal vulnerabilities in radiomic signature development processes and suggest safeguards that can be used to refine methodologies, and ensure productive radiomic development using objective and independent features.
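The volume-surrogacy check amounts to comparing concordance indices: if a radiomic score's c-index matches tumour volume's and the two rank patients identically, the score adds nothing beyond volume. A simplified sketch with made-up values (censoring, which a real c-index must handle, is ignored here):

```python
def c_index(scores, times):
    """Fraction of comparable patient pairs ranked concordantly
    (higher risk score -> shorter survival time)."""
    conc = ties = total = 0
    n = len(scores)
    for i in range(n):
        for j in range(i + 1, n):
            if times[i] == times[j]:
                continue  # not comparable
            total += 1
            shorter, longer = (i, j) if times[i] < times[j] else (j, i)
            if scores[shorter] > scores[longer]:
                conc += 1
            elif scores[shorter] == scores[longer]:
                ties += 1
    return (conc + 0.5 * ties) / total

# Hypothetical data: a "signature" that is essentially rescaled volume.
volume = [10, 80, 25, 60, 40]          # cc
signature = [0.1, 0.82, 0.24, 0.61, 0.38]
survival = [50, 8, 30, 12, 20]          # months

print(c_index(volume, survival), c_index(signature, survival))
```

When both predictors yield the same c-index and the same patient ordering, as here, reporting the signature's discrimination without the volume baseline would overstate its added value, which is exactly the safeguard the abstract proposes.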
Vaccine-preventable diseases remain major public health problems in Africa, although vaccination is an available, safe, simple, and effective method of prevention. Technologies such as mHealth may provide mothers with access to health information and support decisions on childhood vaccination. Many studies on the role of mHealth in vaccination decisions have been conducted in Africa, but the evidence is not yet conclusive enough to support the introduction of mHealth. This study provides essential information to assist planning and policy decisions regarding the use of mHealth for childhood vaccination.
We conducted a systematic review and meta-analysis of studies applying mHealth for vaccination decisions in Africa, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline. The CINAHL, EMBASE, PubMed, PsycINFO, Scopus, Web of Science, Google Scholar, Global Health, HINARI, and Cochrane Library databases were searched. We screened studies in EndNote X20 and performed the analysis using RevMan 5.4.1.
The database search yielded 1,365 articles; 14 RCTs and 4 quasi-experimental studies with 21,070 participants satisfied all eligibility criteria. The meta-analysis showed that mHealth had an OR of 2.15 (95% CI: 1.70–2.72; P < 0.001; I² = 90%) on vaccination rates. Subgroup analysis showed that regional differences caused the heterogeneity. Funnel plots and Harbord tests showed no evidence of publication bias, while the GRADE scale indicated a moderate-quality body of evidence.
Although heterogeneous, this systematic review and meta-analysis showed that the application of mHealth could potentially improve childhood vaccination in Africa: vaccination rates more than doubled (OR = 2.15) among children whose mothers were motivated by mHealth services. mHealth was more effective in less developed regions and when an additional incentive accompanied the messaging system. It can be provided at a comparably low cost, depending on the development level of the region, and could be established as a routine service in Africa.
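A pooled OR of this kind comes from inverse-variance weighting of study log odds ratios. A fixed-effect sketch with invented study results (the review itself reports substantial heterogeneity, I² = 90%, which a random-effects model would additionally account for):

```python
import math

# Hypothetical per-study results: (odds ratio, 95% CI lower, 95% CI upper).
studies = [(1.8, 1.2, 2.7), (2.6, 1.6, 4.2), (2.0, 1.1, 3.6)]

weights, log_ors = [], []
for or_, ci_lo, ci_hi in studies:
    # Standard error recovered from the CI width on the log scale.
    se = (math.log(ci_hi) - math.log(ci_lo)) / (2 * 1.96)
    weights.append(1 / se ** 2)       # inverse-variance weight
    log_ors.append(math.log(or_))

pooled_log = sum(w * l for w, l in zip(weights, log_ors)) / sum(weights)
pooled_or = math.exp(pooled_log)
print(round(pooled_or, 2))
```

Pooling on the log scale keeps the OR's multiplicative structure; larger (more precise) studies get proportionally more weight, which is the behaviour RevMan implements for this analysis type.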
PROSPERO: CRD42023415956.
Radiomics, the high-throughput mining of quantitative image features from standard-of-care medical imaging that enables data to be extracted and applied within clinical-decision support systems to improve diagnostic, prognostic, and predictive accuracy, is gaining importance in cancer research. Radiomic analysis exploits sophisticated image analysis tools and the rapid development and validation of medical imaging data that uses image-based signatures for precision diagnosis and treatment, providing a powerful tool in modern medicine. Herein, we describe the process of radiomics, its pitfalls, challenges, opportunities, and its capacity to improve clinical decision making, emphasizing the utility for patients with cancer. Currently, the field of radiomics lacks standardized evaluation of both the scientific integrity and the clinical relevance of the numerous published radiomics investigations resulting from the rapid growth of this area. Rigorous evaluation criteria and reporting guidelines need to be established in order for radiomics to mature as a discipline. Herein, we provide guidance for investigations to meet this urgent need in the field of radiomics.
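As a concrete illustration of the feature-extraction step in the radiomics workflow, intensity histogram entropy, a typical first-order feature, can be sketched as follows (made-up voxel intensities, 4 grey-level bins; real pipelines bin and resample far more carefully):

```python
import math

# Hypothetical intensities of voxels inside a tumour mask.
voxels = [12, 14, 13, 40, 41, 39, 38, 13, 12, 40]

bins = 4
lo, hi = min(voxels), max(voxels)
width = (hi - lo) / bins or 1          # guard against a flat region
counts = [0] * bins
for v in voxels:
    counts[min(int((v - lo) / width), bins - 1)] += 1

# Shannon entropy of the grey-level histogram (bits).
probs = [c / len(voxels) for c in counts if c]
entropy = -sum(p * math.log2(p) for p in probs)
print(round(entropy, 3))
```

Because the result depends on the binning and preprocessing choices, two sites computing "entropy" with different settings can disagree, which is precisely the standardization gap the abstract highlights.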
Purpose
Machine learning classification algorithms (classifiers) for the prediction of treatment response are becoming more popular in the radiotherapy literature. The general machine learning literature provides evidence in favor of some classifier families (random forest, support vector machine, gradient boosting) in terms of classification performance. The purpose of this study is to compare such classifiers specifically on (chemo)radiotherapy datasets and to estimate their average discriminative performance for radiation treatment outcome prediction.
Methods
We collected 12 datasets (3496 patients) from prior studies on post‐(chemo)radiotherapy toxicity, survival, or tumor control with clinical, dosimetric, or blood biomarker features from multiple institutions and for different tumor sites, that is, (non‐)small‐cell lung cancer, head and neck cancer, and meningioma. Six common classification algorithms with built‐in feature selection (decision tree, random forest, neural network, support vector machine, elastic net logistic regression, LogitBoost) were applied on each dataset using the popular open‐source R package caret. The R code and documentation for the analysis are available online (https://github.com/timodeist/classifier_selection_code). All classifiers were run on each dataset in a 100‐repeated nested fivefold cross‐validation with hyperparameter tuning. Performance metrics (AUC, calibration slope and intercept, accuracy, Cohen's kappa, and Brier score) were computed. We ranked classifiers by AUC to determine which classifier is likely to also perform well in future studies. We simulated the benefit for potential investigators to select a certain classifier for a new dataset based on our study (pre‐selection based on other datasets) or estimating the best classifier for a dataset (set‐specific selection based on information from the new dataset) compared with uninformed classifier selection (random selection).
Results
Random forest (best in 6/12 datasets) and elastic net logistic regression (best in 4/12 datasets) showed the overall best discrimination, but there was no single best classifier across datasets. Both classifiers had a median AUC rank of 2. Preselection and set‐specific selection yielded a significant average AUC improvement of 0.02 and 0.02 over random selection with an average AUC rank improvement of 0.42 and 0.66, respectively.
Conclusion
Random forest and elastic net logistic regression yield higher discriminative performance in (chemo)radiotherapy outcome and toxicity prediction than other studied classifiers. Thus, one of these two classifiers should be the first choice for investigators when building classification models or to benchmark one's own modeling results against. Our results also show that an informed preselection of classifiers based on existing datasets can improve discrimination over random selection.
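The core comparison boils down to computing each classifier's AUC per dataset and ranking classifiers by it. A sketch of the Mann–Whitney form of the AUC with hypothetical predicted scores (the classifier names are only labels here, not the study's fitted models):

```python
def auc(scores, labels):
    """Probability that a random positive case outscores a random negative one."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted outcome probabilities on one dataset.
labels = [0, 0, 1, 0, 1, 1]
rf_scores = [0.1, 0.3, 0.8, 0.2, 0.7, 0.9]   # "random forest"
dt_scores = [0.4, 0.3, 0.6, 0.5, 0.2, 0.8]   # "decision tree"

print(auc(rf_scores, labels), auc(dt_scores, labels))
```

Repeating this per dataset and taking each classifier's median AUC rank across datasets gives the kind of ranking the study reports (median rank 2 for random forest and elastic net logistic regression); in practice the AUCs would come from the 100-repeated nested fivefold cross-validation, not a single split.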