Radiomics aims to quantify phenotypic characteristics on medical imaging through the use of automated algorithms. Radiomic artificial intelligence (AI) technology, based on either engineered, hard-coded algorithms or deep learning methods, can be used to develop noninvasive imaging-based biomarkers. However, the lack of standardized algorithm definitions and image processing severely hampers the reproducibility and comparability of results. To address this issue, we developed a flexible open-source platform capable of extracting a large panel of engineered features from medical images.
The platform is implemented in Python and can be used standalone or from within 3D Slicer. Here, we discuss its workflow and architecture and demonstrate its application in characterizing lung lesions. Source code, documentation, and examples are publicly available at www.radiomics.io. With this platform, we aim to establish a reference standard for radiomic analyses, provide a tested and maintained resource, and grow the community of radiomics developers addressing critical needs in cancer research.
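To make the idea of "engineered features" concrete, the sketch below computes a few first-order radiomic features (mean, variance, and gray-level histogram entropy) over a masked region of a toy image. This is an illustrative stand-alone sketch, not the platform's actual API; the pixel values, mask, and function name are hypothetical.

```python
import math

def first_order_features(pixels, mask, n_bins=8):
    """Compute simple first-order radiomic features over a masked region.

    pixels: flat list of gray-level intensities
    mask:   flat list of 0/1 flags selecting the region of interest
    """
    roi = [p for p, m in zip(pixels, mask) if m]
    n = len(roi)
    mean = sum(roi) / n
    variance = sum((p - mean) ** 2 for p in roi) / n

    # Shannon entropy of the discretized gray-level histogram
    lo, hi = min(roi), max(roi)
    width = (hi - lo) / n_bins or 1.0
    counts = [0] * n_bins
    for p in roi:
        b = min(int((p - lo) / width), n_bins - 1)
        counts[b] += 1
    entropy = -sum((c / n) * math.log2(c / n) for c in counts if c)
    return {"mean": mean, "variance": variance, "entropy": entropy}

# Toy 4x4 "image" with a binary mask covering a bright lesion
pixels = [10, 12, 11, 9, 40, 42, 41, 43, 40, 41, 39, 44, 10, 11, 12, 9]
mask   = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
feats = first_order_features(pixels, mask)
```

A real extraction pipeline adds many more feature classes (shape, texture) and, crucially, standardized preprocessing (resampling, discretization), which is exactly what the platform standardizes.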
Judgement, as one of the core tenets of medicine, relies upon the integration of multilayered data with nuanced decision making. Cancer offers a unique context for medical decisions given not only its variegated forms with evolution of disease but also the need to take into account the individual condition of patients, their ability to receive treatment, and their responses to treatment. Challenges remain in the accurate detection, characterization, and monitoring of cancers despite improved technologies. Radiographic assessment of disease most commonly relies upon visual evaluations, the interpretations of which may be augmented by advanced computational analyses. In particular, artificial intelligence (AI) promises to make great strides in the qualitative interpretation of cancer imaging by expert clinicians, including volumetric delineation of tumors over time, extrapolation of the tumor genotype and biological course from its radiographic phenotype, prediction of clinical outcome, and assessment of the impact of disease and treatment on adjacent organs. AI may automate processes in the initial interpretation of images and shift the clinical workflow of radiographic detection, management decisions on whether or not to administer an intervention, and subsequent observation to a yet-to-be-envisioned paradigm. Here, the authors review the current state of AI as applied to medical imaging of cancer and describe advances in 4 tumor types (lung, brain, breast, and prostate) to illustrate how common clinical problems are being addressed. Although most studies evaluating AI applications in oncology to date have not been rigorously validated for reproducibility and generalizability, the results do highlight increasingly concerted efforts in pushing AI technology to clinical use and to impact future directions in cancer care.
Non-small-cell lung cancer (NSCLC) patients often demonstrate varying clinical courses and outcomes, even within the same tumor stage. This study explores deep learning applications in medical imaging that allow for the automated quantification of radiographic characteristics and could potentially improve patient stratification.
We performed an integrative analysis on 7 independent datasets across 5 institutions totaling 1,194 NSCLC patients (median age 68.3 years, range 32.5-93.3; median survival 1.7 years, range 0.0-11.7). Using external validation in computed tomography (CT) data, we identified prognostic signatures using a 3D convolutional neural network (CNN) for patients treated with radiotherapy (n = 771; median age 68.0 years, range 32.5-93.3; median survival 1.3 years, range 0.0-11.7). We then employed a transfer learning approach to achieve the same for surgery patients (n = 391; median age 69.1 years, range 37.2-88.0; median survival 3.1 years, range 0.0-8.8). We found that the CNN predictions were significantly associated with 2-year overall survival from the start of the respective treatment for radiotherapy (area under the receiver operating characteristic curve [AUC] = 0.70; 95% CI 0.63-0.78; p < 0.001) and surgery (AUC = 0.71; 95% CI 0.60-0.82; p < 0.001) patients. The CNN was also able to significantly stratify patients into low- and high-mortality-risk groups in both the radiotherapy (p < 0.001) and surgery (p = 0.03) datasets. Additionally, the CNN significantly outperformed random forest models built on clinical parameters (including age, sex, and tumor-node-metastasis stage) and demonstrated high robustness against test-retest (intraclass correlation coefficient = 0.91) and inter-reader (Spearman's rank-order correlation = 0.88) variations. To gain a better understanding of the characteristics captured by the CNN, we identified the regions contributing most to its predictions and highlighted the importance of tumor-surrounding tissue in patient stratification. We also present preliminary findings on the biological basis of the captured phenotypes as being linked to cell-cycle and transcriptional processes. Limitations include the retrospective nature of this study as well as the opaque, black-box nature of deep learning networks.
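The two evaluation steps described here, an AUC for 2-year survival and a split of patients into low- and high-mortality-risk groups, can be sketched in a few lines. The risk scores and outcome labels below are purely illustrative, not the study's data, and the median-split rule is one common hypothetical choice of threshold.

```python
def auc(scores, labels):
    """Rank-based AUC: probability that a positive case (label 1)
    receives a higher risk score than a negative case (label 0)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def median_split(scores):
    """Stratify patients into low/high risk groups at the median score."""
    median = sorted(scores)[len(scores) // 2]
    return ["high" if s >= median else "low" for s in scores]

# Hypothetical CNN risk scores and 2-year mortality labels (1 = died)
scores = [0.9, 0.8, 0.75, 0.6, 0.4, 0.35, 0.2, 0.1]
labels = [1,   1,   0,    1,   0,   0,    1,   0]
model_auc = auc(scores, labels)
groups = median_split(scores)
```

In the study itself, the significance of the stratification was assessed with survival statistics on the resulting groups; the sketch only shows the grouping mechanics.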
Our results provide evidence that deep learning networks may be used for mortality risk stratification based on standard-of-care CT images from NSCLC patients. This evidence motivates future research into better deciphering the clinical and biological basis of deep learning networks as well as validation in prospective data.
Tumor histology is an important predictor of therapeutic response and outcomes in lung cancer. Tissue sampling for pathologist review is the most reliable method for histology classification; however, recent advances in deep learning for medical image analysis point to the utility of radiologic data in further describing disease characteristics and in risk stratification. In this study, we propose a radiomics approach to predicting non-small cell lung cancer (NSCLC) tumor histology from non-invasive standard-of-care computed tomography (CT) data. We trained and validated convolutional neural networks (CNNs) on a dataset comprising 311 early-stage NSCLC patients receiving surgical treatment at Massachusetts General Hospital (MGH), with a focus on the two most common histological types: adenocarcinoma (ADC) and squamous cell carcinoma (SCC). The CNNs were able to predict tumor histology with an AUC of 0.71 (p = 0.018). We also found that applying machine learning classifiers such as k-nearest neighbors (kNN) and support vector machines (SVMs) to CNN-derived quantitative radiomics features yielded comparable discriminative performance, with an AUC of up to 0.71 (p = 0.017). Our best-performing CNN functioned as a robust probabilistic classifier in heterogeneous test sets, with qualitatively interpretable visual explanations for its predictions. Deep learning-based radiomics can identify histological phenotypes in lung cancer and has the potential to augment existing approaches and serve as a corrective aid for diagnosticians.
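The second approach mentioned, running a classical classifier such as kNN on CNN-derived features, can be illustrated with a minimal sketch. The two-dimensional feature vectors below stand in for CNN activations and are invented for illustration; they are not the study's features.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify a feature vector by majority vote among its k nearest
    training points under Euclidean distance."""
    ranked = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 2-D feature vectors (stand-ins for CNN features) per tumor
train = [
    ((0.1, 0.2), "ADC"), ((0.2, 0.1), "ADC"), ((0.15, 0.25), "ADC"),
    ((0.8, 0.9), "SCC"), ((0.9, 0.8), "SCC"), ((0.85, 0.95), "SCC"),
]
pred = knn_predict(train, (0.2, 0.2))
```

The appeal of this two-stage design is that the CNN acts as a fixed feature extractor, so simple, well-understood classifiers can be retrained cheaply on new cohorts.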
The Metaverse has been a centre of attraction for educationists for quite some time. The field gained renewed interest when the social media giant Facebook announced its rebranding and repositioning as Meta. While several studies have conducted literature reviews to summarize findings related to the Metaverse in general, no study, to the best of our knowledge, has focused on systematically summarizing findings related to the Metaverse in education. To cover this gap, this study conducts a systematic literature review of the Metaverse in education. It then applies both content and bibliometric analysis to reveal the research trends, focus, and limitations of this research topic. The obtained findings reveal a research gap in lifelogging applications in the educational Metaverse. The findings also show that the design of the Metaverse in education has evolved over generations, where Generation Z is more often targeted with artificial intelligence technologies than Generation X or Y. In terms of learning scenarios, very few studies have focused on mobile learning, hybrid learning, and micro learning. Additionally, no study has focused on using the Metaverse in education for students with disabilities. The findings of this study provide a roadmap of future research directions to be taken into consideration and investigated to enhance the adoption of the Metaverse in education worldwide, as well as to enhance learning and teaching experiences in the Metaverse.
Radiographic imaging continues to be one of the most effective and clinically useful tools within oncology. The growing sophistication of artificial intelligence has allowed for detailed quantification of radiographic characteristics of tissues using predefined engineered algorithms or deep learning methods. Precedents in radiology, as well as a wealth of research studies, hint at the clinical relevance of these characteristics. However, critical challenges are associated with the analysis of medical imaging data. Although some of these challenges are specific to the imaging field, many others, like reproducibility and batch effects, are generic and have already been addressed in other quantitative fields such as genomics. Here, we identify these pitfalls and provide recommendations for analysis strategies of medical imaging data, including data normalization, development of robust models, and rigorous statistical analyses. Adhering to these recommendations will not only improve analysis quality but also enhance precision medicine by allowing better integration of imaging data with other biomedical data sources.
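Of the recommendations listed, data normalization is the most mechanical. A common choice is per-feature z-scoring, which rescales each feature column to zero mean and unit variance so that features measured on different scales become comparable. A minimal sketch with hypothetical values:

```python
import math

def zscore(values):
    """Standardize a feature column to zero mean and unit variance."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return [(v - mean) / sd for v in values]

# Hypothetical radiomic feature measured on an arbitrary scale
raw = [100.0, 110.0, 120.0, 130.0, 140.0]
norm = zscore(raw)
```

In practice, the normalization statistics must be estimated on the training set only and then applied unchanged to validation data, or the evaluation will leak information.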
Textbooks use images, in addition to text, to deliver knowledge, and thereby convey attitudes and values to students, including those concerning gender bias. The gender bias presented in textbook images affects, in subtle ways, students' learning outcomes, career choices, and how they perceive science. However, prior research has relied on the explicit information presented by textbook images across several subjects to investigate gender representation, overlooking the implicit meaning behind the images and paying very limited attention to science textbooks. Therefore, this study uses a social semiotic framework to analyse the implicit meaning that images convey regarding gender representation in Chinese and Egyptian science textbooks. Specifically, four grade-nine science textbooks (two from each country) were coded and analysed. The findings revealed that a gender gap still exists in the images of both Chinese and Egyptian science textbooks. Specifically, females were less represented in the textbook images than males, and their role was mostly a caring one. Notably, unlike the Chinese females and the common gender stereotype, Egyptian females were represented in a more active and powerful way than males. The findings of this study could help in better designing science textbook images to reduce gender bias.
Man-made armors often rely on rigid structures for mechanical protection, which typically results in a trade-off with flexibility and maneuverability. Chitons, a group of marine mollusks, evolved scaled armors that address similar challenges. Many chiton species possess hundreds of small, mineralized scales arrayed on the soft girdle that surrounds their overlapping shell plates. Ensuring both flexibility for locomotion and protection of the underlying soft body, the scaled girdle is an excellent model for multifunctional armor design. Here we conduct a systematic study of the material composition, nanomechanical properties, three-dimensional geometry, and interspecific structural diversity of chiton girdle scales. Moreover, inspired by the tessellated organization of chiton scales, we fabricate a synthetic flexible scaled-armor analogue using parametric computational modeling and multi-material 3D printing. This approach allows us to quantitatively evaluate our chiton-inspired armor, assessing its orientation-dependent flexibility and protection capabilities.
P-wave receiver functions from 26 stations in the Egyptian National Seismic Network (ENSN) have been modeled using the H-k stacking method and in a joint inversion with Rayleigh-wave group velocities to investigate crustal structure across Egypt and the northern Red Sea region. The new estimates of crustal structure, when combined with previous results, show that along the rifted margins of the Red Sea, Gulf of Suez, and Gulf of Aqaba, crustal thickness ranges from 25 to 30 km, the average crustal Vp/Vs ratio is 1.77, and the average crustal shear-wave velocity is 3.6 km/s. Beneath northern and central Egypt, including the Sinai Peninsula, crustal thickness ranges from 32 to 38 km, the average crustal Vp/Vs ratio is 1.79, and the average crustal shear-wave velocity is 3.5 km/s. Beneath southern Egypt, crustal thickness ranges from 35 to 40 km, the average crustal Vp/Vs ratio is 1.76, and the average crustal shear-wave velocity is 3.7 km/s. In southern Egypt, the crust is also characterized by a 10–20 km thick mafic lower crust. These findings indicate that the crust along the rifted margins of the northern Red Sea and the Gulfs of Suez and Aqaba has been thinned by about 5 to 10 km. The thick mafic lower crust in southern Egypt can be attributed to suturing during the Neoproterozoic collision of east Gondwana against the Sahara metacraton. Overall, the structure of the crust in Egypt away from the northern Red Sea region is similar to the structure of Precambrian crust in many other parts of Africa.
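The H-k method exploits the delay of the Moho Ps conversion behind the direct P arrival: for a single-layer crust of thickness H and ray parameter p, t_Ps = H * (sqrt(1/Vs^2 - p^2) - sqrt(1/Vp^2 - p^2)). The sketch below evaluates this expression for values broadly representative of the results above; the chosen Vp and ray parameter are illustrative assumptions, not the paper's computation.

```python
import math

def ps_delay(h_km, vp, vp_vs, p=0.06):
    """Moho Ps-P delay time (s) for a single-layer crust.

    h_km:  crustal thickness in km
    vp:    average crustal P velocity in km/s
    vp_vs: crustal Vp/Vs ratio
    p:     ray parameter in s/km (0.06 is typical for teleseismic P)
    """
    vs = vp / vp_vs
    return h_km * (
        math.sqrt(1.0 / vs**2 - p**2) - math.sqrt(1.0 / vp**2 - p**2)
    )

# Illustrative values near this study's central-Egypt results:
# H ~ 35 km, assumed Vp ~ 6.3 km/s, Vp/Vs ~ 1.79
delay = ps_delay(35.0, 6.3, 1.79)
```

Because t_Ps grows with both H and Vp/Vs, a single delay time cannot separate the two; H-k stacking resolves the trade-off by also using the later crustal multiples (PpPs, PpSs+PsPs).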
•Crustal structure away from the northern Red Sea is similar to Precambrian crustal structure in other parts of Africa.
•Crust along the rifted margins of the northern Red Sea and the Gulfs of Suez and Aqaba has been thinned by about 5 to 10 km.
•A region with thick mafic lower crust in southern Egypt is attributed to suturing during the Neoproterozoic collision of east Gondwana against the Sahara metacraton.