•A novel medical image enhancement method based on Genetic Algorithms is proposed.•MedGA enhances images characterized by nearly bimodal gray level histograms.•The fitness function strengthens the two underlying intensity distributions.•MedGA considerably outperforms the classical image enhancement techniques.•MedGA achieves excellent results in terms of signal and perceived image quality.
Medical imaging systems often require the application of image enhancement techniques to help physicians in anomaly/abnormality detection and diagnosis, as well as to improve the quality of images that undergo automated image processing. In this work we introduce MedGA, a novel image enhancement method based on Genetic Algorithms that is able to improve the appearance and the visual quality of images characterized by a bimodal gray level intensity histogram, by strengthening their two underlying sub-distributions. MedGA can be exploited as a pre-processing step for the enhancement of images with a nearly bimodal histogram distribution, to improve the results achieved by downstream image processing techniques. As a case study, we use MedGA as a clinical expert system for contrast-enhanced Magnetic Resonance image analysis, considering Magnetic Resonance guided Focused Ultrasound Surgery for uterine fibroids. The performance of MedGA is quantitatively evaluated by means of various image enhancement metrics, and compared against the conventional state-of-the-art image enhancement techniques, namely, histogram equalization, bi-histogram equalization, encoding and decoding Gamma transformations, and sigmoid transformations. We show that MedGA considerably outperforms the other approaches in terms of signal and perceived image quality, while preserving the input mean brightness. MedGA may have a significant impact in real healthcare environments, representing an intelligent solution for Clinical Decision Support Systems in radiology practice for image enhancement, to visually assist physicians during their interactive decision-making tasks, as well as for the improvement of downstream automated processing pipelines that extract clinically useful measurements.
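The abstract does not detail MedGA's internals, but the idea of evolving an intensity transformation that strengthens the two sub-distributions of a nearly bimodal histogram can be conveyed with a toy sketch. Everything below (the sigmoid mapping, the Otsu-style separation fitness, the GA operators, and all parameter values) is a hypothetical illustration, not the published MedGA design:

```python
import numpy as np

def between_class_variance(img, t):
    """Otsu-style separation of the two histogram sub-distributions at threshold t."""
    lo, hi = img[img <= t], img[img > t]
    if lo.size == 0 or hi.size == 0:
        return 0.0
    w0, w1 = lo.size / img.size, hi.size / img.size
    return w0 * w1 * (lo.mean() - hi.mean()) ** 2

def toy_ga_enhance(img, pop_size=20, generations=30, seed=0):
    """Evolve the midpoint of a sigmoid intensity mapping so that the two
    modes of a nearly bimodal image are pulled further apart (toy example)."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(img.min(), img.max(), pop_size)

    def fitness(mid):
        mapped = 255.0 / (1.0 + np.exp(-(img - mid) / 20.0))
        return between_class_variance(mapped, 127.5)

    for _ in range(generations):
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)][-pop_size // 2:]     # truncation selection
        children = parents + rng.normal(0, 5.0, parents.size)  # Gaussian mutation
        pop = np.concatenate([parents, children])
    best = max(pop, key=fitness)
    return 255.0 / (1.0 + np.exp(-(img - best) / 20.0))
```

A single scalar gene keeps the sketch readable; the actual method evolves a much richer encoding of the enhancement transformation.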
•We propose USE-Net that incorporates Squeeze-and-Excitation blocks into U-Net.•It achieves accurate prostate zonal segmentation results on multiple MRI datasets.•Training on multiple datasets provides excellent intra/cross-dataset generalization.•USE-Net remarkably outperforms related methods when trained/tested on all datasets.•Feature recalibration may be a valuable solution in multi-institutional scenarios.
Prostate cancer is the most common malignant tumor in men, but prostate Magnetic Resonance Imaging (MRI) analysis remains challenging. Besides whole prostate gland segmentation, the capability to delineate the blurry boundary between the Central Gland (CG) and the Peripheral Zone (PZ) can lead to differential diagnosis, since the frequency and severity of tumors differ in these regions. To tackle the prostate zonal segmentation task, we propose a novel Convolutional Neural Network (CNN), called USE-Net, which incorporates Squeeze-and-Excitation (SE) blocks into U-Net, i.e., one of the most effective CNNs in biomedical image segmentation. In particular, the SE blocks are added after every encoder block (Enc USE-Net) or after every encoder and decoder block (Enc-Dec USE-Net). This study evaluates the generalization ability of CNN-based architectures on three T2-weighted MRI datasets, each one consisting of a different number of patients and heterogeneous image characteristics, collected by different institutions. The following mixed scheme is used for training/testing: (i) training on either each individual dataset or multiple prostate MRI datasets and (ii) testing on all three datasets with all possible training/testing combinations. USE-Net is compared against three state-of-the-art CNN-based architectures (i.e., U-Net, pix2pix, and Mixed-Scale Dense Network), along with a semi-automatic continuous max-flow model. The results show that training on the union of the datasets generally outperforms training on each dataset separately, allowing for both intra-/cross-dataset generalization. Enc USE-Net shows good overall generalization under any training condition, while Enc-Dec USE-Net remarkably outperforms the other methods when trained on all datasets. These findings reveal that the SE blocks’ adaptive feature recalibration provides excellent cross-dataset generalization when testing is performed on samples of the datasets used during training.
Therefore, we should consider multi-dataset training and SE blocks together as mutually indispensable methods to draw out each other’s full potential. In conclusion, adaptive mechanisms (e.g., feature recalibration) may be a valuable solution in medical imaging applications involving multi-institutional settings.
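The Squeeze-and-Excitation recalibration that USE-Net adds to U-Net is a well-defined operation: global average pooling per channel (squeeze), a two-layer fully connected bottleneck (excite), and channel-wise rescaling. A minimal NumPy sketch, with randomly initialized weights and an arbitrary reduction ratio; in the real network these weights are trained end-to-end inside the CNN:

```python
import numpy as np

def se_block(feature_maps, w1, w2):
    """Squeeze-and-Excitation recalibration for a (C, H, W) feature tensor.

    squeeze: global average pooling per channel -> (C,)
    excite : FC + ReLU down to C // r, then FC + sigmoid back to C
    scale  : multiply each channel by its learned weight in (0, 1)
    """
    z = feature_maps.mean(axis=(1, 2))          # squeeze: (C,)
    s = np.maximum(w1 @ z, 0.0)                 # bottleneck: (C // r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))         # per-channel weights: (C,)
    return feature_maps * s[:, None, None]      # channel-wise rescaling
```

Because the sigmoid weights lie in (0, 1), the block can only attenuate channels; what it learns is *which* channels to attenuate given the global context.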
Abstract
Several studies in Bioinformatics, Computational Biology and Systems Biology rely on the definition of physico-chemical or mathematical models of biological systems at different scales and levels of complexity, ranging from the interaction of atoms in single molecules up to genome-wide interaction networks. Traditional computational methods and software tools developed in these research fields share a common trait: they can be computationally demanding on Central Processing Units (CPUs), therefore limiting their applicability in many circumstances. To overcome this issue, general-purpose Graphics Processing Units (GPUs) are gaining increasing attention from the scientific community, as they can considerably reduce the running time required by standard CPU-based software, and allow more intensive investigations of biological systems. In this review, we present a collection of GPU tools recently developed to perform computational analyses in life science disciplines, emphasizing the advantages and drawbacks of these parallel architectures. The complete list of GPU-powered tools here reviewed is available at http://bit.ly/gputools.
Several mathematical formalisms can be exploited to model complex systems, in order to capture different features of their dynamic behavior and leverage any available quantitative or qualitative data. Correspondingly, either quantitative models or qualitative models can be defined; bridging the gap between these two worlds would allow us to simultaneously exploit the peculiar advantages provided by each modeling approach. However, to date, the attempts in this direction have been limited to specific fields of research. In this paper, we propose a novel, general-purpose computational framework, named Fuzzy-mechanistic modeling of compleX systems (FuzzX), for the analysis of hybrid models consisting of a quantitative (or mechanistic) module and a qualitative module that can reciprocally control each other's dynamic behavior through a common interface. FuzzX takes advantage of precise quantitative information about the system through the definition and simulation of the mechanistic module. At the same time, it describes the behavior of components and their interactions that are not known in full detail, by exploiting fuzzy logic for the definition of the qualitative module. We applied FuzzX for the analysis of a hybrid model of a complex biochemical system, characterized by the presence of positive and negative feedback regulations. We show that FuzzX is able to correctly reproduce known emergent behaviors of this system in normal and perturbed conditions. We envision that FuzzX could be employed to analyze any kind of complex system when quantitative information is limited, as well as to extend existing mechanistic models with fuzzy modules to describe those components and interactions of the system that are not fully characterized.
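The hybrid scheme described above, a mechanistic module and a fuzzy qualitative module steering each other through a common interface, can be illustrated with a deliberately tiny example. The single ODE, the one fuzzy rule, and all constants below are invented for illustration and are not the model analyzed in the paper:

```python
import numpy as np

def fuzzy_high(x, lo=5.0, hi=10.0):
    """Membership of x in the fuzzy set 'high' (linear ramp between lo and hi)."""
    return float(np.clip((x - lo) / (hi - lo), 0.0, 1.0))

def simulate(steps=2000, dt=0.01, k=2.0, d=0.1):
    """Hybrid simulation of dX/dt = k_eff - d*X: the qualitative module scales
    the synthesis rate down as X becomes 'high', i.e., a negative feedback
    expressed as the single fuzzy rule IF X is high THEN reduce synthesis."""
    x, traj = 0.0, []
    for _ in range(steps):
        mu = fuzzy_high(x)               # qualitative module: degree of 'high'
        k_eff = k * (1.0 - 0.9 * mu)     # rule firing strength rescales synthesis
        x += dt * (k_eff - d * x)        # mechanistic module: Euler step
        traj.append(x)
    return np.array(traj)
```

Without the fuzzy rule the system would settle at k/d = 20; with it, the trajectory plateaus well below that, showing the qualitative module genuinely shaping the quantitative dynamics.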
Artificial intelligence is getting a foothold in medicine for disease screening and diagnosis. While typical machine learning methods require large labeled datasets for training and validation, their application is limited in clinical fields since ground truth information can hardly be obtained on a sizeable cohort of patients. Unsupervised neural networks – such as Self-Organizing Maps (SOMs) – represent an alternative approach to identifying hidden patterns in biomedical data. Here we investigate the feasibility of SOMs for the identification of malignant and non-malignant regions in liquid biopsies of thyroid nodules, on a patient-specific basis. MALDI-ToF (Matrix Assisted Laser Desorption Ionization - Time of Flight) mass spectrometry-imaging (MSI) was used to measure the spectral profile of bioptic samples. SOMs were then applied for the analysis of MALDI-MSI data of individual patients’ samples, also testing various pre-processing and agglomerative clustering methods to investigate their impact on SOMs’ discrimination efficacy. The final clustering was compared against the sample’s probability to be malignant, hyperplastic or related to Hashimoto thyroiditis as quantified by multinomial regression with LASSO. Our results show that SOMs are effective in separating the areas of a sample containing benign cells from those containing malignant cells. Moreover, they make it possible to overlap the different areas of cytological glass slides with the corresponding proteomic profile image, and to inspect the specific weight of every cellular component in bioptic samples. We envision that this approach could represent an effective means to assist pathologists in diagnostic tasks, avoiding the need to manually annotate cytological images and the effort of creating labeled datasets.
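The core SOM machinery used here, competitive learning with a shrinking Gaussian neighborhood on a fixed grid, is standard and can be sketched compactly. The grid size, learning-rate and neighborhood schedules below are arbitrary choices rather than the paper's settings, and real MALDI-MSI spectra would replace the toy feature vectors:

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Self-Organizing Map for data of shape (n_samples, n_features):
    each sample pulls its best matching unit (and, with Gaussian decay, its
    grid neighbors) toward itself; learning rate and neighborhood shrink."""
    rng = np.random.default_rng(seed)
    n_units = grid[0] * grid[1]
    w = rng.random((n_units, data.shape[1]))
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)
        sigma = sigma0 * (1 - e / epochs) + 0.5
        for x in rng.permutation(data):
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))   # best matching unit
            h = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
            w += lr * h[:, None] * (x - w)                # neighborhood-weighted pull
    return w

def assign(data, w):
    """Map each sample to its best matching unit (its cluster label)."""
    return np.array([np.argmin(((w - x) ** 2).sum(axis=1)) for x in data])
```

On well-separated profiles, samples from different tissue classes end up mapped to disjoint sets of units, which is the property the study exploits when overlaying SOM clusters on the cytological slides.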
•Application of unsupervised learning for automated clustering of spectra profiles.•Methodology to identify morphological regions of interest in a bioptic sample.•Methodology tested on a case study regarding mass spectra data from thyroid nodules.•Comparison to supervised learning shows effectiveness in separating regions.•Effective tool to assist pathologists by avoiding the need for manual annotation.
Reaction systems are a formal model based on the regulation mechanisms of facilitation and inhibition between biochemical reactions, which underlie the functioning of living cells. The aim of this paper is to explore the expressive power of reaction systems as a modeling framework, showing how their basic assumptions and properties can be exploited to formalize problems from both computer science and biology. In this view, we first provide a reaction-based description of an iterative algorithm to solve the Tower of Hanoi puzzle. Then, we show how the regulation of gene expression in the lac operon, involved in the metabolism of lactose in Escherichia coli cells, can be formalized in terms of reaction systems. Finally, we present a method to derive, given a reaction system with n reactions, a functionally equivalent system with n′≤n reactions using simplification methods of Boolean expressions. Some final remarks and directions for future research conclude the paper.
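The step semantics of a reaction system is simple enough to state directly in code: a reaction (R, I, P) is enabled on the current state T iff R ⊆ T and I ∩ T = ∅, and the successor state is the union of the products of all enabled reactions (entities have no permanency, so anything not produced disappears). The one-reaction lac operon caricature in the test is our own toy, not the paper's full model:

```python
def rs_step(state, reactions):
    """One step of a reaction system. Each reaction is a triple of sets
    (reactants R, inhibitors I, products P); it fires iff all reactants are
    present and no inhibitor is. The next state is the union of the products
    of the firing reactions only (no permanency of unreferenced entities)."""
    result = set()
    for R, I, P in reactions:
        if R <= state and not (I & state):
            result |= P
    return result
```

Note how inhibition is first-class: a single entity in I silences the reaction, which is exactly the mechanism the lac operon formalization relies on (glucose repressing transcription despite lactose being available).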
•We propose an evolutionary-based computational framework for MR images.•Pre-processing tool better separates the sub-distributions in bimodal intensity histograms.•Genetic Algorithms considerably increase the accuracy of segmentation results.•The proposed computational framework outperforms the state-of-the-art approaches.•Measurement repeatability in clinical workflows is highly improved.
Background and Objectives: Image segmentation represents one of the most challenging issues in medical image analysis to distinguish among different adjacent tissues in a body part. In this context, appropriate image pre-processing tools can improve the result accuracy achieved by computer-assisted segmentation methods. Taking into consideration images with a bimodal intensity distribution, image binarization can be used to classify the input pictorial data into two classes, given a threshold intensity value. Unfortunately, adaptive thresholding techniques for two-class segmentation work properly only for images characterized by bimodal histograms. We aim at overcoming these limitations and automatically determining a suitable optimal threshold for bimodal Magnetic Resonance (MR) images, by designing an intelligent image analysis framework tailored to effectively assist the physicians during their decision-making tasks.
Methods: In this work, we present a novel evolutionary framework for image enhancement, automatic global thresholding, and segmentation, which is here applied to different clinical scenarios involving bimodal MR image analysis: (i) uterine fibroid segmentation in MR guided Focused Ultrasound Surgery, and (ii) brain metastatic cancer segmentation in neuro-radiosurgery therapy. Our framework exploits MedGA as a pre-processing stage. MedGA is an image enhancement method based on Genetic Algorithms that improves the selection of the threshold between the two underlying sub-distributions of a nearly bimodal histogram, as computed by the efficient Iterative Optimal Threshold Selection algorithm.
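The Iterative Optimal Threshold Selection step can be sketched as the classic Ridler–Calvard iteration, which IOTS follows in spirit (the exact variant used in the framework may differ): start from the global mean, then repeatedly move the threshold to the midpoint of the means of the two classes it induces, until it stabilizes.

```python
import numpy as np

def iterative_optimal_threshold(img, tol=0.5):
    """Ridler-Calvard style iterative threshold selection on a gray-level
    image: t converges to the midpoint between the two class means, which is
    near-optimal when the histogram is (nearly) bimodal."""
    t = img.mean()
    while True:
        lo, hi = img[img <= t], img[img > t]
        if lo.size == 0 or hi.size == 0:
            return t                      # degenerate split: keep current t
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
```

This also makes the framework's motivation concrete: when the two sub-distributions overlap heavily, the two class means are biased toward each other, so an enhancement stage (MedGA) that pulls the modes apart directly improves the converged threshold.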
Results: The results achieved by the proposed evolutionary framework were quantitatively evaluated, showing that the use of MedGA as a pre-processing stage outperforms the conventional image enhancement methods (i.e., histogram equalization, bi-histogram equalization, Gamma transformation, and sigmoid transformation), in terms of both MR image enhancement and segmentation evaluation metrics.
Conclusions: Thanks to this framework, MR image segmentation accuracy is considerably increased, allowing for measurement repeatability in clinical workflows. The proposed computational solution could be well-suited for other clinical contexts requiring MR image analysis and segmentation, aiming at providing useful insights for differential diagnosis and prognosis.
Abstract
Motivation
Acute myeloid leukemia (AML) is one of the most common hematological malignancies, characterized by high relapse and mortality rates. The inherent intra-tumor heterogeneity in AML is thought to play an important role in disease recurrence and resistance to chemotherapy. Although experimental protocols for cell proliferation studies are well established and widespread, they are not easily applicable to in vivo contexts, and the analysis of related time-series data is often complex to achieve. To overcome these limitations, model-driven approaches can be exploited to investigate different aspects of cell population dynamics.
Results
In this work, we present ProCell, a novel modeling and simulation framework to investigate cell proliferation dynamics that, differently from other approaches, takes into account the inherent stochasticity of cell division events. We apply ProCell to compare different models of cell proliferation in AML, notably leveraging experimental data derived from human xenografts in mice. ProCell is coupled with Fuzzy Self-Tuning Particle Swarm Optimization, a swarm-intelligence settings-free algorithm used to automatically infer the models' parameterizations. Our results provide new insights into the intricate organization of AML cells with highly heterogeneous proliferative potential, highlighting the important role played by quiescent cells and proliferating cells characterized by different rates of division in the progression and evolution of the disease, thus hinting at the necessity to further characterize tumor cell subpopulations.
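The stochastic treatment of division events can be conveyed with a stripped-down sketch: each simulated cell is either quiescent or divides at random intervals, halving its tracking-dye fluorescence at every division, as in dye-dilution flow cytometry experiments. Unlike ProCell, this toy does not expand daughter lineages or fit parameters against target histograms; all parameter values are illustrative:

```python
import numpy as np

def simulate_proliferation(n_cells=1000, p_quiescent=0.3, mean_div_time=24.0,
                           horizon=96.0, f0=1000.0, seed=0):
    """Toy stochastic proliferation: a quiescent cell keeps its initial
    fluorescence f0; a proliferating cell divides at exponentially distributed
    intervals (hours) within the observation horizon, and the dye halves at
    each division. Returns the final per-cell fluorescence values."""
    rng = np.random.default_rng(seed)
    fluor = []
    for _ in range(n_cells):
        if rng.random() < p_quiescent:
            fluor.append(f0)                    # quiescent: dye undiluted
            continue
        t, divisions = 0.0, 0
        while True:
            t += rng.exponential(mean_div_time)
            if t > horizon:
                break
            divisions += 1
        fluor.append(f0 / 2 ** divisions)       # dye halves at each division
    return np.array(fluor)
```

The resulting histogram shows the undiluted quiescent peak next to a ladder of halved peaks; fitting the subpopulation proportions and division rates to such histograms is precisely the task ProCell automates.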
Availability and implementation
The source code of ProCell and the experimental data used in this work are available under the GPL 2.0 license on GitHub at the following URL: https://github.com/aresio/ProCell.
Supplementary information
Supplementary data are available at Bioinformatics online.
Acute myeloid leukemia (AML) is a highly frequent hematological malignancy, characterized by clinical and biological diversity, along with high relapse and mortality rates. The inherent functional and genetic intra-tumor heterogeneity in AML is thought to play an important role in disease recurrence and resistance to chemotherapy. Patient-derived xenograft (PDX) models preserve important features of the original tumor, allowing, at the same time, experimental manipulation and in vivo amplification of the human cells. Here we present a detailed protocol for the generation of fluorescently labeled AML PDX models to monitor cell proliferation kinetics in vivo, at the single-cell level. Although experimental protocols for cell proliferation studies are well established and widespread, they are not easily applicable to in vivo contexts, and the analysis of related time-series data is often complex to achieve. To overcome these limitations, model-driven approaches can be exploited to investigate different aspects of cell population dynamics. Among the existing approaches, the ProCell framework is able to perform detailed and accurate stochastic simulations of cell proliferation, relying on flow cytometry data. In particular, by providing an initial and a target fluorescence histogram, ProCell automatically assesses the validity of any user-defined scenario of intra-tumor heterogeneity, that is, it is able to infer the proportion of various cell subpopulations (including quiescent cells) and the division interval of proliferating cells. Here we explain the protocol in detail, providing a description of our methodology for the conditional expression of H2B-GFP in human AML xenografts, data processing by flow cytometry, and the final elaboration in ProCell.
GPU-powered Simulation Methodologies for Biological Systems
Besozzi, Daniela; Caravagna, Giulio; Cazzaniga, Paolo ...
Electronic Proceedings in Theoretical Computer Science, 01/2013, Volume 130 (Proc. Wivace 2013)
Journal Article
Open Access
The study of biological systems has witnessed a pervasive cross-fertilization between experimental investigation and computational methods. This gave rise to the development of new methodologies able to tackle the complexity of biological systems in a quantitative manner. Computer algorithms make it possible to faithfully reproduce the dynamics of the corresponding biological system and, at the price of a large number of simulations, to extensively investigate the system's functioning across a wide spectrum of natural conditions. To enable multiple analyses in parallel on cheap, widespread, and highly efficient multi-core devices, we developed GPU-powered simulation algorithms for stochastic, deterministic, and hybrid modeling approaches, so that users with no knowledge of GPU hardware and programming can easily access the computing power of graphics engines.