Lung cancer affects a large number of people today, mainly because of genetic changes in lung tissue. Other factors, such as smoking, alcohol, and exposure to dangerous gases, can also be considered contributory causes of lung cancer. Because of its serious consequences, medical associations have been striving to diagnose lung cancer at an early stage of growth by applying computer-aided diagnosis (CAD). Although CAD systems at healthcare centers can diagnose lung cancer during its early stage of growth, high detection accuracy is difficult to achieve, mainly because of overfitting on lung cancer features and the dimensionality of the feature set. This paper therefore introduces effective, optimized neural computing and soft computing techniques to minimize these difficulties and issues in the feature set. First, lung biomedical data were collected from the ELVIRA Biomedical Data Set Repository. Noise in the data was eliminated by applying a bin smoothing normalization process. Minimum-repetition and Wolf heuristic features were then selected to reduce the dimensionality and complexity of the feature set. The selected lung features were analyzed using discrete AdaBoost-optimized ensemble learning generalized neural networks, which successfully analyzed the biomedical lung data and classified normal and abnormal features with high effectiveness. The efficiency of the system was then evaluated in a MATLAB experimental setup in terms of error rate, precision, recall, G-mean, F-measure, and prediction rate.
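The bin smoothing normalization step mentioned above can be read as classic smoothing by bin means. The sketch below is one plausible interpretation under that assumption; the bin count and sample values are illustrative, not taken from the paper.

```python
# Noise reduction via equal-frequency bin smoothing (smoothing by bin means).
# Assumption: "bin smoothing normalization" refers to this standard technique.

def bin_smooth(values, n_bins):
    """Replace each value with the mean of its equal-frequency bin."""
    ordered = sorted(values)
    size = len(ordered) // n_bins
    smoothed = {}
    for b in range(n_bins):
        start = b * size
        end = len(ordered) if b == n_bins - 1 else start + size
        bin_vals = ordered[start:end]
        mean = sum(bin_vals) / len(bin_vals)
        for v in bin_vals:
            # a duplicate value spanning two bins keeps its first bin's mean
            smoothed.setdefault(v, mean)
    return [smoothed[v] for v in values]

data = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]
print(bin_smooth(data, 3))
# → [9.0, 9.0, 9.0, 9.0, 22.75, 22.75, 22.75, 22.75, 29.25, 29.25, 29.25, 29.25]
```

Each raw measurement is pulled toward its local bin mean, which damps isolated noisy readings while preserving the overall trend of the data.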
•This paper presents a comprehensive review of the general architecture of 1D CNNs.
•Their major engineering applications, principles, and recent progress on 1D CNNs are reviewed.
•The state-of-the-art performance and unique properties of 1D CNNs are highlighted.
•A detailed computational complexity analysis of compact and adaptive 1D CNNs is reported.
•The benchmark datasets and the principal 1D CNN software are also publicly shared.
During the last decade, Convolutional Neural Networks (CNNs) have become the de facto standard for various Computer Vision and Machine Learning operations. CNNs are feed-forward Artificial Neural Networks (ANNs) with alternating convolutional and subsampling layers. Deep 2D CNNs with many hidden layers and millions of parameters can learn complex objects and patterns, provided that they can be trained on a massive visual database with ground-truth labels. With proper training, this unique ability makes them the primary tool for various engineering applications over 2D signals such as images and video frames. Yet this may not be a viable option in numerous applications over 1D signals, especially when the training data is scarce or application-specific. To address this issue, 1D CNNs have recently been proposed and have immediately achieved state-of-the-art performance in several applications, such as personalized biomedical data classification and early diagnosis, structural health monitoring, anomaly detection and identification in power electronics, and electrical motor fault detection. Another major advantage is that real-time, low-cost hardware implementation is feasible thanks to the simple and compact configuration of 1D CNNs, which perform only 1D convolutions (scalar multiplications and additions). This paper presents a comprehensive review of the general architecture and principles of 1D CNNs along with their major engineering applications, with a particular focus on recent progress in this field. Their state-of-the-art performance is highlighted, concluding with their unique properties. The benchmark datasets and the principal 1D CNN software used in those applications are also publicly shared on a dedicated website. As no review of 1D CNNs and their applications has yet appeared in the literature, this paper fills that gap.
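The core operation the abstract credits for the low hardware cost, a 1D convolution built from scalar multiplications and additions, can be sketched in a few lines. The input signal and kernel values below are illustrative.

```python
# Valid-mode 1D convolution (cross-correlation, as used in CNN layers):
# a sliding window of scalar multiply-adds over a 1D signal.

def conv1d(signal, kernel):
    """Slide the kernel over the signal and accumulate multiply-adds."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

x = [1.0, 2.0, 3.0, 4.0, 5.0]
w = [0.5, 0.0, -0.5]          # a simple difference filter
print(conv1d(x, w))           # → [-1.0, -1.0, -1.0]
```

Each output sample costs only `len(kernel)` multiplications and additions, which is why compact 1D CNNs map well onto real-time, low-cost hardware.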
Implementing precision medicine hinges on the integration of omics data, such as proteomics, into the clinical decision-making process, but the quantity and diversity of biomedical data, and the spread of clinically relevant knowledge across multiple biomedical databases and publications, pose a challenge to data integration. Here we present the Clinical Knowledge Graph (CKG), an open-source platform currently comprising close to 20 million nodes and 220 million relationships that represent relevant experimental data, public databases and literature. The graph structure provides a flexible data model that is easily extendable to new nodes and relationships as new databases become available. The CKG incorporates statistical and machine learning algorithms that accelerate the analysis and interpretation of typical proteomics workflows. Using a set of proof-of-concept biomarker studies, we show how the CKG might augment and enrich proteomics data and help inform clinical decision-making.
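The property-graph data model the CKG abstract describes, labeled nodes connected by typed relationships, can be illustrated with a tiny in-memory sketch. The identifiers, labels, and relationship type below are hypothetical examples, not taken from the CKG schema.

```python
# Minimal sketch of a property-graph data model: nodes carry a label and
# properties; relationships are typed, directed edges between node ids.
# All names here are illustrative, not the actual CKG schema.

nodes = {
    "P04637": {"label": "Protein", "name": "TP53"},
    "D001943": {"label": "Disease", "name": "Breast Neoplasms"},
}
relationships = [
    ("P04637", "ASSOCIATED_WITH", "D001943"),
]

def neighbors(node_id, rel_type):
    """Return node ids reachable from node_id via rel_type edges."""
    return [dst for src, rel, dst in relationships
            if src == node_id and rel == rel_type]

for dst in neighbors("P04637", "ASSOCIATED_WITH"):
    print(nodes[dst]["name"])  # → Breast Neoplasms
```

Because nodes and edges are just records, extending the model to a new database is a matter of adding new labels and relationship types, which is the flexibility the abstract highlights.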
U-Net is a generic deep-learning solution for frequently occurring quantification tasks such as cell detection and shape measurements in biomedical image data. We present an ImageJ plugin that enables non-machine-learning experts to analyze their data with U-Net on either a local computer or a remote server/cloud service. The plugin comes with pretrained models for single-cell segmentation and allows for U-Net to be adapted to new tasks on the basis of a few annotated samples.
Segmenting the nuclei of cells in microscopy images is often the first step in the quantitative analysis of imaging data for biological and biomedical applications. Many bioimage analysis tools can segment nuclei in images but need to be selected and configured for every experiment. The 2018 Data Science Bowl attracted 3,891 teams worldwide to make the first attempt to build a segmentation method that could be applied to any two-dimensional light microscopy image of stained nuclei across experiments, with no human interaction. Top participants in the challenge succeeded in this task, developing deep-learning-based models that identified cell nuclei across many image types and experimental conditions without the need to manually adjust segmentation parameters. This represents an important step toward configuration-free bioimage analysis software tools.
Gaining knowledge and actionable insights from complex, high-dimensional and heterogeneous biomedical data remains a key challenge in transforming health care. Various types of data have been emerging in modern biomedical research, including electronic health records, imaging, -omics, sensor data and text, which are complex, heterogeneous, poorly annotated and generally unstructured. Traditional data mining and statistical learning approaches typically need to first perform feature engineering to obtain effective and more robust features from those data, and then build prediction or clustering models on top of them. Both steps pose substantial challenges when the data are complicated and sufficient domain knowledge is lacking. The latest advances in deep learning technologies provide new effective paradigms for obtaining end-to-end learning models from complex data. In this article, we review the recent literature on applying deep learning technologies to advance the health care domain. Based on the analyzed work, we suggest that deep learning approaches could be the vehicle for translating big biomedical data into improved human health. However, we also note limitations and the need for improved methods development and applications, especially in terms of ease of understanding for domain experts and citizen scientists. We discuss such challenges and suggest developing holistic and meaningful interpretable architectures to bridge deep learning models and human interpretability.
Magnetism, originating from the moving charges and spin of elementary particles, has revolutionized important technologies such as data storage and biomedical imaging, and continues to bring forth new phenomena in emergent materials and reduced dimensions. The recently discovered two-dimensional (2D) magnetic van der Waals crystals provide ideal platforms for understanding 2D magnetism, the control of which has been fueling opportunities for atomically thin, flexible magneto-optic and magnetoelectric devices (such as magnetoresistive memories and spin field-effect transistors). The seamless integration of 2D magnets with dissimilar electronic and photonic materials opens up exciting possibilities for unprecedented properties and functionalities. We review the progress in this area and identify the possible directions for device applications, which may lead to advances in spintronics, sensors, and computing.
Feature selection is a preprocessing technique that identifies the key features of a given problem. It has traditionally been applied in a wide range of problems that include biological data processing, finance, and intrusion detection systems. In particular, feature selection has been successfully used in medical applications, where it can not only reduce dimensionality but also help us understand the causes of a disease. We describe some basic concepts related to medical applications and provide some necessary background information on feature selection. We review the most recent feature selection methods developed for and applied in medical problems, covering prolific research fields such as medical imaging, biomedical signal processing, and DNA microarray data analysis. A case study of two medical applications that includes actual patient data is used to demonstrate the suitability of applying feature selection methods in medical problems and to illustrate how these methods work in real-world scenarios.
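The dimensionality-reduction role of feature selection described above can be illustrated with a minimal filter-style method: score each feature by how well it separates two classes (difference of class means over pooled spread), then keep the top-scoring ones. The toy dataset is illustrative; the methods surveyed in the paper (e.g. for microarray data) use richer criteria such as mutual information or wrapper and embedded approaches.

```python
# Filter-style feature selection sketch: a t-statistic-like score per feature.
# Higher score = the feature's class means are far apart relative to spread.

def feature_scores(X, y):
    """Score each column of X by |class-mean difference| / pooled std."""
    n_feat = len(X[0])
    scores = []
    for j in range(n_feat):
        col0 = [row[j] for row, label in zip(X, y) if label == 0]
        col1 = [row[j] for row, label in zip(X, y) if label == 1]
        m0 = sum(col0) / len(col0)
        m1 = sum(col1) / len(col1)
        pooled_var = (sum((v - m0) ** 2 for v in col0) +
                      sum((v - m1) ** 2 for v in col1)) / (len(y) - 2)
        scores.append(abs(m1 - m0) / (pooled_var ** 0.5 + 1e-12))
    return scores

# Toy data: feature 0 separates the classes, feature 1 is uninformative.
X = [[1.0, 5.0], [1.2, 3.0], [3.1, 4.9], [2.9, 3.1]]
y = [0, 0, 1, 1]
scores = feature_scores(X, y)
best = max(range(len(scores)), key=scores.__getitem__)
print(best)  # → 0
```

Keeping only the top-k features by such a score is what shrinks, say, a microarray profile of thousands of genes down to a handful of candidate disease markers.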
Graphical abstract (display omitted): general steps of the feature selection approaches and their main benefits.
•A survey of feature selection methods developed for and/or applied to medical applications.
•Background information for researchers who are not familiar enough with certain terms.
•A case study of two medical applications to demonstrate the adequacy of feature selection in this domain.