In recent years, the worldwide incidence of skin cancer has risen, mainly due to prolonged exposure to harmful ultraviolet radiation. Concurrently, the computer-assisted medical diagnosis of skin cancer has undergone major advances, through improvements in instrumentation and detection technology and the development of algorithms to process the acquired information. Moreover, because there has been an increased need to store medical data for monitoring, comparative and assisted-learning purposes, algorithms for data processing and storage have also become more efficient in handling the growing volume of data. In addition, the potential use of common mobile devices to register high-resolution images of skin lesions has fueled the need for real-time processing algorithms that can estimate the likelihood of malignancy. This possibility allows even non-specialists to monitor and follow up suspected skin cancer cases. In this review, we present the major steps in the pre-processing, processing and post-processing of skin lesion images, with particular emphasis on the quantification and classification of pigmented skin lesions. We further review and outline the future challenges for the creation of minimum-feature, automated, real-time algorithms for the detection of skin cancer from images acquired via common mobile devices.
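As an illustration of the quantification step mentioned above (not taken from the review itself), the sketch below computes a simple lesion-asymmetry feature of the kind used in classical ABCD-style rules; the mask construction and the mirror-overlap definition are assumptions for the toy example:

```python
import numpy as np

def asymmetry_score(mask: np.ndarray) -> float:
    """Fraction of lesion pixels with no counterpart in the left-right mirror.

    0.0 means perfectly symmetric about the vertical axis; values near 1.0
    mean the lesion and its mirror barely overlap.
    """
    mirrored = mask[:, ::-1]
    return float(np.logical_and(mask, ~mirrored).sum() / max(mask.sum(), 1))

# Toy masks: a centred disk is nearly symmetric, a half-disk is not.
yy, xx = np.mgrid[:64, :64]
disk = (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2
half = disk & (xx < 32)
print(asymmetry_score(disk))   # near 0
print(asymmetry_score(half))   # near 1
```

In practice the mirror axis would be the lesion's principal axis rather than the image's vertical axis, but the overlap-based definition is the same.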
Recently, the widespread adoption of new technologies by a large majority of the world population has generated a tremendous amount of data, including clinical data. These clinical data have been gathered and interpreted by medical organizations in order to gain insights and knowledge useful for clinical decisions, drug recommendations and better diagnoses, among many other uses. This paper highlights the enormous impact of big data on medical stakeholders, patients, physicians, pharmaceutical and medical operators, and healthcare insurers, and also reviews the challenges that must be addressed to obtain the greatest benefit from all this big data and the available applications.
Requests for caring for and monitoring the health and safety of older adults are increasing and form a topic of great social interest. One issue that raises serious concern is human falls, especially among aged people. Computer vision techniques can be used to identify fall events, and Deep Learning methods can detect them with high accuracy; such imaging-based solutions are a good alternative to body-worn ones. This article proposes a novel human fall detection solution based on the Fast Pose Estimation method. The solution uses Time-Distributed Convolutional Long Short-Term Memory (TD-CNN-LSTM) and 1-Dimensional Convolutional Neural Network (1D-CNN) models to classify the data extracted from image frames, achieving high accuracies: 98% and 97% for the 1D-CNN and TD-CNN-LSTM models, respectively. By applying the Fast Pose Estimation method, which has not been used before for this purpose, the proposed solution is an effective contribution to accurate human fall detection and can be deployed on edge devices due to its low computational and memory demands.
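To make the pipeline concrete, the sketch below (an illustration, not the authors' code) turns pose keypoints into a short per-frame feature vector; stacking such vectors over time yields the (frames, features) sequence a 1D-CNN or TD-CNN-LSTM would classify. The reduced five-keypoint layout and the two features are assumptions for the toy example:

```python
import numpy as np

# Hypothetical reduced keypoint layout (an assumption, not the paper's):
# nose, left hip, right hip, left ankle, right ankle, as (x, y) with y pointing down.
NOSE, L_HIP, R_HIP, L_ANKLE, R_ANKLE = range(5)

def frame_features(kps):
    """Two posture features from a (5, 2) keypoint array for one frame."""
    hip = (kps[L_HIP] + kps[R_HIP]) / 2.0
    torso = kps[NOSE] - hip
    tilt = np.arctan2(abs(torso[0]), abs(torso[1]) + 1e-9)   # 0 rad = upright
    aspect = np.ptp(kps[:, 0]) / (np.ptp(kps[:, 1]) + 1e-9)  # width / height
    return np.array([tilt, aspect])  # both grow sharply when the person lies down

upright = np.array([[10, 0], [8, 50], [12, 50], [8, 100], [12, 100]], float)
lying = np.array([[0, 50], [50, 48], [50, 52], [100, 48], [100, 52]], float)
features = np.stack([frame_features(f) for f in (upright, lying)])  # (T, 2) sequence
```

Extracting a handful of scalars per frame, rather than feeding raw images to the network, is what keeps the computational and memory demands low enough for edge deployment.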
Leukaemia is a dysfunction that affects the production of white blood cells in the bone marrow. Young cells are abnormally produced, replacing normal blood cells. Consequently, the person suffers problems in transporting oxygen and in fighting infections. This article proposes a convolutional neural network (CNN) named LeukNet, inspired by the convolutional blocks of VGG-16 but with smaller dense layers. To define the LeukNet parameters, we evaluated different CNN models and fine-tuning methods using 18 image datasets with different resolution, contrast, colour and texture characteristics. We applied data augmentation operations to expand the training dataset, and 5-fold cross-validation led to an accuracy of 98.61%. To evaluate the generalisation ability of the CNNs, we applied a cross-dataset validation technique. The accuracies obtained in cross-dataset experiments on three datasets were 97.04%, 82.46% and 70.24%, surpassing the accuracies obtained by current state-of-the-art methods. We conclude that using the most common and deepest CNNs may not be the best choice for applications where the images to be classified differ from those used in pre-training. Additionally, the adopted cross-dataset validation approach proved to be an excellent way to evaluate the generalisation capability of a model, as it considers the model's performance on unseen data, which is paramount for CAD systems.
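The essence of cross-dataset validation is to fit on one dataset and score on a different one, never seen during training. The sketch below illustrates this protocol with a toy nearest-centroid classifier and synthetic 2-D data; the classifier, the data, and the simulated domain shift are all assumptions made purely for illustration:

```python
import numpy as np

def fit_centroids(X, y):
    """Toy stand-in for a trained classifier: one centroid per class."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def predict(model, X):
    classes, centroids = model
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

def accuracy(model, X, y):
    return float((predict(model, X) == y).mean())

rng = np.random.default_rng(0)
# Dataset A: two well-separated classes; dataset B: same classes under a shift.
X_a = np.vstack([rng.normal((0, 0), 0.5, (50, 2)), rng.normal((4, 4), 0.5, (50, 2))])
y_a = np.repeat([0, 1], 50)
X_b, y_b = X_a + 1.0, y_a  # simulated acquisition/domain shift

model = fit_centroids(X_a, y_a)          # train on dataset A only
acc_within = accuracy(model, X_a, y_a)   # conventional within-dataset score
acc_cross = accuracy(model, X_b, y_b)    # cross-dataset score on unseen data
```

With real leukaemia datasets the shift comes from differences in resolution, contrast, colour and texture, and the gap between the within-dataset and cross-dataset scores is exactly what the protocol exposes.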
With the rapid growth and development of cities, Intelligent Traffic Management and Control (ITMC) is becoming a fundamental component to address the challenges of modern urban traffic management, where a wide range of daily problems need to be addressed in a prompt and expedited manner. Issues such as unpredictable traffic dynamics, resource constraints, and abnormal events pose difficulties to city managers. ITMC aims to increase the efficiency of traffic management by minimizing the odds of traffic problems and by providing real-time traffic state forecasts to better schedule the intersection signal controls. Reliable implementations of ITMC improve the safety of inhabitants and the quality of life, leading to economic growth. In recent years, researchers have proposed different solutions to address specific problems concerning traffic management, ranging from image-processing and deep-learning techniques to forecasting the traffic state and deriving policies to control intersection signals. This review article studies the primary public datasets helpful in developing models to address the identified problems, complemented with a deep analysis of the works related to traffic state forecast and intersection-signal-control models. Our analysis found that deep-learning-based approaches for short-term traffic state forecast and multi-intersection signal control showed reasonable results, but lacked robustness for unusual scenarios, particularly during oversaturated situations; explicitly addressing these cases could lead to significant overall improvements of the systems. However, there is arguably a long path until these models can be used safely and effectively in real-world scenarios.
The crowd counting task has become a pillar for crowd control, as it provides information concerning the number of people in a scene. It is helpful in many scenarios, such as video surveillance, public safety, and future event planning. To solve such tasks, researchers have proposed different solutions. In the beginning, researchers favoured more traditional solutions, while more recently the focus has been on deep learning methods and, more specifically, on Convolutional Neural Networks (CNNs), because of their efficiency. This review explores these methods by focusing on their key differences, advantages, and disadvantages. We have systematically analyzed algorithms and works based on the different models suggested and the problems they are trying to solve. The main focus is on the shift made in the history of crowd counting methods, moving from heuristic models to CNN models, by identifying each category and discussing its different methods and architectures. After a deep study of the literature on crowd counting, the survey partitions current datasets into sparse and crowded ones and discusses the reviewed methods by comparing their results on the different datasets. The findings suggest that heuristic models can be even more effective than CNN models in sparse scenarios.
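A central ingredient of the CNN-based methods surveyed here is the ground-truth density map: each annotated head becomes a small Gaussian, and the integral of the map recovers the count. The sketch below (an illustration, not from any specific surveyed paper; the fixed-sigma kernel is an assumption, since many works use geometry-adaptive kernels) shows the idea:

```python
import numpy as np

def density_map(points, shape, sigma=4.0):
    """Ground-truth density map: one Gaussian per head; its sum = head count."""
    h, w = shape
    yy, xx = np.mgrid[:h, :w]
    dmap = np.zeros(shape)
    for (px, py) in points:
        g = np.exp(-((xx - px) ** 2 + (yy - py) ** 2) / (2 * sigma ** 2))
        dmap += g / g.sum()  # normalise so each head contributes exactly 1
    return dmap

heads = [(10, 12), (40, 30), (55, 50)]  # toy (x, y) head annotations
d = density_map(heads, (64, 64))
print(round(float(d.sum()), 3))  # the crowd count, 3.0
```

A counting CNN is then trained to regress this map from the image, so the predicted count is simply the sum of the network's output.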
Medical image registration: a review
Oliveira, Francisco P.M.; Tavares, João Manuel R.S.
Computer Methods in Biomechanics and Biomedical Engineering, 2014, Volume 17, Issue 2
Journal Article, Peer-reviewed
This paper presents a review of automated image registration methodologies that have been used in the medical field. It aims to serve as an introduction to the field, provide an overview of the work that has been developed, and be a suitable reference for those looking for registration methods for a specific application. The registration methodologies under review are classified as intensity-based or feature-based. The main steps of these methodologies, the common geometric transformations, the similarity measures and the accuracy assessment techniques are introduced and described.
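The intensity-based family reviewed here optimises a similarity measure over a space of geometric transformations. The minimal sketch below (an illustration, not the paper's method) uses the sum of squared differences as the similarity measure, integer translation as the transformation, and exhaustive search as the optimiser; the circular `np.roll` translation is a toy-example assumption:

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences: a basic intensity-based similarity measure."""
    return float(((a - b) ** 2).sum())

def translate(img, dx, dy):
    """Integer translation (one of the simplest geometric transformations)."""
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

rng = np.random.default_rng(1)
fixed = rng.random((32, 32))
moving = translate(fixed, 3, 0)  # the moving image is the fixed one shifted 3 px

# Exhaustive search over candidate shifts: the best shift undoes the offset.
best = min(range(-5, 6), key=lambda dx: ssd(fixed, translate(moving, dx, 0)))
print(best)  # -3
```

Real registration methods replace each of these three ingredients: mutual information or normalised cross-correlation instead of SSD, affine or deformable transformations instead of translation, and gradient-based optimisers instead of exhaustive search.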
The analysis of ambient sounds can be very useful when developing sound-based intelligent systems. Acoustic scene classification (ASC) is defined as identifying the scene in which a sound clip was recorded, from among a set of predefined scenes. ASC has huge potential for use in urban sound event classification systems. This research presents a hybrid method that includes a novel mathematical fusion step, aiming to tackle the challenges of ASC accuracy and the adaptability of current state-of-the-art models. The proposed method uses a stereo signal, two ensemble classifiers (random subspace), and a novel mathematical fusion step. In the proposed method, a stable, invariant representation of the stereo signal is built using the Wavelet Scattering Transform (WST). For each mono channel, i.e., left and right, a different random subspace classifier is trained on the WST representation. A novel mathematical formula for the fusion step was developed, with its parameters found using a genetic algorithm. Results on the DCASE 2017 dataset show that the proposed method achieves higher classification accuracy (about 95%), pushing the boundaries of existing methods.
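The abstract does not give the fusion formula, so the sketch below is purely illustrative: it fuses the two per-channel class posteriors with a hypothetical two-parameter rule and tunes the parameters by random search as a stand-in for the paper's genetic algorithm. The formula, its parameters, and the toy posteriors are all assumptions:

```python
import numpy as np

def fuse(p_left, p_right, w, gamma):
    """Hypothetical parametric fusion of the two per-channel class posteriors."""
    fused = w * p_left ** gamma + (1 - w) * p_right ** gamma
    return fused / fused.sum(axis=1, keepdims=True)

def fit_fusion_params(p_left, p_right, y, rng, iters=200):
    """Random search over (w, gamma); a stand-in for the paper's genetic algorithm."""
    best, best_acc = (0.5, 1.0), -1.0
    for _ in range(iters):
        w, gamma = rng.uniform(0, 1), rng.uniform(0.5, 2.0)
        acc = float((fuse(p_left, p_right, w, gamma).argmax(axis=1) == y).mean())
        if acc > best_acc:
            best, best_acc = (w, gamma), acc
    return best, best_acc

# Toy posteriors: each channel alone misclassifies one sample; fusion fixes both.
y = np.array([0, 0, 1, 1])
p_left = np.array([[.9, .1], [.8, .2], [.6, .4], [.4, .6]])
p_right = np.array([[.6, .4], [.4, .6], [.1, .9], [.2, .8]])
params, best_acc = fit_fusion_params(p_left, p_right, y, np.random.default_rng(0))
```

The point of the toy example is the complementarity: neither channel is perfect alone, but a tuned combination of the two posteriors can be.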
Classification and analysis of surface EMG (sEMG) signals have been of particular interest due to their numerous applications in the biomedical field. They can be used for the diagnosis of neuromuscular diseases, kinesiological studies, and human-machine interaction. However, these signals are difficult to process due to their noisy nature. To overcome this problem, a hybrid pre-processing technique combining Wavelet De-noising with Ensemble Empirical Mode Decomposition, called WD-EEMD, is proposed for classifying lower limb activities based on sEMG signals in healthy subjects and subjects with knee abnormalities. First, Wavelet De-noising is used to filter out white Gaussian noise (WGN) and unwanted signals (contributions from other muscles). Next, Ensemble Empirical Mode Decomposition is used to filter out power line interference (PLI) and baseline wandering (BW) noise, followed by the extraction of a total of nine time-domain features. Finally, the performance parameters of a Linear Discriminant Analysis (LDA) classifier are calculated with a 3-fold cross-validation technique. The study involves 11 healthy individuals and 11 individuals with a knee abnormality performing three different activities: walking, flexion of the leg up (standing), and leg extension from a sitting position (sitting). Pre-processing techniques similar to WD-EEMD were also compared. The proposed method achieves an average classification accuracy of 90.69% for healthy subjects and 97.45% for knee-abnormal subjects.
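The abstract lists nine time-domain features without naming them; the sketch below computes four features that are standard in the sEMG literature (mean absolute value, root mean square, waveform length, zero crossings) as an illustration of this stage. The choice of these four, and the sine stand-in for a cleaned sEMG window, are assumptions:

```python
import numpy as np

def time_domain_features(x):
    """Four classic sEMG time-domain features (illustrative subset of the nine)."""
    mav = np.mean(np.abs(x))                  # mean absolute value
    rms = np.sqrt(np.mean(x ** 2))            # root mean square
    wl = np.sum(np.abs(np.diff(x)))           # waveform length
    zc = int(np.sum(np.diff(np.signbit(x))))  # zero-crossing count
    return np.array([mav, rms, wl, zc])

t = np.linspace(0, 1, 1000, endpoint=False)
x = np.sin(2 * np.pi * 5 * t)                 # toy stand-in for a de-noised window
mav, rms, wl, zc = time_domain_features(x)
```

One such feature vector per signal window, per channel, is what the LDA classifier receives; de-noising first (the WD-EEMD stage) matters because features like zero crossings are highly sensitive to residual noise.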
•This work proposes a Modified AlexNet (MAN) deep-learning framework to evaluate lung abnormality.
•A threshold filter is introduced to remove artifacts from lung CT images.
•An Ensemble-Feature-Technique (EFT) is introduced, integrating deep features with handcrafted features.
•Serial fusion and PCA-based selection are implemented in the EFT to choose the principal feature set.
•Experimental results demonstrate the superior performance of MAN in comparison with other existing state-of-the-art methods.
Lung abnormalities are highly risky conditions in humans. Early diagnosis of lung abnormalities is essential to reduce the risk by enabling quick and efficient treatment. This research work proposes a Deep-Learning (DL) framework to examine lung pneumonia and cancer, using two different DL techniques to assess the considered problem: (i) The first DL method, named Modified AlexNet (MAN), is proposed to classify chest X-ray images into normal and pneumonia classes. In the MAN, the classification is implemented using a Support Vector Machine (SVM), and its performance is compared against Softmax. Further, its performance is validated against other pre-trained DL techniques, such as AlexNet, VGG16, VGG19 and ResNet50. (ii) The second DL technique implements a fusion of handcrafted and learned features in the MAN to improve classification accuracy during lung cancer assessment. This work employs serial fusion and Principal Component Analysis (PCA) based feature selection to enhance the feature vector. The performance of this DL framework is tested using benchmark lung cancer CT images from LIDC-IDRI, and a classification accuracy of 97.27% is attained.
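The serial-fusion-plus-PCA step can be sketched in a few lines: concatenate the learned and handcrafted feature vectors, then project onto the leading principal axes. The sketch below (an illustration, not the authors' implementation; the random stand-in features and the choice of k are assumptions) uses an SVD-based PCA:

```python
import numpy as np

def serial_fuse_and_pca(deep, hand, k):
    """Serially fuse (concatenate) two feature sets, then keep k principal axes."""
    fused = np.hstack([deep, hand])          # serial fusion: shape (n, d1 + d2)
    centred = fused - fused.mean(axis=0)     # PCA requires centred data
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:k].T                # (n, k) reduced feature vectors

rng = np.random.default_rng(0)
deep = rng.normal(size=(20, 8))              # stand-in for learned (deep) features
hand = rng.normal(size=(20, 4))              # stand-in for handcrafted features
reduced = serial_fuse_and_pca(deep, hand, k=3)
print(reduced.shape)                         # (20, 3)
```

Because the singular values are sorted in decreasing order, the retained columns are ordered by explained variance, which is what makes truncating to k components a principled feature selection.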