In this paper, a multichannel EEG emotion recognition method based on a novel dynamical graph convolutional neural network (DGCNN) is proposed. The basic idea of the proposed method is to use a graph to model the multichannel EEG features and then perform EEG emotion classification based on this model. Unlike traditional graph convolutional neural network (GCNN) methods, the proposed DGCNN can dynamically learn the intrinsic relationship between different electroencephalogram (EEG) channels, represented by an adjacency matrix, by training a neural network, which benefits the extraction of more discriminative EEG features. The learned adjacency matrix is then used to learn more discriminative features and thereby improve EEG emotion recognition. We conduct extensive experiments on the SJTU emotion EEG dataset (SEED) and the DREAMER dataset. The experimental results demonstrate that the proposed method achieves better recognition performance than state-of-the-art methods: on SEED, average recognition accuracies of 90.4 percent (subject-dependent) and 79.95 percent (subject-independent cross-validation) are achieved, and on DREAMER, average accuracies of 86.23, 84.54, and 85.02 percent are obtained for valence, arousal, and dominance classification, respectively.
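The distinctive step in DGCNN is treating the channel-to-channel adjacency matrix as a trainable parameter rather than a fixed graph. A minimal numpy sketch of one such layer's forward pass follows; the ReLU non-negativity constraint and the symmetric GCN-style normalization are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def dynamic_graph_conv(X, A_raw, W):
    """One graph-convolution step with a learnable adjacency matrix.

    X     : (channels, features)      per-channel EEG features
    A_raw : (channels, channels)      unconstrained learnable parameters
    W     : (features, out_features)  learnable projection
    """
    # Keep the learned adjacency non-negative (one common choice).
    A = relu(A_raw)
    # Symmetric normalization D^-1/2 (A + I) D^-1/2, as in standard GCNs.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = D_inv_sqrt @ A_hat @ D_inv_sqrt
    # Propagate features over the learned graph, then project.
    return relu(L @ X @ W)
```

In training, `A_raw` would receive gradients like any other weight, so the graph structure itself adapts to the emotion-classification objective.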
Object Detection in 20 Years: A Survey. Zou, Zhengxia; Chen, Keyan; Shi, Zhenwei ... Proceedings of the IEEE, vol. 111, no. 3, March 2023.
Object detection, as one of the most fundamental and challenging problems in computer vision, has received great attention in recent years. Over the past two decades, we have seen a rapid technological evolution of object detection and its profound impact on the entire computer vision field. If we consider today's object detection technique as a revolution driven by deep learning, then, looking back to the 1990s, we would see the ingenious thinking and long-term design perspective of early computer vision. This article extensively reviews this fast-moving research field in light of its technical evolution, spanning more than a quarter century (from the 1990s to 2022). Topics covered in this article include the milestone detectors in history, detection datasets, metrics, fundamental building blocks of detection systems, speedup techniques, and recent state-of-the-art detection methods.
A convolutional neural network (CNN) is one of the most significant networks in the deep learning field. Since CNN has made impressive achievements in many areas, including but not limited to computer vision and natural language processing, it has attracted much attention from both industry and academia over the past few years. Existing reviews mainly focus on CNN's applications in different scenarios without considering CNN from a general perspective, and some novel ideas proposed recently are not covered. In this review, we aim to provide some novel ideas and prospects in this fast-growing field. Moreover, not only 2-D convolution but also 1-D and multidimensional convolutions are involved. First, this review introduces the history of CNN. Second, we provide an overview of various convolutions. Third, some classic and advanced CNN models are introduced, with particular attention to the key ideas that let them reach state-of-the-art results. Fourth, through experimental analysis, we draw some conclusions and provide several rules of thumb for function and hyperparameter selection. Fifth, the applications of 1-D, 2-D, and multidimensional convolution are covered. Finally, some open issues and promising directions for CNN are discussed as guidelines for future work.
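As a concrete reference for the 1-D and 2-D convolutions the review covers, here is a minimal numpy sketch of both; the "valid" padding mode and the cross-correlation convention (no kernel flip) are assumptions that match how most deep learning frameworks implement "convolution":

```python
import numpy as np

def conv1d(signal, kernel):
    """'Valid' 1-D convolution (cross-correlation convention)."""
    n = len(signal) - len(kernel) + 1
    return np.array([np.dot(signal[i:i + len(kernel)], kernel) for i in range(n)])

def conv2d(image, kernel):
    """'Valid' 2-D convolution over a single-channel image."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Slide the kernel window and take the elementwise product sum.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```

The same sliding-window idea extends to the multidimensional case by adding one loop (or one window axis) per extra dimension.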
•A deep learning approach to cope with Parkinson's disease (PD) diagnosis.
•Handwritten dynamics to assess PD diagnosis.
•Promising and accurate results.
•An extensive experimental evaluation is conducted.
•New insights about future research concerning automatic PD identification.
Parkinson's disease (PD) is considered a degenerative disorder that affects the motor system and may cause tremors, micrographia, and freezing of gait. Although PD is related to a lack of dopamine, the process that triggers its development is not yet fully understood.
In this work, we introduce convolutional neural networks to learn features from images produced by handwritten dynamics, which capture different information during the individual's assessment. Additionally, we make available a dataset composed of images and signal-based data to foster the research related to computer-aided PD diagnosis.
The proposed approach was compared against raw data and texture-based descriptors and showed suitable results, particularly in the context of early-stage detection, with accuracies approaching 95%.
The analysis of handwritten dynamics using deep learning techniques proved useful for automatic Parkinson's disease identification and can outperform handcrafted features.
With the recent advances in remote sensing technologies for Earth observation, many different remote sensors are collecting data with distinctive properties. The obtained data are so large and complex that analyzing them manually becomes impractical or even impossible. Therefore, understanding remote sensing images effectively, in connection with physics, has been the primary concern of the remote sensing research community in recent years. For this purpose, machine learning is thought to be a promising technique because it can make the system learn to improve itself. With this distinctive characteristic, the algorithms become more adaptive, automatic, and intelligent. This book introduces some of the most challenging issues of machine learning in the field of remote sensing, along with the latest advanced technologies developed for different applications. It integrates multi-source/multi-temporal/multi-scale data and mainly focuses on learning to understand remote sensing images. In particular, it presents many effective techniques based on the popular concepts of deep learning and big data to reach new heights of data understanding. By reporting recent advances in machine learning approaches to analyzing and understanding remote sensing images, this book can help readers become more familiar with the knowledge frontier and foster an increased interest in this field.
In hyperspectral image (HSI) classification, each pixel sample is assigned to a land-cover category. In the recent past, convolutional neural network (CNN)-based HSI classification methods have greatly improved performance due to their superior ability to represent features. However, these methods have limited ability to obtain deep semantic features, and as the number of layers increases, computational costs rise significantly. The transformer framework can represent high-level semantic features well. In this article, a spectral-spatial feature tokenization transformer (SSFTT) method is proposed to capture spectral-spatial features and high-level semantic features. First, a spectral-spatial feature extraction module is built to extract low-level features. This module is composed of a 3-D convolution layer and a 2-D convolution layer, which extract the shallow spectral and spatial features. Second, a Gaussian weighted feature tokenizer is introduced for feature transformation. Third, the transformed features are input into the transformer encoder module for feature representation and learning. Finally, a linear layer is applied to the first learnable token to obtain the sample label. Experimental analysis on three standard datasets confirms that the method requires less computation time than other deep learning methods and outperforms several current state-of-the-art methods in classification performance. The code of this work is available at https://github.com/zgr6010/HSI_SSFTT for the sake of reproducibility.
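The tokenizer step turns a flattened spectral-spatial feature map into a small set of semantic tokens for the transformer encoder. A numpy sketch of an attention-style tokenizer of this kind is below; the exact projections, shapes, and the Gaussian initialization of the attention weights are assumptions for illustration, not the paper's precise equations:

```python
import numpy as np

def softmax(z, axis=0):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def tokenize(X, Wa, Wv):
    """Attention-style tokenizer: n_pixels feature vectors -> n_tokens tokens.

    X  : (n_pixels, d)   flattened spectral-spatial feature map
    Wa : (d, n_tokens)   attention weights (Gaussian-initialized in SSFTT)
    Wv : (d, d)          value projection
    """
    # Each column of A softly assigns every pixel to one semantic token.
    A = softmax(X @ Wa, axis=0)        # (n_pixels, n_tokens)
    # Tokens are attention-weighted sums of the projected pixel features.
    return A.T @ (X @ Wv)              # (n_tokens, d)
```

Because the token count is fixed and small, the transformer encoder that follows operates on a short sequence regardless of patch size.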
Bearing remaining useful life (RUL) prediction plays a crucial role in guaranteeing the safe operation of machinery and reducing maintenance loss. In this paper, we present a new deep feature learning method for RUL estimation based on time-frequency representation (TFR) and a multiscale convolutional neural network (MSCNN). TFR can effectively reveal the nonstationary property of a bearing degradation signal. After acquiring time-series degradation signals, we obtain TFRs, which contain plenty of useful information, via the wavelet transform. Owing to their high dimensionality, the size of these TFRs is reduced by bilinear interpolation, and the resized TFRs then serve as inputs to the deep learning model. Here, we introduce an MSCNN architecture, which, unlike a traditional convolutional neural network (CNN), preserves global and local information simultaneously. The salient features that contribute to RUL estimation can be learned automatically by the MSCNN. The effectiveness of the presented method is validated on experimental data. Compared with traditional data-driven methods and other CNN-based feature extraction methods, the proposed method shows improved prediction accuracy.
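The multiscale idea is that filters of different lengths run in parallel over the same input, so short kernels keep local detail while long kernels keep the global degradation trend, and their outputs are concatenated. A 1-D numpy sketch of that branch-and-concatenate pattern follows; the kernel sizes and the zero-padded "same" convolution are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def conv1d_same(x, k):
    """1-D 'same' convolution via zero padding (odd kernel length assumed)."""
    pad = len(k) // 2
    xp = np.pad(x, pad)
    return np.array([np.dot(xp[i:i + len(k)], k) for i in range(len(x))])

def multiscale_features(x, kernels):
    """Apply filters of different lengths in parallel and concatenate,
    so the feature vector carries both local and global information."""
    return np.concatenate([conv1d_same(x, k) for k in kernels])
```

In the real MSCNN the kernels are learned and the branches are 2-D (operating on the TFR image), but the parallel-scales-then-fuse structure is the same.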
With the continuous improvement and development of the power system, insulators have gradually become one of its most important components. At present, unmanned aerial vehicles (UAVs) are widely used to inspect insulators, and convolutional neural networks (CNNs) are commonly applied to identify insulators in the captured images with high accuracy and efficiency. However, existing methods based on Faster R-CNN or You Only Look Once (YOLO) either require more identification time because of their complex network structures or lack sufficient accuracy for insulator defects. Based on the YOLOv3 network, this article proposes a new CNN for target detection that improves efficiency while maintaining detection speed. In addition, this article applies the recent EIoU loss function to YOLOv3, which significantly improves the overlap between the predicted and annotated boxes and accelerates convergence. The experimental results show that the proposed detection model achieves an average precision (AP) of 0.94 for insulators and 0.89 for insulator defects, with a detection speed of 93.5 ms/image. Experimental verification confirms that the proposed model meets the requirements of power inspection and has good prospects for engineering application.
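EIoU extends IoU-based box regression with explicit penalties for center distance and for width and height mismatch, which is what tightens the fit between predicted and annotated boxes. A numpy sketch of the EIoU loss for a single box pair is below, following the commonly published formulation (1 - IoU plus three normalized penalty terms); treat it as a reference sketch rather than the article's exact implementation:

```python
import numpy as np

def eiou_loss(box_p, box_g):
    """EIoU loss for axis-aligned boxes given as (x1, y1, x2, y2)."""
    px1, py1, px2, py2 = box_p
    gx1, gy1, gx2, gy2 = box_g
    # Intersection over union.
    iw = max(0.0, min(px2, gx2) - max(px1, gx1))
    ih = max(0.0, min(py2, gy2) - max(py1, gy1))
    inter = iw * ih
    union = (px2 - px1) * (py2 - py1) + (gx2 - gx1) * (gy2 - gy1) - inter
    iou = inter / union
    # Smallest enclosing box: normalizes all three penalty terms.
    cw = max(px2, gx2) - min(px1, gx1)
    ch = max(py2, gy2) - min(py1, gy1)
    c2 = cw ** 2 + ch ** 2
    # Squared center distance, width difference, height difference.
    rho2 = ((px1 + px2 - gx1 - gx2) ** 2 + (py1 + py2 - gy1 - gy2) ** 2) / 4.0
    dw2 = ((px2 - px1) - (gx2 - gx1)) ** 2
    dh2 = ((py2 - py1) - (gy2 - gy1)) ** 2
    return 1.0 - iou + rho2 / c2 + dw2 / cw ** 2 + dh2 / ch ** 2
```

The loss is zero only when the two boxes coincide exactly, and the width/height terms give a nonzero gradient even when IoU alone would saturate.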
Recently, deep learning has been introduced to classify hyperspectral images (HSIs) and has achieved good performance. In general, deep models adopt a large number of hierarchical layers to extract features. However, excessively increasing network depth causes negative effects (e.g., overfitting, gradient vanishing, and accuracy degradation) in conventional convolutional neural networks. In addition, previous networks used in HSI classification do not consider the strongly complementary yet correlated information among different hierarchical layers. To address these two issues, a deep feature fusion network (DFFN) is proposed for HSI classification. On the one hand, residual learning is introduced to optimize several convolutional layers as an identity mapping, which eases the training of deep networks and allows them to benefit from increased depth. As a result, we can build a very deep network to extract more discriminative features from HSIs. On the other hand, the proposed DFFN model fuses the outputs of different hierarchical layers, which further improves classification accuracy. Experimental results on three real HSIs demonstrate that the proposed method outperforms other competitive classifiers.
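The two ingredients described above, identity-mapping residual blocks and fusing the outputs of different stages, can be sketched in a few lines of numpy. The linear blocks and scalar fusion weights are simplifying assumptions for illustration; the actual DFFN uses full convolutional stages:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """Identity-mapping residual block: output = x + F(x)."""
    return x + W2 @ relu(W1 @ x)

def dffn_fuse(x, stages, fuse_weights):
    """Run stacked residual stages and fuse every stage's output.

    stages       : list of (W1, W2) pairs, one per residual block
    fuse_weights : one scalar per stage (learnable in the real model)
    """
    outs = []
    for W1, W2 in stages:
        x = residual_block(x, W1, W2)
        outs.append(x)   # keep low-, mid- and high-level features
    # Weighted sum fuses complementary information from all depths.
    return sum(w * o for w, o in zip(fuse_weights, outs))
```

The residual skip keeps gradients flowing through very deep stacks, while the final weighted sum is what distinguishes DFFN-style fusion from using only the last layer's output.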