Deep generative adversarial networks (GANs) are an emerging technology in drug discovery and biomarker development. In our recent work, we demonstrated a proof of concept of implementing a deep generative adversarial autoencoder (AAE) to identify new molecular fingerprints with predefined anticancer properties. Another popular generative model is the variational autoencoder (VAE), which is based on deep neural architectures. In this work, we developed an advanced AAE model for molecular feature extraction and demonstrated its advantages over the VAE in terms of (a) adjustability in generating molecular fingerprints; (b) capacity for processing very large molecular data sets; and (c) efficiency in unsupervised pretraining for regression models. Our results suggest that the proposed AAE model significantly enhances the capacity and efficiency of developing new molecules with specific anticancer properties using deep generative models.
Spectral unmixing is a technique for remotely sensed image interpretation that expresses each (possibly mixed) pixel as a combination of pure spectral signatures (endmembers) and their fractional abundances. In this paper, we develop a new technique for unsupervised unmixing which is based on a deep autoencoder network (DAEN). Our newly developed DAEN consists of two parts. The first part of the network adopts stacked autoencoders (SAEs) to learn spectral signatures, so as to generate a good initialization for the unmixing process. In the second part of the network, a variational autoencoder (VAE) is employed to perform blind source separation, aimed at obtaining the endmember signatures and abundance fractions simultaneously. By taking advantage of the SAEs, the proposed approach is remarkably robust: it can unmix data sets with outliers and low signal-to-noise ratio. Moreover, the multiple hidden layers of the VAE enforce the required constraints (nonnegativity and sum-to-one) when estimating the abundances. The effectiveness of the proposed method is evaluated using both synthetic and real hyperspectral data. When compared with other unmixing methods, the proposed approach demonstrates very competitive performance.
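The abundance constraints mentioned above (nonnegativity and sum-to-one) are commonly enforced by passing the network's final pre-activation values through a softmax. The abstract does not specify the DAEN's exact mechanism, so the following is only a minimal sketch of that generic idea; the logits are made-up values standing in for a decoder's input layer.

```python
import numpy as np

def softmax(z):
    # Shift by the row max for numerical stability, then normalize
    e = np.exp(z - np.max(z, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical network outputs (logits) for 3 endmembers at 2 pixels
logits = np.array([[1.0, -0.5, 2.0],
                   [0.3,  0.3, 0.3]])
abundances = softmax(logits)

# Every abundance is nonnegative and each pixel's abundances sum to one
assert np.all(abundances >= 0)
assert np.allclose(abundances.sum(axis=1), 1.0)
```

Because the exponential is strictly positive and each row is divided by its own sum, both constraints hold by construction, with no penalty terms needed in the loss.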
The variational autoencoder (VAE), a deep latent-space generative model, has achieved great success in recent years, especially in image generation. This paper studies image compression algorithms based on variational autoencoders. The experiments use an image quality evaluation model, with interpolation-based super-resolution serving as the most direct and simple baseline for changing image resolution. In the experiment, each image is first transformed by the variational autoencoder, and the actual coding is then applied to the complete set of coefficients. The experimental data show that, after encoding with the improved VAE-based method, the number of bits required for the symbol stream to be transmitted or stored is greatly reduced compared with the traditional coding method, and symbol redundancy is effectively avoided. For image 1, image 2, and image 3, the VAE-based algorithm reduces the code length by 3332, 2637, and 1470 bits, respectively, compared with the traditional autoencoder-based algorithm. In future work, deep convolutional neural networks will be introduced to optimize the generative adversarial network, improving its convergence speed and model stability.
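A VAE's encoder does not output a single code but the parameters of a Gaussian over latent codes, sampled via the reparameterization trick and regularized by a closed-form KL divergence toward a standard normal. The sketch below illustrates only that standard mechanism with made-up encoder outputs; it is not the paper's compression pipeline.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical encoder outputs for one image: mean and log-variance per latent dim
mu = np.array([0.5, -1.0])
log_var = np.array([-2.0, -2.0])

# Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)
eps = rng.standard_normal(mu.shape)
z = mu + np.exp(0.5 * log_var) * eps

# Closed-form KL divergence KL(N(mu, sigma^2) || N(0, 1)), summed over dims
kl = 0.5 * np.sum(mu**2 + np.exp(log_var) - 1.0 - log_var)
assert z.shape == mu.shape
assert kl > 0
```

The KL term is what keeps the latent distribution close to the prior, which is also what makes the latent coefficients compact enough to entropy-code efficiently.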
Nowadays, data-driven soft sensors have become mainstream for key performance indicator prediction, which guarantees the safety and stability of industrial processes. The typical autoencoder (AE) has been widely used to extract potential features through unsupervised pretraining and supervised fine-tuning. However, most existing studies fail to consider both the time-varying features of the process and the differences in the contributions of the hidden features to the target variable. Therefore, in this paper, a stacked spatial-temporal autoencoder (S²TAE) is proposed to enhance the representation learning capability for soft sensor modeling by taking spatial-temporal correlations into consideration. Specifically, in order to effectively model the temporal dependence on nearby times, a temporal autoencoder (TAE) is proposed, in which a memory module is devised and integrated to learn valuable historical information. Moreover, a "feature recalibration" block is developed and embedded into the spatial-temporal autoencoder (STAE) to selectively capture more informative features and suppress the less useful ones in a supervised way. Then, multiple STAEs are stacked to construct the S²TAE network to extract more robust high-level features. Finally, the experimental results on two real-world datasets of an SDS desulphurization process and a high-low transformer demonstrate that the S²TAE-based soft sensor is effective and feasible.
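Feature recalibration blocks of this kind typically learn sigmoid gates in (0, 1) that rescale each hidden feature, amplifying informative ones and suppressing the rest. The abstract does not give the block's exact form, so the following is only a generic sketch with randomly initialized, hypothetical weights.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
features = rng.normal(size=(4, 16))            # 4 samples, 16 hidden features
W_gate = rng.normal(scale=0.1, size=(16, 16))  # hypothetical gating weights

gates = sigmoid(features @ W_gate)   # per-feature weights, each in (0, 1)
recalibrated = features * gates      # rescale: informative features pass, others shrink

assert recalibrated.shape == features.shape
# Gating can only attenuate, never amplify, each feature's magnitude
assert np.all(np.abs(recalibrated) <= np.abs(features))
```

In a supervised setting, the gating weights would be trained jointly with the prediction loss, so the gates learn which hidden features actually contribute to the target variable.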
Change detection based on heterogeneous images, such as optical images and synthetic aperture radar images, is a challenging problem because of their huge appearance differences. To combat this problem, we propose an unsupervised change detection method that contains only a convolutional autoencoder (CAE) for feature extraction and a commonality autoencoder for commonality exploration. The CAE can eliminate a large part of the redundancies in two heterogeneous images and obtain more consistent feature representations. The proposed commonality autoencoder has the ability to discover common features of ground objects between two heterogeneous images by transforming one heterogeneous image representation into another. The unchanged regions with the same ground objects share many more common features than the changed regions. Therefore, the number of common features can indicate changed and unchanged regions, from which a difference map can be calculated. Finally, the change detection result is generated by applying a segmentation algorithm to the difference map. In our method, the network parameters of the commonality autoencoder are learned from the relevance of unchanged regions instead of from labels. Our experimental results on five real data sets demonstrate the promising performance of the proposed framework compared with several existing approaches.
•A distribution consistency preserving deep embedded clustering model is proposed.
•The model exploits GAE and AE to learn node representations and clusters jointly.
•A consistency constraint is designed to maintain the consistency of the clusters.
•The empirical study verifies the effectiveness of the proposed model.
Many complex systems in the real world can be characterized as attributed networks. To mine the potential information in these networks, deep embedded clustering, which obtains node representations and clusters simultaneously, has received much attention in recent years. Under the assumption of consistency for data in different views, the cluster structure of the network topology and that of the node attributes should be consistent for an attributed network. However, many existing methods ignore this property: they encode node representations separately from network topology and node attributes, and cluster nodes on representation vectors learned from only one of the views. Therefore, in this study, we propose an end-to-end deep embedded clustering model for attributed networks. It utilizes a graph autoencoder and a node attribute autoencoder to learn node representations and cluster assignments. In addition, a distribution consistency constraint is introduced to maintain the latent consistency of cluster distributions in the two views. Extensive experiments on several datasets demonstrate that the proposed model achieves significantly better or competitive performance compared with state-of-the-art methods. The source code can be found at https://github.com/Zhengymm/DCP.
Generative models have the potential to revolutionize 3D extended reality. A primary obstacle is that augmented and virtual reality need real-time computing. Current state-of-the-art point cloud generation methods are not fast enough for these applications. We introduce a vector-quantized variational autoencoder (VQVAE) model that can synthesize high-quality point clouds in milliseconds. Unlike previous work on VQVAEs, our model offers a compact sample representation suitable for conditional generation and data exploration, with potential applications in rapid prototyping. We achieve this result by combining architectural improvements with an innovative approach to probabilistic random generation. First, we rethink current parallel point cloud autoencoder structures, and we propose several solutions to improve robustness, efficiency, and reconstruction quality. Notable contributions in the decoder architecture include an innovative computation layer to process shape semantic information, an attention mechanism that helps the model focus on different areas, and a filter to cover possible sampling errors. Second, we introduce a parallel sampling strategy for VQVAE models consisting of a double encoding system, in which a variational autoencoder learns to generate the complex discrete distribution of the VQVAE, not only allowing quick inference but also describing the shape with a few global variables. We compare the proposed decoder and our VQVAE model with established and concurrent work, and we validate each contribution individually.
The autoencoder is an unsupervised learning model which can automatically learn data features from a large number of samples and can act as a dimensionality reduction method. With the development of deep learning technology, the autoencoder has attracted the attention of many scholars, and researchers have proposed several improved versions of the autoencoder for different application fields. First, this paper explains the principle of the conventional autoencoder and reviews its primary development process. Second, we propose a taxonomy of autoencoders according to their structures and principles, and comprehensively analyze and discuss the related autoencoder models. This paper then introduces the application progress of autoencoders in different fields, such as image classification and natural language processing. Finally, the shortcomings of current autoencoder algorithms are summarized, and future development directions are discussed.
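The principle of the conventional autoencoder described above can be sketched in a few lines: an encoder maps the input to a lower-dimensional bottleneck, a decoder maps it back, and both are trained by gradient descent on the reconstruction error. The linear, numpy-only toy below is a minimal illustration under made-up data and sizes, not any surveyed model.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))               # toy data: 200 samples, 8 features
W_enc = rng.normal(scale=0.1, size=(8, 3))  # encoder weights: 8 -> 3 bottleneck
W_dec = rng.normal(scale=0.1, size=(3, 8))  # decoder weights: 3 -> 8

def loss(X, W_enc, W_dec):
    # Mean squared reconstruction error
    return np.mean((X @ W_enc @ W_dec - X) ** 2)

initial = loss(X, W_enc, W_dec)
lr = 0.05
for _ in range(500):
    Z = X @ W_enc            # latent codes (dimensionality-reduced features)
    err = Z @ W_dec - X      # reconstruction residual
    # Gradient descent on the mean squared reconstruction error
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final = loss(X, W_enc, W_dec)
assert final < initial   # training reduced the reconstruction error
```

After training, `X @ W_enc` serves as a 3-dimensional representation of the 8-dimensional data, which is the dimensionality-reduction use the survey refers to; deep autoencoders replace these linear maps with stacked nonlinear layers.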
•The development process of the autoencoder.
•The application of the autoencoder in different fields.
•Disadvantages, characteristics, and development trends of the autoencoder.