Remotely sensed images often suffer from the common problems of stripe noise and random dead pixels. The techniques to recover a good image from the contaminated one are called image destriping (for stripes) and image inpainting (for dead pixels). This paper presents a maximum a posteriori (MAP)-based algorithm for both the destriping and inpainting problems. The main advantage of this algorithm is that it can constrain the solution space according to a priori knowledge during the destriping and inpainting processes. In the MAP framework, the likelihood probability density function (PDF) is constructed from a linear image observation model, and a robust Huber-Markov model is used as the prior PDF. The gradient descent optimization method is employed to produce the desired image. The proposed algorithm has been tested on moderate resolution imaging spectrometer images for destriping and on China-Brazil Earth Resource Satellite and QuickBird images for simulated inpainting. The experimental results and quantitative analyses verify the efficacy of this algorithm.
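As a rough sketch of the kind of optimization the abstract describes, the snippet below runs gradient descent on an assumed MAP objective: a masked data-fidelity term (the mask marks stripe/dead pixels as unobserved) plus a Huber penalty on image gradients. The exact observation model, boundary handling, and parameter values in the paper may differ; everything here is illustrative.

```python
import numpy as np

def huber_grad(t, T):
    """Derivative of the Huber penalty: linear near zero, clipped in the tails."""
    return np.clip(t, -T, T)

def map_restore(y, mask, lam=0.1, T=1.0, step=0.2, iters=300):
    """Gradient descent on an assumed MAP objective:
    ||mask * (x - y)||^2 + lam * sum huber(D x),
    where mask is 0 at dead pixels/stripes and 1 elsewhere, and D collects
    horizontal and vertical first differences."""
    x = y.astype(float).copy()
    for _ in range(iters):
        # data-fidelity gradient (only where observations are valid)
        g = 2.0 * mask * (x - y)
        # prior gradient: adjoint of the finite differences applied to the
        # clipped (Huber) gradients
        dx = np.diff(x, axis=1, append=x[:, -1:])
        dy = np.diff(x, axis=0, append=x[-1:, :])
        px, py = huber_grad(dx, T), huber_grad(dy, T)
        g += lam * (-np.diff(px, axis=1, prepend=px[:, :1])
                    - np.diff(py, axis=0, prepend=py[:1, :]))
        x -= step * g
    return x
```

With a dead pixel masked out, the Huber prior diffuses the surrounding values into the hole, which is the inpainting behavior the MAP formulation is meant to capture.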
In this paper, an adaptive mean-shift (MS) analysis framework is proposed for object extraction and classification of hyperspectral imagery over urban areas. The basic idea is to apply MS to obtain an object-oriented representation of the hyperspectral data and then use a support vector machine to interpret the feature set. In order to employ MS for hyperspectral data effectively, a feature-extraction algorithm, nonnegative matrix factorization, is utilized to reduce the high-dimensional feature space. Furthermore, two bandwidth-selection algorithms are proposed for the MS procedure: one is based on the local structures, and the other exploits separability analysis. Experiments are conducted on two hyperspectral data sets: the DC Mall Hyperspectral Digital Imagery Collection Experiment (HYDICE) data and the Purdue campus hyperspectral mapper (HYMAP) images. We evaluate and compare the proposed approach with the well-known commercial software eCognition (an object-based analysis approach) and an effective spectral/spatial classifier for hyperspectral data, namely, the derivative of the morphological profile. Experimental results show that the proposed MS-based analysis system is robust and clearly outperforms the other methods.
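To make the role of the bandwidth concrete, here is a minimal flat-kernel mean-shift sketch: each point iteratively moves to the mean of its neighbors within bandwidth h, and points converging to the same mode form one object/segment. The paper's adaptive bandwidth selection and kernel choice are not reproduced here; this only illustrates why h matters.

```python
import numpy as np

def mean_shift(points, h, iters=50, tol=1e-4):
    """Flat-kernel mean shift: move each point to the mean of its neighbours
    within bandwidth h until convergence. Points ending near the same mode
    would be grouped into one object."""
    modes = points.astype(float).copy()
    for _ in range(iters):
        shifted = np.empty_like(modes)
        for i, m in enumerate(modes):
            # neighbours of the current estimate within the bandwidth
            nb = points[np.linalg.norm(points - m, axis=1) <= h]
            shifted[i] = nb.mean(axis=0)
        done = np.max(np.linalg.norm(shifted - modes, axis=1)) < tol
        modes = shifted
        if done:
            break
    return modes
```

A small bandwidth fragments the data into many modes, a large one merges distinct objects, which is exactly the trade-off the two proposed bandwidth-selection algorithms address.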
Due to the high spectral resolution of hyperspectral images, anomaly detection provides a new way to locate potential targets in a scene, especially those targets that are spectrally different from the majority of the data set. Conventional Mahalanobis-distance-based anomaly detection methods depend on the background statistics to construct the anomaly detection metric. One of the main problems with these methods is that the Gaussian distribution assumption for the background may not be reasonable. Furthermore, these methods are also susceptible to contamination of the conventional background covariance matrix by anomaly pixels. This paper proposes a new anomaly detection method that exploits a robust anomaly degree metric to increase the separability between anomaly pixels and other background pixels, using discriminative information. First, the manifold feature is used to divide the pixels into a potential anomaly part and a potential background part; this procedure is called discriminative information learning. A metric learning method is then performed to obtain the robust anomaly degree measurements. Experiments with three hyperspectral data sets reveal that the proposed method outperforms other current anomaly detection methods. The sensitivity of the method to several important parameters is also investigated.
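The conventional Mahalanobis-distance baseline the abstract criticizes is essentially the global RX detector, which can be sketched in a few lines. Note the weakness it exhibits: the mean and covariance are estimated from all pixels, so anomalies contaminate their own detection statistic.

```python
import numpy as np

def rx_detector(pixels):
    """Global RX detector: Mahalanobis distance of each pixel spectrum to the
    background mean/covariance, both estimated from the whole scene (so the
    statistics are contaminated by any anomalies present).
    pixels: (n_pixels, n_bands) array."""
    mu = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False)
    # small ridge for numerical stability when bands are correlated
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
    d = pixels - mu
    return np.einsum('ij,jk,ik->i', d, cov_inv, d)
```

The paper's contribution replaces this single contaminated statistic with a learned, discriminative anomaly-degree metric; the sketch above is only the baseline being improved upon.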
In recent years, the resolution of remotely sensed imagery has become increasingly high in both the spectral and spatial domains, simultaneously providing more plentiful spectral and spatial information. Accordingly, the accurate interpretation of high-resolution imagery depends on effective integration of the spectral, structural, and semantic features contained in the images. In this paper, we propose a new multifeature model, aiming to construct a support vector machine (SVM) ensemble combining multiple spectral and spatial features at both the pixel and object levels. The features employed in this study include the gray-level co-occurrence matrix, differential morphological profiles, and an urban complexity index. Subsequently, three algorithms are proposed to integrate the multifeature SVMs: certainty voting, probabilistic fusion, and an object-based semantic approach. The proposed algorithms are compared with other multifeature SVM methods, including vector stacking, feature selection, and composite kernels. Experiments are conducted on the hyperspectral digital imagery collection experiment DC Mall data set and two WorldView-2 data sets. It is found that the multifeature model with semantic-based postprocessing provides more accurate classification results (an accuracy improvement of 1%-4% for the three experimental data sets) than the voting and probabilistic models.
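Of the three integration algorithms mentioned, probabilistic fusion is the easiest to sketch: each feature-specific SVM outputs class-probability vectors, and the fused decision is the argmax of their (optionally weighted) average. The weighting scheme and probability calibration used in the paper are not specified here; this is a generic illustration.

```python
import numpy as np

def probabilistic_fusion(prob_maps, weights=None):
    """Fuse per-feature classifier outputs by (weighted) averaging of
    class-probability vectors; the fused label is the argmax.
    prob_maps: list of (n_pixels, n_classes) arrays, one per feature/SVM."""
    stack = np.stack(prob_maps)            # (n_features, n_pixels, n_classes)
    if weights is None:
        weights = np.ones(len(prob_maps)) / len(prob_maps)
    fused = np.tensordot(weights, stack, axes=1)
    return fused.argmax(axis=1), fused
```

Certainty voting would instead count hard labels, and the semantic approach would postprocess these fused maps at the object level.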
In this paper, we propose a spectral-spatial unified network (SSUN) with an end-to-end architecture for hyperspectral image (HSI) classification. Different from traditional spectral-spatial classification frameworks, in which the spectral feature extraction (FE), spatial FE, and classifier training are separated, these processes are integrated into a unified network in our model. In this way, both the FE and classifier training share a uniform objective function, and all the parameters in the network can be optimized at the same time. In the implementation of the SSUN, we propose a band-grouping-based long short-term memory model and a multiscale convolutional neural network as the spectral and spatial feature extractors, respectively. In the experiments, three benchmark HSIs are utilized to evaluate the performance of the proposed method. The experimental results demonstrate that the SSUN can yield a competitive performance compared with existing methods.
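The band-grouping idea feeding the recurrent spectral branch can be illustrated with a trivial helper: split a B-band spectrum into contiguous groups and treat the groups as the time steps of the sequence model. The contiguous equal-split rule below is an assumption for illustration; the paper's actual grouping strategy may differ.

```python
import numpy as np

def band_groups(spectrum, n_groups):
    """Split a B-band spectrum into n_groups contiguous groups (an assumed
    simple grouping) and return the list of per-group sub-vectors, which
    would be fed to a recurrent model one group per time step."""
    return np.array_split(spectrum, n_groups)
```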
Change detection has long been a hotspot in remote sensing. With the increasing availability of multi-temporal remote sensing images, numerous change detection algorithms have been proposed. Among these methods, image transformation methods with feature extraction and mapping can effectively highlight the changed information and thus achieve better change detection performance. However, the changes in multi-temporal images are usually complex, and the existing methods are not effective enough. In recent years, deep networks have shown brilliant performance in many fields, including feature extraction and projection. Therefore, in this paper, based on deep networks and slow feature analysis (SFA) theory, we propose a new change detection algorithm for multi-temporal remote sensing images called deep SFA (DSFA). In the DSFA model, two symmetric deep networks are utilized to project the input data of the bi-temporal imagery. The SFA module is then deployed to suppress the unchanged components and highlight the changed components of the transformed features. Change vector analysis pre-detection is employed to find unchanged pixels with high confidence as training samples. Finally, the change intensity is calculated with the chi-square distance, and the changes are determined by threshold algorithms. The experiments are performed on two real-world data sets and a public hyperspectral data set. The visual comparison and the quantitative evaluation show that DSFA outperforms the other state-of-the-art algorithms, including other SFA-based and deep learning methods.
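The SFA module at the core of this pipeline can be sketched in its linear form (the paper learns the features with deep networks first; here the projection itself is shown). The slowest directions are those along which the bi-temporal features change least, and the chi-square-style distance accumulates the normalized squared differences. This is a simplified linear stand-in, not the DSFA implementation.

```python
import numpy as np

def sfa_change(X, Y):
    """Linear SFA for bi-temporal features: solve the generalized eigenproblem
    A w = lam * B w with A = cov(X - Y) (temporal variation) and
    B = (cov X + cov Y) / 2, then use the per-component squared differences
    normalized by their eigenvalues (a chi-square-style distance) as the
    change intensity. X, Y: (n_pixels, n_features)."""
    D = X - Y
    A = np.cov(D, rowvar=False)
    B = 0.5 * (np.cov(X, rowvar=False) + np.cov(Y, rowvar=False))
    # reduce to a symmetric eigenproblem by whitening with B
    evals_B, evecs_B = np.linalg.eigh(B)
    W = evecs_B / np.sqrt(np.maximum(evals_B, 1e-12))
    M = W.T @ A @ W
    evals, evecs = np.linalg.eigh(M)       # ascending: slowest first
    P = W @ evecs
    diff = (X - X.mean(0)) @ P - (Y - Y.mean(0)) @ P
    return (diff ** 2 / np.maximum(evals, 1e-12)).sum(axis=1)
```

Thresholding this intensity (e.g., with Otsu or a chi-square quantile) then yields the binary change map, as in the last step the abstract describes.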
In this paper, we present a spatial-spectral hyperspectral image (HSI) mixed-noise removal method named total variation (TV)-regularized low-rank matrix factorization (LRTV). In general, HSIs are not only assumed to lie in a low-rank subspace from the spectral perspective but are also assumed to be piecewise smooth in the spatial dimension. The proposed method integrates the nuclear norm, TV regularization, and L1-norm in a unified framework. The nuclear norm is used to exploit the spectral low-rank property, and the TV regularization is adopted to explore the spatially piecewise smooth structure of the HSI. At the same time, the sparse noise, which includes stripes, impulse noise, and dead pixels, is detected by the L1-norm regularization. To trade off the nuclear norm against the TV regularization and to further remove the Gaussian noise from the HSI, we also restrict the rank of the clean image to be no larger than the number of endmembers. A number of experiments were conducted in both simulated and real data conditions to illustrate the performance of the proposed LRTV method for HSI restoration.
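Two of the three regularizers mentioned have well-known proximal operators that any solver for this kind of objective would call repeatedly: soft-thresholding for the L1 term (absorbing sparse noise) and singular-value thresholding for the nuclear norm, here with an optional rank cap mirroring the endmember constraint. This sketches the building blocks, not the paper's full alternating algorithm.

```python
import numpy as np

def soft_threshold(X, tau):
    """Proximal operator of tau * ||X||_1: shrinks entries toward zero,
    which is how the sparse noise (stripes, impulse, dead pixels) is absorbed."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau, max_rank=None):
    """Singular-value thresholding: proximal operator of tau * ||X||_*,
    optionally truncated so the rank does not exceed a cap (e.g., the
    number of endmembers, as in the LRTV constraint)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    if max_rank is not None:
        s[max_rank:] = 0.0
    return (U * s) @ Vt
```

An ADMM-style solver would alternate these with a TV proximal step on the spatial dimensions.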
Due to recent advances in satellite sensors, a large number of high-resolution remote sensing images are now obtained each day. How to automatically recognize and analyze scenes from these satellite images effectively and efficiently has become a major challenge in the remote sensing field. Recently, much work on scene classification has focused on deep neural networks, which learn hierarchical internal feature representations from image data sets and produce state-of-the-art performance. However, most methods, including both the traditional shallow methods and deep neural networks, concentrate on training a single model. Meanwhile, neural network ensembles have proved to be a powerful and practical tool for a number of different predictive tasks. Can we find a way to combine different deep neural networks effectively and efficiently for scene classification? In this paper, we propose a gradient boosting random convolutional network (GBRCN) framework for scene classification, which can effectively combine many deep neural networks. As far as we know, this is the first time that a deep ensemble framework has been proposed for scene classification. In the experiments, the proposed method was applied to two challenging high-resolution data sets: 1) the UC Merced data set, containing 21 different aerial scene categories with a submeter resolution, and 2) a Sydney data set, containing eight land-use categories with a 1.0-m spatial resolution. The proposed GBRCN framework outperformed the state-of-the-art methods on the UC Merced data set, including the traditional single convolutional network approach. For the Sydney data set, the proposed method again obtained the best accuracy, demonstrating that the proposed framework can provide more accurate classification results than the state-of-the-art methods.
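The gradient-boosting mechanism that combines the networks can be sketched generically: each round fits a fresh base learner to the current residuals (the negative gradient of the loss) and adds a shrunken copy to the ensemble. For runnability, the base learner below is a plain linear least-squares fit standing in for the random convolutional networks, and squared loss stands in for whatever loss the paper optimizes.

```python
import numpy as np

def boost(X, y, n_rounds=20, lr=0.5, fit=None):
    """Generic gradient boosting for squared loss: each round fits a new base
    learner to the residuals and adds lr * learner to the ensemble. The
    default base learner is a linear least-squares fit (an illustrative
    stand-in for the random convolutional networks)."""
    if fit is None:
        def fit(X, r):
            w, *_ = np.linalg.lstsq(X, r, rcond=None)
            return lambda Z: Z @ w
    pred = np.zeros(len(y), dtype=float)
    learners = []
    for _ in range(n_rounds):
        resid = y - pred          # negative gradient of 1/2 * squared loss
        f = fit(X, resid)
        learners.append(f)
        pred += lr * f(X)
    return lambda Z: lr * sum(f(Z) for f in learners)
```

Swapping `fit` for a routine that trains a small randomly initialized network would recover the spirit of the GBRCN ensemble.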
Due to the rapid technological development of various satellite sensors, a huge volume of high-resolution image data can now be acquired. How to efficiently represent and recognize the scenes in such high-resolution image data has become a critical task. In this paper, we propose an unsupervised feature learning framework for scene classification. Using a saliency detection algorithm, we extract a representative set of patches from the salient regions in the image data set. These unlabeled patches are exploited by an unsupervised feature learning method to learn a set of feature extractors that are robust and efficient and do not need elaborately designed descriptors such as those based on the scale-invariant feature transform. We show that the statistics generated from the learned feature extractors can characterize a complex scene very well and can produce excellent classification accuracy. In order to reduce overfitting in the feature learning step, we further employ a recently developed regularization method called "dropout," which has proved to be very effective in image classification. In the experiments, the proposed method was applied to two challenging high-resolution data sets: the UC Merced data set, containing 21 different aerial scene categories with a submeter resolution, and the Sydney data set, containing seven land-use categories with a 60-cm spatial resolution. The proposed method obtained results equal to or even better than the previous best results on the UC Merced data set, and it also obtained the highest accuracy on the Sydney data set, demonstrating that the proposed unsupervised-feature-learning-based scene classification method provides more accurate classification results than the latent-Dirichlet-allocation-based methods and the sparse coding method.
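A common minimal instance of this kind of unsupervised feature learning is patch-based k-means: sample patches, normalize them, and cluster; the centroids then act as learned filters. The patch sampling is uniform here rather than saliency-driven, and all sizes/counts are illustrative defaults, not the paper's settings.

```python
import numpy as np

def learn_filters(image, patch=6, n_filters=8, n_samples=500, iters=20, seed=0):
    """Unsupervised feature learning sketch: sample random patches from a
    grayscale image, normalize each one, and run k-means; the centroids
    serve as learned feature extractors (filters)."""
    rng = np.random.default_rng(seed)
    H, W = image.shape
    ys = rng.integers(0, H - patch, n_samples)
    xs = rng.integers(0, W - patch, n_samples)
    P = np.stack([image[y:y + patch, x:x + patch].ravel()
                  for y, x in zip(ys, xs)])
    # per-patch contrast normalization
    P = (P - P.mean(1, keepdims=True)) / (P.std(1, keepdims=True) + 1e-8)
    C = P[rng.choice(len(P), n_filters, replace=False)]
    for _ in range(iters):
        # assign each patch to its nearest centroid, then recompute centroids
        d = ((P[:, None, :] - C[None]) ** 2).sum(-1)
        lab = d.argmin(1)
        for k in range(n_filters):
            if np.any(lab == k):
                C[k] = P[lab == k].mean(0)
    return C
```

Convolving an image with these centroids and pooling the responses yields the scene statistics that a downstream classifier consumes.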
Hyperspectral image (HSI) denoising is a crucial preprocessing procedure for improving the performance of the subsequent HSI interpretation and applications. In this paper, a novel deep learning-based method for this task is proposed, which learns a nonlinear end-to-end mapping between the noisy and clean HSIs with a combined spatial-spectral deep convolutional neural network (HSID-CNN). Both the spatial and spectral information are simultaneously fed to the proposed network. In addition, multiscale feature extraction and multilevel feature representation are employed, respectively, to capture the multiscale spatial-spectral features and to fuse the different feature representations for the final restoration. The simulated- and real-data experiments demonstrate that the proposed HSID-CNN outperforms many of the mainstream methods in terms of the quantitative evaluation indexes, the visual effect, and the HSI classification accuracy.
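One way such a network receives spatial and spectral information "simultaneously" is to stack each noisy band with its spectrally adjacent bands as the model input. The helper below sketches that assembly; the neighborhood size k and the edge handling (clipping rather than padding) are assumptions for illustration, not necessarily the paper's choices.

```python
import numpy as np

def spatial_spectral_input(cube, band, k=2):
    """Assemble a per-band network input: the target band plus up to k
    neighbouring bands on each side (clipped at the cube edges, an assumed
    handling), so the model sees spatial and spectral context jointly.
    cube: (n_bands, H, W) array."""
    B = cube.shape[0]
    lo, hi = max(0, band - k), min(B, band + k + 1)
    return cube[lo:hi]
```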