Current deep-learning-based super-resolution (SR) methods have shown remarkable comparative advantages but remain unsatisfactory in recovering high-frequency edge details under noise-contaminated imaging conditions, e.g., remote sensing satellite imaging. In this paper, we propose a generative adversarial network (GAN)-based edge-enhancement network (EEGAN) for robust satellite image SR reconstruction, along with an adversarial learning strategy that is insensitive to noise. In particular, EEGAN consists of two main subnetworks: an ultradense subnetwork (UDSN) and an edge-enhancement subnetwork (EESN). In UDSN, a group of 2-D dense blocks is assembled for feature extraction and for obtaining an intermediate high-resolution result that looks sharp but is corrupted by artifacts and noise, as in previous GAN-based methods. EESN is then constructed to extract and enhance the image contours by purifying the noise-contaminated components with mask processing. The recovered intermediate image and the enhanced edges are combined to generate a result with high credibility and clear content. Extensive experiments on the Kaggle Open Source Dataset, Jilin-1 video satellite images, and DigitalGlobe imagery show superior reconstruction performance compared with state-of-the-art SR approaches.
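The final combination step, an intermediate SR image merged with mask-purified edges, can be illustrated with a hand-crafted stand-in. EEGAN learns its edge extractor and mask inside EESN; the Laplacian filter, threshold, and blending strength below are purely illustrative assumptions:

```python
import numpy as np

def laplacian_edges(img):
    """4-neighbour Laplacian as a simple stand-in for EESN's learned edge extractor."""
    e = np.zeros_like(img)
    e[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                     img[1:-1, :-2] + img[1:-1, 2:] - 4 * img[1:-1, 1:-1])
    return e

def enhance_with_edges(intermediate, mask_thresh=0.05, strength=0.5):
    """Combine an intermediate SR result with mask-purified edges (toy version).
    The mask suppresses weak, noise-like responses; subtracting the masked
    Laplacian sharpens transitions in unsharp-mask style."""
    edges = laplacian_edges(intermediate)
    mask = (np.abs(edges) > mask_thresh).astype(intermediate.dtype)
    return np.clip(intermediate - strength * mask * edges, 0.0, 1.0)

step = np.zeros((8, 8))
step[:, 4:] = 1.0                 # vertical edge test image in [0, 1]
sharpened = enhance_with_edges(step)
```

The threshold plays the role of EEGAN's mask processing: only strong edge responses are allowed to modify the intermediate image, so isolated noise responses are discarded.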
One of the challenging tasks in modern aquatic remote sensing is the retrieval of near-surface concentrations of Total Suspended Solids (TSS). This study aims to present a Statistical, inherent Optical property (IOP)-based, and muLti-conditional Inversion proceDure (SOLID) for enhanced retrievals of satellite-derived TSS under a wide range of in-water bio-optical conditions in rivers, lakes, estuaries, and coastal waters. In this study, using a large in situ database (N > 3500), the SOLID model is devised using a three-step procedure: (a) water-type classification of the input remote sensing reflectance (Rrs), (b) retrieval of particulate backscattering (bbp) in the red or near-infrared (NIR) regions using semi-analytical, machine-learning, and empirical models, and (c) estimation of TSS from bbp via water-type-specific empirical models. Using an independent subset of our in situ data (N = 2729) with TSS ranging from 0.1 to 2626.8 g/m3, the SOLID model is thoroughly examined and compared against several state-of-the-art algorithms (Miller and McKee, 2004; Nechad et al., 2010; Novoa et al., 2017; Ondrusek et al., 2012; Petus et al., 2010). We show that SOLID outperforms all the other models to varying degrees, i.e., from 10% to >100%, depending on the statistical attributes (e.g., global versus water-type-specific metrics). For demonstration purposes, the model is implemented for images acquired by the MultiSpectral Imager aboard Sentinel-2A/B over the Chesapeake Bay, San Francisco Bay-Delta Estuary, Lake Okeechobee, and Lake Taihu. To enable generating consistent, multimission TSS products, its performance is further extended to, and evaluated for, other missions, such as the Ocean and Land Color Instrument (OLCI), Moderate Resolution Imaging Spectroradiometer (MODIS), Visible Infrared Imaging Radiometer Suite (VIIRS), and Operational Land Imager (OLI).
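The three-step SOLID procedure can be sketched as a pipeline. Every threshold, band choice, and coefficient below is a placeholder for illustration; the actual model uses trained water-type classifiers and published water-type-specific coefficients not given in the abstract:

```python
# Hypothetical stand-ins for the three SOLID stages (a)-(c); thresholds, bands,
# and coefficients are placeholders, NOT the published values.

def classify_water_type(rrs):
    """Step (a): crude water-type index from red-band Rrs (assumed threshold)."""
    return 1 if rrs[660] > 0.01 else 0   # 1 = turbid, 0 = clear

def retrieve_bbp(rrs, water_type):
    """Step (b): particulate backscattering bbp from red/NIR Rrs (toy form)."""
    band = 865 if water_type == 1 else 660     # NIR for turbid water, red otherwise
    return 1.5 * rrs[band] / (0.12 - rrs[band])

def estimate_tss(bbp, water_type):
    """Step (c): water-type-specific empirical bbp -> TSS (placeholder power law)."""
    a, b = {0: (550.0, 1.0), 1: (380.0, 0.9)}[water_type]
    return a * bbp ** b

def solid_tss(rrs):
    """Chain the three steps: classify, retrieve bbp, estimate TSS (g/m3)."""
    wt = classify_water_type(rrs)
    return estimate_tss(retrieve_bbp(rrs, wt), wt)
```

The key design point carried over from the abstract is the switching behaviour: both the bbp retrieval band and the bbp-to-TSS coefficients change with the classified water type, which is what lets one model span clear lakes and sediment-laden estuaries.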
Sensitivity analyses on uncertainties induced by the atmospheric correction indicate that a 10% uncertainty in Rrs leads to <20% uncertainty in TSS retrievals from SOLID. While this study suggests that SOLID has the potential to produce TSS products in global coastal and inland waters, our statistical analysis verifies that there is still a need for improved retrievals across a wide spectrum of particle loads.
• Model (SOLID) is developed for estimating TSS in coastal/inland waters.
• Validated with a wide range of trophic/turbidity conditions.
• Performance is thoroughly gauged against five other models.
• Model produces stable performance in optically complex aquatic ecosystems.
• Performance is assessed for several satellite missions.
Due to the urgent demand for remote sensing big data analysis, large-scale remote sensing image retrieval (LSRSIR) attracts increasing attention from researchers. Generally, LSRSIR can be divided into two categories: uni-source LSRSIR (US-LSRSIR) and cross-source LSRSIR (CS-LSRSIR). More specifically, in US-LSRSIR the query remote sensing image and the images in the search data set come from the same remote sensing data source, whereas CS-LSRSIR is designed to retrieve remote sensing images with content similar to the query image but from a different remote sensing data source. In the literature, US-LSRSIR has been widely exploited, but CS-LSRSIR is rarely discussed. In practical situations, remote sensing images from different kinds of remote sensing data sources are continually increasing, so there is great motivation to exploit CS-LSRSIR. Therefore, this paper focuses on CS-LSRSIR. To cope with CS-LSRSIR, this paper proposes source-invariant deep hashing convolutional neural networks (SIDHCNNs), which can be optimized in an end-to-end manner using a series of well-designed optimization constraints. To quantitatively evaluate the proposed SIDHCNNs, we construct a dual-source remote sensing image data set that contains eight typical land-cover categories and 10,000 dual samples in each category. Extensive experiments show that the proposed SIDHCNNs can yield substantial improvements over several baselines involving the most recent techniques.
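Once source-invariant binary codes exist, cross-source retrieval itself reduces to Hamming-distance ranking, which is what makes hashing attractive at this scale. A minimal sketch with random stand-in codes (real codes would come from the trained SIDHCNNs, which map both sources into one code space):

```python
import numpy as np

def hamming_rank(query_code, db_codes):
    """Rank database hash codes by Hamming distance to the query (ascending)."""
    dists = (query_code[None, :] != db_codes).sum(axis=1)  # bitwise disagreements
    return np.argsort(dists, kind="stable"), dists

# Random stand-in codes for illustration; source A fills the database and the
# query simulates a source-B image mapped to the same Hamming space.
rng = np.random.default_rng(0)
db = rng.integers(0, 2, size=(1000, 64), dtype=np.int8)  # 1000 images, 64-bit codes
query = db[42].copy()                                    # query that should match item 42
order, dists = hamming_rank(query, db)                   # order[0] is the best match
```

Because the distance is a simple bit count, retrieval cost grows only linearly in database size with tiny constants, which is the practical payoff of learning hash codes rather than dense features.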
Central to the looming paradigm shift toward data-intensive science, machine-learning techniques are becoming increasingly important. In particular, deep learning has proven to be both a major breakthrough and an extremely powerful tool in many fields. Shall we embrace deep learning as the key to everything? Or should we resist a black-box solution? These are controversial issues within the remote-sensing community. In this article, we analyze the challenges of using deep learning for remote-sensing data analysis, review recent advances, and provide resources we hope will make deep learning in remote sensing seem ridiculously simple. More importantly, we encourage remote-sensing scientists to bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges, such as climate change and urbanization.
The application of convolutional neural networks has been shown to greatly improve the accuracy of building extraction from remote sensing imagery. In this paper, we created and released an open, high-quality multisource data set for building detection, evaluated the accuracy obtained by the most recent studies on the data set, demonstrated the use of our data set, and proposed a Siamese fully convolutional network model that obtains better segmentation accuracy. The building data set that we created contains not only aerial images but also satellite images covering 1,000 km², with both raster labels and vector maps. Applying the same methodology to our aerial data set yielded higher accuracy than on several other open building data sets. On the aerial data set, we provide a thorough evaluation and comparison of the most recent deep-learning-based methods and propose a Siamese U-Net with shared weights in two branches, taking the original images and their down-sampled counterparts as inputs, which significantly improves segmentation accuracy, especially for large buildings. For multisource building extraction, generalization ability is further evaluated and extended by applying a radiometric augmentation strategy to transfer models pretrained on the aerial data set to the satellite data set. The designed experiments indicate that our data set is accurate and can serve multiple purposes, including building instance segmentation and change detection; our results show that the Siamese U-Net outperforms current building extraction methods and provides a valuable reference.
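The shared-weight, dual-resolution idea behind the proposed Siamese U-Net can be sketched with a toy forward pass: one kernel (the shared weights) is applied both to the image and to its 2x down-sampled copy, and the coarse features are upsampled and fused. The naive convolution helper, single kernel, and averaging fusion are illustrative assumptions, not the published architecture:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2-D convolution (minimal helper; real models use a DL framework)."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def siamese_forward(img, shared_kernel):
    """Two branches with SHARED weights: one sees the original image, the other a
    2x down-sampled copy; coarse features are upsampled and averaged (toy fusion)."""
    f_full = conv2d(img, shared_kernel)
    f_down = conv2d(img[::2, ::2], shared_kernel)
    f_up = np.kron(f_down, np.ones((2, 2)))          # nearest-neighbour upsample
    h = min(f_full.shape[0], f_up.shape[0])
    w = min(f_full.shape[1], f_up.shape[1])
    return 0.5 * (f_full[:h, :w] + f_up[:h, :w])     # crop to common size, fuse

rng = np.random.default_rng(1)
feat = siamese_forward(rng.random((8, 8)), np.ones((3, 3)) / 9.0)
```

The point of weight sharing is that the down-sampled branch sees large buildings at a scale where they fit the receptive field, while the same filters still apply, which is consistent with the reported gains on large structures.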
Cross-modal hashing plays a pivotal role in large-scale remote sensing (RS) ship image retrieval. RS ship images often exhibit similar overall appearance with subtle differences. Existing hashing methods typically employ feature non-interaction strategies to generate common hash codes, which may not effectively capture the correlations between cross-modal ship images to reduce inter-modality discrepancies. To address this issue, we propose a novel cross-modal hashing approach based on Feature Semi-Interaction and Semantic Ranking (FSISR) for RS ship image retrieval. Our FSISR approach not only captures intricate correlations between different ship image modalities but also enables the construction of hash tables for large-scale retrieval. FSISR comprises a feature semi-interaction module and a semantic ranking objective function. The semi-interaction module utilizes clustering centers from one modality to learn the correlations between two modalities and generate robust shared representations. The objective function optimizes these representations in a common Hamming space and consists of a shared semantic alignment loss and a margin-free ranking loss. The alignment loss employs a shared semantic layer to preserve label-level similarity, while the ranking loss incorporates hard examples to capture similarity ranking relationships without a fixed margin. We evaluate the performance of our method on benchmark datasets and demonstrate its effectiveness for cross-modal RS ship image retrieval. The code is available at https://github.com/sunyuxi/FSISR.
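One plausible reading of a margin-free ranking loss with hard examples is a softplus penalty on the gap between the hardest (farthest) positive and the hardest (closest) negative. This is an illustrative form only, not the exact FSISR objective:

```python
import numpy as np

def margin_free_ranking_loss(anchor, positives, negatives):
    """Softplus ranking penalty without a fixed margin (illustrative form):
    penalize when the hardest (closest) negative is not clearly farther from
    the anchor than the hardest (farthest) positive."""
    d_pos = np.linalg.norm(positives - anchor, axis=1).max()  # hardest positive
    d_neg = np.linalg.norm(negatives - anchor, axis=1).min()  # hardest negative
    return np.log1p(np.exp(d_pos - d_neg))                    # softplus(d_pos - d_neg)

anchor = np.zeros(4)
pos = 0.1 * np.ones((3, 4))
loss_easy = margin_free_ranking_loss(anchor, pos, 2.0 * np.ones((3, 4)))
loss_hard = margin_free_ranking_loss(anchor, pos, 0.2 * np.ones((3, 4)))  # closer negatives
```

Unlike a hinge loss, the softplus never saturates to exactly zero, so there is no margin hyperparameter to tune; the gradient simply fades as positives pull well inside the negatives, which matches the "margin-free" framing.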
The Global Precipitation Measurement (GPM) Integrated Multi-satellite Retrievals for GPM (IMERG) products provide quasi-global (60° N–60° S) precipitation estimates, beginning March 2014, from the combined use of passive microwave (PMW) and infrared (IR) satellites comprising the GPM constellation. The IMERG products are available in the form of near-real-time data, i.e., IMERG Early and Late, and in the form of post-real-time research data, i.e., IMERG Final, after monthly rain gauge analysis is received and taken into account. In this study, IMERG version 3 Early, Late, and Final (IMERG-E, IMERG-L, and IMERG-F) half-hourly rainfall estimates are compared with gauge-based gridded rainfall data from the WegenerNet Feldbach region (WEGN) high-density climate station network in southeastern Austria. The comparison is conducted over two IMERG 0.1° × 0.1° grid cells, entirely covered by 40 and 39 WEGN stations each, using data from the extended summer season (April–October) for the first two years of the GPM mission. The data are divided into two rainfall intensity ranges (low and high) and two seasons (warm and hot), and we evaluate the performance of IMERG using both statistical and graphical methods. Results show that IMERG-F rainfall estimates are in the best overall agreement with the WEGN data, followed by IMERG-L and IMERG-E estimates, particularly for the hot season. We also illustrate, through rainfall event cases, how insufficient PMW sources and errors in motion vectors can lead to wide discrepancies in the IMERG estimates. Finally, by applying the method of Villarini and Krajewski (2007), we find that IMERG-F half-hourly rainfall estimates can be regarded as a 25 min gauge accumulation, with an offset of +40 min relative to its nominal time.
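Gauge-versus-satellite comparisons of this kind typically rest on a handful of standard statistics. The abstract does not list the exact metric set used, so the sketch below shows generic continuous and categorical scores (bias, RMSE, correlation, probability of detection, false alarm ratio) with an assumed 0.1 mm rain/no-rain threshold:

```python
import numpy as np

def rain_metrics(sat, gauge, thresh=0.1):
    """Common verification statistics for satellite vs. gauge rainfall series
    (generic textbook formulas; the threshold of 0.1 mm is an assumption)."""
    sat, gauge = np.asarray(sat, float), np.asarray(gauge, float)
    bias = sat.mean() - gauge.mean()
    rmse = np.sqrt(np.mean((sat - gauge) ** 2))
    corr = np.corrcoef(sat, gauge)[0, 1]
    hits = np.sum((sat >= thresh) & (gauge >= thresh))
    misses = np.sum((sat < thresh) & (gauge >= thresh))
    false_alarms = np.sum((sat >= thresh) & (gauge < thresh))
    pod = hits / (hits + misses) if hits + misses else np.nan      # probability of detection
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else np.nan
    return {"bias": bias, "rmse": rmse, "corr": corr, "pod": pod, "far": far}

# Sanity check on identical series: zero error, perfect detection.
perfect = rain_metrics([0.0, 1.2, 3.4, 0.0], [0.0, 1.2, 3.4, 0.0])
```

Splitting such metrics by intensity range and season, as the study does, is what exposes behaviour like IMERG-F's advantage in the hot season that a single pooled score would hide.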
A high degree of consistency and comparability among chlorophyll algorithms is necessary to meet the goals of merging data from concurrent overlapping ocean color missions for increased coverage of the global ocean and to extend existing time series to encompass data from recently launched missions and those planned for the near future, such as PACE, OLCI, HawkEye, EnMAP, and SABIA-MAR. To accomplish these goals, we developed 65 empirical ocean color (OC) chlorophyll algorithms for 25 satellite instruments using the largest available and most globally representative database of coincident in situ chlorophyll a and remote sensing reflectances. Excellent internal consistency was achieved across these OC ‘Version-7’ algorithms, as demonstrated by a median regression slope and coefficient of determination (R²) of 0.985 and 0.859, respectively, among 903 pairwise comparisons of OC-modeled chlorophyll. SeaWiFS and MODIS-Aqua satellite-to-in situ match-up results indicated equivalent, and sometimes superior, performance to current heritage chlorophyll algorithms.
During the past forty years of ocean color research, the violet band (412 nm) has rarely been used in empirical algorithms to estimate chlorophyll concentrations in oceanic surface water. While the peak in chlorophyll-specific absorption coincides with the 443 nm band present on most ocean color sensors, the magnitude of chlorophyll-specific absorption at 412 nm can reach upwards of ~70% of that at 443 nm. Nearly one third of total chlorophyll-specific absorption between 400 and 700 nm occurs below 443 nm, suggesting that bands below 443 nm, such as the 412 nm band present on most ocean color sensors, may also be useful in detecting chlorophyll under certain conditions and assumptions. The 412 nm band is also the brightest band (that is, with the most dominant magnitude) in remotely sensed reflectances retrieved by heritage passive ocean color instruments when chlorophyll is less than ~0.1 mg m−3, which encompasses ~24% of the global ocean. To exploit this additional spectral information, we developed two new families of OC algorithms, the OC5 and OC6 algorithms, which include the 412 nm band in the maximum band ratio (MBR). By using this brightest band in MBR empirical chlorophyll algorithms, the highest possible dynamic range of MBR may be achieved in these oligotrophic areas.
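OCx-family algorithms share one functional form: a polynomial in the log of the maximum band ratio, raised back out of log space. A sketch of that form follows; the demo coefficients and band values are placeholders for illustration, not the published OC6 values:

```python
import numpy as np

def mbr_chlorophyll(rrs_blue_bands, rrs_green, coeffs):
    """Generic maximum-band-ratio (MBR) chlorophyll algorithm of the OCx family:
    chl = 10 ** (a0 + a1*x + a2*x**2 + ...), with x = log10(max blue Rrs / green Rrs).
    OC5/OC6 extend the blue candidate bands to include 412 nm."""
    x = np.log10(max(rrs_blue_bands) / rrs_green)
    return 10.0 ** np.polyval(coeffs[::-1], x)   # coeffs given low-order first: [a0, a1, ...]

# Placeholder 4th-order coefficients for illustration only -- NOT published values.
demo_coeffs = [0.33, -2.5, 1.0, -1.0, -0.5]
# Hypothetical Rrs values (sr^-1) at 412/443/490-like blue bands and a green band.
chl = mbr_chlorophyll([0.009, 0.008, 0.007], 0.003, demo_coeffs)
```

Taking the maximum over the blue candidates is what lets a single polynomial span several decades of chlorophyll: whichever blue band is brightest, and therefore least noisy, sets the ratio, which is exactly why adding the 412 nm band helps in oligotrophic water where it dominates.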
The terms oligotrophic, mesotrophic, and eutrophic are frequently used in the scientific literature to designate trophic status; however, their quantitative definitions in terms of chlorophyll levels are arbitrary. We developed a new, reproducible, bio-optically based index of trophic status based on the frequency of the brightest, maximum band in the MBR for the OC6_SEAWIFS algorithm, along with remote sensing reflectances from the entire SeaWiFS mission. This index defines oligotrophic water as chlorophyll less than ~0.1 mg m−3, eutrophic water as chlorophyll above 1.67 mg m−3, and mesotrophic water as chlorophyll between 0.1 and 1.67 mg m−3. Applying these criteria to the 40-year mean global ocean chlorophyll data set revealed that oligotrophic, mesotrophic, and eutrophic water occupy ~24%, 67%, and 9%, respectively, of the area of the global ocean on average.
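The resulting trophic classification is a simple thresholding on chlorophyll, using the two cutoffs derived in the study:

```python
def trophic_status(chl):
    """Trophic class from chlorophyll concentration (mg m^-3), using the
    thresholds derived in this study: <0.1 oligotrophic, 0.1-1.67 mesotrophic,
    >1.67 eutrophic."""
    if chl < 0.1:
        return "oligotrophic"
    if chl <= 1.67:
        return "mesotrophic"
    return "eutrophic"
```

Because the cutoffs come from a bio-optical quantity (which band is brightest in the MBR) rather than being chosen by convention, the same classification can be reproduced on any mission that carries the relevant bands.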
• 65 empirical chlorophyll algorithms for 25 ocean color satellites were developed.
• All algorithms achieved excellent internal consistency across satellite instruments.
• Additional spectral information improves chlorophyll algorithm signal-to-noise.
• Ocean color radiometry provides a reproducible bio-optical index for trophic status.