The Prairie Pothole Region of North America is characterized by millions of depressional wetlands, which provide critical habitat for globally significant populations of migratory waterfowl and other wildlife species. Due to their relatively small size and shallow depth, these wetlands are highly sensitive to climate variability and anthropogenic changes, exhibiting pronounced inter- and intra-annual inundation dynamics. Moderate-resolution satellite imagery (e.g., Landsat, Sentinel) alone cannot effectively delineate these small depressional wetlands. By integrating fine-spatial-resolution Light Detection and Ranging (LiDAR) data and multi-temporal (2009–2017) aerial images, we developed a fully automated approach to delineate wetland inundation extent at watershed scales using Google Earth Engine. Machine learning algorithms were used to classify aerial imagery, supplemented with spectral indices, to extract potential wetland inundation areas, which were further refined using LiDAR-derived landform depressions. The wetland delineation results were then compared to the U.S. Fish and Wildlife Service National Wetlands Inventory (NWI) geospatial dataset and existing global-scale surface water products to evaluate the performance of the proposed method. We tested the workflow on 26 watersheds with a total area of 16,576 km² in the Prairie Pothole Region. The results showed that the proposed method can not only delineate current wetland inundation status but also reveal wetland hydrological dynamics, such as wetland coalescence through fill-spill hydrological processes. Our automated algorithm provides a practical, reproducible, and scalable framework that can be easily adapted to delineate wetland inundation dynamics at broad geographic scales.
• A fully automated algorithm was developed to map wetland inundation dynamics.
• Multiple wetland inundation maps (1 m) were produced for the Prairie Pothole Region.
• Mapped wetlands show high accuracy when compared to existing surface water products.
• The algorithm is scalable for mapping wetland inundation at large geographic scales.
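The core refinement step described above — keeping only spectrally detected surface water that falls inside LiDAR-derived depressions — can be sketched with a minimal NumPy example. The choice of water index (NDWI here), the 0.1 threshold, and the toy arrays are illustrative assumptions, not the paper's actual Google Earth Engine implementation:

```python
import numpy as np

def delineate_inundation(ndwi, depression_mask, ndwi_threshold=0.1):
    """Refine spectrally detected water with LiDAR-derived depressions.

    ndwi            : 2-D array of a water index from aerial imagery
    depression_mask : 2-D boolean array, True inside landform depressions
    Returns a boolean inundation map: water that falls inside depressions.
    """
    water = ndwi > ndwi_threshold           # candidate inundated pixels
    return water & depression_mask          # keep only depressional water

# Toy 4x4 example: water in the top-left, depression covering the left half
ndwi = np.array([[0.4, 0.3, -0.2, -0.3],
                 [0.2, 0.2, -0.1, -0.4],
                 [-0.1, -0.2, -0.3, -0.3],
                 [-0.2, -0.2, -0.4, -0.5]])
depressions = np.zeros((4, 4), dtype=bool)
depressions[:, :2] = True
inundated = delineate_inundation(ndwi, depressions)
print(inundated.sum())  # 4 pixels: spectral water inside a depression
```

In the actual workflow the same intersection logic would run on classified image stacks per date, which is how multi-temporal inundation dynamics emerge from a single rule.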
A multitude of disturbance agents, such as wildfires, land use, and climate-driven expansion of woody shrubs, are transforming the distribution of plant functional types across Arctic–Boreal ecosystems, with significant implications for interactions and feedbacks between terrestrial ecosystems and climate in the northern high latitudes. However, because the spatial resolution of existing land cover datasets is too coarse, large-scale land cover changes in the Arctic–Boreal region (ABR) have been poorly characterized. Here, we use 31 years (1984–2014) of moderate-spatial-resolution (30 m) satellite imagery over a region spanning 4.7 × 10⁶ km² in Alaska and northwestern Canada to characterize regional-scale ABR land cover changes. We find that 13.6 ± 1.3% of the domain has changed, primarily via two major modes of transformation: (a) simultaneous disturbance-driven decreases in Evergreen Forest area (−14.7 ± 3.0% relative to 1984) and increases in Deciduous Forest area (+14.8 ± 5.2%) in the Boreal biome; and (b) climate-driven expansion of Herbaceous and Shrub vegetation (+7.4 ± 2.0%) in the Arctic biome. By using time series of 30 m imagery, we characterize dynamics in forest and shrub cover occurring at relatively short spatial scales (hundreds of meters) due to fires, harvest, and climate-induced growth that are not observable in coarse-spatial-resolution (e.g., 500 m or greater pixel size) imagery. Wildfires caused most Evergreen Forest Loss and Evergreen Forest Gain and substantial areas of Deciduous Forest Gain. Extensive shifts in the distribution of plant functional types at multiple spatial scales are consistent with observations of increased atmospheric CO2 seasonality and ecosystem productivity at northern high latitudes and signal continental-scale shifts in the structure and function of northern high-latitude ecosystems in response to climate change.
Climate change and disturbances are rapidly altering Arctic–Boreal land cover, but such changes are poorly quantified, confounding studies of northern high‐latitude change. We used multidecadal time series of 30 m satellite remote sensing to map and quantify areas of vegetation change across NASA's Arctic–Boreal Vulnerability Experiment (ABoVE), spanning much of western Canada and Alaska, and found that 13% of the domain experienced land cover change. Fire and logging drove net declines of Evergreen Forest area by 15%, while post‐disturbance recovery expanded Deciduous Forest area by 15% and climate warming expanded Shrub and Herb area by 7%.
Every year, man-made and natural disasters impact the lives of millions of people. The frequency of such disasters has been steadily increasing over the last 50 years, resulting in considerable loss of life, destruction of infrastructure, and social and economic disruption. A focused and comprehensive solution is needed encompassing all aspects, including early detection of disaster scenarios, prevention, recovery, and management, to minimize the losses. This survey paper presents a critical analysis of the existing methods and technologies relevant to a disaster scenario, such as wireless sensor networks (WSN), remote sensing techniques, artificial intelligence, IoT, UAVs, and satellite imagery, to address the issues associated with disaster monitoring, detection, and management. In emergency conditions arising from a typical disaster scenario, there is a strong likelihood that communication networks will be partially disrupted; thus, alternate networks can play a vital role in disaster detection and management. This survey focuses on the role of alternate networks and the associated technologies in maintaining connectivity in various disaster scenarios. It presents a comprehensive study of multiple disasters, such as landslides, forest fires, and earthquakes, based on the latest technologies to monitor, detect, and manage them. It examines several parameters that are necessary for disaster detection and monitoring and offers appropriate solutions. It also touches upon big data analytics for disaster management. Several techniques are explored, along with their merits and demerits. Open challenges are highlighted, and possible future directions are given.
Land cover classification of Landsat images is one of the most important applications developed from Earth observation satellites. The last four decades were marked by different developments in land cover classification methods of Landsat images. This paper reviews the developments in land cover classification methods for Landsat images from the 1970s to date and highlights key ways to optimize analysis of Landsat images in order to attain the desired results. This review suggests that the development of land cover classification methods grew alongside the launches of a new series of Landsat sensors and advancements in computer science. Most classification methods were initially developed in the 1970s and 1980s; however, many advancements in specific classifiers and algorithms have occurred in the last decade. The first methods of land cover classification to be applied to Landsat images were visual analyses in the early 1970s, followed by unsupervised and supervised pixel-based classification methods using maximum likelihood, K-means and Iterative Self-Organizing Data Analysis Technique (ISODAT) classifiers. After 1980, other methods such as sub-pixel, knowledge-based, contextual-based, object-based image analysis (OBIA) and hybrid approaches became common in land cover classification. Attaining the best classification results with Landsat images demands particular attention to the specifications of each classification method such as selecting the right training samples, choosing the appropriate segmentation scale for OBIA, pre-processing calibration, choosing the right classifier and using suitable Landsat images. All these classification methods applied on Landsat images have strengths and limitations.
Most studies have reported the superior performance of OBIA on different landscapes such as agricultural areas, forests, urban settlements and wetlands; however, OBIA faces challenges such as selecting the optimal segmentation scale, which can result in over- or under-segmentation, and the low spatial resolution of Landsat images. Other classification methods have the potential to produce accurate classification results when appropriate procedures are followed. More research is needed on the application of hybrid classifiers, as they are considered more complex methods for land cover classification.
In order to better manage anthropogenic CO2 emissions, improved methods of quantifying emissions are needed at all spatial scales, from the national level down to the facility level. Although the Orbiting Carbon Observatory 2 (OCO‐2) satellite was not designed for monitoring power plant emissions, we show that in some cases, CO2 observations from OCO‐2 can be used to quantify daily CO2 emissions from individual middle‐ to large‐sized coal power plants by fitting the data to plume model simulations. Emission estimates for U.S. power plants are within 1–17% of reported daily emission values, enabling application of the approach to international sites that lack detailed emission information. This affirms that a constellation of future CO2 imaging satellites, optimized for point sources, could monitor emissions from individual power plants to support the implementation of climate policies.
Plain Language Summary
Burning coal for electricity generation accounts for more than 40% of humanity's current global CO2 emissions. To better manage CO2 emissions, improved methods of quantifying emissions are needed at all spatial scales. Although the Orbiting Carbon Observatory 2 (OCO‐2) satellite was not designed for monitoring power plant emissions, we show that in select cases, CO2 observations from OCO‐2 can be used to quantify daily CO2 emissions from individual middle‐ to large‐sized coal power plants by fitting the data to a simple model. Demonstrating the method on U.S. power plants with reliable reported emission data enabled application of the approach to international sites that have less or lower quality information available on emissions. Space agencies around the world are currently exploring how to design satellite missions to help address climate change and support Monitoring, Reporting and Verification (MRV) of CO2 emissions for climate agreements. This work affirms that a constellation of CO2 imaging satellites, with a design optimized for point sources, could monitor CO2 emissions from individual fossil fuel burning power plants to support that objective.
Key Points
The combustion of coal for electricity generation accounts for more than 40% of global anthropogenic CO2 emissions
Orbiting Carbon Observatory 2 observations can be used to quantify CO2 emissions from individual coal power plants, in selected cases
This work suggests that a future constellation of CO2 imaging satellites could monitor fossil fuel power plant CO2 emissions to support climate policy
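Because a Gaussian plume's concentration field is linear in the emission rate Q, fitting observed column enhancements to a plume simulation reduces to a one-parameter regression. The sketch below illustrates this idea on synthetic data; the column-integrated plume form, the linear dispersion growth, and the noise level are simplified assumptions, not the study's actual plume model:

```python
import numpy as np

def gaussian_plume(x, y, Q, u, sigma_y_coeff=0.1):
    """Column enhancement (arbitrary units) of a steady Gaussian plume.

    x, y : downwind / crosswind coordinates in metres (x > 0)
    Q    : emission rate; u : wind speed
    Column-integrated form, so there is no vertical dispersion term.
    """
    sigma_y = sigma_y_coeff * x               # simple linear dispersion growth
    return (Q / (np.sqrt(2 * np.pi) * sigma_y * u)
            * np.exp(-0.5 * (y / sigma_y) ** 2))

# Synthetic "observations" along a satellite track crossing the plume
rng = np.random.default_rng(0)
x = np.full(50, 5000.0)                       # track 5 km downwind
y = np.linspace(-2000, 2000, 50)              # crosswind sample positions
Q_true, u = 800.0, 4.0                        # true emission rate, wind speed
obs = gaussian_plume(x, y, Q_true, u) + rng.normal(0, 0.005, 50)

# The enhancement is linear in Q, so the least-squares fit is closed-form
unit = gaussian_plume(x, y, 1.0, u)           # plume shape for Q = 1
Q_hat = (obs @ unit) / (unit @ unit)
print(round(Q_hat, 1))                        # close to Q_true
```

In practice the wind speed, dispersion parameters, and background CO2 all carry uncertainty, which is why the paper reports 1–17% discrepancies against reported emissions rather than an exact recovery.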
Spatiotemporal data fusion, as a feasible and low-cost solution for producing time-series satellite images with both high spatial and temporal resolution, has undergone rapid development over the past two decades, with more than one hundred spatiotemporal fusion methods developed. Accuracy assessment of fused images is crucial for users to select appropriate methods for real-world applications. However, commonly used assessment metrics do not comprehensively cover multiple aspects of spatiotemporal fused image quality, contain redundant information, and are not comparable across different study areas. To address these problems, this study proposed a novel framework to assess all-round performances of spatiotemporal fusion methods. Four accuracy metrics, including RMSE, AD, Edge, and local binary patterns (LBP), were selected as the optimal set of assessment metrics according to the assessment criteria. These metrics not only quantify the spectral and spatial information in the fused images but also greatly alleviate information redundancy, and they are computationally simple. Furthermore, inspired by Taylor diagrams, we designed an all-round performance assessment (APA) diagram to provide a visual tool for a comprehensive assessment of the performance of spatiotemporal fusion methods, supporting cross-comparison of different spatiotemporal fusion methods by considering the effects of input data and land surface characteristics. The case study at three typical sites demonstrated that the proposed framework can better differentiate the performances of six spatiotemporal fusion methods. This new framework can promote the cross-comparison of different spatiotemporal fusion methods and guide users to select suitable methods for real-world applications, as well as facilitate the establishment of a standard accuracy assessment procedure for spatiotemporal fusion methods.
• A framework was proposed to assess all-round performances of fused images.
• RMSE, AD, Edge, and LBP were selected as optimal metrics.
• A polar diagram was designed to visualize accuracy measurements with bottom lines.
• It can guide users to select appropriate fusion methods for different applications.
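Three of the selected metrics are straightforward to compute from a fused image and its reference. The sketch below shows plain-NumPy versions of RMSE, AD, and a gradient-based stand-in for the Edge metric (LBP is omitted); the paper's exact metric definitions may differ, so treat this only as an illustration:

```python
import numpy as np

def rmse(fused, ref):
    """Root-mean-square error: overall radiometric accuracy."""
    return float(np.sqrt(np.mean((fused - ref) ** 2)))

def ad(fused, ref):
    """Average difference: systematic bias (sign matters)."""
    return float(np.mean(fused - ref))

def edge_diff(fused, ref):
    """Edge metric: mean absolute difference of gradient magnitudes,
    a simple stand-in for the paper's Edge measure."""
    gy_f, gx_f = np.gradient(fused)
    gy_r, gx_r = np.gradient(ref)
    return float(np.mean(np.abs(np.hypot(gx_f, gy_f) - np.hypot(gx_r, gy_r))))

# Toy example: a fused image with a constant +0.02 reflectance bias
ref = np.tile(np.linspace(0.1, 0.5, 8), (8, 1))
fused = ref + 0.02
print(rmse(fused, ref))       # ~0.02: equals the bias for a constant offset
print(ad(fused, ref))         # ~+0.02: positive, fused is brighter than reference
print(edge_diff(fused, ref))  # ~0: a constant offset preserves edges
```

The toy case shows why a single metric is insufficient: RMSE and AD flag the radiometric bias while the edge term stays near zero, so spatial structure and spectral fidelity must be assessed together, as the APA diagram is meant to do.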
Cross-prediction-powered inference
Zrnic, Tijana; Candès, Emmanuel J.
Proceedings of the National Academy of Sciences (PNAS), 2024-04-09, Vol. 121, No. 15
Journal Article; Peer-reviewed; Open Access
While reliable data-driven decision-making hinges on high-quality labeled data, acquiring quality labels often involves laborious human annotation or slow and expensive scientific measurements. Machine learning is becoming an appealing alternative, as sophisticated predictive techniques are being used to quickly and cheaply produce large amounts of predicted labels; e.g., predicted protein structures are used to supplement experimentally derived structures, predictions of socioeconomic indicators from satellite imagery are used to supplement accurate survey data, and so on. Since predictions are imperfect and potentially biased, this practice brings into question the validity of downstream inferences. We introduce cross-prediction: a method for valid inference powered by machine learning. With a small labeled dataset and a large unlabeled dataset, cross-prediction imputes the missing labels via machine learning and applies a form of debiasing to remedy the prediction inaccuracies. The resulting inferences achieve the desired error probability and are more powerful than those that only leverage the labeled data. Closely related is the recent proposal of prediction-powered inference (A. N. Angelopoulos, S. Bates, C. Fannjiang, M. I. Jordan, T. Zrnic, 669–674 (2023)), which assumes that a good pretrained model is already available. We show that cross-prediction is consistently more powerful than an adaptation of prediction-powered inference in which a fraction of the labeled data is split off and used to train the model. Finally, we observe that cross-prediction gives more stable conclusions than its competitors; its confidence intervals typically have significantly lower variability.
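For the simplest case — estimating a mean E[Y] — the cross-prediction recipe (cross-fitted imputation plus residual debiasing) can be sketched as follows. The linear least-squares model, the fold count, and the synthetic data are illustrative choices, not the paper's exact procedure:

```python
import numpy as np

def cross_prediction_mean(X_lab, y_lab, X_unlab, k=5, seed=0):
    """Cross-prediction estimate of E[Y] (a sketch of the idea for mean
    estimation, using a plain least-squares model).

    Labeled data are split into k folds; each fold's labels are predicted
    by a model trained on the other folds, and the mean prediction on the
    unlabeled data is debiased by the average held-out residual.
    """
    n = len(y_lab)
    rng = np.random.default_rng(seed)
    folds = rng.permutation(n) % k
    pred_unlab, resid = [], []
    for j in range(k):
        train, hold = folds != j, folds == j
        # least-squares linear model with intercept, fit on k-1 folds
        A = np.c_[np.ones(train.sum()), X_lab[train]]
        beta, *_ = np.linalg.lstsq(A, y_lab[train], rcond=None)
        f = lambda X: np.c_[np.ones(len(X)), X] @ beta
        pred_unlab.append(f(X_unlab).mean())
        resid.append(y_lab[hold] - f(X_lab[hold]))
    # imputed mean on unlabeled data + debiasing term from residuals
    return float(np.mean(pred_unlab) + np.concatenate(resid).mean())

# Synthetic check: Y = 2X + noise, so the true mean of Y is 2 * E[X] = 2.0
rng = np.random.default_rng(1)
X_lab = rng.normal(1.0, 1.0, (100, 1))
y_lab = 2 * X_lab[:, 0] + rng.normal(0, 0.5, 100)
X_unlab = rng.normal(1.0, 1.0, (10000, 1))
print(round(cross_prediction_mean(X_lab, y_lab, X_unlab), 2))  # near 2.0
```

The debiasing term is what protects validity: even if the model is poor, the held-out residuals correct its systematic error, so the estimate stays centered on the true mean.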
Accurate information on flood extent and exposure is critical for disaster management in data-scarce, vulnerable regions such as Sub-Saharan Africa (SSA). However, uncertainties in flood extent affect flood exposure estimates. This study developed a framework to examine the spatiotemporal pattern of floods and to assess flood exposure using satellite images, ground-based participatory mapping of flood extent, and socio-economic data. Drawing on a case study in the White Volta basin in Western Africa, our results showed that synergetic use of multi-temporal radar and optical satellite data improved flood mapping accuracy (77% overall agreement compared with participatory mapping outputs), in comparison with existing global flood datasets (43% overall agreement for the Moderate Resolution Imaging Spectroradiometer (MODIS) Near Real-Time (NRT) Global Flood Product). Increases in flood extent were observed according to our classified product, as well as two existing global flood products. Similarly, increased flood exposure was also observed; however, its estimation remains highly uncertain and sensitive to the input dataset used. Population exposure varied greatly depending on the population dataset used, while the greatest farmland and infrastructure exposure was estimated using a composite flood map derived from three products, with lower exposure estimated from each flood product individually. The study shows that there is considerable scope to develop an accurate flood mapping system in SSA and thereby improve flood exposure assessment and develop mitigation and intervention plans.
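The sensitivity of exposure estimates to the input flood product can be illustrated with a toy composite: a union of several binary flood maps flags more area than a majority vote over the same maps. The arrays and combination rules below are illustrative, not the study's actual processing chain:

```python
import numpy as np

def composite_flood_map(products, rule="union"):
    """Combine binary flood maps from several products.

    products : list of boolean arrays of equal shape (True = flooded)
    rule     : "union" (flooded in any product) or "majority" vote
    """
    stack = np.stack(products)
    if rule == "union":
        return stack.any(axis=0)
    return stack.sum(axis=0) > len(products) / 2  # strict majority

# Three toy 1-D "products" that disagree at the flood margins
p1 = np.array([1, 1, 0, 0, 1], dtype=bool)
p2 = np.array([1, 0, 1, 0, 1], dtype=bool)
p3 = np.array([0, 0, 1, 0, 1], dtype=bool)
union = composite_flood_map([p1, p2, p3], "union")
majority = composite_flood_map([p1, p2, p3], "majority")
print(union.sum(), majority.sum())  # union floods 4 cells, majority only 3
```

Overlaying either composite with population or farmland rasters then yields exposure counts, which is why the choice of combination rule (and of each input product) propagates directly into the exposure estimate.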
Nowadays, establishing security in data transmission is essential, and it is achieved by cryptography. Encrypting still or video images demands dedicated hardware in applications such as the Internet of Things, medical and satellite imaging, applications requiring high-speed encryption, or settings where a personal computer is unavailable or cannot be used. In this paper, an image encryption algorithm based on chaos theory, named Parallel Chaotic Checksum-based Image Encryption (PCCIE), is proposed that provides a fast, efficient, and secure algorithm with a hardware perspective on the design and a parallel system structure. Using high-level synthesis, PCCIE is implemented on a Field Programmable Gate Array (FPGA). The proposed algorithm solves the FPGA internal memory limitation for encrypting large images by using small, independent local image buffers. Ultimately, a hexa-core crypto-processor with single-precision floating-point and fixed-point precision has been designed, capable of encrypting 256 × 256 and Full HD images in 2.13 and 59.52 milliseconds, respectively, at 469 and 16 frames per second.
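As a rough illustration of chaos-based image encryption in general — not of the PCCIE algorithm itself, whose checksum, parallel buffers, and FPGA design are far more involved — a logistic-map keystream XOR cipher can be sketched in a few lines of Python (all parameters below are illustrative):

```python
import numpy as np

def logistic_keystream(length, x0=0.3579, r=3.99, burn_in=100):
    """Keystream bytes from the logistic map x -> r*x*(1-x).

    A minimal chaotic generator; real designs (like PCCIE) add
    checksums, parallel local buffers, and stronger mixing stages.
    """
    x = x0
    for _ in range(burn_in):          # discard transient iterations
        x = r * x * (1 - x)
    out = np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1 - x)
        out[i] = int(x * 256) % 256   # quantize chaotic state to one byte
    return out

def xor_image(img, key):
    """Encrypt/decrypt a uint8 image by XOR with a chaotic keystream."""
    ks = logistic_keystream(img.size, x0=key).reshape(img.shape)
    return img ^ ks

# Round trip on a toy 8x8 "image": XOR with the same keystream is its own inverse
img = np.arange(64, dtype=np.uint8).reshape(8, 8)
cipher = xor_image(img, key=0.3579)
plain = xor_image(cipher, key=0.3579)
print(np.array_equal(plain, img))  # True
```

Because each pixel's keystream byte depends only on the iteration index, independent image tiles can be processed in parallel — the same property that lets a hardware design split a large image across small local buffers.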