Freely available Global Digital Elevation Models (GDEMs) are essential for many scientific and humanitarian applications. Recently, TanDEM-X 90 has been released with global coverage at 3 arc sec resolution. Its release is sure to generate keen interest as it provides an alternative to the widely used Shuttle Radar Topography Mission (SRTM) DEM, especially for flood risk management, where height errors on low-slope floodplains can become particularly significant. Here, we provide a first accuracy assessment of TanDEM-X 90 for selected floodplain sites and compare it to other popular global DEMs: SRTM and the error-reduced version of SRTM called the Multi-Error-Removed Improved-Terrain (MERIT) DEM. We characterize vertical height errors by comparing against high-resolution LiDAR DEMs for 32 floodplain locations on six continents. Results indicate that the average vertical accuracies of TanDEM-X 90 and MERIT are similar, and both are a significant improvement on SRTM. We extend our analysis by assessing vertical accuracy by landcover; our results suggest that TanDEM-X 90 is the most accurate global DEM in all landcover categories tested except short vegetation and tree-covered areas, where MERIT is demonstrably more accurate. Lastly, we present the first characterization of the spatial error structure of any TanDEM-X DEM product and find that it is similar to MERIT's, with MERIT generally having lower sill values and larger ranges than TanDEM-X 90 and SRTM. Our findings suggest that TanDEM-X 90 has the potential to become the benchmark global DEM in floodplains once errors from vegetation are carefully removed, and at this stage it should be used alongside MERIT in any flood risk application.
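The vertical accuracy assessment described above, comparing a global DEM against a LiDAR benchmark and stratifying errors by landcover, can be sketched as follows. All values, class names, and array shapes here are illustrative, not the study's actual data:

```python
import numpy as np

# Hypothetical co-located heights (m) sampled from a global DEM and a LiDAR
# benchmark, with a landcover class per sample; real analyses use full rasters.
dem = np.array([12.3, 8.1, 9.7, 15.2, 7.4, 11.0])
lidar = np.array([11.8, 8.0, 9.1, 13.9, 7.5, 10.2])
landcover = np.array(["urban", "bare", "urban", "trees", "bare", "trees"])

error = dem - lidar  # vertical error at each sample

# Overall accuracy metrics
mae = np.mean(np.abs(error))
rmse = np.sqrt(np.mean(error ** 2))

# Accuracy stratified by landcover class, as in the per-landcover comparison
by_class = {c: np.sqrt(np.mean(error[landcover == c] ** 2))
            for c in np.unique(landcover)}
print(f"MAE={mae:.2f} m, RMSE={rmse:.2f} m, per-class RMSE={by_class}")
```

Stratifying by landcover in this way is what reveals, for example, that tree-covered areas carry systematically larger positive errors in radar-derived DEMs.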
Highlights
• First vertical accuracy assessment of TanDEM-X 90 for floodplain areas
• Comparison to other freely available global DEMs
• TanDEM-X 90 accuracy correlated to landcover type
• First spatial error structure assessment of TanDEM-X 90
• TanDEM-X 90 has the potential to be the benchmark global DEM for floodplains
Abstract
Elevation data are fundamental to many applications, especially in geosciences. The latest global elevation data contains forest and building artifacts that limit its usefulness for applications that require precise terrain heights, in particular flood simulation. Here, we use machine learning to remove buildings and forests from the Copernicus Digital Elevation Model to produce, for the first time, a global map of elevation with buildings and forests removed at 1 arc second (∼30 m) grid spacing. We train our correction algorithm on a unique set of reference elevation data from 12 countries, covering a wide range of climate zones and urban extents. Hence, this approach has much wider applicability compared to previous DEMs trained on data from a single country. Our method reduces mean absolute vertical error in built-up areas from 1.61 to 1.12 m, and in forests from 5.15 to 2.88 m. The new elevation map is more accurate than existing global elevation maps and will strengthen applications and models where high quality global terrain information is required.
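A minimal sketch of the kind of learned surface-to-terrain correction described above: a regression model is trained to predict the forest/building artifact from per-pixel features, and that predicted bias is subtracted from the DSM. The features, model choice, and synthetic data here are assumptions for illustration only; the actual algorithm, predictors, and training data differ:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic training set: per-pixel features and the known DSM-minus-DTM bias
# from reference elevation data. Feature names are hypothetical.
canopy_height = rng.uniform(0, 30, 500)    # m, e.g. from a vegetation height map
built_fraction = rng.uniform(0, 1, 500)    # fraction of pixel that is built-up
X = np.column_stack([canopy_height, built_fraction])
# Assume the artifact grows with canopy height and built fraction, plus noise
bias = 0.4 * canopy_height + 3.0 * built_fraction + rng.normal(0, 0.5, 500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, bias)

# Correct a DSM pixel by subtracting the predicted artifact
dsm_height = 25.0
predicted_bias = model.predict([[12.0, 0.2]])[0]
corrected = dsm_height - predicted_bias
```

Training across reference data from many countries, as the abstract notes, is what lets such a model generalise across climate zones rather than overfitting one region's vegetation structure.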
High-resolution global flood risk maps are increasingly used to inform disaster risk planning and response, particularly in lower income countries with limited data or capacity. However, current approaches do not adequately account for spatial variation in social vulnerability, which is a key determinant of variation in outcomes for exposed populations. Here we integrate annual average exceedance probability estimates from a high-resolution fluvial flood model with gridded population and poverty data to create a global vulnerability-adjusted risk index for flooding (VARI Flood) at 90-meter resolution. The index provides estimates of relative risk within or between countries and changes how we understand the geography of risk by identifying 'hotspots' characterised by high population density and high levels of social vulnerability. This approach, which emphasises risks to human well-being, could be used as a complement to traditional population or asset-centred approaches.
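One plausible form of the hazard-exposure-vulnerability combination described above is a per-cell product of exceedance probability, population, and a vulnerability score, rescaled for relative comparison. This is a hedged sketch; the paper's exact index formulation, weighting, and normalisation may differ, and all grid values are invented:

```python
import numpy as np

# Toy 90 m grid cells: annual exceedance probability of flooding, population
# count, and a 0-1 poverty-based social vulnerability score (all illustrative)
aep = np.array([0.10, 0.01, 0.05, 0.20])
population = np.array([500, 2000, 50, 1200])
vulnerability = np.array([0.8, 0.3, 0.9, 0.6])

# Hazard x exposure x vulnerability, rescaled to [0, 1] so cells can be
# ranked relative to one another within or between countries
raw = aep * population * vulnerability
vari = raw / raw.max()
print(vari)  # the 'hotspot' cells are those near 1.0
```

A cell with moderate hazard but dense, highly vulnerable population can outrank a cell with higher hazard alone, which is precisely how such an index shifts the geography of risk relative to hazard-only maps.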
Open-access global Digital Elevation Models (DEM) have been crucial in enabling flood studies in data-sparse areas. Poor resolution (>30 m), significant vertical errors and the fact that these DEMs are over a decade old continue to hamper our ability to accurately estimate flood hazard. The limited availability of high-accuracy DEMs dictates that dated open-access global DEMs are still used extensively in flood models, particularly in data-sparse areas. Nevertheless, high-accuracy DEMs have been found to give better flood estimations, and thus can be considered a 'must-have' for any flood model. A high-accuracy open-access global DEM is not imminent, meaning that editing or stochastic simulation of existing DEM data will remain the primary means of improving flood simulation. This article provides an overview of errors in some of the most widely used DEM data sets, along with the current advances in reducing them via the creation of new DEMs, editing DEMs and stochastic simulation of DEMs. We focus on a geostatistical approach to stochastically simulate floodplain DEMs from several open-access global DEMs based on the spatial error structure. This DEM simulation approach enables an ensemble of plausible DEMs to be created, thus avoiding the spurious precision of using a single DEM and enabling the generation of probabilistic flood maps. Despite this encouraging step, an imprecise and outdated global DEM is still being used to simulate elevation. To fundamentally improve flood estimations, particularly in rapidly changing developing regions, a high-accuracy open-access global DEM is urgently needed, which in turn can be used in DEM simulation.
The Shuttle Radar Topography Mission has long been used as a source of topographic information for flood hazard models, especially in data‐sparse areas. Error corrected versions have been produced, culminating in the latest global error reduced digital elevation model (DEM)—the Multi‐Error‐Removed‐Improved‐Terrain (MERIT) DEM. This study investigates the spatial error structure of MERIT and Shuttle Radar Topography Mission, before simulating plausible versions of the DEMs using fitted semivariograms. By simulating multiple DEMs, we allow modelers to explore the impact of topographic uncertainty on hazard assessment even in data‐sparse locations where typically only one DEM is currently used. We demonstrate this for a flood model in the Mekong Delta and a catchment in Fiji using deterministic DEMs and DEM ensembles simulated using our approach. By running an ensemble of simulated DEMs we avoid the spurious precision of using a single DEM in a deterministic simulation. We conclude that using an ensemble of the MERIT DEM simulated using semivariograms by land cover class gives inundation estimates closer to a light detection and ranging‐based benchmark. This study is the first to analyze the spatial error structure of the MERIT DEM and the first to simulate DEMs and apply these to flood models at this scale. The research workflow is available via an R package called DEMsimulation.
Plain Language Summary
A lack of accurate digital elevation models (DEMs) for flood inundation modeling in data‐sparse regions means that predictions of flood inundation are subject to substantial errors. These errors have rarely been assessed due to a lack of information on the spatial structure of DEM errors. In this study, we analyze the vertical DEM error and how this error varies spatially for both the widely used Shuttle Radar Topography Mission (SRTM) DEM and an error reduced variant of SRTM called Multi‐Error‐Removed‐Improved‐Terrain (MERIT) DEM for 20 lowland locations. We then use the spatial error characteristics to simulate plausible versions of topography. By simulating many statistically plausible topographies, flood models can assess the effects of uncertain topography on predicted flood extents. We demonstrate this by using a collection of simulated DEMs in flood models for two locations. We conclude that using an ensemble of MERIT DEMs simulated using the spatial error disaggregated by land cover class gives flood estimates closest to that of a benchmark flood model. This study is of interest to others as our calculated spatial error relationships can be used to simulate floodplain topography in the MERIT/SRTM data sets through our open‐source code, allowing for probabilistic flood maps to be produced.
Key Points
Assessed vertical error and estimated semivariograms for MERIT and SRTM DEMs for 20 lowland locations
Calculated spatial error structure can be used to simulate floodplain topography in the MERIT and SRTM DEMs
Using simulated DEMs in flood models for two locations gives more realistic flood estimates compared to using a single DEM
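The semivariogram-based DEM simulation described above can be sketched as an unconditional Gaussian simulation: a spatially correlated error field is drawn from a covariance model fitted to the DEM error and added to the baseline DEM. The exponential model, its sill and range values, the grid size, and the ensemble size below are all illustrative stand-ins, not the study's fitted parameters (the actual workflow is in the DEMsimulation R package):

```python
import numpy as np

rng = np.random.default_rng(42)

# Small grid of pixel coordinates for illustration; real DEM tiles are larger
n = 10
xx, yy = np.meshgrid(np.arange(n), np.arange(n))
coords = np.column_stack([xx.ravel(), yy.ravel()])

# Exponential semivariogram parameters (assumed values; the study fits these
# per land cover class): sill = error variance, range = correlation length
sill, vrange = 2.0, 5.0
h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
cov = sill * np.exp(-3.0 * h / vrange)   # covariance = sill - semivariance

# Draw correlated error fields via Cholesky and add them to a baseline DEM
L = np.linalg.cholesky(cov + 1e-8 * np.eye(n * n))
baseline_dem = np.zeros((n, n))          # stand-in for e.g. a MERIT tile
ensemble = [baseline_dem + (L @ rng.standard_normal(n * n)).reshape(n, n)
            for _ in range(20)]
```

Running a flood model once per ensemble member, rather than once on the single baseline DEM, is what converts topographic uncertainty into a probabilistic flood map.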
Digital elevation models (DEMs) provide fundamental depictions of the three-dimensional shape of the Earth's surface and are useful to a wide range of disciplines. Ideally, DEMs record the interface between the atmosphere and the lithosphere using a discrete two-dimensional grid, with complexities introduced by the intervening hydrosphere, cryosphere, biosphere, and anthroposphere. The treatment of DEM surfaces, affected by these intervening spheres, depends on their intended use and the characteristics of the sensors that were used to create them. DEM is a general term, and more specific terms such as digital surface model (DSM) or digital terrain model (DTM) record the treatment of the intermediate surfaces. Several global DEMs generated with optical (visible and near-infrared) sensors and synthetic aperture radar (SAR), as well as single/multi-beam sonars and products of satellite altimetry, share the common characteristic of a georectified, gridded storage structure. Nevertheless, not all DEMs share the same vertical datum, not all use the same convention for the area on the ground represented by each pixel in the DEM, and some of them have variable data spacings depending on the latitude. This paper highlights the importance of knowing, understanding and reflecting on the sensor and DEM characteristics and consolidates terminology and definitions of key concepts to facilitate a common understanding among the growing community of DEM users, who do not necessarily share the same background.
Global flood models (GFMs) and earth observation (EO) play a crucial role in characterising flooding, especially in data-sparse, under-resourced regions of the world. However, validation studies are often limited to a handful of historic events and do not directly assess the ability of these products to simulate flood hazard: the probability that flooding will occur in a given location. As a result, it is difficult for stakeholders to decipher the ability of either models or observations to identify flood hazard and to make decisions to mitigate flooding. Here, we leverage flood observations from 20 years of MODIS data to compare the recorded flooding with what would be expected given the hazard simulated by a GFM. We devise an approach, Flood Expectation Per Pixel, and apply it across four large basins in Africa (Congo, Niger, Nile and Volta), representing a variety of biomes. We estimate the uncertainty in the ability of EO to capture flood events due to burned areas, cloud cover and vegetation, incorporating these uncertainty estimates when comparing to modelled hazard. We found that at lower return periods (RPs) (<20 years), the EO data record less flooding than the GFM, suggesting GFMs overpredict frequent flooding. For RPs between 50 and 100 years, GFM and EO data show greater consistency given the uncertainties we consider. For large RPs (>100 years), the EO observations show more flooding than expected given the GFM data, potentially due to data errors and non-fluvial flooding; however, there are too few observations to draw significant conclusions at these RPs. The EO record indicates that the GFM can differentiate between flood RPs. We find that EO and GFMs complement each other and thus should be used in tandem to inform strategies to mitigate floods across the hazard spectrum, from frequent to extreme flood events.
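The core comparison above can be sketched with a simple expectation: if a pixel floods at a modelled return period RP, then over an N-year observation record it is expected to flood roughly N/RP times, which can be compared against the observed flood count. This is a hedged illustration of the general idea, not the paper's Flood Expectation Per Pixel formulation, and the values are invented:

```python
import numpy as np

# Per-pixel modelled return period (years) for flooding, and the number of
# years each pixel was observed flooded in a 20-year EO record (illustrative)
years = 20
return_period = np.array([5.0, 10.0, 50.0, 200.0])
observed_floods = np.array([5, 1, 0, 1])

# Expected flood count under the model: annual probability 1/RP over N years
expected_floods = years / return_period

# Positive excess: EO records more flooding than the model expects;
# negative: the model over-predicts relative to the observations
excess = observed_floods - expected_floods
print(expected_floods, excess)
```

Aggregating such per-pixel excesses across a basin, and widening the expected range to account for EO uncertainty from clouds, burned areas and vegetation, is what allows model and observations to be compared across the whole RP spectrum rather than on a handful of events.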
Abstract
A large number of historical simulations and future climate projections are available from Global Climate Models, but these are typically of coarse resolution, which limits their effectiveness for assessing local scale changes in climate and attendant impacts. Here, we use a novel statistical downscaling model capable of replicating extreme events, the Bias Correction Constructed Analogues with Quantile mapping reordering (BCCAQ), to downscale daily precipitation, air temperature, maximum and minimum temperature, wind speed, air pressure, and relative humidity from 18 GCMs from the Coupled Model Intercomparison Project Phase 6 (CMIP6). BCCAQ is calibrated using high-resolution reference datasets and showed a good performance in removing bias from GCMs and reproducing extreme events. The globally downscaled data are available at the Centre for Environmental Data Analysis (
https://doi.org/10.5285/c107618f1db34801bb88a1e927b82317
) for the historical (1981–2014) and future (2015–2100) periods at 0.25° resolution and at daily time step across three Shared Socioeconomic Pathways (SSP2-4.5, SSP5-3.4-OS and SSP5-8.5). This new climate dataset will be useful for assessing future changes and variability in climate and for driving high-resolution impact assessment models.
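Empirical quantile mapping, the bias-correction step at the heart of methods like BCCAQ, can be sketched as follows. Note that full BCCAQ also involves constructed analogues and reordering; this shows only the quantile-mapping component, on synthetic data with an assumed wet bias in the model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily values: a high-resolution reference series (e.g. observed
# precipitation) and a biased GCM series over the same historical period
obs = rng.gamma(2.0, 3.0, 5000)
model_hist = rng.gamma(2.0, 4.0, 5000)   # same shape, inflated scale = bias

def quantile_map(x, model_hist, obs):
    """Map model values onto the observed distribution via their quantile:
    find each value's rank in the model climatology, then return the
    observed value at that same quantile."""
    q = np.searchsorted(np.sort(model_hist), x) / len(model_hist)
    return np.quantile(obs, np.clip(q, 0.0, 1.0))

corrected = quantile_map(model_hist, model_hist, obs)
# After correction, the model's distribution matches the reference, so its
# mean (and extremes) align with the observations
```

Applying the same mapping to future model output is what transfers the historical bias correction to projection periods, under the assumption that the bias structure is stationary.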
Southern Asia experiences some of the most damaging climate events in the world, with loss of life from some cyclones in the hundreds of thousands. Despite this, research on climate extremes in the region is substantially lacking compared to other parts of the world. To understand the narrative of how an extreme event in the region may change in the future, we consider Super Cyclone Amphan, which made landfall in May 2020, bringing storm surges of 2–4 m to coastlines of India and Bangladesh. Using the latest CMIP6 climate model projections, coupled with storm surge, hydrological, and socio‐economic models, we consider how the population exposure to a storm surge of Amphan's scale changes in the future. We vary future sea level rise and population changes consistent with projections out to 2100, but keep other factors constant. Both India and Bangladesh will be negatively impacted, with India showing >200% increased exposure to extreme storm surge flooding (>3 m) under a high emissions scenario and Bangladesh showing an increase in exposure of >80% for low‐level flooding (>0.1 m). It is only when we follow a low‐emission scenario, consistent with the 2°C Paris Agreement Goal, that we see no real change in Bangladesh's storm surge exposure, mainly due to the population and climate signals cancelling each other out. For India, even with this low‐emission scenario, increases in flood exposure are still substantial (>50%). While here we attribute only the storm surge flooding component of the event to climate change, we highlight that tropical cyclones are multifaceted, and damages are often an integration of physical and social components. We recommend that future climate risk assessments explicitly account for potential compounding factors.
Tropical cyclones in South Asia can be devastating, with the 2020 Super Cyclone Amphan being the costliest on record for India and Bangladesh. We consider how the population exposed to a tropical cyclone storm surge, and subsequent flooding, will change in the future given varying degrees of sea level rise. We show significant increases in exposure for both countries, but when future population movement is taken into account, the increased flood impacts are smaller in Bangladesh due to migration of people away from the coast.
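The exposure calculation underlying the percentages above amounts to overlaying a flood depth grid on a population grid and counting people in cells exceeding a depth threshold. A minimal sketch with invented grids, using the two thresholds quoted in the abstract:

```python
import numpy as np

# Toy grids: simulated storm-surge flood depth (m) and population per cell
depth = np.array([[0.0, 0.2, 1.5],
                  [3.4, 0.05, 4.1],
                  [0.0, 2.2, 0.3]])
population = np.full(depth.shape, 100)

# Exposure at the two thresholds used in the text: low-level (>0.1 m)
# and extreme (>3 m) flooding
exposed_low = population[depth > 0.1].sum()
exposed_extreme = population[depth > 3.0].sum()
print(exposed_low, exposed_extreme)
```

Re-running this count with future sea-level-rise-adjusted depth grids and projected population grids, while holding other factors constant, yields the relative exposure changes reported for each scenario.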
Flood inundation modeling across large data sparse areas has been increasing in recent years, driven by a desire to provide hazard information for a wider range of locations. The sophistication of these models has steadily advanced over the past decade due to improvements in remote sensing and modeling capability. There are now several global flood models (GFMs) that seek to simulate water surface dynamics across all rivers and floodplains regardless of data scarcity. However, flood models in data sparse areas lack river bathymetry because this cannot be observed remotely, meaning that a variety of methods for approximating river bathymetry have been developed from uniform flow or downstream hydraulic geometry theory. We argue that bathymetry estimation in these models should follow gradually varied flow theory to account for both uniform and nonuniform flows. We demonstrate that existing methods for bathymetry estimation in GFMs are only accurate for kinematic water surface profiles and are unable to simulate unbiased water surface profiles for reaches with diffusive or shallow water wave properties. The use of gradually varied flow theory to estimate bathymetry in a GFM reduced model error compared to a target water surface profile by 66% and eliminated bias due to backwater effects. For a large‐scale test case in Mozambique this reduced flood extents by 40% and floodplain storage by 79% at the 5-year return period. The wet bias associated with uniform flow derived channels could have significant implications for modeling the role floodplains play in attenuating river discharges, potentially overstating their role.
Key Points
Flood models in data sparse areas must estimate river bathymetry
Existing methods are prone to over‐prediction bias
Channel estimation based on gradually varied flow theory is substantially more accurate
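The uniform-flow baseline that the abstract critiques amounts to inverting Manning's equation for channel depth at a given bankfull discharge; gradually varied flow theory instead integrates the backwater profile along the reach. A sketch of the uniform-flow step only, for a wide rectangular channel with illustrative parameter values:

```python
def normal_depth(Q, width, slope, n_manning, tol=1e-6):
    """Solve Manning's equation for normal depth in a rectangular channel
    by bisection (the uniform-flow assumption used by existing GFM
    bathymetry estimators; GVF-based estimation relaxes this)."""
    def discharge(h):
        area = width * h
        radius = area / (width + 2 * h)   # hydraulic radius A/P
        return area * radius ** (2 / 3) * slope ** 0.5 / n_manning

    lo, hi = 1e-6, 100.0                  # bracket; discharge is monotonic in h
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if discharge(mid) < Q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: bankfull discharge 500 m^3/s, 100 m wide channel,
# bed slope 1e-4, Manning's n = 0.03 (all values hypothetical)
h = normal_depth(500.0, 100.0, 1e-4, 0.03)
```

On low-slope reaches with backwater effects, the water surface sits above this normal-depth profile, which is why uniform-flow-derived channels produce the wet bias the study reports.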