Current estimates of global flood exposure are made using datasets that distribute population counts homogeneously across large lowland floodplain areas. When intersected with simulated water depths, this results in a significant mis-estimation of the exposed population. Here, we use new highly resolved population information to show that, in reality, humans make more rational decisions about flood risk than current demographic data suggest. In the new data, populations are correctly represented as risk-averse, largely avoiding obvious flood zones. The results also show that existing demographic datasets struggle to represent concentrations of exposure, with the total exposed population instead being spread over larger areas. In this analysis we use flood hazard data from a ~90 m resolution hydrodynamic inundation model to demonstrate the impact of different population distributions on flood exposure calculations for 18 developing countries across Africa, Asia and Latin America. The results suggest that many published large-scale flood exposure estimates may require significant revision.
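The exposure calculation described above is, at its core, a raster intersection of population counts with simulated water depths. A minimal sketch with toy numbers (not the study's data or resolution) shows why a homogeneous spread inflates exposure relative to a risk-averse, concentrated distribution:

```python
# Sketch: how the choice of gridded population distribution changes
# flood exposure counts. All arrays are illustrative toy data.
import numpy as np

depth = np.array([[0.0, 0.5, 1.2],
                  [0.0, 0.0, 0.8],
                  [0.0, 0.0, 0.0]])  # simulated water depth (m) per cell

total_pop = 900.0

# Homogeneous spread: population divided evenly over all 9 cells.
pop_homog = np.full(depth.shape, total_pop / depth.size)

# "Risk-averse" distribution: same total, but concentrated in dry cells.
pop_concentrated = np.where(depth > 0, 10.0, 0.0)
pop_concentrated[depth == 0] = (total_pop - pop_concentrated.sum()) / (depth == 0).sum()

def exposed(pop, depth, threshold=0.0):
    """Population in cells whose simulated depth exceeds a threshold."""
    return float(pop[depth > threshold].sum())

print(exposed(pop_homog, depth))        # homogeneous spread
print(exposed(pop_concentrated, depth)) # concentrated, risk-averse spread
```

With the same total population, the homogeneous grid puts 300 people in wet cells versus 30 for the concentrated grid, a tenfold difference in computed exposure.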
In this paper we seek to understand the nature of flood spatial dependence over the conterminous United States. We extend an existing conditional multivariate statistical model to enable its application to this large and heterogenous region and apply it to a 40‐year data set of ~2,400 U.S. Geological Survey gauge series records to simulate 1,000 years of U.S. flooding comprising more than 63,000 individual events with realistic spatial dependence. A continental‐scale hydrodynamic model at 30 m resolution is then used to calculate the economic loss arising from each of these events. From this we are able to compute the probability that different values of U.S. annual total economic loss due to flooding are exceeded (i.e., a loss‐exceedance curve). Comparing these data to an observed flood loss‐exceedance curve for the period 1988–2017 shows a reasonable match for annual losses with probability below 10% (i.e., rarer than the 1 in 10‐year return period). This analysis suggests that there is a 1% chance of U.S. annual fluvial flood losses exceeding $78Bn in any given year, and a 0.1% chance of them exceeding $136Bn. Analysis of the set of stochastic events and losses yields new insights into the nature of flooding and flood risk in the United States. In particular, we confirm the strong relationship between flood affected area and event peak magnitude, but show considerable variability in this relationship between adjacent U.S. regions. The analysis provides a significant advance over previous national flood risk analyses as it gives the full loss‐exceedance curve instead of simply the average annual loss.
Plain Language Summary
Traditional flood risk analyses make the assumption that flow probability (the chance that a given river discharge is exceeded) does not vary within river catchments within an event. Real floods, however, do not look like this: In some places flooding is more severe than in others. Over a few tens of kilometers of river assuming the same event return period everywhere is perfectly fine, but over larger areas it breaks down. At national scales traditional risk analyses can only estimate the average annual loss. To estimate the total annual losses that might occur in more extreme flooding years the risk analysis needs to be based on more realistic spatial patterns of flooding. In this paper we use a sophisticated statistical model, based on U.S. Geological Survey river flow data, to simulate 1,000 years of spatially realistic U.S. flooding comprising more than 63,000 individual events. By calculating the damage for each event as a dollar value, we are able to estimate the probability of the United States experiencing particular levels of annual flood losses. We show that there is a 1% chance of U.S. annual fluvial flood losses exceeding $78Bn in any given year, and a 0.1% chance of them exceeding $136Bn.
Key Points
1,000 years of realistic U.S. flood patterns, comprising >63,000 individual events, are simulated using a statistical model
Monetary losses for each event are calculated using a continental hydrodynamic model at 30 m resolution
The analysis suggests that there is a 1% chance of U.S. annual fluvial flood losses exceeding $78Bn in any given year
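The loss-exceedance curve at the center of the study can be computed empirically from a set of simulated annual losses. A minimal sketch, using synthetic lognormal losses rather than the study's hydrodynamic model output:

```python
# Sketch of turning simulated annual losses into a loss-exceedance curve.
# The losses here are synthetic toy values, not the study's results.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 1,000 simulated years of annual total loss ($Bn).
annual_loss = rng.lognormal(mean=1.5, sigma=1.0, size=1000)

def exceedance_prob(losses, threshold):
    """Empirical probability that annual loss exceeds `threshold`."""
    return float((np.asarray(losses) > threshold).mean())

def loss_at_prob(losses, p):
    """Loss level with annual exceedance probability p (e.g. 0.01 -> '1-in-100')."""
    return float(np.quantile(losses, 1.0 - p))

one_in_100 = loss_at_prob(annual_loss, 0.01)
print(one_in_100, exceedance_prob(annual_loss, one_in_100))
```

The pairs (threshold, exceedance probability) traced out this way form the full loss-exceedance curve, from which quantities like the 1% and 0.1% annual losses are read off directly.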
•We assimilate satellite SAR-derived water levels with an ensemble filter (ETKF).
•We show filter localization is strictly required to avoid spurious correlations.
•We propose a novel along-network metric for filter localization in the river network.
•Joint inflow bias correction, for local filters, leads to improved forecast skill.
•Joint EO-based estimation of friction and distributed bathymetry also seems feasible.
Satellite-based (e.g., Synthetic Aperture Radar, SAR) water level observations (WLOs) of the floodplain can be sequentially assimilated into a hydrodynamic model to decrease forecast uncertainty. This has the potential to keep the forecast on track, so providing an Earth Observation (EO) based flood forecast system. However, the operational applicability of such a system for floods developed over river networks requires further testing. One of the promising techniques for assimilation in this field is the family of ensemble Kalman filters (EnKFs). These filters use a limited-size ensemble representation of the forecast error covariance matrix. This representation tends to develop spurious correlations as the forecast-assimilation cycle proceeds, which is a further complication for dealing with floods in either urban areas or river junctions in rural environments. Here we evaluate the assimilation of WLOs obtained from a sequence of real SAR overpasses (the X-band COSMO-SkyMed constellation) in a case study. We show that a direct application of a global Ensemble Transform Kalman Filter (ETKF) suffers from filter divergence caused by spurious correlations. However, a spatially-based filter localization substantially moderates the degradation of the forecast error covariance matrix, directly improving the forecast and also making it possible to further benefit from a simultaneous online inflow error estimation and correction. Additionally, we propose and evaluate a novel along-network metric for filter localization, which is physically meaningful for the flood-over-a-network problem. Using this metric, we further evaluate the simultaneous estimation of channel friction and spatially-variable channel bathymetry, for which the filter seems able to converge simultaneously to sensible values. Results also indicate that friction is a second-order effect in flood inundation models applied to gradually varied flow in large rivers. The study is not conclusive regarding whether in an operational situation the simultaneous estimation of friction and bathymetry helps the current forecast. Overall, the results indicate the feasibility of stand-alone EO-based operational flood forecasting.
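The along-network localization metric proposed above measures distance along the channel network rather than in straight-line (Euclidean) terms, so that ensemble covariance between hydraulically distant points on different branches is tapered even when they sit close together on the map. A toy sketch, with an illustrative network and a simple Gaussian taper standing in for a typical compactly supported localization function such as Gaspari-Cohn:

```python
# Sketch of along-network filter localization on a toy river network.
# Network topology, distances, and the taper length scale are assumptions.
import heapq
import math

# Nodes are gauge/cell ids; edge weights are channel distances (km).
edges = {
    "A": [("B", 5.0)],
    "B": [("A", 5.0), ("C", 4.0), ("D", 6.0)],  # junction at B
    "C": [("B", 4.0)],
    "D": [("B", 6.0)],
}

def along_network_distance(src, dst):
    """Shortest-path distance along the channel network (Dijkstra)."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            return d
        if d > dist.get(node, math.inf):
            continue
        for nbr, w in edges[node]:
            nd = d + w
            if nd < dist.get(nbr, math.inf):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return math.inf

def localization_weight(d, length_scale=8.0):
    """Gaussian taper of covariance with along-network distance."""
    return math.exp(-0.5 * (d / length_scale) ** 2)

# C and D sit on different branches: 10 km apart along the network even if
# nearly adjacent in straight-line terms, so their covariance is damped.
print(round(localization_weight(along_network_distance("C", "D")), 3))
```

Each entry of the ensemble covariance matrix would be multiplied by the corresponding weight (a Schur product), suppressing the spurious long-range correlations that cause the filter divergence described above.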
Large‐scale flood modelling approaches designed for regional to continental scales usually rely on relatively simple assumptions to represent the potentially highly complex river bathymetry at the watershed scale, based on digital elevation models (DEMs) with a resolution in the range of 25–30 m. Here, high‐resolution (1 m) LiDAR DEMs are employed to present a novel large‐scale methodology using a more realistic estimation of bathymetry, based on hydrogeomorphological GIS tools to extract water surface slope. The large‐scale 1D/2D flood model LISFLOOD‐FP is applied to validate the simulated flood levels against detailed water level data in four different watersheds in Quebec (Canada), including continuous profiles over extensive distances measured with the HydroBall technology. A GIS‐automated procedure makes it possible to obtain the average channel width required to run LISFLOOD‐FP. The GIS‐automated procedure to estimate bathymetry from LiDAR water surface data solves a hydraulic inverse problem based on the discharge at the time of acquisition of the LiDAR data. A tiling approach, allowing several small independent hydraulic simulations to cover an entire watershed, greatly improves processing time for simulating large watersheds with a 10‐m resampled LiDAR DEM. Results show significant improvements to large‐scale flood modelling at the watershed scale, with standard deviations in the range of 0.30 m and an average fit of around 90%. The main advantage of the proposed approach is that it avoids the need to collect expensive bathymetry data to efficiently and accurately simulate flood levels over extensive areas.
A novel approach based on LiDAR data shows significant improvements to large‐scale flood modelling at the watershed scale, with standard deviations in the range of 0.30 m and an average fit of around 90%. Novelties include a GIS‐automated procedure to estimate bathymetry from LiDAR water surface data using a hydraulic inverse problem, combined with a tiling approach that greatly improves processing time for simulating large watersheds with a 10‐m resampled LiDAR DEM.
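The hydraulic inverse step described above can be illustrated with Manning's equation for a wide rectangular channel: given the discharge at the time of LiDAR acquisition, the water-surface slope, and the channel width, solve for an effective depth below the observed water surface. A sketch with illustrative parameter values (not the study's exact formulation):

```python
# Sketch of a hydraulic inverse problem for bathymetry estimation.
# Assumes a wide rectangular channel (hydraulic radius ~ depth) and an
# illustrative Manning roughness; all numbers are toy values.
import math

def depth_from_discharge(Q, width, slope, n=0.035):
    """Invert Manning's equation Q = (1/n) * width * h^(5/3) * sqrt(slope)
    for the effective flow depth h (m) below the LiDAR water surface."""
    return (Q * n / (width * math.sqrt(slope))) ** (3.0 / 5.0)

# Example: 150 m^3/s at acquisition time, 60 m wide channel,
# water-surface slope of 0.0004 extracted from the LiDAR surface.
h = depth_from_discharge(Q=150.0, width=60.0, slope=0.0004)
print(round(h, 2))
```

Subtracting this depth from the LiDAR water-surface elevation gives an estimated bed elevation, which is the quantity the large-scale model needs but that conventional bathymetric surveys make expensive to obtain.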
•We assimilate satellite SAR-derived water levels with an ensemble filter (ETKF).
•We evaluate the forecast sensitivity to satellite first visit and revisit time.
•Online correction of imposed bias clearly improves the 2D flood model/DA forecast.
•Imagery obtained early in the flood has a large influence on the forecast.
•Revisit interval is most influential for early observations.
Satellite-based Synthetic Aperture Radar (SAR) has proved useful for obtaining information on flood extent, which, when intersected with a Digital Elevation Model (DEM) of the floodplain, provides water level observations that can be assimilated into a hydrodynamic model to decrease forecast uncertainty. With an increasing number of operational satellites with SAR capability, information on the relationship between satellite first visit and revisit time and forecast performance is required to optimise the operational scheduling of satellite imagery. By using an Ensemble Transform Kalman Filter (ETKF) and a synthetic analysis with the 2D hydrodynamic model LISFLOOD-FP based on a real flooding case affecting an urban area (summer 2007, Tewkesbury, Southwest UK), we evaluate the sensitivity of the forecast performance to visit parameters. We emulate a generic hydrologic–hydrodynamic modelling cascade by imposing a bias and spatiotemporal correlations on the ensemble of inflow errors entering the hydrodynamic domain. First, in agreement with previous research, estimation of and correction for this bias leads to a clear improvement in keeping the forecast on track. Second, imagery obtained early in the flood is shown to have a large influence on forecast statistics. Revisit interval is most influential for early observations. The results are promising for the future of remote sensing-based water level observations for real-time flood forecasting in complex scenarios.
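The ensemble analysis step at the heart of such a system can be illustrated with a minimal scalar example. This sketch uses the simpler stochastic (perturbed-observation) EnKF update rather than the deterministic ETKF used in the paper, and synthetic values throughout:

```python
# Minimal sketch of an ensemble analysis update for a single water-level
# state, showing how assimilating a SAR-derived water level observation
# pulls the forecast toward the observation and shrinks ensemble spread.
# This is the stochastic EnKF variant, a simpler relative of the ETKF.
import numpy as np

rng = np.random.default_rng(1)
ens = rng.normal(loc=10.0, scale=0.5, size=50)  # forecast water levels (m)
obs, obs_err = 10.6, 0.2                         # SAR-derived WLO and its std dev

# Kalman gain from the ensemble forecast variance and observation error variance.
var_f = ens.var(ddof=1)
gain = var_f / (var_f + obs_err**2)

# Perturbed-observation update applied to each ensemble member.
perturbed = obs + rng.normal(0.0, obs_err, size=ens.size)
analysis = ens + gain * (perturbed - ens)

print(ens.mean(), analysis.mean())  # analysis mean moves toward the observation
print(ens.std(), analysis.std())    # ensemble spread shrinks after assimilation
```

Assimilating an early-flood image in this way constrains the state (and any jointly estimated inflow bias) while errors are still small, which is one intuition for why early observations carry the most forecast value.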
Corporate Digital Literacy Mandates. Brown, Lisa R.; McCray, Pamela; Neal, Jeff L. International Journal of Advanced Corporate Learning, 05/2023, Volume 16, Issue 2. Journal article; peer-reviewed; open access.
Envision an employee showing up faithfully every day for work but cognitively checked out every minute (i.e., quiet quitting). This article adopts a futurist perspective to describe the adult education pedagogy of experiential learning in juxtaposition to the limitations of behaviorist employee training incentives. The authors conceptually apply Spiral Dynamics Theory (SDT) based predictive strategies to capitalize on the assumptions of intrinsic and extrinsic motivation themes among contemporary adult workers. The field of Adult and Continuing Education caters its teaching and learning to people who are 25 years of age and older. As employees, they bring to the corporate work environment a unique set of skills and life experiences that require pedagogical delivery that is innovative and motivating. Research shows that older adults are often technology averse. Therefore, scaffolding the employee's use of technology and social media as expectations of the work tasks could help improve low digital literacy and increase self-efficacy. This paper offers Spiral Dynamics Theory (SDT) as an instrument for adult training and professional development design.
Bioluminescence imaging (BLI) is becoming indispensable to the study of transgene expression during development and, in many in vivo models of disease such as cancer, for high throughput drug screening in vitro. Because reaction of d-luciferin with firefly luciferase (fLuc) produces photons of sufficiently long wavelength to permit imaging in intact animals, use of this substrate and enzyme pair has become the method of choice for performing BLI in vivo. We now show that expression of the ATP-binding cassette (ABC) family transporter ABCG2/BCRP affects BLI signal output from the substrate d-luciferin. In vitro studies show that d-luciferin is a substrate for ABCG2/BCRP but not for the MDR1 P-glycoprotein (ABCB1/Pgp), multidrug resistance protein 1 (MRP1/ABCC1), or multidrug resistance protein 2 (MRP2/ABCC2). d-Luciferin uptake within cells is shown to be modulated by ABC transporter inhibitors, including the potent and selective ABCG2/BCRP inhibitor fumitremorgin C. Images of xenografts engineered to express transgenic ABCG2/BCRP, as well as xenografts derived from the human prostate cancer cell line 22Rv1 that naturally express ABCG2/BCRP, show that ABCG2/BCRP expression and function within regions of interest substantially influence d-luciferin-dependent bioluminescent output in vivo. These findings highlight the need to consider ABCG2/BCRP effects during d-luciferin-based BLI and suggest novel high throughput methods for identifying new ABCG2/BCRP inhibitors.
We propose a machine learning‐based approach to estimate the flood defense standard (FDS) for unlabeled sites. We adopted random forest regression (RFR) to characterize the relationship between the declared FDS and 10 explanatory factors contained in publicly available data sets. We compared RFR with multiple linear regression (MLR) and demonstrated the proposed approach in the conterminous United States (CONUS) and England, respectively. The results showed the following: (a) RFR performed better than MLR, with a Nash–Sutcliffe efficiency (NSE) of 0.85 in the CONUS and 0.76 in England. Unsatisfactory performances of MLR indicated that the relationship between the FDS and explanatory factors did not obey an explicit linear function. (b) RFR revealed that river flood factors had higher importance than physical and socio‐economic factors in the FDS estimation. The proposed RFR achieved the highest performance using all factors for prediction and could not provide good predictions (NSE < 0.65) using physical or socio‐economic factors individually. (c) We estimated the FDS for all unlabeled sites in the CONUS and England. Approximately 80% and 29% of sites were identified as high or highest standard (>100‐year return period) in the CONUS and England, respectively. (d) We incorporated the estimated FDS in large‐scale flood modeling and compared the model results with official flood hazard maps in three case studies. We identified obvious overestimations in protected areas when flood defenses were not taken into account; flood defenses were successfully represented using the proposed approach.
Key Points
A machine learning‐based approach was developed for flood defense standard (FDS) estimation using publicly available datasets
This approach achieved good accuracy for FDS estimation in the conterminous United States and England
Three case studies were used to test the reliable representation of the proposed approach in large‐scale flood hazard modeling
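The Nash–Sutcliffe efficiency used above to score the FDS regressors is straightforward to compute. A minimal sketch with toy observed and predicted flood defense standards (hypothetical values, not the study's data):

```python
# Sketch of the Nash-Sutcliffe efficiency (NSE) metric used to evaluate
# the FDS regression models. Observations and predictions are toy values.
import numpy as np

def nse(observed, predicted):
    """NSE = 1 - sum((obs - pred)^2) / sum((obs - mean(obs))^2).
    1.0 is a perfect fit; <= 0 means no better than predicting the mean."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 1.0 - np.sum((observed - predicted) ** 2) / np.sum(
        (observed - observed.mean()) ** 2
    )

obs = [100.0, 50.0, 200.0, 25.0]   # declared FDS return periods (years)
pred = [110.0, 45.0, 180.0, 30.0]  # hypothetical model estimates

print(round(nse(obs, pred), 3))
```

Because NSE normalizes squared error by the variance of the observations, a single score can be compared across regions with very different FDS ranges, which is why thresholds like NSE < 0.65 serve as a region-independent cutoff for "good" predictions.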