We present a community data set of daily forcing and hydrologic response data for 671 small- to medium-sized basins across the contiguous United States (median basin size of 336 km²) that spans a very wide range of hydroclimatic conditions. Area-averaged forcing data for the period 1980-2010 was generated for three basin spatial configurations - basin mean, hydrologic response units (HRUs) and elevation bands - by mapping daily, gridded meteorological data sets to the subbasin (Daymet) and basin polygons (Daymet, Maurer and NLDAS). Daily streamflow data was compiled from the United States Geological Survey National Water Information System. The focus of this paper is to (1) present the data set for community use and (2) provide a model performance benchmark using the coupled Snow-17 snow model and the Sacramento Soil Moisture Accounting Model, calibrated using the shuffled complex evolution global optimization routine. After optimization minimizing daily root mean squared error, 90% of the basins have Nash-Sutcliffe efficiency scores greater than or equal to 0.55 for the calibration period and 34% greater than or equal to 0.8. This benchmark provides a reference level of hydrologic model performance for a commonly used model and calibration system, and highlights some regional variations in model performance. For example, basins with a more pronounced seasonal cycle generally have a negative low flow bias, while basins with a smaller seasonal cycle have a positive low flow bias. Finally, we find that data points with extreme error (defined as individual days with a high fraction of total error) are more common in arid basins with limited snow and, for a given aridity, fewer extreme error days are present as the basin snow water equivalent increases.
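The two metrics named above, the daily RMSE calibration objective and the Nash-Sutcliffe efficiency used to summarize benchmark performance, can be sketched in a few lines of Python. This is a minimal illustrative sketch; the function names are assumptions, not part of the published tool chain:

```python
from math import sqrt

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 minus the ratio of squared model
    error to squared deviation of observations from their own mean.
    1.0 is a perfect fit; 0.0 matches the mean-flow benchmark."""
    mean_obs = sum(obs) / len(obs)
    err = sum((o - s) ** 2 for s, o in zip(sim, obs))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - err / var

def rmse(sim, obs):
    """Daily root mean squared error, the calibration objective."""
    return sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))
```

A perfect simulation gives `nse == 1.0` and `rmse == 0.0`; NSE can be arbitrarily negative for simulations worse than the observed mean.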
This paper describes the first version of a stand-alone runoff routing tool, mizuRoute. The mizuRoute tool post-processes runoff outputs from any distributed hydrologic model or land surface model to produce spatially distributed streamflow at various spatial scales from headwater basins to continent-wide river systems. The tool can utilize both traditional grid-based river network and vector-based river network data. Both types of river network include river segment lines and the associated drainage basin polygons, but the vector-based river network can represent finer-scale river lines than the grid-based network. Streamflow estimates at any desired location in the river network can be easily extracted from the output of mizuRoute. The routing process is simulated as two separate steps. First, hillslope routing is performed with a gamma-distribution-based unit-hydrograph to transport runoff from a hillslope to a catchment outlet. The second step is river channel routing, which is performed with one of two routing scheme options: (1) a kinematic wave tracking (KWT) routing procedure; and (2) an impulse response function – unit-hydrograph (IRF-UH) routing procedure. The mizuRoute tool also includes scripts (python, NetCDF operators) to pre-process spatial river network data. This paper demonstrates mizuRoute's capabilities to produce spatially distributed streamflow simulations based on river networks from the United States Geological Survey (USGS) Geospatial Fabric (GF) data set in which over 54 000 river segments and their contributing areas are mapped across the contiguous United States (CONUS). A brief analysis of model parameter sensitivity is also provided. The mizuRoute tool can assist model-based water resources assessments including studies of the impacts of climate change on streamflow.
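The first routing step described above, hillslope routing with a gamma-distribution-based unit hydrograph, amounts to discretizing a gamma density into unit-hydrograph ordinates and convolving them with the runoff series. The sketch below is illustrative only; the function names, midpoint discretization, and parameterization are assumptions, not mizuRoute's actual API:

```python
from math import gamma as gamma_fn, exp

def gamma_uh(shape, timescale, n_ords, dt=1.0):
    """Discretize a gamma probability density (shape, timescale) into
    n_ords unit-hydrograph ordinates of width dt, evaluated at interval
    midpoints, then normalize so the ordinates sum to 1 (mass balance)."""
    def pdf(t):
        return (t ** (shape - 1) * exp(-t / timescale)) / (gamma_fn(shape) * timescale ** shape)
    ords = [pdf((i + 0.5) * dt) * dt for i in range(n_ords)]
    total = sum(ords)
    return [o / total for o in ords]

def route_hillslope(runoff, uh):
    """Convolve the runoff series with the unit hydrograph to obtain
    delayed outflow at the catchment outlet."""
    q = [0.0] * (len(runoff) + len(uh) - 1)
    for i, r in enumerate(runoff):
        for j, u in enumerate(uh):
            q[i + j] += r * u
    return q[:len(runoff)]
```

Routing an impulse of unit runoff simply reproduces the unit hydrograph, which is a convenient sanity check on the convolution.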
The ability to effectively manage water resources to meet present and future human and environmental needs is essential. Such an ability necessitates a comprehensive understanding of hydrologic processes that affect streamflow at a watershed scale. In the United States, water-resources management at scales ranging from local to national can benefit from a nationally consistent, process-based watershed modeling capability to provide the requisite understanding. The National Hydrologic Model (NHM) infrastructure, which was developed by the U.S. Geological Survey to support coordinated, comprehensive, and consistent hydrologic modeling at multiple scales for the conterminous United States, provides this essential capability. NHM-based applications provide information to enable more effective water-resources planning and management, fill knowledge gaps in ungaged areas, and support basic scientific inquiry. In the future, as process algorithms and data sets improve, the NHM infrastructure will continue to evolve to better support the nation's water-resources research and management needs.
•Nationally consistent, locally informed simulation of water budget components.
•Reduced initial costs for watershed studies using available models and data.
•NHM infrastructure is implemented with two model codes, potentially more in future.
•Flexible, open-source infrastructure that is extensible for research and operations.
•NHM-PRMS is available for the hydrologic community to download and use now.
Because use of high-resolution hydrologic models is becoming more widespread and estimates are made over large domains, there is a pressing need for systematic evaluation of their performance. Most evaluation efforts to date have focused on smaller basins that have been relatively undisturbed by human activity, but there is also a need to benchmark model performance more comprehensively, including basins impacted by human activities. This study benchmarks the long-term performance of two process-oriented, high-resolution, continental-scale hydrologic models that have been developed to assess water availability and risks in the United States (US): the National Water Model v2.1 application of WRF-Hydro (NWMv2.1) and the National Hydrologic Model v1.0 application of the Precipitation–Runoff Modeling System (NHMv1.0). The evaluation is performed on 5390 streamflow gages from 1983 to 2016 (∼ 33 years) at a daily time step, including both natural and human-impacted catchments, representing one of the most comprehensive evaluations over the contiguous US. Using the Kling–Gupta efficiency as the main evaluation metric, the models are compared against a climatological benchmark that accounts for seasonality. Overall, the model applications show similar performance, with better performance in minimally disturbed basins than in those impacted by human activities. Relative regional differences are also similar: the best performance is found in the Northeast, followed by the Southeast, and generally worse performance is found in the Central and West areas. For both models, about 80 % of the sites exceed the seasonal climatological benchmark. Basins that do not exceed the climatological benchmark are further scrutinized to provide model diagnostics for each application. Using the underperforming subset, both models tend to overestimate streamflow volumes in the West, which could be attributed to not accounting for human activities, such as active management.
Both models underestimate flow variability, especially the highest flows; this was more pronounced for NHMv1.0. Low flows tended to be overestimated by NWMv2.1, whereas there were both over- and underestimations for NHMv1.0, but they were less severe. Although this study focused on model diagnostics for underperforming sites based on the seasonal climatological benchmark, metrics for all sites for both model applications are openly available online.
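The Kling-Gupta efficiency used as the main evaluation metric above decomposes performance into three components: linear correlation, a variability ratio, and a bias ratio. A minimal illustrative sketch follows (the function name is an assumption; the seasonal climatological benchmark would be scored by passing a day-of-year mean series as `sim`):

```python
from math import sqrt

def kge(sim, obs):
    """Kling-Gupta efficiency: 1 minus the Euclidean distance of
    (r, alpha, beta) from the ideal point (1, 1, 1), where r is the
    correlation, alpha the ratio of standard deviations, and beta
    the ratio of means (bias)."""
    n = len(obs)
    ms, mo = sum(sim) / n, sum(obs) / n
    ss = sqrt(sum((s - ms) ** 2 for s in sim) / n)
    so = sqrt(sum((o - mo) ** 2 for o in obs) / n)
    r = sum((s - ms) * (o - mo) for s, o in zip(sim, obs)) / (n * ss * so)
    alpha, beta = ss / so, ms / mo
    return 1.0 - sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
```

A simulation that doubles every observation keeps perfect correlation but doubles both variability and bias, so its KGE drops to 1 − √2, illustrating how the metric penalizes amplitude and volume errors separately from timing.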
•Fuzzy logic was used to introduce the human factor into the frequency calculation.
•Organizational, job and personal characteristics are the qualitative model variables.
•A fuzzy frequency modifier was created.
•The model was applied in two real case studies.
•The case studies' risk results were improved due to the inclusion of the human factor.
The frequency of occurrence of an accident scenario is one of the key aspects to take into consideration in the field of risk assessment. This frequency is commonly assessed by a generic failure frequency approach. Although every data source takes into account different variables, aspects such as the human factor are not explicitly detailed, mainly because this factor is laborious to quantify. In the present work, the generic failure frequencies are modified using fuzzy logic. This theory allows the inclusion of qualitative variables that are not considered by traditional methods and helps to deal with the uncertainty involved. This methodology seems to be a suitable tool to integrate the human factor into risk assessment since it is especially oriented to rationalizing the uncertainty related to imprecision or vagueness. A fuzzy modifier has been developed in order to introduce the human factor into the failure frequency estimation.
In order to design the proposed model, it is necessary to consider the opinion of experts. Therefore, a questionnaire on the variables was designed and answered by forty international experts. To test the model, it was applied to two real case studies of chemical plants. New frequency values were obtained and, together with the consequence assessment, new iso-risk curves were plotted, allowing comparison with those resulting from a quantitative risk analysis (QRA). Since the human factor is now reflected in the failure frequency estimation, the results are more realistic and accurate, and consequently they improve the final risk assessment.
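A fuzzy frequency modifier of the kind described above can be illustrated with a toy Mamdani-style scheme: triangular membership functions classify a human-factor score as poor, average, or good, rules assign each class a multiplier, and centroid defuzzification blends them into a single factor applied to the generic failure frequency. Every membership shape, rule output, and the 0-10 score scale below is an illustrative assumption, not the published model:

```python
def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_frequency_modifier(hf_score):
    """Map a human-factor score in [0, 10] to a multiplicative frequency
    modifier via rule weighting and centroid defuzzification.
    Illustrative assumption: good conditions halve the generic
    frequency, poor conditions double it."""
    good = tri(hf_score, 5, 10, 15)   # support extends past 10 so a score of 10 is fully 'good'
    avg = tri(hf_score, 0, 5, 10)
    poor = tri(hf_score, -5, 0, 5)    # support extends below 0 so a score of 0 is fully 'poor'
    weights = {0.5: good, 1.0: avg, 2.0: poor}
    den = sum(weights.values())
    return sum(m * w for m, w in weights.items()) / den if den else 1.0

generic_frequency = 1e-5  # events per year, illustrative value
modified_frequency = generic_frequency * fuzzy_frequency_modifier(8.0)
```

Intermediate scores blend the rule outputs smoothly, which is the practical appeal of the fuzzy approach over a hard threshold.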
Summary
This paper aims to evaluate the suitability of the ECOSSE model to estimate soil heterotrophic respiration (Rh) from arable land and short rotation coppices of poplar and willow. Between 2011 and 2013, we measured Rh with automatic closed dynamic chambers on root exclusion plots at one site in the UK (willow, mixed commercial genotypes of Salix spp.) and two sites in Italy (arable and poplar, Populus × canadensis Moench, Oudemberg genotype), and compared these measured fluxes to values of Rh simulated with the ECOSSE model. Correlation coefficients (r) between modelled and measured monthly Rh data were strong and significant, ranging between 0.81 and 0.96 for all three types of vegetation. There was no significant error or bias in the model for any site. The model was able to predict seasonal trends in Rh at all three sites even though it occasionally underestimated the flux values during warm weather in spring and summer. Because of the strong correlation between the measured and modelled values, it is unlikely that underestimation of the flux is the result of missing processes in the model. Therefore, further detailed monitoring of Rh is needed to refine the model. In this research, a limited set of input data was used to simulate Rh at the three sites. Nevertheless, overall results of the model evaluation suggest that the ECOSSE model simulates soil Rh adequately under all land uses tested and that continuous and direct measurements (such as automatic chambers installed on root‐exclusion plots) are a useful tool to test model performance in simulating Rh at the site level.
Highlights
Model evaluation is crucial to predict soil carbon balance accurately.
Modelled and measured heterotrophic respiration were compared for three land uses.
The model performed well statistically for all three vegetation types.
Modelled heterotrophic respiration should be evaluated by comparison to continuous measurements.
A module to simulate glacier runoff, PRMSglacier, was added to PRMS (Precipitation Runoff Modeling System), a distributed‐parameter, physical‐process hydrological simulation code. The extension does not require extensive on‐glacier measurements or computational expense but still relies on physical principles over empirical relations as much as is feasible while maintaining model usability. PRMSglacier is validated on two basins in Alaska, the Wolverine and Gulkana Glacier basins, which have been studied since 1966 and have a substantial amount of data with which to test model performance over a long period covering a wide range of climatic and hydrologic conditions. When error in field measurements is considered, the Nash‐Sutcliffe efficiencies of streamflow are 0.87 and 0.86, the absolute bias fractions of the winter mass balance simulations are 0.10 and 0.08, and the absolute bias fractions of the summer mass balances are 0.01 and 0.03, all computed over 42 years for the Wolverine and Gulkana Glacier basins, respectively. Without taking into account measurement error, the values are still within the range achieved by the more computationally expensive codes tested over shorter time periods.
Key Points
Details of glacier runoff module addition to existing hydrological simulation code (Precipitation Runoff Modeling System (PRMS))
Module designed to work in remote areas with limited or no on‐glacier measurements
Module tested on two well‐studied glaciers and showed results comparable to other models with greater data and computation demands
The ability to apply a hydrologic model to large numbers of basins for forecasting purposes requires a quick and effective calibration strategy. This paper presents a stepwise, multiple-objective, automated procedure for hydrologic model calibration. This procedure includes the sequential calibration of a model's simulation of solar radiation (SR), potential evapotranspiration (PET), water balance, and daily runoff. The procedure uses the Shuffled Complex Evolution global search algorithm to calibrate the U.S. Geological Survey's Precipitation Runoff Modeling System in the Yampa River basin of Colorado. This process assures that intermediate states of the model (SR and PET on a monthly mean basis), as well as the water balance and components of the daily hydrograph, are simulated consistently with measured values.
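The stepwise structure described above can be sketched as a loop in which each process's parameters are optimized in turn while results from earlier steps are held fixed. The random search below is a deliberately simple stand-in for the Shuffled Complex Evolution algorithm, and all names are illustrative:

```python
import random

def random_search(objective, bounds, n_iter=3000, seed=0):
    """Toy global search standing in for Shuffled Complex Evolution:
    sample parameter sets uniformly within bounds, keep the best."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(n_iter):
        p = [rng.uniform(lo, hi) for lo, hi in bounds]
        err = objective(p)
        if err < best_err:
            best, best_err = p, err
    return best

def stepwise_calibrate(steps):
    """Calibrate each process in sequence (e.g. SR, then PET, then water
    balance, then daily runoff); parameters fixed by earlier steps are
    passed into the later objectives."""
    fixed = []
    for objective, bounds in steps:
        best = random_search(lambda p: objective(fixed, p), bounds)
        fixed.extend(best)
    return fixed
```

The key design point mirrored from the paper is that later objectives see earlier results as constants, so intermediate model states stay consistent with their own measured targets rather than being free to compensate for daily-runoff errors.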
•Monte Carlo simulation was used to introduce the human factor into the frequency calculation.
•The model includes organizational, job and personal characteristics factors.
•The model was applied in two real chemical plants as case studies.
•The final risk assessment was more conservative and realistic.
The frequency of occurrence of an accident is a key aspect in the risk assessment field. Variables such as the human factor (HF), which is a major cause of undesired events in process industries, are usually not considered explicitly, mainly due to the uncertainty generated by the lack of knowledge and the complexity associated with it.
In this work, failure frequencies are modified through Monte Carlo (MC) simulation, including the uncertainty generated by HF. MC is one of the most commonly used approaches for uncertainty assessment, based on probability distribution functions that represent all the variables included in the model.
This technique has also proved to be very useful in the risk assessment field. The model takes into account the uncertainty and variability generated by several HF variables.
In order to test the model, it has been applied to two real case studies, obtaining new frequency values for the different scenarios. Together with the consequence assessment, new iso-risk curves were plotted. Since the uncertainty generated by the HF has now been taken into account through MC simulation, these new values are more realistic and accurate. As a result, an improvement of the final risk assessment is achieved.
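The Monte Carlo modification described above can be sketched as repeated sampling of multiplicative human-factor modifiers applied to a generic failure frequency, summarized by a mean and percentile band. The triangular distributions, the multiplicative structure, and the summary statistics below are illustrative assumptions, not the published model:

```python
import random

def mc_frequency(generic_freq, hf_dists, n=100_000, seed=42):
    """Propagate human-factor uncertainty into the failure frequency:
    each HF variable contributes a multiplicative modifier drawn from a
    triangular distribution (low, mode, high); the product rescales the
    generic failure frequency on every Monte Carlo draw."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        modifier = 1.0
        for low, mode, high in hf_dists:
            modifier *= rng.triangular(low, high, mode)
        samples.append(generic_freq * modifier)
    samples.sort()
    return {
        "mean": sum(samples) / n,
        "p05": samples[int(0.05 * n)],
        "p95": samples[int(0.95 * n)],
    }
```

Rather than a single modified frequency, the output is a distribution, so the iso-risk curves can be drawn for a chosen confidence level instead of a point estimate.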