• System recovery under multihazards modeled using a series of semi-Markov processes.
• Model considers inter-event dependencies during the system recovery process.
• Multihazard resilience of a residential building in Charleston, SC, is studied.
• Considering inter-event dependencies leads to lesser-predicted resilience.
• Reduction in resilience depends on intensities and arrival times of hazards.
Civil infrastructure systems are subjected to multiple hazards, both natural and anthropogenic, that disrupt their function or the level of service they offer. Estimating the functional recovery of these systems (i.e., how soon normal operations will be restored) under repeated hazard events, while accounting for inter-event dependencies, is an important problem in multihazard infrastructure resilience. However, this problem has received limited attention in the literature. This paper proposes a model based on a series of semi-Markov processes to capture the inter-event dependencies in infrastructure recovery under successive hazard events. Recovery after each new hazard event is represented by a unique semi-Markov process that models the reduced recovery rates and the increased recovery times caused by the system's incomplete recovery from the preceding event. Two novel formulations of inter-event dependency modeling are proposed and discussed: Maximal Effects Dependency, which considers the worst impact of two successive hazard events, and Cumulative Effects Dependency, which considers their aggregated impacts. The model is demonstrated through two applications: a three-state system subjected to deterministic and random occurrences of identical hazard events, and the multihazard resilience of a building in Charleston, SC, under earthquake and hurricane hazards. Results indicate that considering inter-event dependencies in recovery modeling can lead to lesser-predicted resilience, thereby affecting resilience-based decision-making.
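To make the recovery model above concrete, the following minimal Python sketch simulates a three-state semi-Markov recovery process under a sequence of hazard arrivals. The sojourn-time distributions, the 1.5x slowdown factor, and the state definitions are hypothetical placeholders, not the paper's calibrated formulations; the compounding slowdown after incomplete recovery only loosely mimics the inter-event dependency idea.

```python
# Minimal sketch of a three-state semi-Markov recovery process
# (states: 0 = failed, 1 = partially functional, 2 = fully functional).
# All distributions and parameters below are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Lognormal sojourn times (median days, log-std) for each upward transition.
SOJOURN = {(0, 1): (30.0, 0.5), (1, 2): (60.0, 0.5)}

def sample_sojourn(i, j, slowdown=1.0):
    """Time spent in state i before the i -> j repair transition; slowdown > 1
    mimics the reduced recovery rates after an additional hazard event."""
    median, beta = SOJOURN[(i, j)]
    return slowdown * median * rng.lognormal(0.0, beta)

def recovery_trajectory(t_hazards, horizon=365.0):
    """Piecewise-constant functionality under a sequence of hazard arrival
    times. Each event restarts recovery as a new semi-Markov process,
    slowed if the previous recovery was incomplete."""
    t, state, slowdown = 0.0, 2, 1.0
    times, states = [0.0], [2]
    for t_event in sorted(t_hazards) + [np.inf]:
        while state < 2:
            dt = sample_sojourn(state, state + 1, slowdown)
            if t + dt > min(t_event, horizon):
                break
            t, state = t + dt, state + 1
            times.append(t); states.append(state)
        if t_event >= horizon:
            break
        # New hazard: drop to the failed state; incomplete recovery compounds
        # the slowdown (a crude stand-in for the inter-event dependency).
        slowdown = 1.0 if state == 2 else 1.5 * slowdown
        t, state = t_event, 0
        times.append(t); states.append(state)
    return np.array(times), np.array(states)

# Resilience as time-averaged normalized functionality over the horizon.
times, states = recovery_trajectory(t_hazards=[20.0, 100.0])
q = states / 2.0
resilience = np.sum(q * np.diff(np.append(times, 365.0))) / 365.0
print(f"resilience ~ {resilience:.3f}")
```

Comparing this trajectory against one with the slowdown disabled reproduces, in miniature, the paper's qualitative finding that ignoring inter-event dependencies overpredicts resilience.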
Multivariate Bayesian inference can bring significant benefits to seismic hazard analysis: its multivariate nature enables computing scalar and vector hazard without making any approximations; correlations between intensity measures are implicitly modeled, permitting direct simulation of ground motion selection tools such as the conditional mean spectrum and the generalized conditioning intensity measure; and its updating feature enables seamless integration of new ground motion data into the hazard results. In this paper, we first develop a multivariate Bayesian ground motion model using the NGA-West2 database. The model's functional form accounts for fault-type, magnitude, and distance dependencies, as well as linear and rock-intensity-dependent site response. We perform Bayesian inference using a hybrid Markov chain Monte Carlo sampler consisting of a Gibbs step and a multilevel Metropolis–Hastings step. We then perform several checks on the model to ensure that it is unbiased. Finally, we illustrate the merits of this multivariate Bayesian analysis through practical and contemporary examples, which include: updating the ground motion model with data recorded in the last four years and not part of the NGA-West2 database; computing scalar and vector seismic hazard using the un-updated and updated ground motion models for Los Angeles, CA; and simulating the conditional mean spectrum under scalar and vector intensity measure conditioning while accounting for different sources of aleatoric and epistemic uncertainty.
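As a toy illustration of the hybrid sampling idea described above, the sketch below runs Metropolis–Hastings-within-Gibbs on a deliberately simplified univariate regression, ln Sa = c0 + c1 M + c2 ln R + eps. The functional form, Jeffreys prior, synthetic data, and proposal scale are all hypothetical stand-ins; the paper's actual model is multivariate, fit to NGA-West2, and uses a multilevel MH step not reproduced here.

```python
# Toy MH-within-Gibbs sampler for a ground-motion-model-like regression:
# ln Sa = c0 + c1*M + c2*ln R + eps, eps ~ N(0, sigma^2). Illustrative only.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "records": magnitudes, distances, and observed ln Sa.
n = 200
M = rng.uniform(4.5, 7.5, n)
R = rng.uniform(5.0, 150.0, n)
X = np.column_stack([np.ones(n), M, np.log(R)])
true_c = np.array([1.0, 0.8, -1.1])
y = X @ true_c + rng.normal(0.0, 0.6, n)

def log_likelihood(c, sig2):
    r = y - X @ c
    return -0.5 * n * np.log(sig2) - 0.5 * r @ r / sig2

c, sig2, samples = np.zeros(3), 1.0, []
for it in range(20_000):
    # Gibbs step: with a Jeffreys prior, sigma^2 | c is inverse-gamma(n/2, r.r/2).
    r = y - X @ c
    sig2 = (0.5 * r @ r) / rng.gamma(0.5 * n, 1.0)
    # Metropolis-Hastings step: random-walk proposal on the coefficients.
    c_prop = c + rng.normal(0.0, 0.02, 3)
    if np.log(rng.uniform()) < log_likelihood(c_prop, sig2) - log_likelihood(c, sig2):
        c = c_prop
    samples.append((c.copy(), sig2))

post_c = np.mean([s[0] for s in samples[5000:]], axis=0)  # discard burn-in
print("posterior mean coefficients:", np.round(post_c, 2))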
Tristructural isotropic (TRISO)-coated particle fuel is a robust nuclear fuel, and determining its reliability is critical for the success of advanced nuclear technologies. However, TRISO failure probabilities are small and the associated computational models are expensive. We used coupled active learning, multifidelity modeling, and subset simulation to estimate the failure probabilities of TRISO fuels using several 1D and 2D models. With multifidelity modeling, we replaced expensive high-fidelity (HF) model evaluations with information fusion from two low-fidelity (LF) models. For the 1D TRISO models, we considered three multifidelity modeling strategies: Kriging only, Kriging LF prediction plus Kriging correction, and deep neural network (DNN) LF prediction plus Kriging correction. While the results across these strategies compared satisfactorily, the strategies employing information fusion from two LF models called the HF model least often. Next, for the 2D TRISO model, we considered two multifidelity modeling strategies: DNN LF prediction plus Kriging correction (data-driven) and 1D TRISO LF prediction plus Kriging correction (physics-based). The physics-based strategy, as expected, consistently required the fewest calls to the HF model. However, the data-driven strategy had a lower overall simulation time, since DNN predictions are nearly instantaneous whereas the 1D TRISO model requires non-negligible simulation time.
• TRISO, a robust nuclear fuel, is associated with small failure probabilities.
• Active learning with multifidelity modeling to efficiently estimate TRISO failure.
• Multifidelity information fusion from two low-fidelity models most efficient.
• Physics-based multifidelity strategy reduces calls to high-fidelity model.
• Data-driven multifidelity strategy has less overall simulation time.
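The "LF prediction plus Kriging correction" strategy above can be sketched in a few lines: a cheap low-fidelity model provides the trend, and a Gaussian process (Kriging) model learns the HF-LF discrepancy from a handful of expensive evaluations. The toy one-dimensional HF/LF functions below are illustrative stand-ins for the 1D/2D TRISO models, and the scikit-learn kernel settings are assumptions, not the paper's configuration.

```python
# Sketch of the "LF prediction plus Kriging correction" multifidelity strategy.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def model_hf(x):      # "expensive" high-fidelity model (toy stand-in)
    return np.sin(3.0 * x) + 0.4 * x**2

def model_lf(x):      # "cheap" low-fidelity model: a biased approximation
    return 0.9 * np.sin(3.0 * x)

# A few HF evaluations train the correction delta(x) = HF(x) - LF(x).
x_train = np.linspace(-2.0, 2.0, 8).reshape(-1, 1)
delta = model_hf(x_train.ravel()) - model_lf(x_train.ravel())

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), alpha=1e-8,
                              normalize_y=True)
gp.fit(x_train, delta)

def model_mf(x):
    """Multifidelity prediction: LF trend plus Kriging-inferred correction.
    The returned GP std is what an active-learning loop would monitor to
    decide when another HF call is worthwhile."""
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    mu, std = gp.predict(x, return_std=True)
    return model_lf(x.ravel()) + mu, std

x_test = np.linspace(-2.0, 2.0, 5)
pred, std = model_mf(x_test)
print(np.round(pred - model_hf(x_test), 4))   # small discrepancies
print(np.round(std, 4))                       # GP uncertainty
```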
While multifidelity modeling provides a cost-effective way to conduct uncertainty quantification with computationally expensive models, much greater efficiency can be achieved by adaptively deciding the number of required high-fidelity (HF) simulations, depending on the type and complexity of the problem and the desired accuracy of the results. We propose a framework for active learning with multifidelity modeling that emphasizes the efficient estimation of rare events. Our framework works by fusing a low-fidelity (LF) prediction with an HF-inferred correction, filtering the corrected LF prediction to decide whether to call the HF model, and, for enhanced subsequent accuracy, adapting the correction to the LF prediction after every HF model call. The framework makes no assumptions about the LF model type or its correlation with the HF model. In addition, for improved robustness when estimating smaller failure probabilities, we propose dynamic active learning functions that decide when to call the HF model. We demonstrate our framework on several academic case studies (including some high-dimensional problems) and two finite element model case studies: estimating Navier-Stokes velocities using the Stokes approximation, and estimating stresses in a transversely isotropic model subjected to displacements via a coarsely meshed isotropic model. Across these case studies, the proposed framework not only estimated the failure probabilities accurately but also required only a small fraction of the HF model calls needed by either Monte Carlo or a standard variance reduction method.
• Low-fidelity models used to estimate rare events with few high-fidelity model calls.
• Active learning using Gaussian Process adaptively decides HF model calls.
• Flexibility over LF model choice: reduced physics/DOFs or poorly trained surrogate.
• Active learning and Subset Simulation variance reduction method are coupled.
• Example demonstrations include Navier-Stokes and solid mechanics problems.
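A minimal sketch of the filtering step is given below: a candidate sample is classified with the corrected LF prediction whenever the Gaussian process is confident about the sign of the limit-state function, and the HF model is called only otherwise. The static U-function threshold of 2.0 is the common AK-MCS-style choice, not the paper's dynamic learning functions, and the linear limit state and fixed GP standard deviation are hypothetical stand-ins.

```python
# Sketch of the active-learning filter that decides when to call the HF model.
import numpy as np

def classify_or_call_hf(mu, std, hf_model, x, u_threshold=2.0):
    """mu, std: corrected-LF mean and GP std of the limit state g(x).
    Returns (g_value, called_hf); failure is g < 0."""
    u = np.abs(mu) / max(std, 1e-12)   # confidence that sign(mu) is correct
    if u >= u_threshold:
        return mu, False               # GP is confident: accept the prediction
    g = hf_model(x)                    # uncertain near g = 0: pay for an HF call
    # ...here one would retrain/adapt the GP correction with (x, g)...
    return g, True

# Hypothetical usage inside a sampling loop (Monte Carlo or subset simulation):
rng = np.random.default_rng(2)
hf_calls, failures, n = 0, 0, 10_000
for _ in range(n):
    x = rng.normal(size=2)
    mu, std = 4.0 - x.sum(), 0.05      # stand-ins for GP outputs
    g, called = classify_or_call_hf(mu, std, lambda xx: 4.0 - xx.sum(), x)
    hf_calls += called
    failures += g < 0.0
print(f"P_f ~ {failures / n:.4f} with {hf_calls} HF calls out of {n} samples")
```

Because the filter only triggers near the limit state, the HF call count stays a tiny fraction of the sample count, which is the efficiency mechanism the abstract describes.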
Nonlinear site response modeling is a crucial aspect of Probabilistic Seismic Hazard Analysis. Site amplification models routinely rely on a rock intensity measure to characterize the strength of the bedrock motion. However, the adequacy of such intensity measures for predicting amplifications across the oscillator period range has not been investigated in the literature. This paper analyzes the adequacy of rock intensity measures using state-of-the-art criteria established in Performance-Based Earthquake Engineering and techniques from Information Theory. The efficiency and the sufficiency of several rock intensity measures are assessed. It was found that spectral accelerations at bedrock at short periods are usually adequate for predicting amplifications across the period range. This supports the current practice of using Peak Ground Acceleration in Ground Motion Models. However, for extremely soft sites, which exhibit nonlinear effects well into the long-period range, it is better practice to ensure that the amplification factor and the spectral acceleration share the same oscillator period. Finally, for predicting the peak shear strain (an important parameter that controls the nonlinearity of site response), Peak Ground Velocity is generally adequate, and this conclusion is in line with the commonly used definitions of proxy shear strains.
• Intensity measures for site amplification assessed using sufficiency and efficiency.
• Stiff sites: peak ground acceleration adequate for amplification at several periods.
• Soft sites: better for amplification and spectral acceleration to share same period.
• Peak ground velocity best predictor of peak shear strain across considered profiles.
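The efficiency and sufficiency criteria referenced above can be checked with a few lines of code: efficiency is measured by the dispersion of the residuals of ln(amplification) regressed on ln(IM), and sufficiency by the statistical significance of any residual trend with a causal parameter such as magnitude. The synthetic data below are placeholders, and the paper's information-theoretic metrics are not reproduced here.

```python
# Sketch of standard PBEE efficiency/sufficiency checks for a candidate IM.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 300
ln_im = rng.normal(-1.0, 0.8, n)                      # e.g., ln PGA on rock
mag = rng.uniform(5.0, 7.5, n)
ln_af = 0.3 - 0.25 * ln_im + rng.normal(0, 0.3, n)    # ln amplification factor

# Efficiency: smaller residual dispersion means a more efficient IM.
fit = stats.linregress(ln_im, ln_af)
residuals = ln_af - (fit.intercept + fit.slope * ln_im)
print(f"efficiency (residual std): {residuals.std(ddof=2):.3f}")

# Sufficiency: regress the residuals on magnitude; a large p-value on the
# slope suggests the IM already carries the magnitude information.
fit_m = stats.linregress(mag, residuals)
print(f"sufficiency w.r.t. magnitude: p-value = {fit_m.pvalue:.3f}")
```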
Statistical nuclear fuel failure analysis is critical for the design and development of advanced reactor technologies. Although Monte Carlo Sampling (MCS) is a standard method of statistical failure analysis for fuels, the low failure probabilities of some advanced fuel forms and the correspondingly large number of required model evaluations limit its application to low-fidelity (e.g., 1-D) fuel models. In this paper, we present four other statistical methods for fuel failure analysis in Bison, considering tri-structural isotropic (TRISO)-coated particle fuel as a case study. The statistical methods considered are Latin hypercube sampling (LHS), adaptive importance sampling (AIS), subset simulation (SS), and the Weibull theory. Using these methods, we analyzed both 1-D and 2-D representations of TRISO models to compute failure probabilities and the distributions of fuel properties that result in failures. The results of these methods compare well across all TRISO models considered. Overall, SS and the Weibull theory were deemed the most efficient and can be applied to both 1-D and 2-D TRISO models to compute failure probabilities. Moreover, since SS also characterizes the distribution of parameters that cause TRISO failures, and can consider failure modes not described by the Weibull criterion, it may be preferred over the other methods. Finally, a discussion on the efficacy of different statistical methods of assessing nuclear fuel safety is provided.
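As a pointer to how the Weibull theory yields failure probabilities without repeated limit-state model evaluations, the sketch below applies the weakest-link form P_f = 1 - exp(-(sigma/sigma_0)^m) to sampled layer stresses. The modulus, characteristic strength, and lognormal stress distribution are hypothetical placeholders, not evaluated TRISO or Bison values.

```python
# Sketch of a Weibull (weakest-link) failure estimate for a coating layer.
import numpy as np

def weibull_pf(sigma_mpa, m=6.0, sigma0_mpa=770.0):
    """Failure probability of one layer at tensile stress sigma (MPa);
    compressive stresses are taken as non-damaging (P_f = 0)."""
    sigma = np.asarray(sigma_mpa, dtype=float)
    return np.where(sigma > 0.0, 1.0 - np.exp(-(sigma / sigma0_mpa) ** m), 0.0)

# Particle-batch estimate: sample layer stresses from a fuel-performance
# surrogate (here, a lognormal stand-in) and average the per-particle
# failure probabilities.
rng = np.random.default_rng(4)
stresses = rng.lognormal(mean=np.log(150.0), sigma=0.4, size=100_000)
print(f"batch P_f ~ {weibull_pf(stresses).mean():.2e}")
```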
Seismic fragility functions can be evaluated using the cloud analysis method with linear regression, which makes three fundamental assumptions about the relation between structural response and seismic intensity: a log-linear median relationship, a constant standard deviation, and Gaussian-distributed errors. While cloud analysis with linear regression is a popular method, the degree to which these individual and compounded assumptions affect the fragility and the risk of mid-rise buildings needs to be systematically studied. This paper conducts such a study considering three building archetypes that make up the bulk of the building stock: an RC moment frame, a steel moment frame, and a wood shear wall. Gaussian kernel methods are employed to capture the data-driven variations of the median structural response, the standard deviation, and the residual distributions with the intensity level. With reference to the Gaussian kernels approach, it is found that while the linear regression assumptions may not affect the fragility functions of lower damage states, this conclusion does not hold for the higher damage states (such as the Complete state). In addition, the effects of the linear regression assumptions on seismic risk are evaluated. For predicting the demand hazard, the linear regression assumptions can impact the computed risk at larger structural response values. However, for predicting the loss hazard with downtime as the decision variable, linear regression can be considered adequate for all practical purposes.
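The contrast studied above can be sketched as follows: a cloud-analysis fragility built on the three linear-regression assumptions, next to a Gaussian-kernel (Nadaraya-Watson) estimate of the median demand that relaxes the log-linear assumption. The synthetic cloud data, bandwidth, and drift capacity are illustrative assumptions, not the archetype results from the paper.

```python
# Cloud-analysis fragility vs. a Gaussian-kernel median demand estimate.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
n = 200
ln_im = rng.normal(-1.5, 0.7, n)                     # ln Sa(T1)
ln_edp = 0.5 + 1.1 * ln_im + rng.normal(0, 0.35, n)  # ln peak story drift

# Cloud analysis: ln EDP = a + b ln IM + eps, eps ~ N(0, beta^2).
b, a = np.polyfit(ln_im, ln_edp, 1)
beta = np.std(ln_edp - (a + b * ln_im), ddof=2)

def fragility_linear(im, ln_capacity):
    """P(EDP > capacity | IM = im) under the three regression assumptions."""
    return norm.sf((ln_capacity - (a + b * np.log(im))) / beta)

def median_kernel(ln_im_q, bandwidth=0.2):
    """Nadaraya-Watson kernel estimate of E[ln EDP | ln IM]; this relaxes the
    log-linear-median assumption (kernel quantiles can relax the others)."""
    w = norm.pdf((ln_im - ln_im_q) / bandwidth)
    return np.sum(w * ln_edp) / np.sum(w)

im = 0.4                                   # Sa in g (illustrative)
print(f"linear fragility: {fragility_linear(im, np.log(0.02)):.3f}")
print(f"kernel median drift at Sa={im}: {np.exp(median_kernel(np.log(im))):.4f}")
```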
The ability of tri-structural isotropic (TRISO) fuel to contain fission products is largely dictated by the quality of the manufacturing process, since most of the fission product release is expected to occur due to coating layer failure in a small number of particles containing defects. The Bison fuel performance code has capabilities to predict failure in individual particles, accounting for the presence of defects, and to apply statistical analysis methods to compute the probability of failure in a set of fuel particles. Bison has recently undergone significant development, both to improve its physical representations of fuel particle behavior and to improve the efficiency of its statistical failure calculations. Physical model improvements include new capabilities to account for the pressure generated by fission gases on inner pyrolytic carbon (IPyC) crack surfaces and to use local material coordinate orientations to accurately incorporate the anisotropy of the material properties in aspherical particles. To improve statistical modeling efficiency, a direct integration approach, which directly integrates the failure probability function over the statistically varying parameters, has been developed. The direct integration approach is much more efficient than the commonly employed Monte Carlo (MC) schemes and allows Bison to directly run high-dimensional fuel performance models, which improves the accuracy of failure probability calculations. A set of benchmark problems is considered here to compare the MC and direct integration approaches, and a statistical failure analysis of compacts from the Advanced Gas Reactor (AGR)-2 experiments is performed using the direct integration approach.
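To convey the direct integration idea in its simplest form, the sketch below integrates a conditional failure probability against the distribution of a single statistically varying parameter using deterministic quadrature, alongside a Monte Carlo reference. The logistic conditional-failure curve and the kernel-radius distribution are hypothetical one-dimensional stand-ins for a Bison fuel performance model; the actual approach operates over higher-dimensional parameter spaces.

```python
# Direct integration of a conditional failure probability vs. Monte Carlo.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def pf_given_kernel_radius(r_um):
    """Hypothetical conditional failure probability vs. kernel radius (um),
    standing in for a per-particle fuel-performance evaluation."""
    return 1.0 / (1.0 + np.exp(-(r_um - 280.0) / 5.0))

mu, sd = 250.0, 6.0                       # kernel radius distribution (um)

# Direct integration: P_f = integral of P_f(r) * pdf(r) dr.
pf_di, _ = quad(lambda r: pf_given_kernel_radius(r) * norm.pdf(r, mu, sd),
                mu - 8 * sd, mu + 8 * sd)

# Monte Carlo reference: many more "model evaluations" for similar accuracy.
rng = np.random.default_rng(6)
pf_mc = pf_given_kernel_radius(rng.normal(mu, sd, 1_000_000)).mean()

print(f"direct integration: {pf_di:.3e}")
print(f"Monte Carlo:        {pf_mc:.3e}")
```

The quadrature needs only a handful of integrand evaluations to resolve a small P_f, whereas the MC estimate requires orders of magnitude more samples, which is the efficiency argument made in the abstract.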