Slow slip events (SSEs) have been documented in subduction zones worldwide, yet their implications for future earthquake occurrence are not well understood. Here we develop a relatively simple, simulation‐based method for estimating the probability of megathrust earthquakes following tectonic events that induce transient stress perturbations. The method is applied to the locked southern Hikurangi megathrust (New Zealand), which is surrounded on all sides by the 2016 Kaikoura earthquake and SSEs. Our models indicate that the probability of a M≥7.8 earthquake over the year after the Kaikoura earthquake increases by 1.3–18 times relative to the pre‐Kaikoura probability, and that the absolute probability is in the range of 0.6–7%. We find that the probability of a large earthquake is mainly controlled by the ratio of the total stressing rate induced by all nearby tectonic sources to the mean stress drop of earthquakes. Our method can be applied to evaluate the potential for triggering a megathrust earthquake following SSEs in other subduction zones.
Plain Language Summary
Over the last two decades, slow slip events, which are earthquake‐free slip episodes lasting days to months, have been documented in subduction zones worldwide, yet their implications for future earthquake occurrence are not well understood. In this study, we develop a model for estimating the probability of large subduction‐zone earthquakes following slow slip events and nearby earthquakes. The method has been applied to the southern Hikurangi subduction fault, New Zealand, which is surrounded on all sides by the recent magnitude 7.8 Kaikoura earthquake and triggered slow slip events further north. Our models, which account for uncertainties in the input parameters, indicate that the probability of a magnitude 7.8 or larger earthquake over the year after the Kaikoura earthquake increases by 1.3–18 times relative to the pre‐Kaikoura probability, and the absolute probability is in the range of 0.6–7%, which is lower than what might have been anticipated. The results also provide new insights into how the probability of a large earthquake is affected by nearby slow slip events. The developed approach can be applied to evaluate the potential for triggering a large earthquake following slow slip events in other subduction zones.
Key Points
Simple physical model for estimating the probability of large earthquakes following slow slip events (SSEs) is developed
The model is applied to the locked southern Hikurangi subduction zone encircled by the 2016 Kaikoura earthquake and widespread SSEs
Earthquake probability following SSEs may be estimated from the ratio of the stressing rate to the mean stress drop of earthquakes
Despite a lack of reliable deterministic earthquake precursors, seismologists have significant predictive information about earthquake activity from an increasingly accurate understanding of the clustering properties of earthquakes. In the past 15 years, time-dependent earthquake probabilities based on a generic short-term clustering model have been made publicly available in near-real time during major earthquake sequences. These forecasts describe the probability and number of events that are, on average, likely to occur following a mainshock of a given magnitude, but are not tailored to the particular sequence at hand and contain no information about the likely locations of the aftershocks. Our model builds upon the basic principles of this generic forecast model in two ways: it recasts the forecast in terms of the probability of strong ground shaking, and it combines an existing time-independent earthquake occurrence model based on fault data and historical earthquakes with increasingly complex models describing the local time-dependent earthquake clustering. The result is a time-dependent map showing the probability of strong shaking anywhere in California within the next 24 hours. The seismic hazard modelling approach we describe provides a better understanding of time-dependent earthquake hazard, and increases its usefulness for the public, emergency planners and the media.
Purpose
Functional electrical stimulation (FES) is considered an upcoming treatment modality for a number of laryngeal diseases. However, sound data are scarce when it comes to surface FES for the treatment of voice disorders. The aim of the present study was to identify and differentiate suitable surface FES patterns for activating the internal laryngeal muscles.
Methods
Non-invasive FES was performed in a cohort of 17 elderly women. Our user-customized electrical stimulation setup allowed us to deliver ten different stimulation patterns (rectangular and sawtooth shaped) with varying frequency and amplitude. The stimulation outcome, i.e., the vocal fold (VF) reaction, was continuously verified by transnasal endoscopy.
Results
Responses to FES using the ten different stimulation patterns varied between individuals. None of the stimulation parameter sets elicited a VF reaction in all participants.
Conclusion
Based on our findings, we conclude that individual fitting is necessary when defining surface stimulation parameters. To overcome the limitations of previous studies, devices with freely programmable patterns, such as the one used here, are required. Endoscopic control of the VF reaction is essential to ensure the effectiveness of the delivered patterns.
Strain rates have been included in multiplicative hybrid modelling of the long-term spatial distribution of earthquakes in New Zealand (NZ) since 2017. Previous modelling has shown a strain rate model to be the most informative input to explain earthquake locations over a fitting period from 1987 to 2006 and a testing period from 2012 to 2015. In the present study, three different shear strain rate models have been included separately as covariates in NZ multiplicative hybrid models, along with other covariates based on known fault locations, their associated slip rates, and proximity to the plate interface. Although the strain rate models differ in their details, there are similarities in their contributions to the performance of hybrid models in terms of information gain per earthquake (IGPE). The inclusion of each strain rate model improves the performance of hybrid models during the previously adopted fitting and testing periods. However, the hybrid models, including strain rates, perform poorly in a reverse testing period from 1951 to 1986. Molchan error diagrams show that the correlations of the strain rate models with earthquake locations are lower over the reverse testing period than from 1987 onwards. Smoothed scatter plots of the strain rate covariates associated with target earthquakes versus time confirm the relatively low correlations before 1987. Moreover, these analyses show that other covariates of the multiplicative models, such as proximity to the plate interface and proximity to mapped faults, were better correlated with earthquake locations prior to 1987. These results suggest that strain rate models based on only a few decades of available geodetic data from a limited network of GNSS stations may not be good indicators of where earthquakes occur over a long time frame.
We perform a retrospective forecast experiment on the 1992 Landers sequence, comparing the predictive power of commonly used model frameworks for short‐term earthquake forecasting. We compare a modified short‐term earthquake probability (STEP) model, six realizations of the epidemic‐type aftershock sequence (ETAS) model, and four models that combine Coulomb stress change calculations with rate‐and‐state theory to generate seismicity rates (CRS models). We perform the experiment under the premise of a controlled environment, with predefined conditions for the testing region and data for all modelers. We evaluate the forecasts with likelihood tests that analyze spatial consistency and compare the total number of forecasted events with observed data. We find that (1) 9 of the 11 models outperform a simple reference model, (2) ETAS models forecast the spatial evolution of seismicity best and perform best in the entire test suite, (3) the modified STEP model best matches the total number of events, (4) CRS models can compete with empirical statistical models only when stochasticity is introduced by considering uncertainties in the finite‐fault source model, and (5) resolving Coulomb stress changes on 3‐D optimally oriented planes is more suitable for forecasting purposes than using the specified receiver fault concept. We conclude that statistical models generally perform better than the tested physics‐based models, and that updating parameter values with the occurrence of aftershocks generally improves predictive power in space and time, particularly for the purely statistical models.
Key Points
Retrospective comparative evaluation of short‐term forecast models
Statistical models outperform applied physics‐based models
Suite of statistical tests needs to be evaluated for full result analysis
Vocal fold scarring is a relatively small field in scar research, with prerequisites found nowhere else. The deterioration of the delicate tri-layered micro-structure of the epithelium of the vocal folds leads to impaired vibration characteristics, resulting in a permanently hoarse and breathy voice. Tissue engineering approaches could help to restore the pre-injury status. Despite considerable progress in this field in recent years, routine clinical applications are not yet available. One reason might be that vocal fold fibroblasts, the cell type responsible for fibrogenesis, have very particular properties that are only poorly characterized. Moreover, in vivo trials are costly and time-consuming, and no representative in vitro model exists so far. These particular circumstances have led to innovative in vitro strategies and concepts, such as macromolecular crowding, that can also be applied in adjacent fields.
Computationally efficient alternatives are proposed to the likelihood-based tests employed by the Collaboratory for the Study of Earthquake Predictability for assessing the performance of earthquake likelihood models in the earthquake forecast testing centers. For the conditional L-test, which tests the consistency of the earthquake catalogue with a model, an exact test using convolutions of distributions is available when the number of earthquakes in the test period is small, and the central limit theorem provides an approximate test when the number of earthquakes is large. Similar methods are available for the R-test, which compares the likelihoods of two competing models. However, the R-test, like the N-test and L-test, is fundamentally a test of consistency of data with a model. We propose an alternative test, based on the classical paired t-test, to more directly compare the likelihoods of two models. Although approximate and predicated on a normality assumption, this new T-test is not computer-intensive, is easier to interpret than the R-test, and becomes increasingly dependable as the number of earthquakes increases.
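A minimal sketch of the paired t-test idea described here, applied to per-earthquake log-likelihood scores of two forecast models; the scores and function name are hypothetical, not taken from the study.

```python
import math
import statistics

def paired_t_statistic(logL_A, logL_B):
    """Mean per-event information gain of model A over model B and the
    classical paired t statistic (n - 1 degrees of freedom).
    logL_A, logL_B: per-earthquake log-likelihood scores of the two models."""
    diffs = [a - b for a, b in zip(logL_A, logL_B)]
    n = len(diffs)
    mean = statistics.mean(diffs)
    sd = statistics.stdev(diffs)       # sample standard deviation
    t = mean / (sd / math.sqrt(n))
    return mean, t

# Toy per-event scores for two hypothetical forecast models:
scores_A = [-1.2, -0.8, -1.5, -0.9, -1.1, -1.0]
scores_B = [-1.4, -1.1, -1.6, -1.2, -1.3, -1.1]
gain, t = paired_t_statistic(scores_A, scores_B)
print(f"mean information gain per earthquake: {gain:.3f}, t = {t:.2f}")
```

A large positive t indicates that model A's per-event likelihood advantage is systematic rather than noise, which is the direct model comparison the T-test provides.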