The marine margins of ice sheets, which drain through outlet glaciers, are susceptible to climate change and are expected to respond through retreat, steepening, and acceleration, although with significant spatial heterogeneity. However, research on ice–ocean interactions has continued to rely on decentralized, manual mapping of features at the ice–ocean interface, impeding progress in understanding the response of glaciers and ice sheets to climate change. The proliferation of remote-sensing images lays the foundation for a better understanding of ice–ocean interactions and also necessitates the automation of terminus delineation. While deep learning (DL) techniques have already been applied to automate terminus delineation, none involve sufficient quality control and automation to enable DL applications to “big data” problems in glaciology. Here, we build on established methods to create a fully automated pipeline for terminus delineation that makes several advances over prior studies. First, we leverage 16,440 existing manually picked terminus traces as training data to significantly improve the generalization of the DL algorithm. Second, we employ a rigorous automated screening module to enhance the quality of the data product. Third, we perform fully automated uncertainty quantification on the resulting data. Finally, we automate several steps in the pipeline, allowing data to be delivered regularly to public databases with increased frequency. The automation level of our method ensures the sustainability of terminus data production. Altogether, these improvements produce the most complete, highest-quality record of terminus data that exists for the Greenland Ice Sheet (GrIS). Our pipeline has successfully picked 278,239 termini for 295 glaciers in Greenland from Landsat 5, 7, and 8 and Sentinel-1 and Sentinel-2 images spanning the period from 1984 to 2021, and testing on Greenland glaciers yields an error of 79 m. The high sampling frequency and the controlled quality of our terminus data will enable better quantification of ice sheet change and model-based parameterizations of ice–ocean interactions.
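As a rough illustration of how a terminus error like the 79 m figure above can be scored, the sketch below compares a picked terminus polyline against a reference trace using a symmetric mean point-to-line distance. The metric, function names, and example coordinates are illustrative assumptions, not the pipeline's actual validation code.

```python
# Hypothetical terminus-vs-reference scoring: sample points along each
# trace and average their distances to the other trace (symmetric mean).
import numpy as np
from shapely.geometry import LineString

def mean_trace_distance(picked_xy, reference_xy, n_samples=200):
    """Symmetric mean distance (map units, e.g., meters) between two traces."""
    picked, ref = LineString(picked_xy), LineString(reference_xy)

    def one_way(a, b):
        pts = [a.interpolate(f, normalized=True) for f in np.linspace(0, 1, n_samples)]
        return np.mean([b.distance(p) for p in pts])

    return 0.5 * (one_way(picked, ref) + one_way(ref, picked))

# Toy example: two nearly parallel traces offset by ~80 m
picked = [(0, 0), (500, 20), (1000, 0)]
reference = [(0, 80), (500, 100), (1000, 80)]
print(f"mean error: {mean_trace_distance(picked, reference):.0f} m")
```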
Earthquake source spectra contain fundamental information about the dynamics of earthquake rupture. However, the inherent tradeoffs in separating source and path effects, when combined with limitations in recorded signal bandwidth, make it challenging to obtain reliable source spectral estimates for large earthquake data sets. We present here a stable and statistically robust spectral decomposition method that iteratively partitions the observed waveform spectra into source, receiver, and path terms. Unlike previous methods of its kind, our new approach provides formal uncertainty estimates and does not assume self‐similar scaling in earthquake source properties. Its computational efficiency allows us to examine large data sets (tens of thousands of earthquakes) that would be impractical to analyze using standard empirical Green's function‐based approaches. We apply the spectral decomposition technique to P wave spectra from five areas of active contemporary seismicity in Southern California: the Yuha Desert, the San Jacinto Fault, and the Big Bear, Landers, and Hector Mine regions of the Mojave Desert. We show that the source spectra are generally consistent with an increase in median Brune‐type stress drop with seismic moment but that this observed deviation from self‐similar scaling is both model dependent and varies in strength from region to region. We also present evidence for significant variations in median stress drop and stress drop variability on regional and local length scales. These results both contribute to our current understanding of earthquake source physics and have practical implications for the next generation of ground motion prediction assessments.
Plain Language Summary
Just as a line of music can be characterized in terms of its amplitude and pitch, earthquakes can be characterized in terms of their magnitude and frequency content. The frequency content of an earthquake depends on its size, with smaller earthquakes having systematically higher “pitches” than larger ones. Previous studies in earthquake seismology have assumed that the frequency content of earthquakes exhibits a particularly simple form of scaling with earthquake size known as self‐similarity. Under this paradigm, large earthquakes are perfectly scaled‐up versions of small ones, with the physical properties of the earthquake scaling in much the same way as font size does on a computer. In this article, the authors develop a new method to examine the frequency content of tens of thousands of earthquakes occurring in different regions of Southern California over the past 15 years. The authors find that the frequency content of these earthquakes deviated significantly from the self‐similar model, with larger earthquakes containing more high‐frequency energy than expected. This result has important implications for earthquake hazard, as the most damaging ground motions are generated by the high‐frequency seismic waves of the largest earthquakes.
Key Points
We apply a new spectral decomposition method to examine earthquake source scaling
The data are consistent with an increase in average Brune‐type stress drop with seismic moment
The inferred deviation from self‐similarity is model dependent and trades off with the assumed high‐frequency falloff rate
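The iterative partitioning described in the abstract above can be reduced to a short alternating-average loop: treat each observed log spectrum as the sum of a per-event source term and a per-station receiver term, and update each in turn. This is a minimal sketch that omits the path term, the formal uncertainty propagation, and the constraints the full method uses; array shapes and names are assumptions.

```python
# Toy iterative spectral decomposition: log|U_ij(f)| ~ s_i(f) + r_j(f)
import numpy as np

def decompose(log_spec, n_iter=20):
    """log_spec: (n_events, n_stations, n_freqs) array of log10 amplitude
    spectra, with NaN where an event-station pair was not recorded."""
    n_ev, n_st, n_f = log_spec.shape
    src = np.zeros((n_ev, n_f))   # source terms
    rec = np.zeros((n_st, n_f))   # receiver terms
    for _ in range(n_iter):
        # remove current receiver terms, average over stations -> sources
        src = np.nanmean(log_spec - rec[None, :, :], axis=1)
        # remove current source terms, average over events -> receivers
        rec = np.nanmean(log_spec - src[:, None, :], axis=0)
    # the split is only defined up to an additive constant per frequency;
    # fix it here by forcing the receiver terms to zero mean
    offset = rec.mean(axis=0)
    return src + offset, rec - offset
```

Because only relative levels are constrained, absolute source spectra still require an external calibration, which is the role an EGF-like constraint plays in the full method.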
Fault complexity has been linked to high‐frequency earthquake radiation, although the underlying physical mechanisms are not well understood. Fault complexity is commonly modeled with rough single faults; however, real‐world faults are additionally complex, existing within networks of other faults. In this study, we introduce two new ways of defining fault complexity using mapped fault traces, characterizing fault networks in terms of their degree of alignment and density. We find that both misalignment and density correlate with enhanced high‐frequency seismic radiation across Southern California, with misalignment showing a stronger correlation. This robust correlation suggests that high‐frequency radiation may arise in part from fault‐fault interactions within networks of misaligned faults. Fault‐fault interactions may therefore have important consequences for earthquake rupture dynamics, energetics, and hazards and should not be overlooked.
Plain Language Summary
The faults on which earthquakes occur sometimes form complex interconnected patterns. This complexity may promote higher‐frequency ground motions from earthquakes occurring on such faults. We describe ways of quantifying the complexity of groups of faults based on how they are aligned and how densely they are spaced. We find that high‐frequency ground motions in Southern California tend to correlate with misaligned faults, suggesting that structural interactions between different parts of the fault system may play a role in generating the ground motions felt during earthquakes. Future work involving rupture simulations, ground motion modeling, and hazards assessments in complex fault geometries should consider the effects of these structural interactions.
Key Points
We introduce two new metrics quantifying the complexity of interacting faults
In Southern California, enhanced high‐frequency seismic radiation is associated with earthquakes on misaligned faults
High‐frequency seismic radiation may thus be produced in part by fault‐fault interactions
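For concreteness, one plausible way to operationalize the two network metrics is sketched below: misalignment as the circular dispersion of mapped segment strikes (strikes are axial, so angles are doubled before averaging), and density as total trace length per unit area. These specific formulas are assumptions for illustration, not necessarily the paper's definitions.

```python
import numpy as np

def misalignment(strikes_deg):
    """1 minus the mean resultant length of doubled strike angles:
    0 = perfectly aligned network, 1 = maximally misaligned."""
    theta = 2.0 * np.radians(np.asarray(strikes_deg))  # axial -> circular
    R = np.hypot(np.mean(np.cos(theta)), np.mean(np.sin(theta)))
    return 1.0 - R

def density(segment_lengths_km, area_km2):
    """Total mapped fault-trace length per unit area (km^-1)."""
    return np.sum(segment_lengths_km) / area_km2

print(misalignment([40, 42, 38, 41]))   # ~0: well-aligned strands
print(misalignment([0, 45, 90, 135]))   # 1.0: highly misaligned network
```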
Accurate precipitation monitoring is crucial for understanding climate change and rainfall-driven hazards at a local scale. However, current monitoring approaches, including weather radar and rain gauges, suffer from limitations such as low spatial and temporal resolution and difficulty in accurately detecting potentially destructive precipitation events such as hailstorms. In this study, we develop an array-based method to monitor rainfall with seismic nodal stations, offering both high spatial and temporal resolution. We analyze seismic records from 1825 densely spaced, high-frequency seismometers in Oklahoma and identify signals from nine precipitation events that occurred during the one-month station deployment in 2016. After removing anthropogenic noise and the Earth-structure response, the obtained precipitation spatial pattern mimics that from a nearby operational weather radar while offering higher spatial (~300 m) and temporal (<10 s) resolution. We further show the potential of this approach to monitor hail through joint analysis of seismic intensity and independent precipitation-rate measurements, and we advocate for coordinated seismological-meteorological field campaign design.
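A minimal sketch of the array-based idea, assuming the rain signal is concentrated at high frequencies: compute windowed high-frequency power at each node, using a median over samples to damp transient spikes such as passing vehicles. The corner frequency, window length, and noise handling below are placeholders rather than the study's processing choices.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def rain_power(trace, fs, fmin=20.0, win_s=10.0):
    """Median power of the > fmin Hz band in consecutive win_s windows.
    trace: 1-D ground-motion record from one node; fs: sampling rate (Hz)."""
    sos = butter(4, fmin, btype="highpass", fs=fs, output="sos")
    hf = sosfilt(sos, trace - np.mean(trace))
    n = int(win_s * fs)
    windows = hf[: len(hf) // n * n].reshape(-1, n)
    # median (not mean) over samples damps short anthropogenic transients
    return np.median(windows**2, axis=1)

fs = 250.0
synthetic = np.random.default_rng(1).normal(size=int(60 * fs))  # 1 min of noise
print(rain_power(synthetic, fs))  # one power value per 10 s window
```

Mapping these per-node powers across the array then yields the precipitation pattern that can be compared against radar.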
The scaling of rupture properties with magnitude is of critical importance to earthquake early warning systems that rely on source characterization using limited snapshots of waveform data. ShakeAlert, a prototype earthquake early warning system that is being developed for the western United States, provides real‐time estimates of earthquake magnitude based on P wave peak ground displacements measured at stations triggered by the event. The algorithms used in ShakeAlert assume that the displacement measurements at each station are statistically independent and that there exists a linear and time‐independent relation between log peak ground displacement and earthquake magnitude. Here we challenge this basic assumption using the largest data set assembled for this purpose to date: a comprehensive database of more than 140,000 vertical‐component waveforms from M4.5 to M9 earthquakes occurring near Japan from 1997 through 2018 and recorded by the K‐NET and KiK‐net strong‐motion networks. By analyzing the time evolution of P wave peak ground displacements for these earthquakes, we show that there is a break, or saturation, in the magnitude‐displacement scaling that depends on the length of the measurement time window. We demonstrate that the magnitude at which this saturation occurs is well‐explained by a simple and nondeterministic model of earthquake rupture growth. We then use the predictions of this saturation model to develop a Bayesian framework for estimating posterior uncertainties in real‐time magnitude estimates.
Key Points
We analyze P wave peak displacements (Pd) of magnitude M4.5‐9 earthquakes in Japan from 1997 to 2018
Time‐dependent saturation in the linear scaling between log10 Pd and magnitude is consistent with nondeterministic rupture
We develop a Bayesian framework for rapid calculations of time‐dependent uncertainties in real‐time magnitude estimates
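The two ingredients above, a linear log10 Pd-to-magnitude relation and a time-window-dependent saturation magnitude, can be combined into a simple posterior calculation like the sketch below. The coefficients and the censored treatment of saturated estimates are placeholder assumptions, not the calibrated ShakeAlert relations or the paper's exact framework.

```python
import numpy as np

# Placeholder linear scaling: M ~ a*log10(Pd_cm) + b*log10(R_km) + c
A, B, C = 1.23, 1.38, 5.39  # illustrative values only

def station_magnitude(pd_cm, r_km):
    return A * np.log10(pd_cm) + B * np.log10(r_km) + C

def posterior(mags, sigma=0.4, m_sat=6.0, grid=None):
    """Combine station magnitude estimates on a grid. Estimates at or
    above the window's saturation magnitude m_sat are treated as lower
    bounds (a crude one-sided likelihood) rather than point values."""
    grid = np.arange(3.0, 9.5, 0.01) if grid is None else grid
    logL = np.zeros_like(grid)
    for m in mags:
        if m >= m_sat:  # saturated: only rule out much smaller magnitudes
            logL += np.where(grid >= m - sigma, 0.0, -10.0)
        else:
            logL += -0.5 * ((grid - m) / sigma) ** 2
    p = np.exp(logL - logL.max())
    return grid, p / (p.sum() * (grid[1] - grid[0]))

grid, pdf = posterior([5.9, 6.2, 6.4], m_sat=6.0)
print(f"posterior mode: M{grid[np.argmax(pdf)]:.1f}")
```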
Empirical Green's functions (EGFs) are widely applied to correct earthquake spectra for attenuation and other path effects in order to estimate corner frequencies and stress drops, but these source parameter estimates often exhibit poor agreement between different studies. We examine this issue by analyzing a compact cluster of over 3,000 aftershocks of the 1992 Landers earthquake. We apply and compare two different analysis and modeling methods: (1) the spectral decomposition and global EGF fitting approach and (2) a more traditional EGF method of modeling spectral ratios. We find that spectral decomposition yields event terms that are consistent with stacks of spectral ratios for individual events, but source parameter estimates nonetheless vary between the methods. The main source of differences comes from the modeling approach used to estimate the EGF. The global EGF‐fitting approach suffers from parameter trade‐offs among the absolute stress drop, the stress drop scaling with moment, and the high‐frequency falloff rate but has the advantage that the relative spectral shapes and stress drops among the different events in the cluster are well resolved even if their absolute levels are not. The spectral ratio approach solves for a different EGF for each target event without imposing any constraint on the corner frequency, fc, of the smaller events, and so can produce biased results for target event fc. Placing constraints on the small‐event fc improves the performance of the spectral ratio method and enables the two methods to yield very similar results.
Key Points
Empirical Green's function (EGF) approaches to resolve earthquake corner frequency suffer from parameter trade‐offs
The spectral ratio method for estimating corner frequency produces biased results if the smaller event corner frequency is unconstrained
Relative stress drop estimates in compact seismicity clusters are well resolved and show changes in average stress drop over short distances
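For concreteness, the modeling at issue can be reduced to fitting the ratio of two Brune omega-squared spectra, and the key point of the abstract is that the EGF corner frequency must be constrained during the fit. The synthetic data and parameter values below are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_ratio(f, log_mr, fc_t, fc_e, n=2.0):
    """log10 spectral ratio of two Brune omega-squared sources: target
    (corner fc_t) over EGF (corner fc_e); log_mr = log10(M0_t / M0_e)."""
    return log_mr + np.log10(1 + (f / fc_e) ** n) - np.log10(1 + (f / fc_t) ** n)

# Synthetic "observed" ratio: target fc = 0.8 Hz, EGF fc = 8 Hz,
# three orders of magnitude in moment, mild noise in log amplitude.
rng = np.random.default_rng(0)
freqs = np.logspace(-1, 1.5, 80)
obs = log_ratio(freqs, 3.0, 0.8, 8.0) + rng.normal(0, 0.05, freqs.size)

# Bounding fc_e to a plausible band is the kind of constraint the paper
# finds necessary; left free, fc_e trades off against fc_t and biases it.
popt, _ = curve_fit(log_ratio, freqs, obs, p0=[2.0, 0.3, 12.0],
                    bounds=([1.0, 0.05, 4.0], [5.0, 5.0, 30.0]))
print("recovered: log moment ratio %.2f, target fc %.2f Hz, EGF fc %.1f Hz" % tuple(popt))
```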
Laboratory earthquake experiments provide important observational constraints for our understanding of earthquake physics. Here we leverage continuous waveform data from a network of piezoceramic sensors to study the spatial and temporal evolution of microslip activity during a shear experiment with synthetic fault gouge. We combine machine learning techniques with ray theoretical seismology to detect, associate, and locate tens of thousands of microslip events within the gouge layer. Microslip activity is concentrated near the center of the system but is highly variable in space and time. While microslip activity rate increases as failure approaches, the spatiotemporal evolution can differ substantially between stick‐slip cycles. These results illustrate that even within a single, well‐constrained laboratory experiment, the dynamics of earthquake nucleation can be highly complex.
Plain Language Summary
The fault systems that produce damaging earthquakes are difficult to study directly due to their depth and spatial extent in the Earth's crust. Laboratory earthquake experiments can provide insight into the relevant physical processes active in real earthquake systems. In experiments with granular material that emulates the crushed‐up gouge material of real faults, larger labquakes are always preceded by smaller foreshock events. In this work, we provide a detailed study of the space‐time evolution of these microslip foreshocks in one such experiment. We show that even in these simplified analogs of real earthquake cycles, earthquake nucleation processes and frictional behavior can vary dramatically from cycle to cycle. In tectonic fault zones on Earth, such complexity will only be magnified.
Key Points
We study the spatiotemporal evolution of microslip events in laboratory earthquake experiments with synthetic granular fault gouge
We combine machine learning and conventional seismic processing techniques to develop a catalog of more than 30,000 microslip events
Microslip activity increases as failure approaches but exhibits a complex spatiotemporal pattern that varies throughout the experiment
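A compact sketch of the conventional-seismology half of this workflow: an STA/LTA-style detector and a homogeneous-velocity grid-search locator that removes the unknown origin time by demeaning the residuals. The window lengths, wave speed, and sample geometry are invented for illustration and are not the experiment's values.

```python
import numpy as np

def sta_lta(x, fs, sta_s=5e-4, lta_s=5e-3):
    """Short-term / long-term average ratio on squared amplitude.
    Window lengths assume MHz-range sampling typical of piezoceramics."""
    e = np.asarray(x, dtype=float) ** 2
    n_sta, n_lta = int(sta_s * fs), int(lta_s * fs)
    sta = np.convolve(e, np.ones(n_sta) / n_sta, "same")
    lta = np.convolve(e, np.ones(n_lta) / n_lta, "same")
    return sta / (lta + 1e-20)  # trigger where this ratio is large

def locate(picks_s, stations_xy, v=1000.0):
    """Grid search over a 10 cm x 10 cm region (hypothetical geometry):
    return the source point whose predicted arrivals at speed v (m/s)
    best match the picks after removing the unknown origin time.
    picks_s: (n,) arrival times; stations_xy: (n, 2) sensor positions (m)."""
    best, best_err = None, np.inf
    for x in np.linspace(0, 0.1, 101):
        for y in np.linspace(0, 0.1, 101):
            tt = np.hypot(stations_xy[:, 0] - x, stations_xy[:, 1] - y) / v
            res = picks_s - tt
            err = np.sum((res - res.mean()) ** 2)
            if err < best_err:
                best, best_err = (x, y), err
    return best
```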
An energetic earthquake sequence occurred during September to October 2017 near Sulphur Peak, Idaho. The normal‐faulting Mw 5.3 mainshock of 2 September 2017 was widely felt in Idaho, Utah, and Wyoming. Over 1,000 aftershocks were located within the first 2 months, 29 of which had magnitudes ML ≥ 4.0. High‐accuracy locations derived with data from a temporary seismic array show that the sequence occurred in the upper (<10 km) crust of the Aspen Range, east of the northern section of the range‐bounding, west‐dipping East Bear Lake Fault. Moment tensors for 77 of the largest events show normal and strike‐slip faulting with a summed aftershock moment that is 1.8–2.4 times larger than the mainshock moment. We propose that the unusually high productivity of the 2017 Sulphur Peak sequence can be explained by aseismic afterslip, which triggered a secondary swarm south of the coseismic rupture zone beginning ~1 day after the mainshock.
Plain Language Summary
During the fall of 2017, an energetic sequence of earthquakes was recorded in southeastern Idaho. The mainshock had a moment magnitude of Mw 5.3, yet thousands of aftershocks were detected. We found that the unusually high productivity of this earthquake sequence can be explained by extra sliding that occurred just after the mainshock. This extra sliding happened too slowly to generate seismic waves, but it was large enough to alter stresses in the crust and trigger the extra aftershocks. Our finding suggests that in this region of Idaho, some of the strain that is built up by tectonic forces is released in slow‐slip or creep events. This discovery may ultimately lead to more accurate forecasts of seismic hazard in the region.
Key Points
The 2017 Sulphur Peak earthquake sequence was very energetic, with a summed aftershock moment 1.8–2.4 times that of the Mw 5.3 mainshock
Magnitude‐time histories are consistent with a standard mainshock‐aftershock sequence augmented by an afterslip‐driven swarm
The 2017 sequence is co‐located with swarm‐like sequences from 1960 and 1982, implying that SE Idaho may be prone to repeating creep events
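The moment comparison above can be checked with the standard Hanks-Kanamori relation Mw = (2/3)(log10 M0 - 9.05), with M0 in N·m; the short calculation below shows that a summed aftershock moment of 1.8 to 2.4 times that of an Mw 5.3 mainshock is equivalent to a single Mw ~5.5 event.

```python
import numpy as np

def m0_from_mw(mw):   # seismic moment (N*m) from moment magnitude
    return 10 ** (1.5 * mw + 9.05)

def mw_from_m0(m0):   # moment magnitude from seismic moment (N*m)
    return (2.0 / 3.0) * (np.log10(m0) - 9.05)

m0_main = m0_from_mw(5.3)   # ~1.0e17 N*m
for factor in (1.8, 2.4):
    print(f"{factor}x mainshock moment -> equivalent Mw {mw_from_m0(factor * m0_main):.2f}")
# 1.8x -> Mw 5.47; 2.4x -> Mw 5.55: collectively, the aftershocks
# released more moment than the mainshock itself.
```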
We present a novel approach for resolving modes of rupture directivity in large populations of earthquakes. A seismic spectral decomposition technique is used to first produce relative measurements of radiated energy for earthquakes in a spatially compact cluster. The azimuthal distribution of energy for each earthquake is then assumed to result from one of several distinct modes of rupture propagation. Rather than fitting a kinematic rupture model to determine the most likely mode of rupture propagation, we instead treat the modes as latent variables and learn them with a Gaussian mixture model. The mixture model simultaneously determines the number of events that best identify with each mode. The technique is demonstrated on four datasets in California, each with compact clusters of several thousand earthquakes with comparable slip mechanisms. We show that the datasets naturally decompose into distinct rupture propagation modes that correspond to different rupture directions, and the fault plane is unambiguously identified for all cases. We find that these small earthquakes exhibit unilateral ruptures 63–73% of the time on average. The results provide important observational constraints on the physics of earthquakes and faults.
Key Points
We develop an unsupervised machine learning approach to resolving directivity modes in earthquake populations
The problem is formulated as recovering the latent modes of rupture propagation that exist naturally in the data
Across four strike‐slip datasets in California with thousands of earthquakes, unilateral ruptures occur 63–73% of the time
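A minimal sketch of the mixture-model step, under simplifying assumptions: each event is represented by its relative radiated energy binned by station azimuth, and a Gaussian mixture recovers the latent rupture-mode clusters. The cosine directivity pattern and the synthetic two-mode data are stand-ins for the real energy measurements.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
az = np.linspace(0, 2 * np.pi, 12, endpoint=False)  # 12 azimuth bins

def directivity_pattern(rupture_az, strength=0.5):
    """Toy unilateral pattern: energy enhanced toward the rupture azimuth."""
    return 1.0 + strength * np.cos(az - rupture_az)

# Two latent modes: ruptures propagating toward 45 deg vs. 225 deg.
X = np.vstack([directivity_pattern(np.radians(a)) + rng.normal(0, 0.05, az.size)
               for a in rng.choice([45, 225], size=500)])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
print(np.bincount(gmm.predict(X)))  # events assigned to each rupture mode
```

The toy fixes the number of modes at two; in practice that count would itself need to be chosen, for example by a model-selection criterion.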
Earthquakes are clustered in space and time, with individual sequences composed of events linked by stress transfer and triggering mechanisms. On a global scale, variations in the productivity of earthquake sequences—a normalized measure of the number of triggered events—have been observed and associated with regional variations in tectonic setting. Here, we focus on resolving systematic variations in the productivity of crustal earthquake sequences in California and Nevada—the two most seismically active states in the western United States. We apply a well-tested nearest-neighbor algorithm to automatically extract earthquake sequence statistics from a unified 40 yr compilation of regional earthquake catalogs that is complete to M ∼ 2.5. We then compare earthquake sequence productivity to geophysical parameters that may influence earthquake processes, including heat flow, temperature at seismogenic depth, complexity of Quaternary faulting, geodetic strain rates, depth to crystalline basement, and faulting style. We observe coherent spatial variations in sequence productivity, with higher values in the Walker Lane of eastern California and Nevada than along the San Andreas fault system in western California. The results illuminate significant correlations between productivity and heat flow, temperature, and faulting that contribute to our understanding of, and ability to forecast, crustal earthquake sequences in the area.
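For reference, the nearest-neighbor linking step underlying this kind of sequence analysis can be condensed as below: each event is attached to the earlier event that minimizes a space-time-magnitude distance, and sequence statistics such as productivity are read off the resulting parent-child tree. The distance form and parameter values are the commonly used ones; this is a sketch, not necessarily this study's exact configuration.

```python
import numpy as np

def nearest_neighbor_parents(t, x, y, m, b=1.0, df=1.6):
    """Link each event to its nearest-neighbor parent.
    t: origin times (yr, sorted ascending); x, y: epicenters (km);
    m: magnitudes. Returns parent indices (-1 for the first event) and
    the distance eta = dt * r**df * 10**(-b * m_parent) to that parent."""
    t, x, y, m = map(np.asarray, (t, x, y, m))
    n = len(t)
    parent = np.full(n, -1)
    eta = np.full(n, np.inf)
    for j in range(1, n):
        dt = t[j] - t[:j]                           # interevent times
        r = np.hypot(x[j] - x[:j], y[j] - y[:j])    # epicentral distances
        etas = dt * np.maximum(r, 1e-3) ** df * 10.0 ** (-b * m[:j])
        i = int(np.argmin(etas))
        parent[j], eta[j] = i, etas[i]
    return parent, eta
```

Thresholding eta separates clustered (triggered) pairs from background pairs; productivity then follows from counting each parent's linked children.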