SUMMARY
Late-time transient electromagnetic (TEM) data contain deep subsurface information and are important for resolving deeper electrical structures. However, due to their relatively small signal amplitudes, TEM responses later in time are often dominated by ambient noise. Therefore, noise removal is critical to the application of TEM data in imaging electrical structures at depth. De-noising techniques for TEM data have developed rapidly in recent years. Although strong efforts have been made to improve the quality of the TEM responses, it is still a challenge to effectively extract the signals because the noise is unpredictable and irregular. In this study, we develop a new type of neural network architecture by combining the long short-term memory (LSTM) network with the autoencoder structure to suppress noise in TEM signals. The resulting LSTM-autoencoders yield excellent performance on synthetic data sets including the horizontal components of the electric field and the vertical component of the magnetic field generated by different sources such as dipole, loop and grounded line sources. The relative errors between the de-noised data sets and the corresponding noise-free transients are below 1% for most of the sampling points. Notable improvement in the resistivity structure inversion result is achieved using the TEM data de-noised by the LSTM-autoencoder in comparison with several widely used neural networks, especially for later-arriving signals that are important for constraining deeper structures. We demonstrate the effectiveness and general applicability of the LSTM-autoencoder through de-noising experiments using synthetic 1-D and 3-D TEM signals as well as field data sets. The field data from a fixed-loop survey using multiple receivers are greatly improved after de-noising by the LSTM-autoencoder, resulting in more consistent inversion models with significantly increased exploration depth. The LSTM-autoencoder is capable of enhancing the quality of TEM signals at later times, which enables us to better resolve deeper electrical structures.
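As a rough illustration of the architecture described above (not the authors' exact network), the following PyTorch sketch combines an LSTM encoder with an LSTM decoder that reconstructs the transient from a repeated latent code; the layer sizes, the single-channel input and the repeat-code decoding scheme are all assumptions.

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    """Sequence-to-sequence denoiser for TEM transients (illustrative sizes)."""
    def __init__(self, n_features=1, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)

    def forward(self, x):                                  # x: (batch, time, features)
        _, (h, _) = self.encoder(x)                        # compress to the last hidden state
        code = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)  # repeat the code per time step
        out, _ = self.decoder(code)
        return self.head(out)                              # reconstructed (de-noised) transient

# Training would minimize reconstruction error against noise-free synthetics:
model, loss_fn = LSTMAutoencoder(), nn.MSELoss()
```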
SUMMARY
In a recent study, we showed that convolutional neural networks (CNNs) applied to network seismic traces can be used for rapid prediction of earthquake peak ground motion intensity measures (IMs) at distant stations using only recordings from stations near the epicentre. The predictions are made without any prior knowledge of the earthquake location and magnitude. This approach differs significantly from the standard procedure adopted by earthquake early warning systems, which rely on location and magnitude information. In the previous study, we used 10 s, raw, multistation (39 stations) waveforms for the 2016 earthquake sequence in central Italy for 915 M ≥ 3.0 events (CI data set). The CI data set has a large number of spatially concentrated earthquakes and a dense network of stations. In this work, we applied the same CNN model to an area of central western Italy. In our initial application of the technique, we used a data set consisting of 266 M ≥ 3.0 earthquakes recorded by 39 stations. We found that the CNN model trained on this smaller data set performed worse than the model in the previously published study. To counter the lack of data, we explored the adoption of ‘transfer learning’ (TL) methodologies using two approaches: first, by using a pre-trained model built on the CI data set and, next, by using a pre-trained model built on a different (seismological) problem that has a larger data set available for training. We show that the use of TL improves the results in terms of outliers, bias and variability of the residuals between predicted and true IM values. We also demonstrate that adding knowledge of station relative positions as an additional layer in the neural network improves the results. The improvements achieved through these experiments amount to a 5 per cent reduction in the number of outliers, a 39 per cent reduction in the median of the residuals R and an 11 per cent reduction in their standard deviation.
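A minimal sketch of the first TL approach (reusing a model pre-trained on the larger CI data set) is shown below; it assumes the pre-trained network exposes `features` (convolutional layers) and `head` (final linear layer) submodules, both illustrative names rather than the authors' actual architecture.

```python
import torch.nn as nn

def adapt_pretrained(model: nn.Module, n_outputs: int, freeze_features: bool = True):
    # Keep the convolutional filters learned on the larger data set fixed and
    # retrain only a fresh regression head on the smaller target data set.
    if freeze_features:
        for p in model.features.parameters():
            p.requires_grad = False
    model.head = nn.Linear(model.head.in_features, n_outputs)
    return model
```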
SUMMARY
The detection and characterization of signals of interest in the presence of (in)coherent ambient noise is central to the analysis of infrasound array data. Microbaroms have an extended source region and a dynamical character. From the perspective of an infrasound array, these coherent noise sources appear as interfering signals that conventional beamforming methods may not correctly resolve. This limits the ability of an infrasound array to dissect the incoming wavefield into individual components. In this paper, we address this problem by proposing a high-resolution beamforming technique in combination with the CLEAN algorithm. CLEAN iteratively selects the maximum of the f/k spectrum (following either the Bartlett or the Capon method) and removes a percentage of the corresponding signal from the cross-spectral density matrix. In this procedure, the array response is deconvolved from the f/k spectral density function, and the spectral peaks are retained in a ‘clean’ spectrum. A data-driven stopping criterion for CLEAN is proposed, which relies on the framework of Fisher statistics. This allows the construction of an automated algorithm that continuously extracts coherent energy until only incoherent noise is left in the data. CLEAN is tested on a synthetic data set and applied to data from multiple International Monitoring System infrasound arrays. The results show that the proposed method allows the identification of multiple microbarom source regions in the Northern Atlantic that would have remained unidentified if conventional methods had been applied.
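The iteration can be sketched as follows, in a simplified single-frequency Bartlett version; the loop gain, the slowness grid and the Fisher-ratio stopping rule below are stand-ins for the data-driven criterion developed in the paper, not a reproduction of it.

```python
import numpy as np

def clean_beamform(C, coords, freq, slowness_grid, gain=0.1, max_iter=50, fstat_min=2.0):
    """CLEAN deconvolution of a Bartlett f/k spectrum (simplified sketch).

    C: cross-spectral density matrix (n x n) at one frequency.
    coords: sensor positions (n x 2); slowness_grid: candidate (sx, sy) rows.
    """
    n = C.shape[0]
    A = np.exp(-2j * np.pi * freq * (coords @ slowness_grid.T))  # steering vectors (n, grid)
    clean = np.zeros(len(slowness_grid))
    for _ in range(max_iter):
        # Bartlett beam power for each trial slowness.
        P = np.real(np.einsum('ik,ij,jk->k', A.conj(), C, A)) / n**2
        k = int(np.argmax(P))
        # Crude Fisher ratio of beam power to the remaining incoherent power.
        fstat = (n - 1) * P[k] / max(np.real(np.trace(C)) / n - P[k], 1e-12)
        if fstat < fstat_min:
            break                                   # only incoherent noise remains
        a = A[:, k:k + 1]
        clean[k] += gain * P[k]                     # accumulate in the clean spectrum
        C = C - gain * P[k] * (a @ a.conj().T)      # remove part of the detected signal
    return clean
```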
Abstract
Using data from the Complete Nearby (redshift z_host < 0.02) sample of Type Ia Supernovae (CNIa0.02), we find a linear relation between two parameters derived from the B − V color curves of Type Ia supernovae: the color stretch s_BV and the rising color slope s0*(B − V) after the peak, and this relation applies to the full range of s_BV. The s_BV parameter is known to be tightly correlated with the peak luminosity, especially for fast decliners (dim Type Ia supernovae), and the luminosity correlation with s_BV is markedly better than with the classic light-curve width parameters such as Δm15(B). Thus, our new linear relation can be used to infer peak luminosity from s0*. Unlike s_BV (or Δm15(B)), the measurement of s0*(B − V) does not rely on a well-determined time of light-curve peak or color maximum, making it less demanding on the light-curve coverage than past approaches.
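As a simple numerical illustration, s0*(B − V) can be estimated by a straight-line fit to the linear rising part of the B − V color curve; the phase window below is an assumption (the authors' exact definition may differ), and the published relation's coefficients would then be needed to convert the slope into a peak luminosity.

```python
import numpy as np

def rising_color_slope(phase, b_minus_v, window=(5.0, 25.0)):
    """Fit B - V vs. phase over a window on the post-peak rise (days; illustrative)."""
    m = (phase >= window[0]) & (phase <= window[1])
    slope, intercept = np.polyfit(phase[m], b_minus_v[m], 1)
    return slope  # an estimate of s0*(B - V) in mag per day
```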
This paper formulates the traffic flow forecasting task using a Kalman filter derived from the maximum correntropy criterion. The traditional Kalman filter is based on the minimum mean square error criterion, which performs well under Gaussian noise. However, real traffic flow data are contaminated by non-Gaussian noise, under which the traditional Kalman filter may degrade. The maximum correntropy Kalman filter is insensitive to non-Gaussian noise while retaining the optimal state mean and covariance propagation of the traditional Kalman filter. To achieve this, a fixed-point algorithm is embedded to update the posterior estimates of the maximum correntropy Kalman filter. Extensive experiments on four benchmark datasets demonstrate that this model outperforms its counterparts for traffic flow forecasting.
• Formulate a noise-immune short-term traffic flow forecasting model.
• Obtain the optimal solution at each step via a fixed-point iterative algorithm (see the sketch below).
• Demonstrate superior performance on four benchmark datasets.
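The sketch below gives one reading of a fixed-point maximum-correntropy Kalman update (not the authors' code): the prediction step is the classical one, and the update iterates kernel-weighted residual covariances to a fixed point. The kernel bandwidth, iteration limit and tolerance are illustrative.

```python
import numpy as np

def gauss_kernel(e, sigma):
    return np.exp(-e**2 / (2.0 * sigma**2))

def mck_filter_step(x, P, y, F, H, Q, R, sigma=2.0, iters=10, tol=1e-6):
    # Predict (identical to the classical Kalman filter).
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    Bp, Br = np.linalg.cholesky(P_pred), np.linalg.cholesky(R)
    x_est = x_pred.copy()
    for _ in range(iters):
        # Whitened residuals of the prior and the measurement.
        ep = np.linalg.solve(Bp, x_est - x_pred)
        er = np.linalg.solve(Br, y - H @ x_est)
        # Correntropy weights: small-kernel residuals (outliers) are down-weighted.
        Cx = np.diag(gauss_kernel(ep, sigma))
        Cy = np.diag(gauss_kernel(er, sigma))
        P_t = Bp @ np.linalg.inv(Cx) @ Bp.T
        R_t = Br @ np.linalg.inv(Cy) @ Br.T
        K = P_t @ H.T @ np.linalg.inv(H @ P_t @ H.T + R_t)
        x_new = x_pred + K @ (y - H @ x_pred)
        if np.linalg.norm(x_new - x_est) <= tol * np.linalg.norm(x_est):
            x_est = x_new
            break
        x_est = x_new
    # Posterior covariance propagation (Joseph-like form).
    I = np.eye(len(x))
    P_new = (I - K @ H) @ P_pred @ (I - K @ H).T + K @ R @ K.T
    return x_est, P_new
```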
In this paper, we elaborate on the well-known interpretability issue in echo-state networks (ESNs). The idea is to investigate the dynamics of reservoir neurons with time-series analysis techniques developed in complex systems research. Notably, we analyze time series of neuron activations with recurrence plots (RPs) and recurrence quantification analysis (RQA), which make it possible to visualize and characterize high-dimensional dynamical systems. We show that this approach is useful in a number of ways. First, the 2-D representation offered by RPs provides a visualization of the high-dimensional reservoir dynamics. Our results suggest that, if the network is stable, reservoir and input generate similar line patterns in the respective RPs. Conversely, as the ESN becomes unstable, the patterns in the RP of the reservoir change. As a second result, we show that an RQA measure, called L_max, is highly correlated with the well-established maximal local Lyapunov exponent. This suggests that complexity measures based on the distribution of RP diagonal lines can quantify network stability. Finally, our analysis shows that all RQA measures fluctuate in the proximity of the so-called edge of stability, where an ESN typically achieves maximum computational capability. We leverage this property to determine the edge of stability and show that our criterion is more accurate than two well-known counterparts, both based on the Jacobian matrix of the reservoir. Therefore, we claim that RPs and RQA-based analyses are valuable tools for designing an ESN for a specific problem.
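A compact sketch of the two ingredients used above follows: a recurrence plot built from a trajectory of reservoir activations, and the L_max measure (the longest diagonal line off the main diagonal). The recurrence threshold eps is left as a user choice.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """x: (T, d) trajectory, e.g. reservoir states; returns a binary T x T RP."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return (d <= eps).astype(int)

def l_max(R):
    """Longest diagonal line in the recurrence plot (off the main diagonal)."""
    T, best = R.shape[0], 0
    for k in range(1, T):
        run = 0
        for v in np.diagonal(R, offset=k):
            run = run + 1 if v else 0
            best = max(best, run)
    return best
```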
The volatility of financial returns changes over time and, for the last thirty years, Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models have provided the principal means of analyzing, modeling and monitoring such changes. Taking into account that financial returns typically exhibit heavy tails, that is, extreme values can occur from time to time, Andrew Harvey's new book shows how a small but radical change in the way GARCH models are formulated leads to a resolution of many of the theoretical problems inherent in the statistical theory. The approach can also be applied to other aspects of volatility. The more general class of Dynamic Conditional Score (DCS) models extends to robust modeling of outliers in the levels of time series and to the treatment of time-varying relationships. The statistical theory draws on basic principles of maximum likelihood estimation and, by doing so, leads to an elegant and unified treatment of nonlinear time-series modeling.
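For concreteness, a minimal score-driven volatility filter in the spirit of these models might look as follows; this is a Beta-t-EGARCH-style recursion whose parametrization and initialization are assumptions, not the book's exact formulation.

```python
import numpy as np

def score_driven_volatility(y, omega, phi, kappa, nu):
    """Filter a log-scale path lambda_t driven by the score of a Student-t density."""
    lam = np.empty(len(y))
    lam[0] = omega / (1.0 - phi)          # start at the implied unconditional level
    for t in range(len(y) - 1):
        b = y[t] ** 2 / (nu * np.exp(2.0 * lam[t]) + y[t] ** 2)
        u = (nu + 1.0) * b - 1.0          # conditional score; bounded in [-1, nu)
        lam[t + 1] = omega + phi * lam[t] + kappa * u
    return np.exp(lam)                    # conditional scale (volatility proxy)
```

Because the score u is bounded, a single extreme return moves the volatility estimate only a limited amount, which is the robustness-to-heavy-tails property the book emphasizes.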
Changing climate extremes and invasion by non‐native species are two of the most prominent threats to native faunas. Predicting the relationships between global change and native faunas requires a quantitative toolkit that effectively links the timing and magnitude of extreme events to variation in species abundances. Here, we examine how discharge anomalies – unexpected floods and droughts – determine covariation in abundance of native and non‐native fish species in a highly variable desert river in Arizona. We quantified stochastic variation in discharge using Fourier analyses on >15 000 daily observations. We subsequently coupled maximum annual spectral anomalies with a 15‐year time series of fish abundances (1994–2008), using Multivariate Autoregressive State‐Space (MARSS) models. Abiotic drivers (discharge anomalies) were paramount in determining long‐term fish abundances, whereas biotic drivers (species interactions) played only a secondary role. As predicted, anomalous droughts reduced the abundances of native species, while floods increased them. However, in contrast to previous studies, we observed that the non‐native assemblage was surprisingly unresponsive to extreme events. Biological trait analyses showed that functional uniqueness was higher in native than in non‐native fishes. We also found that discharge anomalies influenced diversity patterns at the meta‐community level, with nestedness increasing after anomalous droughts due to the differential impairment of native species. Overall, our results advance the notion that discharge variation is key in determining community trajectories in the long term, predicting the persistence of native fauna even in the face of invasion. We suggest this variation, rather than biotic interactions, may commonly underlie covariation between native and non‐native faunas, especially in highly variable environments. If droughts become increasingly severe due to climate change, and floods increasingly muted due to regulation, fish assemblages in desert rivers may become taxonomically and functionally impoverished and dominated by non‐native taxa.
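For reference, MARSS models as used above take the standard linear-Gaussian state-space form (our notation; the study's exact covariate structure may differ), with the interaction matrix B carrying the biotic drivers and the covariate term carrying the discharge anomalies:

```latex
\begin{aligned}
\mathbf{x}_t &= \mathbf{B}\,\mathbf{x}_{t-1} + \mathbf{u} + \mathbf{C}\,\mathbf{c}_t + \mathbf{w}_t,
  & \mathbf{w}_t &\sim \operatorname{MVN}(\mathbf{0}, \mathbf{Q}),\\
\mathbf{y}_t &= \mathbf{Z}\,\mathbf{x}_t + \mathbf{a} + \mathbf{v}_t,
  & \mathbf{v}_t &\sim \operatorname{MVN}(\mathbf{0}, \mathbf{R}),
\end{aligned}
```

where x_t holds the (log) abundances of the species, c_t the discharge anomaly covariates, and y_t the observed counts.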
BACKGROUND: Heat waves are extreme weather events that have been associated with adverse health outcomes. However, there is limited knowledge of heat waves' impact on population morbidity, such as emergency department (ED) visits. OBJECTIVES: We investigated associations between heat waves and ED visits for 17 outcomes in Atlanta over a 20-year period, 1993-2012. METHODS: Associations were estimated using Poisson log-linear models controlling for continuous air temperature, dew-point temperature, day of week, holidays, and time trends. We defined heat waves as periods of > 2 consecutive days with temperatures beyond the 98th percentile of the temperature distribution over the period 1945-2012. We considered six heat wave definitions using maximum, minimum, and average air temperatures and apparent temperatures. Associations by heat wave characteristics were examined. RESULTS: Among all outcome-heat wave combinations, associations were strongest between ED visits for acute renal failure and heat waves defined by maximum apparent temperature at lag 0 (relative risk (RR) = 1.15; 95% confidence interval (CI): 1.03, 1.29), ED visits for ischemic stroke and heat waves defined by minimum temperature at lag 0 (RR = 1.09; 95% CI: 1.02, 1.17), and ED visits for intestinal infection and heat waves defined by average temperature at lag 1 (RR = 1.10; 95% CI: 1.00, 1.21). ED visits for all internal causes were associated with heat waves defined by maximum temperature at lag 1 (RR = 1.02; 95% CI: 1.00, 1.04). CONCLUSIONS: Heat waves can confer additional risks of ED visits beyond those of daily air temperature, even in a region with high air-conditioning prevalence.
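A hedged sketch of the kind of Poisson log-linear model described in METHODS, using statsmodels; the column names and the spline term are illustrative placeholders, not the study's exact specification.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

# df is assumed to hold daily ED visit counts plus the covariates named in the
# abstract; 'time_spline' stands in for a pre-computed time-trend basis column.
model = smf.glm(
    "ed_visits ~ heat_wave + temp + dewpoint + C(dow) + holiday + time_spline",
    data=df,
    family=sm.families.Poisson(),
).fit()
print(model.summary())
# exp(coefficient) for the heat_wave indicator approximates the reported RR.
```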