Abstract
Drawing on theories of intergroup conflict and research on the political legitimization of prejudice and bias-motivated crime, this study examines the temporal clustering of hate crimes in the aftermath of triggering events in the UK. In addition to domestic and nondomestic terrorist attacks, we consider the effects of the EU referendum widely known as Brexit. Consistent with previous work, the results reveal sharp increases in hate crimes in the aftermath of the antecedent events. However, we found that the effects of the EU referendum were more prolonged and more intense than those of the other triggering events. Moreover, the effects of domestic events are generally significant and stronger in magnitude than those of nondomestic events. Finally, the results show that the duration and decay of the effects of terrorist attacks on hate crimes generally mirror the severity of the galvanizing event. Taken together, our findings underscore the role of the EU referendum in explaining dramatic increases in bias-motivated crime. Accordingly, they are of particular importance to politicians and policy makers and have implications that go beyond the case of Brexit.
Deep learning architectures usually require large-scale labeled datasets to achieve good performance on general classification tasks, including computer vision and natural language processing. Recent techniques of self-supervised learning have opened up a new research frontier in which deep learning architectures can learn general features from unlabeled data. The task of self-supervised learning is usually accomplished with some form of data augmentation through which the deep neural networks can extract relevant information. This paper presents a novel approach to self-supervised learning for time-series analysis based on SimCLR contrastive learning. We present novel data augmentation techniques, focusing especially on time-series data, and study their effect on the prediction task. We provide a comparison with several fault classification approaches on the benchmark Tennessee Eastman dataset and report an improvement to 81.43% in the final accuracy using our contrastive learning approach. Furthermore, we report new benchmarks of 80.80%, 81.05% and 81.43% for self-supervised training on Tennessee Eastman where a classifier is trained with only 5%, 10% or 50% of the available training data. Hence, we conclude that the contrastive approach is also very successful in time-series problems and is furthermore suitable for use with partially labeled time-series datasets.
•Novel self-supervised contrastive learning framework for time-series fault detection.
•Proposal of novel augmentation techniques for time-series data.
•Novel augmentation techniques improve the classification performance.
•Comparison of proposed and existing data augmentations for contrastive learning.
•State-of-the-art results on the benchmark Tennessee Eastman Process dataset.
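The abstract above describes time-series augmentations feeding a SimCLR-style contrastive objective, which needs two correlated "views" of each sample. A minimal sketch of what such augmentations might look like, assuming common choices (jitter, channel scaling, crop-and-resize) rather than the paper's actual techniques:

```python
import numpy as np

def jitter(x, sigma=0.03, rng=None):
    """Add small Gaussian noise to every sampling point."""
    rng = rng or np.random.default_rng()
    return x + rng.normal(0.0, sigma, size=x.shape)

def scaling(x, sigma=0.1, rng=None):
    """Multiply each channel by a random factor drawn around 1."""
    rng = rng or np.random.default_rng()
    factors = rng.normal(1.0, sigma, size=(1, x.shape[1]))
    return x * factors

def crop_and_resize(x, crop_ratio=0.9, rng=None):
    """Take a random contiguous crop and linearly resample it back
    to the original length (a temporal analogue of image cropping)."""
    rng = rng or np.random.default_rng()
    n = x.shape[0]
    crop_len = int(n * crop_ratio)
    start = rng.integers(0, n - crop_len + 1)
    crop = x[start:start + crop_len]
    old = np.linspace(0.0, 1.0, crop_len)
    new = np.linspace(0.0, 1.0, n)
    return np.stack([np.interp(new, old, crop[:, c])
                     for c in range(x.shape[1])], axis=1)

def two_views(x, rng=None):
    """Produce the two correlated views a SimCLR-style loss contrasts."""
    return (jitter(scaling(x, rng=rng), rng=rng),
            crop_and_resize(jitter(x, rng=rng), rng=rng))
```

Each view keeps the shape `(timesteps, channels)` of the input, so both can be fed to the same encoder before computing the contrastive loss.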
With the increasing demand for intelligent transportation systems, short-term traffic flow prediction has become an important research direction. The memory unit of a Long Short-Term Memory (LSTM) neural network can store data characteristics over a certain period of time, which makes this network suitable for time-series processing. This paper uses an improved Gated Recurrent Unit (GRU) neural network to study the time series of traffic flow parameters. LSTM short-term traffic flow prediction based on the flow series is first investigated, and then the GRU model, which can be regarded as a simplified LSTM, is introduced. After extracting the spatial and temporal characteristics of the flow matrix, an improved GRU with bidirectional positive and negative feedback, called the Bi-GRU prediction model, is used to complete the short-term traffic flow prediction and study its characteristics. The Rectified Adam (RAdam) optimizer is adopted to address the shortcomings of common optimizers. Cosine learning rate decay is also used so that the model avoids converging to a local optimum and the convergence speed can be controlled appropriately. Furthermore, a reliable base learning rate is set together with the adaptive learning rate in RAdam. In this manner, the accuracy of network prediction can be further improved. Finally, an experiment with the Bi-GRU model is conducted. The comprehensive Bi-GRU prediction results demonstrate the effectiveness of the proposed method.
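The cosine learning-rate attenuation mentioned above has a simple closed form; a sketch follows, where the base and minimum rates are illustrative values, not the paper's settings:

```python
import math

def cosine_lr(step, total_steps, base_lr=1e-3, min_lr=1e-5):
    """Cosine annealing: decay smoothly from base_lr at step 0
    to min_lr at total_steps, avoiding abrupt learning-rate drops."""
    cos = 0.5 * (1.0 + math.cos(math.pi * step / total_steps))
    return min_lr + (base_lr - min_lr) * cos
```

In practice the schedule need not be hand-rolled: frameworks such as PyTorch ship both pieces (`torch.optim.RAdam` and `torch.optim.lr_scheduler.CosineAnnealingLR`).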
Increase of extreme events in a warming world. Rahmstorf, Stefan; Coumou, Dim. Proceedings of the National Academy of Sciences (PNAS), Vol. 108, Issue 44, November 2011.
We develop a theoretical approach to quantify the effect of long-term trends on the expected number of extremes in generic time series, using analytical solutions and Monte Carlo simulations. We apply our method to study the effect of warming trends on heat records. We find that the number of record-breaking events increases approximately in proportion to the ratio of warming trend to short-term standard deviation. Short-term variability thus decreases the number of heat extremes, whereas a climatic warming increases it. For extremes exceeding a predefined threshold, the dependence on the warming trend is highly nonlinear. We further find that the sum of warm plus cold extremes increases with any climate change, whether warming or cooling. We estimate that climatic warming has increased the number of new global-mean temperature records expected in the last decade from 0.1 to 2.8. For July temperature in Moscow, we estimate that the local warming trend has increased the number of records expected in the past decade fivefold, which implies an approximate 80% probability that the 2010 July heat record would not have occurred without climate warming.
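The link between record frequency and the trend-to-variability ratio can be checked with a small Monte Carlo experiment in the spirit of the one described above (a sketch with illustrative parameters, not the authors' code). For a stationary series, the expected number of upper records in n years is the harmonic number H_n; a positive trend raises it.

```python
import numpy as np

def expected_records(n_years, trend_per_year=0.0, sigma=1.0,
                     n_sim=2000, seed=0):
    """Monte Carlo estimate of the expected number of upper records
    in a Gaussian series with a linear trend."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_years)
    series = trend_per_year * t + rng.normal(0.0, sigma,
                                             size=(n_sim, n_years))
    running_max = np.maximum.accumulate(series, axis=1)
    # a year is a record if it strictly exceeds all previous years
    records = np.concatenate(
        [np.ones((n_sim, 1), bool),
         series[:, 1:] > running_max[:, :-1]], axis=1)
    return records.sum(axis=1).mean()
```

With no trend, the estimate for 100 years should sit near H_100 ≈ 5.19; adding a trend of a few percent of the standard deviation per year visibly inflates the count.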
SUMMARY
The ambient infrasonic noise field is complex due to the interference of spatially distributed infrasound sources. Microbaroms are one of the most dominant omnipresent infrasonic sources within this wavefield. These microbaroms are generated by nonlinear ocean surface wave interactions, and have a characteristic and continuous signature within the infrasound spectrum. Under noisy conditions, microbaroms can mask infrasonic signals of interest, such as infrasound from volcanoes or explosions, which limits detection and identification of such sources. This study performs an infrasonic climatology for infrasound array I23FR, using five years of data (2015–2020). The array is located on the Kerguelen Islands, within the Southern Ocean, and is part of the International Monitoring System for the verification of the Comprehensive Nuclear-Test-Ban Treaty. The climatology analysis addresses the expected ambient noise levels, propagation paths and potential sources within the vicinity of an infrasound sensor. Time- and frequency-domain beamforming methods have been applied to analyse the infrasonic wavefield from the I23FR observations. A recently introduced method is applied to compute so-called soundscapes, to be compared with beamforming results. Although the comparison indicates a disagreement in amplitude, there is good agreement in directionality and frequency between the two.
Cardiac arrhythmia is a common clinical problem in cardiology defined as an abnormality in heart rhythm. Bradycardia, atrial fibrillation, tachycardia, supraventricular tachycardia, atrial flutter and sinus irregularity are common classifications of arrhythmia. In this study, we develop a new approach to distinguishing between these most common heart rhythms. Our approach is based on dynamical system techniques, namely recurrence entropy of microstates and recurrence vicinity threshold, in association with artificial intelligence. The results are based on a 12-lead electrocardiogram open dataset with more than 10,000 subjects which includes 11 different heart rhythms. The rhythms and other cardiac conditions of the dataset were labeled by more than one licensed physician. The main contributions of this work are the identification of how different heart rhythms affect the entropy of recurrence microstates and the recurrence vicinity threshold parameter; in doing so, these quantifiers may be used as extracted features for artificial-intelligence classifiers. We expect that our freely available methodology and algorithm will be useful to communities where real-time physician diagnostics are not easily available. We show that the 12 ECG signals (12×5000 data points) can be pre-processed using the entropy of recurrence microstates and the recurrence threshold, so that only 12×2 scalar values need be used in machine learning techniques. Our method thus involves a significant reduction of the data to be analyzed by machine learning algorithms and can bring benefits in situations of pre-testing individuals, due to the minimal processing time and hardware required to perform the analysis. The additional information obtained by the two quantifiers may also be put together with the signals, consolidating data from multiple sources and adding more useful information to the dataset.
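The recurrence-microstate entropy used above can be sketched in a few lines: binarize the recurrence matrix of the signal, sample small n×n submatrices ("microstates"), and take the Shannon entropy of their distribution. The microstate size, threshold, and sampling count here are illustrative defaults, not the paper's settings:

```python
import numpy as np

def recurrence_microstate_entropy(x, eps, n=2, samples=5000, seed=0):
    """Shannon entropy of randomly sampled n-by-n binary microstates
    of the recurrence matrix R[i, j] = (|x[i] - x[j]| < eps)."""
    rng = np.random.default_rng(seed)
    N = len(x)
    R = (np.abs(x[:, None] - x[None, :]) < eps).astype(np.uint8)
    powers = 2 ** np.arange(n * n)        # encode a microstate as an integer
    counts = {}
    for _ in range(samples):
        i, j = rng.integers(0, N - n, size=2)
        state = int((R[i:i + n, j:j + n].ravel() * powers).sum())
        counts[state] = counts.get(state, 0) + 1
    p = np.array(list(counts.values())) / samples
    return float(-(p * np.log(p)).sum())
```

A constant signal yields a single microstate and zero entropy, while structured rhythms populate many microstates; the entropy (together with the threshold eps that maximizes it) then serves as a compact scalar feature per lead.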
A reliable forecast of future events possesses great value. The main purpose of this paper is to propose an innovative learning technique for reinforcing the accuracy of two-step-ahead (2SA) forecasts. The real-time recurrent learning (RTRL) algorithm for recurrent neural networks (RNNs) can effectively model the dynamics of complex processes and has been used successfully in one-step-ahead forecasts for various time series. A reinforced RTRL algorithm for 2SA forecasts using RNNs is proposed in this paper, and its performance is investigated by two famous benchmark time series and a streamflow during flood events in Taiwan. Results demonstrate that the proposed reinforced 2SA RTRL algorithm for RNNs can adequately forecast the benchmark (theoretical) time series, significantly improve the accuracy of flood forecasts, and effectively reduce time-lag effects.
In this article, we have demonstrated a novel, generalized Seq2Dense U-Net model for classifying different activities (time-series data obtained from an accelerometer and gyroscope) and labeling them by considering individual sampling points. For training and classification, the triaxial data from the smart devices are mapped to a linear set of sampling points via the sliding-window technique and fed to the model. Our proposed model overcomes the challenge of the multi-class window, which incorrectly labels different classes of sampling points within a window as a single class, as reported by previous studies. The comparative study shows that our method can recognize point-wise (pixel-level) classes from the corresponding time-series data of human activities. To validate and consolidate the effectiveness of the model, an ablation study has been performed on the components of the classification block. Furthermore, the network depth analysis provides the optimum depth with respect to the number of parameters. We have performed extensive studies on the Sanitation, UCI HAR, and UCI HAPT datasets and calculated various performance metrics to assess our proposed model. Our proposed model performed significantly better on every individual dataset than the other models, as illustrated in the comparative study, and provided the best results on the UCI HAR dataset. For the UCI HAR dataset, our model gives an accuracy of 0.954781, precision of 0.941473, recall of 0.940618, and F1 score of 0.940699, followed by MCC and kappa scores of 0.948767 and 0.948630, respectively, on test data.
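The sliding-window mapping with dense (per-sampling-point) labels described above can be sketched as follows; the window length and stride are illustrative, and the point is that each window keeps its full label sequence instead of being collapsed to one class:

```python
import numpy as np

def make_windows(signal, labels, win, stride):
    """Slice a (T, C) multichannel signal into overlapping windows,
    keeping the per-sample label sequence for each window (dense labels)
    rather than assigning a single class per window."""
    X, Y = [], []
    for start in range(0, signal.shape[0] - win + 1, stride):
        X.append(signal[start:start + win])
        Y.append(labels[start:start + win])   # one label per sampling point
    return np.stack(X), np.stack(Y)
```

A window that straddles an activity boundary then carries a mixed label sequence, which is exactly what a segmentation-style (U-Net-like) model can learn to predict point by point.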
•Long-term increase in forest growth in Finland mainly due to improved forest structure.
•Environmental factors induced a significant share (20–31 %) of the growth increase.
•The recent growth reduction mainly caused by environmental factors.
After a rising trend for 1971–2013, during which the annual volume growth of the forests of Finland increased by more than 70 %, a recent reduction has been observed. We analyzed the development of annual growth in the forests of Finland, focusing on the component not explainable by changes in growing stock. The data originate from nine consecutive Finnish National Forest Inventories. In the data, diameter increments were measured from increment cores and tree height increments from standing sample trees in the field. We developed models predicting periodic (5-year) annual volume increment per hectare with properties of the trees and stands as predictor variables. Deviations from model-predicted values were interpreted to be induced by environmental variation. The development was analyzed for all tree species combined and separately for three species groups: Scots pine (Pinus sylvestris L.), Norway spruce (Picea abies (L.) Karst.) and broadleaves. We observed a rising growth trend not solely explainable by increased growing stock. The species groups produced a rather similar pattern in different parts of Finland: from the 1960s to the mid-1990s, the observed volume increment was mainly below the model-predicted level, thereafter above it. During the current century, the difference between observed and predicted annual volume increment has shown a downward trend for Scots pine. For Norway spruce, the difference has continued to increase in southern Finland, but shows little change in the north. For broadleaved species, the difference between measured and predicted increment shows a recent increase as well, though not as large as for Norway spruce. The geographical pattern of the environment-induced increment component was described in more detail via maps using a 75 km × 75 km grid.
The changing environment has enhanced forest productivity in Finland over a period of nearly six decades, but recent years have not been favorable for Scots pine, which represents 50 % of the growing stock volume of the forests of Finland.
•We tackle stock market prediction using both technical and fundamental analysis.
•We combine technical and fundamental analysis through data science.
•Results on the twenty most capitalized NASDAQ100 companies support our proposal.
Stock market prediction is one of the most challenging problems and has occupied both researchers and financial analysts for more than half a century. To tackle this problem, two completely opposite approaches, namely technical and fundamental analysis, emerged. Technical analysis bases its predictions on mathematical indicators constructed from stock prices, while fundamental analysis exploits information retrieved from news, profitability, and macroeconomic factors. The competition between these schools of thought has led to many interesting achievements; however, to date, no satisfactory solution has been found. Our work aims to combine technical and fundamental analysis through the application of data science and machine learning techniques. In this paper, the stock market prediction problem is mapped to a classification task on time-series data. Indicators of technical analysis and the sentiment of news articles are both exploited as input. The outcome is a robust predictive model able to forecast the trend of a portfolio composed of the twenty most capitalized companies listed in the NASDAQ100 index. As proof of the real-world effectiveness of our approach, we exploit the predictions to run a high-frequency trading simulation that reaches an annualized return of more than 80%. This project represents a step forward in combining technical and fundamental analysis and provides a starting point for developing new trading strategies.
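Mapping the prediction problem to a classification task, as described above, amounts to labeling each day by the direction of its forward move and stacking technical and sentiment features side by side. A minimal sketch, where the moving-average ratio and a per-day sentiment score are illustrative stand-ins for the paper's actual indicators and news features:

```python
import numpy as np

def trend_labels(close, horizon=1):
    """Label each day 1 if the close rises over the next `horizon` days,
    else 0 — the classification target."""
    future = close[horizon:]
    return (future > close[:-horizon]).astype(int)

def build_features(close, sentiment, window=5):
    """Combine a simple technical indicator (price relative to its
    simple moving average) with a daily news-sentiment score."""
    sma = np.convolve(close, np.ones(window) / window, mode="valid")
    ratio = close[window - 1:] / sma      # >1 when price is above its SMA
    return np.column_stack([ratio, sentiment[window - 1:]])
```

Any off-the-shelf classifier can then be fit on `build_features(...)` against `trend_labels(...)` (after aligning lengths), which is the general shape of the technical-plus-sentiment pipeline the abstract describes.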