A deep learning approach is proposed to detect data and system anomalies using high-resolution continuous point-on-wave (CPOW) or phasor measurements. Both the anomaly and anomaly-free measurement models are assumed to have unknown temporal dependencies and probability distributions. Historical training samples are assumed to be available for the anomaly-free model, while no training samples are available for the anomaly measurements. By transforming the anomaly-free observations into uniform independent and identically distributed (i.i.d.) sequences via a generative adversarial network, the proposed approach deploys a uniformity test for anomaly detection at the sensor level. A distributed detection scheme is also proposed that combines sensor-level detections at the control center to form more reliable decisions. Numerical results demonstrate significant improvement over state-of-the-art solutions for various bad-data cases using real and synthetic CPOW and PMU data sets.
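The sensor-level detector described above reduces anomaly detection to checking whether the GAN-transformed window is uniform i.i.d. A minimal sketch of that final testing stage, assuming the transformed samples are already available (the GAN itself is not reproduced here, and the critical-value constant is the standard asymptotic one for a 5% significance level), could use the one-sample Kolmogorov–Smirnov statistic against Uniform(0, 1):

```python
import math

def ks_uniform_statistic(samples):
    """One-sample KS statistic of `samples` against Uniform(0, 1)."""
    xs = sorted(samples)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):  # compare empirical CDF with F(x) = x
        d = max(d, (i + 1) / n - x, x - i / n)
    return d

def is_anomalous(transformed, critical=1.36):
    """Flag a window as anomalous when the KS statistic exceeds the
    asymptotic critical value c(alpha)/sqrt(n) (1.36 ~ alpha = 0.05)."""
    n = len(transformed)
    return ks_uniform_statistic(transformed) > critical / math.sqrt(n)

# Under the anomaly-free model the transformed output should look uniform:
uniform_like = [(i + 0.5) / 100 for i in range(100)]
print(is_anomalous(uniform_like))  # False
# A distribution shift (all mass near 0) triggers the test:
shifted = [0.01 * (i + 0.5) / 100 for i in range(100)]
print(is_anomalous(shifted))       # True
```

The KS statistic is just one possible uniformity test; the paper's exact test statistic may differ.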
Consideration of the stories contained in narrative works is important for analyzing narrative works (e.g., movies, novels, and comics) and providing them to users. In this study, we analyzed the stories in a narrative work with three goals: (i) eliciting, (ii) modeling, and (iii) utilizing the stories. Based upon our previous studies on ‘character networks’ (i.e., social networks among characters in the stories), we elicited the stories with three methods: (i) composing affective character networks from the affective relationships among the characters, (ii) measuring temporal changes in tension along the flow of the stories, and (iii) detecting affective events, which are dramatic changes in the tension. The affective relationships capture the emotional changes of the characters in each segment of the stories. By aggregating the characters’ emotional changes, we measured the tension of each segment. We call this measure ‘Affective Fluctuation’ and represent it as a discrete function (the Affective Fluctuation Function, AFF). The AFFs enable us to detect affective events from their gradients and to measure similarities among stories by comparing their shapes. We also proposed a computational model of the stories by annotating the affective events and the characters involved in them. Finally, we demonstrated a practical application with a recommendation method that exploits the similarities between stories, and we verified the reliability and efficiency of the proposed method on real-world narrative works.
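Since the AFF is a discrete function of tension per story segment, detecting affective events from its gradients and comparing story shapes can be sketched in a few lines. The gradient threshold and the distance-to-similarity mapping below are illustrative choices, not taken from the paper:

```python
def affective_events(aff, threshold=0.5):
    """Detect affective events as segment transitions where the discrete
    gradient of the Affective Fluctuation Function (AFF) is large."""
    grads = [aff[i + 1] - aff[i] for i in range(len(aff) - 1)]
    return [i for i, g in enumerate(grads) if abs(g) > threshold]

def aff_similarity(aff_a, aff_b):
    """Compare the shapes of two equal-length AFFs via a Euclidean
    distance mapped to a (0, 1] similarity score."""
    assert len(aff_a) == len(aff_b)
    dist = sum((a - b) ** 2 for a, b in zip(aff_a, aff_b)) ** 0.5
    return 1.0 / (1.0 + dist)

tension = [0.1, 0.2, 0.9, 0.8, 0.1]  # tension per story segment
print(affective_events(tension))      # [1, 3]: sharp rise, sharp fall
```

A recommendation method could then rank candidate works by `aff_similarity` against a work the user liked.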
•The paper presents a novel model and methods for analyzing the stories of narrative works.
•The proposed methods focus on detecting affective events described in the stories.
•Affective events are detected from temporal changes in tension along the flow of the stories.
•Tension is measured from affective relationships among the characters appearing in the stories.
•The approach has shown its efficiency in a recommendation system for narrative works.
Recent technology advancements have resulted in an optimistic view toward the practicability of wireless sensor networks (WSNs) in the context of the Internet of Things (IoT) and Cyber Physical Systems (CPS). However, to realize their full benefits in a broad range of commercial applications, many technical hurdles still need to be overcome. In this paper, we address three vital technical issues in a WSN: (1) distributed event detection, (2) distributed parameter estimation, and (3) network robustness. We make use of a recent development in social networks, small world characteristics, and propose novel fault-resilient distributed detection and estimation methods over a small world WSN (SW-WSN). In particular, the small world WSN is developed by mounting antenna arrays on sensor nodes for the purpose of beamforming. A low-complexity optimization problem for beamforming is formulated by introducing a new parameter, Flow, defined between node pairs, and a new beamforming algorithm is proposed that optimizes this flow, leading to optimal beam parameters. The proposed method yields a lower average path length and a higher average clustering coefficient for the network. Experiments are conducted using simulations and real node deployments over a WSN testbed. The analysis and experimental results demonstrate that the proposed SW-WSN model achieves faster convergence rates for both distributed detection and distributed estimation while being resilient to node failures, compared with state-of-the-art methods.
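The small-world claim rests on two standard graph metrics: average path length and clustering coefficient. A minimal sketch of how these could be computed over a toy adjacency structure (the Flow parameter and the beamforming optimization themselves are not reproduced here):

```python
from collections import deque

def clustering_coefficient(adj, node):
    """Local clustering coefficient: fraction of a node's neighbor
    pairs that are themselves connected."""
    nbrs = list(adj[node])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

def average_path_length(adj):
    """Mean shortest-path length over all connected node pairs (BFS)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for n, d in dist.items() if n != src)
        pairs += len(dist) - 1
    return total / pairs

# Ring of 4 nodes plus one chord (a small-world style shortcut):
g = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
print(average_path_length(g))        # 7/6 ~ 1.17
print(clustering_coefficient(g, 0))  # 2/3: two of three neighbor pairs linked
```

Rewiring that adds shortcuts (like the chord above) lowers the average path length while keeping local clustering high, which is what characterizes a small-world topology.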
•A multi-modal Generative Adversarial Network for traffic event detection.
•Semi-supervised learning based on a generative adversarial network.
•Detecting traffic events with both sensor and social media data.
•Evaluation based on a large, real-world multi-modal dataset.
Advances in the Internet of Things have enabled the development of many smart city applications and expert systems that help citizens and authorities better understand the dynamics of their cities and better plan and utilise city resources. Smart cities are composed of complex systems that usually process and analyse big data from the Cyber, Physical, and Social worlds. Traffic event detection is an important and complex task in smart transportation modelling and management. We address this problem using semi-supervised deep learning with data of different modalities, e.g., physical sensor observations and social media data. Unlike most existing studies that focus on data of a single modality, the proposed method makes use of data of multiple modalities that appear to complement and reinforce each other. Meanwhile, as the amount of labelled data in big data applications is usually extremely limited, we extend the multi-modal Generative Adversarial Network model to a semi-supervised architecture to characterise traffic events. We evaluate the model with a large, real-world dataset consisting of traffic sensor observations and social media data collected from the San Francisco Bay Area over a period of four months. The evaluation results clearly demonstrate the advantages of the proposed model in extracting and classifying traffic events.
•We categorize the behaviors of people into individual and group interactive behavior.
•We propose a hybrid agent system that includes static and dynamic agents in a scene.
•We represent the behavior of a crowd as a bag of words to detect abnormal behavior.
In this paper, we propose a hybrid agent method to detect abnormal behaviors in a crowded scene. In real-life situations, abnormal behavior manifests as violent movement: sudden speeding up, chaotic movement in a restricted area, or movement contrasting with that of one’s neighbors, as in a panic situation. In our model, we categorize the behaviors of people into individual behavior and group interactive behavior. Individual behavior is defined only by native motion information such as speed and direction. By contrast, group interactive behavior is defined by information concerning interactive motion between neighbors. We propose a hybrid agent system that includes static and dynamic agents to efficiently observe the corresponding individual and interactive behaviors in a crowded scene. The static agent is assigned to a specific spot and analyzes motion information near that spot. Unlike the static agent, the dynamic agent is assigned to a moving object and, by following the object’s movement, analyzes the motion information of both the object and its neighbors. We represent the behavior of a crowd as a bag of words by integrating static and dynamic agent information to determine abnormalities in the crowd behavior. The experimental results show that our proposed method efficiently detects abnormal behaviors in crowded scenes.
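As an illustration of the bag-of-words representation, one could quantize each agent's motion observations into discrete words and score a scene by the distance between its word histogram and a normal-behavior histogram. The bin edges, word naming, and L1 distance below are hypothetical choices for a sketch, not the paper's exact design:

```python
import math
from collections import Counter

def motion_word(speed, direction, speed_edges=(1.0, 3.0), n_dir_bins=4):
    """Quantize an agent's (speed, direction) observation into a discrete
    word such as 's1_d2'. Bin edges here are illustrative."""
    s = sum(speed > edge for edge in speed_edges)            # 0=slow..2=fast
    d = int((direction % (2 * math.pi)) / (2 * math.pi) * n_dir_bins)
    return f"s{s}_d{d}"

def bag_of_words(observations):
    """Histogram of motion words gathered by static and dynamic agents."""
    return Counter(motion_word(sp, di) for sp, di in observations)

def abnormality(bag, normal_bag):
    """L1 distance between normalized histograms as a simple anomaly score."""
    words = set(bag) | set(normal_bag)
    n1, n2 = sum(bag.values()), sum(normal_bag.values())
    return sum(abs(bag[w] / n1 - normal_bag[w] / n2) for w in words)

calm  = bag_of_words([(0.5, 0.1)] * 20)              # slow, same direction
panic = bag_of_words([(5.0, i) for i in range(20)])  # fast, chaotic directions
print(abnormality(panic, calm) > abnormality(calm, calm))  # True
```

In practice the "normal" histogram would be accumulated from anomaly-free footage, and the threshold on the score tuned on validation data.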
Nowadays, the world faces extreme climate change, resulting in an increase in the number and severity of natural disaster events. Under these conditions, the need for disaster information management systems has become more imperative. Specifically, this paper addresses the problem of flood event detection from images captured under real-world conditions: the images may be taken during day or night and may be blurry, clear, foggy, or rainy, under varying lighting conditions. All of these scenarios significantly reduce the performance of learning algorithms. In addition, many existing image classification methods use datasets that usually include high-resolution images without considering real-world noise. In this paper, we propose a new image classification framework based on adversarial data augmentation and deep learning algorithms to address the aforementioned problems. We validate the performance of the flood event detection framework on a real-world noisy visual dataset collected from social networks.
The acoustic emission (AE) technique is one of the most widely used in the field of structural monitoring. Its popularity mainly stems from the fact that it belongs to the category of non-destructive techniques (NDT) and allows the passive monitoring of structures. The technique employs piezoelectric sensors to measure the elastic ultrasonic wave that propagates in the material as a result of the abrupt release of energy during crack formation. The recorded signal can be investigated to obtain information about the source crack, its position, and its typology (Mode I, Mode II). Over the years, many techniques have been developed for the localization, characterization, and quantification of damage from the study of acoustic emissions. The onset time of the signal is an essential piece of information to be derived from waveform analysis: combined with triangulation, it allows the crack location to be identified. In the literature, many methods exist to identify, with increasing accuracy, the onset time of the P-wave; indeed, the precision of the onset time detection determines the accuracy of the crack localization. In this paper, two techniques for defining the onset time of acoustic emission signals are presented. The first method is based on the Akaike Information Criterion (AIC), while the second relies on artificial intelligence (AI). A recurrent convolutional neural network (R-CNN) designed for sound event detection (SED) is trained on three different datasets composed of seismic signals and acoustic emission signals and tested on a real-world acoustic emission dataset. The new method takes advantage of the similarities between acoustic emissions, seismic signals, and sound signals, enhancing the accuracy in determining the onset time.
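The AIC-based picker has a standard textbook form: split the waveform at every candidate sample, model each side as stationary noise, and pick the split minimizing AIC(k) = k·log(var(x[1..k])) + (n−k)·log(var(x[k+1..n])). A minimal (O(n²), unoptimized) sketch of this idea, not the paper's exact implementation:

```python
import math
import random

def variance(x):
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

def aic_onset(signal):
    """Akaike Information Criterion onset picker: the AIC curve reaches
    its minimum near the point where the signal variance changes, i.e.,
    the P-wave arrival."""
    n = len(signal)
    best_k, best_aic = None, float("inf")
    for k in range(2, n - 1):
        v1 = max(variance(signal[:k]), 1e-12)  # guard against log(0)
        v2 = max(variance(signal[k:]), 1e-12)
        aic = k * math.log(v1) + (n - k) * math.log(v2)
        if aic < best_aic:
            best_k, best_aic = k, aic
    return best_k

# Low-amplitude noise followed by a high-amplitude arrival at sample 50:
random.seed(0)
trace = ([random.gauss(0, 0.05) for _ in range(50)] +
         [random.gauss(0, 1.0) for _ in range(50)])
print(aic_onset(trace))  # close to 50
```

Production pickers compute the running variances incrementally to stay O(n); the quadratic form above is kept for clarity.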
This paper presents a novel framework for high-level activity analysis based on late fusion using multiple independent temporal perception layers. The method allows us to handle the temporal diversity of high-level activities. The framework consists of multi-temporal analysis, multi-temporal perception layers, and late fusion. We build two types of perception layers based on situation graph trees (SGT) and support vector machines (SVMs). The results obtained from the multi-temporal perception layers are fused into an activity score in a late fusion step. To verify this approach, we apply the framework to violent event detection in visual surveillance, conducting experiments on three datasets: BEHAVE, NUS–HGA, and some videos from YouTube that show real situations. We also compare the proposed framework with existing single-temporal frameworks. The experiments produced accuracies of 0.783 (SGT-based, BEHAVE), 0.702 (SVM-based, BEHAVE), 0.872 (SGT-based, NUS–HGA), and 0.699 (SGT-based, YouTube), showing that our multi-temporal approach has advantages over single-temporal methods.
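The late fusion step can be as simple as a weighted combination of the per-layer scores. A minimal sketch, with hypothetical scores and weights (the paper's actual fusion rule may differ):

```python
def late_fusion(layer_scores, weights=None):
    """Fuse scores from independent temporal perception layers (e.g., SGT-
    and SVM-based, at different temporal scales) into one activity score
    via a weighted average."""
    if weights is None:
        weights = [1.0] * len(layer_scores)
    total = sum(weights)
    return sum(w * s for w, s in zip(weights, layer_scores)) / total

# Hypothetical violence scores from three temporal scales:
print(late_fusion([0.8, 0.6, 0.7]))             # plain average, ~0.7
print(late_fusion([0.8, 0.6, 0.7], [2, 1, 1]))  # weight the short scale higher
```

The weights would typically be tuned on validation data so that the more reliable temporal scale dominates the fused score.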
At present, existing abnormal event detection models based on deep learning mainly focus on data represented in vectorial form and pay little attention to the internal structural characteristics of the feature vectors. In addition, it is difficult for a single classifier to ensure classification accuracy. To address these issues, we propose an abnormal event detection hybrid modulation method that calibrates classification via feature expectation subgraphs in video surveillance scenes. Our main contribution is to calibrate the classification of a single classifier by constructing feature expectation subgraphs. First, we employ convolutional neural network and long short-term memory models to extract the spatiotemporal features of video frames, and then construct a feature expectation subgraph for each key frame of every video, which captures the internal sequential and topological relational characteristics of the structured feature vector. Second, we project the expectation subgraphs onto sparse vectors and combine them with a support vector classifier to calibrate the results of a linear support vector classifier. Finally, experiments on the common UCSDped1 dataset and a coal mining video dataset, in comparison with existing works, demonstrate that the proposed method performs better than several state-of-the-art approaches.
In this article, we set up a novel audio dataset named the Gastrointestinal (GI) Sound Set, which includes six kinds of body sounds: Bowel sound, Speech, Snore, Cough, Groan, and Rub. We perform sound event detection (SED) on it and can accurately detect all six types of sound events. First, the GI Sound Set is collected with wearable auscultation devices. To ensure generalization, patients from five different hospital departments were recruited for data collection, along with a group of healthy subjects. The GI Sound Set follows Google AudioSet in data format but varies in audio length and sampling rate. Second, we extract Mel-filter features from the recordings and investigate the performance of different activation functions and neural network architectures for detecting sound events. We use data augmentation and class balancing to deal with the quantitative imbalance between classes in the dataset. We apply multiple instance learning (MIL) to produce not only bag-level but also frame-level results. To date, the GI Sound Set is the largest body sound dataset, and our approach shows state-of-the-art performance with an average score of F1 = 81.06% evaluated on the test set. Owing to its simple network and conventional processing, our CRNN system is highly general and can be applied to other audio datasets, such as respiratory sounds and heart sounds.
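The MIL step that yields both bag-level and frame-level outputs can be illustrated with simple pooling over per-frame scores. Assuming a CRNN has already produced per-frame event probabilities (the values below are hypothetical), a sketch:

```python
def mil_pool(frame_probs, mode="max"):
    """Aggregate per-frame event probabilities (the instance scores a CRNN
    would emit) into a single bag-level score for the whole recording."""
    if mode == "max":
        return max(frame_probs)
    if mode == "noisy-or":  # 1 - prod(1 - p_t)
        out = 1.0
        for p in frame_probs:
            out *= 1.0 - p
        return 1.0 - out
    raise ValueError(mode)

def frame_events(frame_probs, threshold=0.5):
    """Frame-level detections: indices of frames whose score passes the
    threshold, localizing the event within the recording."""
    return [t for t, p in enumerate(frame_probs) if p >= threshold]

# Hypothetical per-frame scores for the 'Cough' class on one recording:
probs = [0.05, 0.1, 0.9, 0.85, 0.2]
print(mil_pool(probs))      # 0.9 -> bag-level: cough present
print(frame_events(probs))  # [2, 3] -> frames where the cough occurs
```

Max pooling ties the bag label to the single strongest frame, while noisy-or accumulates weak evidence across frames; which pooling the paper uses is not specified here.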