Increasing demand for larger touch screen panels (TSPs) places a greater energy burden on mobile systems that use conventional sensing methods. To mitigate this problem, taking advantage of touch event sparsity, this brief proposes a novel TSP readout system that achieves substantial energy savings by turning off the readout circuits when none of the sensors are activated. To this end, a novel ultra-low-power always-on event and region-of-interest detector based on lightweight compressed sensing is proposed. Exploiting the proposed event detector, a context-aware TSP readout system is presented that improves energy efficiency by up to 42×.
Abnormal event detection is an important task in video surveillance systems. In this paper, we propose novel bidirectional multi-scale aggregation networks (BMAN) for abnormal event detection. The proposed BMAN learns spatio-temporal patterns of normal events to detect deviations from the learned normal patterns as abnormalities. The BMAN consists of two main parts: an inter-frame predictor and an appearance-motion joint detector. The inter-frame predictor is devised to encode normal patterns, generating an inter-frame using attention-based bidirectional multi-scale aggregation. With this feature aggregation, robustness to object scale variations and complex motions is achieved in normal pattern encoding. Based on the encoded normal patterns, abnormal events are detected by the appearance-motion joint detector, in which both appearance and motion characteristics of scenes are considered. Comprehensive experiments are performed, and the results show that the proposed method outperforms the existing state-of-the-art methods. The resulting abnormal event detection is interpretable on the visual basis of where the detected events occur. Further, we validate the effectiveness of the proposed network designs by conducting an ablation study and feature visualization.
We present adversarial event prediction (AEP), a novel approach to detecting abnormal events through an event prediction setting. Given normal event samples, AEP derives the prediction model, which can discover the correlation between the present and future of events in the training step. In obtaining the prediction model, we propose adversarial learning for the past and future of events. The proposed adversarial learning enforces AEP to learn the representation for predicting future events and restricts the representation learning for the past of events. By exploiting the proposed adversarial learning, AEP can produce a discriminative model to detect an anomaly of events without complementary information, such as optical flow and explicit abnormal event samples in the training step. We demonstrate the efficiency of AEP for detecting anomalies of events using the UCSD-Ped, CUHK Avenue, Subway, and UCF-Crime data sets. Experiments include a performance analysis depending on hyperparameter settings and a comparison with existing state-of-the-art methods. The experimental results show that the proposed adversarial learning can assist in deriving a better model for normal events on AEP, and that AEP trained by the proposed adversarial learning can surpass the existing state-of-the-art methods.
The reduction of sensor network traffic has become a scientific challenge. Different compression techniques are applied for this purpose, offering general solutions that try to minimize the loss of information. Here, a new proposal for traffic reduction by redefining the domains of the sensor data is presented. A configurable data reduction model is proposed, focused on periodic duty-cycled sensor networks with events triggered by threshold. The loss of information produced by the model is analyzed in this paper in the context of event detection, an unusual approach leading to a set of specific metrics that enable the evaluation of the model in terms of traffic savings, precision, and recall. Different model configurations are tested with two experimental cases, whose input data are extracted from an extensive set of real data. In particular, two new versions of Send-on-Delta (SoD) and Predictive Sampling (PS) have been designed and implemented in the proposed data-domain reduction for threshold-based event detection (D2R-TED) model. The obtained results illustrate the usefulness of analyzing different model configurations to obtain a cost-benefit curve in terms of traffic savings and quality of the response. Experiments show an average reduction of 76% of network packets with an error of less than 1%. In addition, experiments show that the methods designed under the proposed D2R-TED model outperform the original event-triggered SoD and PS methods by 10% and 16% in traffic savings, respectively. This model is useful to avoid network bottlenecks by applying the optimal configuration in each situation.
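The Send-on-Delta idea underlying this abstract can be sketched in a few lines: a node transmits a sample only when it deviates from the last transmitted value by more than a configured delta. This is a minimal illustration only; the function name, the delta value, and the readings are hypothetical and are not the paper's D2R-TED configuration.

```python
# Minimal Send-on-Delta (SoD) sketch: transmit a sample only when it
# differs from the last transmitted value by at least `delta`.
def send_on_delta(samples, delta):
    sent = []          # (index, value) pairs actually transmitted
    last = None        # last transmitted value
    for t, x in enumerate(samples):
        if last is None or abs(x - last) >= delta:
            sent.append((t, x))
            last = x
    return sent

# Hypothetical temperature readings; 3 of 7 samples are transmitted.
readings = [20.0, 20.1, 20.2, 20.9, 21.0, 21.05, 22.3]
print(send_on_delta(readings, 0.5))
```

Traffic savings here come directly from suppressed transmissions, at the cost of a bounded reconstruction error of at most `delta` per reading.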
In recent years, there has been a surge of interest in Artificial Intelligence (AI) systems that can provide human-centric explanations for decisions or predictions. No matter how good and efficient an AI model is, users or practitioners find it difficult to trust it if they cannot understand the AI model or its behaviour. Incorporating human-centric explainability in event detection systems is significant for building a decision-making process that is more trustworthy and sustainable. Human-centric and semantics-based explainable event detection will achieve trustworthiness, explainability, and reliability, which are currently lacking in AI systems. This paper provides a survey on human-centric explainable AI, explainable event detection, and semantics-based explainable event detection by answering research questions concerning the characteristics of human-centric explanations, the state of explainable AI, methods for human-centric explanations, the essence of human-centricity in explainable event detection, research efforts in explainable event solutions, and the benefits of integrating semantics into explainable event detection. The findings from the survey show the current state of human-centric explainability, the potential of integrating semantics into explainable AI, the open problems, and the future directions that can guide researchers in the explainable AI domain.
Methods for identifying human activity have a wide range of potential applications, including security, event time detection, intelligent building environments, and human health. Current methodologies typically rely on either wave propagation or structural dynamics principles. However, force-based methods, such as the probabilistic force estimation and event localization algorithm (PFEEL), offer advantages over wave propagation methods by avoiding challenges such as multi-path fading. PFEEL utilizes a probabilistic framework to estimate the force of impacts and the event locations in the calibration space, providing a measure of uncertainty in the estimations.
This paper presents a new implementation of PFEEL using a data-driven model based on Gaussian process regression (GPR). The new approach was evaluated using experimental data collected on an aluminum plate impacted at eighty-one points, with a separation of five centimeters. The results are presented as an area of localization relative to the actual impact location at different probability levels. These results can aid analysts in determining the required precision for various implementations of PFEEL.
• PFEEL algorithm on an aluminum plate to accurately localize impact events.
• A Gaussian process regression model to provide probabilistic localization and force estimates.
• PFEEL-GPR reduces modeling errors and computational costs for event detection.
• Bayesian modeling of the system's input/output improves reliability in event detection.
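The Gaussian process regression at the heart of PFEEL-GPR can be illustrated with a minimal from-scratch sketch: an RBF-kernel GP that returns both a predictive mean and an uncertainty, which is the ingredient PFEEL uses to report probabilistic localization areas. This is a toy 1-D regression under assumed hyperparameters, not the paper's impact data or model; all names and values are illustrative.

```python
import numpy as np

# RBF (squared-exponential) kernel between two 1-D input arrays.
def rbf(a, b, length=1.0):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

# Minimal GP regression: predictive mean and standard deviation.
def gp_predict(x_train, y_train, x_test, noise=1e-4):
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    Kss = rbf(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    std = np.sqrt(np.clip(np.diag(cov), 0.0, None))
    return mean, std

# Toy calibration data: interpolate sin(x) between measured points.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(x)
mu, sd = gp_predict(x, y, np.array([1.5]))
```

The predictive standard deviation is what enables reporting a localization *area* at a chosen probability level rather than a single point estimate.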
Multimedia event detection has been one of the major endeavors in video event analysis, and a variety of approaches have been proposed recently to tackle this problem. Among others, semantic representation has been accredited for its promising performance and desirable ability to support human-understandable reasoning. To generate a semantic representation, we usually utilize several external image/video archives and apply the concept detectors trained on them to the event videos. Due to the intrinsic differences among these archives, the resulting representations presumably have different predictive capabilities for a given event. Nevertheless, little work is available on assessing the efficacy of semantic representations at the source level. On the other hand, it is plausible that some concepts are noisy for detecting a specific event. Motivated by these two shortcomings, we propose a bi-level semantic representation analysis method. At the source level, our method learns weights for semantic representations obtained from different multimedia archives. Meanwhile, it restrains the negative influence of noisy or irrelevant concepts at the concept level. In addition, we particularly focus on efficient multimedia event detection with few positive examples, which is highly valued in real-world scenarios. We perform extensive experiments on the challenging TRECVID MED 2013 and 2014 datasets with encouraging results that validate the efficacy of our proposed approach.
Public evaluation campaigns and datasets promote active development in target research areas, allowing direct comparison of algorithms. The second edition of the challenge on detection and classification of acoustic scenes and events (DCASE 2016) has offered such an opportunity for development of state-of-the-art methods, and succeeded in drawing together a large number of participants from academic and industrial backgrounds. In this paper, we report on the tasks and outcomes of the DCASE 2016 challenge. The challenge comprised four tasks: acoustic scene classification, sound event detection in synthetic audio, sound event detection in real-life audio, and domestic audio tagging. We present each task in detail and analyze the submitted systems in terms of design and performance. We observe the emergence of deep learning as the most popular classification method, replacing the traditional approaches based on Gaussian mixture models and support vector machines. By contrast, feature representations have not changed substantially throughout the years, as mel frequency-based representations predominate in all tasks. The datasets created for and used in DCASE 2016 are publicly available and are a valuable resource for further research.
Complex event detection is a retrieval task with the goal of finding videos of a particular event in a large-scale unconstrained Internet video archive, given example videos and text descriptions. Nowadays, different multimodal fusion schemes of low-level and high-level features are extensively investigated and evaluated for the complex event detection task. However, how to effectively select high-level, semantically meaningful concepts from a large pool to assist complex event detection is rarely studied in the literature. In this paper, we propose a novel strategy to automatically select semantically meaningful concepts for the event detection task based on both the event-kit text descriptions and the concepts' high-level feature descriptions. Moreover, we introduce a novel event-oriented dictionary representation based on the selected semantic concepts. Toward this goal, we leverage training images (frames) of selected concepts from the semantic indexing dataset, with a pool of 346 concepts, in a novel supervised multitask ℓp-norm dictionary learning framework. Extensive experimental results on the TRECVID multimedia event detection dataset demonstrate the efficacy of our proposed method.