Anomalies are rare observations (e.g., data records or events) that deviate significantly from the others in the sample. Over the past few decades, research on anomaly mining has received increasing interest due to the implications of these occurrences in a wide range of disciplines, for instance security, finance, and medicine. For this reason, anomaly detection, which aims to identify these rare observations, has become one of the most vital tasks and has shown its power in preventing detrimental events, such as financial fraud, network intrusions, and social spam. The detection task is typically solved by identifying outlying data points in the feature space, which inherently overlooks the relational information in real-world data. At the same time, graphs have been prevalently used to represent structural/relational information, which raises the graph anomaly detection problem: identifying anomalous graph objects (i.e., nodes, edges, and sub-graphs) in a single graph, or anomalous graphs in a set/database of graphs. Conventional anomaly detection techniques cannot tackle this problem well because of the complexity of graph data (e.g., irregular structures, relational dependencies, node/edge types/attributes/directions/multiplicities/weights, large scale, etc.). However, thanks to the advent of deep learning in breaking these limitations, graph anomaly detection with deep learning has received growing attention recently. In this survey, we aim to provide a systematic and comprehensive review of contemporary deep learning techniques for graph anomaly detection. Specifically, we provide a taxonomy that follows a task-driven strategy and categorizes existing work according to the anomalous graph objects they can detect. We especially focus on the challenges in this research area and discuss the key intuitions, technical details, and relative strengths and weaknesses of the techniques in each category.
From the survey results, we highlight 12 future research directions spanning unsolved and emerging problems introduced by graph data, anomaly detection, deep learning and real-world applications. Additionally, to provide a wealth of useful resources for future studies, we have compiled a set of open-source implementations, public datasets, and commonly-used evaluation metrics. With this survey, our goal is to create a "one-stop-shop" that provides a unified understanding of the problem categories and existing approaches, publicly available hands-on resources, and high-impact open challenges for graph anomaly detection using deep learning.
Recent academic and industry reports confirm that web robots dominate the traffic seen by web servers across the Internet. Because web robots crawl in an unregulated fashion, they may threaten the privacy, function, performance, and security of web servers. There is therefore a growing need to identify robot visitors automatically, both offline and in real time, to assess their impact and to potentially protect web servers from abusive bots. Yet contemporary detection approaches, which rely on syntactic log analysis, statistical differences between robot and human traffic, analytical learning techniques, or complex software modifications, may not be realistic to implement or may not remain effective as the behavior of robots evolves over time. Instead, this paper presents a novel detection approach that relies on the differences in the resource request patterns of web robots and humans. It rationalizes why these differences are expected to remain intrinsic to robots and humans despite the continuous evolution of their traffic. The performance of the approach, which is adoptable in both offline and real-time settings with a simple implementation, is demonstrated by playing back streams of actual web traffic with varying session lengths and proportions of robot requests.
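The core idea, that robots and humans request different mixes of resource types, can be sketched with a toy session classifier. The resource categories, the image-fraction feature, and the threshold below are illustrative assumptions, not the paper's actual feature set:

```python
from collections import Counter

# Hypothetical resource-type categories; the paper's actual feature
# set may differ from this simplified mapping.
RESOURCE_TYPES = {".html": "page", ".css": "style", ".js": "script",
                  ".png": "image", ".jpg": "image", ".txt": "text"}

def session_features(requests):
    """Turn a session's requested paths into a resource-type distribution."""
    counts = Counter()
    for path in requests:
        ext = "." + path.rsplit(".", 1)[-1] if "." in path else ""
        counts[RESOURCE_TYPES.get(ext, "other")] += 1
    total = sum(counts.values())
    return {t: counts[t] / total for t in counts}

def looks_like_robot(requests, image_threshold=0.05):
    """Flag sessions that request almost no embedded images -- a crude
    proxy for the request-pattern differences the paper exploits
    (browsers fetch a page's embedded resources; many crawlers do not)."""
    feats = session_features(requests)
    return feats.get("image", 0.0) < image_threshold

human = ["/index.html", "/style.css", "/logo.png", "/photo.jpg", "/about.html"]
robot = ["/index.html", "/about.html", "/contact.html", "/robots.txt"]
```

In this toy example, the human session has an image fraction of 0.4 while the robot session fetches no images at all, so only the latter is flagged.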
With the fast development of remote sensing platforms and sensor technology, change detection with heterogeneous remote sensing images (Hete-CD) has become an attractive topic in recent years. It plays a vital role in land cover change detection for responding to natural disaster emergencies when homogeneous images are unavailable. Although Hete-CD has been developed for about three decades, and various related methods have been developed and applied successfully in practice, a systematic and comprehensive review of the current achievements in Hete-CD remains lacking. Therefore, in this article, we first present an overview of Hete-CD in terms of the related literature. Second, the major techniques of Hete-CD are reviewed in terms of publicly available datasets, the taxonomy of major techniques, results, performance, and quantitative evaluation. Then, some classical methods are selected for comparison and discussion. Finally, based on the discussion and literature review, challenges, opportunities, and future directions for Hete-CD are summarized. The review aims to provide a "one-stop-shop" understanding of the problem, the categories of existing approaches, open opportunities and challenges, and potential future directions for Hete-CD.
Cracks are typical line structures that are of interest in many computer-vision applications. In practice, many cracks, e.g., pavement cracks, show poor continuity and low contrast, which poses great challenges to image-based crack detection using low-level features. In this paper, we propose DeepCrack, an end-to-end trainable deep convolutional neural network for automatic crack detection that learns high-level features for crack representation. In this method, multi-scale deep convolutional features learned at hierarchical convolutional stages are fused together to capture the line structures: larger-scale feature maps provide more detailed representations, while smaller-scale feature maps provide more holistic ones. We build the DeepCrack net on the encoder-decoder architecture of SegNet and pairwisely fuse the convolutional features generated in the encoder network and in the decoder network at the same scale. We train DeepCrack on one crack dataset and evaluate it on three others. The experimental results demonstrate that DeepCrack achieves an average F-measure over 0.87 on the three challenging datasets and outperforms the current state-of-the-art methods.
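The pairwise same-scale fusion described above can be sketched in NumPy. The weighted sum standing in for a learned 1x1 convolution, the nearest-neighbour upsampling, and the toy feature-map sizes are all simplifying assumptions, not the paper's trained network:

```python
import numpy as np

def upsample(fmap, factor):
    """Nearest-neighbour upsampling of a 2-D feature map."""
    return np.repeat(np.repeat(fmap, factor, axis=0), factor, axis=1)

def pairwise_fuse(enc, dec, w_enc=0.5, w_dec=0.5):
    """Fuse encoder/decoder features of the same scale.  A 1x1 convolution
    over the concatenated pair reduces, for one output channel, to a
    weighted sum; the weights here are fixed placeholders."""
    return w_enc * enc + w_dec * dec

def deepcrack_fusion(enc_maps, dec_maps):
    """Fuse per-scale pairs, upsample everything to the finest scale,
    and average into a single crack-probability map."""
    base = enc_maps[0].shape[0]
    fused = []
    for enc, dec in zip(enc_maps, dec_maps):
        f = pairwise_fuse(enc, dec)
        factor = base // f.shape[0]
        if factor > 1:
            f = upsample(f, factor)
        fused.append(f)
    logits = np.mean(fused, axis=0)
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid -> per-pixel probabilities

# Toy feature maps at scales 8x8, 4x4, 2x2, mimicking three stages.
rng = np.random.default_rng(0)
enc_maps = [rng.standard_normal((s, s)) for s in (8, 4, 2)]
dec_maps = [rng.standard_normal((s, s)) for s in (8, 4, 2)]
prob = deepcrack_fusion(enc_maps, dec_maps)
```

The output is a single map at the finest resolution with values in (0, 1), mirroring how the fused multi-scale features produce a per-pixel crack prediction.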
Recent progress on salient object detection is substantial, benefiting mostly from the explosive development of Convolutional Neural Networks (CNNs). Semantic segmentation and salient object detection algorithms developed lately have been mostly based on Fully Convolutional Neural Networks (FCNs). There is still large room for improvement over generic FCN models, which do not explicitly deal with the scale-space problem. The Holistically-Nested Edge Detector (HED) provides a skip-layer structure with deep supervision for edge and boundary detection, but the performance gain of HED on saliency detection is not obvious. In this paper, we propose a new salient object detection method by introducing short connections to the skip-layer structures within the HED architecture. Our framework takes full advantage of multi-level and multi-scale features extracted from FCNs, providing more advanced representations at each layer, a property that is critically needed to perform segment detection. Our method produces state-of-the-art results on five widely tested salient object detection benchmarks, with advantages in terms of efficiency (0.08 seconds per image), effectiveness, and simplicity over the existing algorithms. Beyond that, we conduct an exhaustive analysis of the role of training data on performance and provide a training set for future research and fair comparisons.
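The short-connection idea, deeper side outputs feeding into shallower ones so every level sees both local detail and global saliency cues, can be sketched as follows. The fixed unit connection weights, nearest-neighbour upsampling, and toy map sizes are assumptions; in the paper the connections are learned end to end:

```python
import numpy as np

def upsample(fmap, factor):
    """Nearest-neighbour upsampling of a 2-D activation map."""
    return np.repeat(np.repeat(fmap, factor, axis=0), factor, axis=1)

def short_connection_outputs(side_maps):
    """Enrich each shallow side output with every deeper one, as in the
    short connections added to HED's skip-layer structure.  The
    connection weights are fixed to 1 here; the network learns them."""
    n = len(side_maps)
    enhanced = []
    for i, fmap in enumerate(side_maps):
        acc = fmap.astype(float).copy()
        for j in range(i + 1, n):            # deeper layers feed shallower ones
            factor = fmap.shape[0] // side_maps[j].shape[0]
            acc += upsample(side_maps[j], factor)
        enhanced.append(1.0 / (1.0 + np.exp(-acc)))  # per-scale saliency map
    return enhanced

# Side outputs at 8x8, 4x4 and 2x2, mimicking three network stages.
rng = np.random.default_rng(1)
side = [rng.standard_normal((s, s)) for s in (8, 4, 2)]
maps = short_connection_outputs(side)
```

Each level keeps its own resolution but now aggregates coarser, more semantic evidence from below it, which is the property the short connections add over plain HED skip layers.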
Efficient detection of targets immersed in a complex background with a low signal-to-clutter ratio (SCR) is very important in infrared search and track (IRST) applications. In this paper, we address the target detection problem in terms of local image segmentation and propose a novel small target detection algorithm, derived from the facet kernel and the random walker (RW) algorithm, which consists of four main stages. First, since the RW algorithm is suited to images with little noise, local order-statistic and mean filtering are applied to remove pixel-sized noises with high brightness (PNHB) and smooth the infrared images. Second, the infrared image is filtered by the facet kernel to enhance the target pixels, and candidate target pixels are extracted by an adaptive threshold operation. Third, inspired by the properties of infrared targets, a novel local contrast descriptor (NLCD) based on the RW algorithm is proposed to achieve clutter suppression and target enhancement. The candidate target pixels are then selected as central pixels to construct local regions, and the NLCD map of all local regions is computed. The obtained NLCD map is weighted by the filtered map of the facet kernel to further enhance the target. Finally, the target is detected by a thresholding operation on the weighted map. Experimental results on three data sets show that the proposed method outperforms conventional baseline methods in terms of target detection accuracy.
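The second stage, facet-kernel filtering followed by adaptive thresholding, can be sketched in a few lines. The Laplacian-like kernel below stands in for the true facet kernel (which is derived from a cubic facet-model fit), and the threshold form T = mean + k * std is one common choice of adaptive threshold, both assumptions rather than the paper's exact formulation:

```python
import numpy as np

# A zero-sum, center-surround 5x5 kernel standing in for the facet
# kernel; flat backgrounds map to 0, pixel-sized bright spots to large values.
FACET_LIKE = np.array([[ 0,  0, -1,  0,  0],
                       [ 0, -1, -2, -1,  0],
                       [-1, -2, 16, -2, -1],
                       [ 0, -1, -2, -1,  0],
                       [ 0,  0, -1,  0,  0]], dtype=float)

def convolve2d(image, kernel):
    """'Valid' 2-D correlation (no padding), kept explicit for clarity."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def candidate_pixels(image, k=3.0):
    """Facet-style filtering, then the adaptive threshold
    T = mean + k * std over the filtered map."""
    filtered = convolve2d(image, FACET_LIKE)
    threshold = filtered.mean() + k * filtered.std()
    return filtered, filtered > threshold

# Flat background with one bright pixel-sized "target" at (10, 10).
img = np.full((20, 20), 10.0)
img[10, 10] = 100.0
filtered, mask = candidate_pixels(img)
```

Because the kernel sums to zero, the flat background filters to zero and only the position where the kernel is centred on the bright pixel survives the threshold, leaving a single candidate target pixel.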
Surgical instrument detection in robot-assisted surgery videos is an important vision component for these systems. Most current deep learning methods focus on single-tool detection and suffer from low detection speed. To address this, the authors propose a novel frame-by-frame detection method using a cascading convolutional neural network (CNN), which consists of two different CNNs, for real-time multi-tool detection. An hourglass network and a modified Visual Geometry Group (VGG) network are applied to jointly predict the localisation: the former CNN outputs detection heatmaps representing the location of tool-tip areas, and the latter performs bounding-box regression for tool-tip areas on these heatmaps stacked with the input RGB image frames. The authors' method is tested on the publicly available EndoVis Challenge dataset and the ATLAS Dione dataset. The experimental results show that the method achieves better performance than mainstream detection methods in terms of detection accuracy and speed.
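The overall flow, a heatmap localising tool tips which is then turned into boxes, can be illustrated with a greedy peak decoder. This stands in for the second (VGG-based) regression network, and the fixed box size, score threshold, and peak suppression are all simplifying assumptions:

```python
import numpy as np

def heatmap_to_boxes(heatmap, box_size=4, score_threshold=0.5):
    """Turn a detection heatmap into tool-tip boxes: take every peak
    above the threshold and centre a fixed-size box on it.  (The paper
    regresses boxes with a second CNN; this greedy decoding is a
    simplified stand-in.)"""
    h, w = heatmap.shape
    boxes = []
    hm = heatmap.copy()
    while True:
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        score = hm[y, x]
        if score < score_threshold:
            break
        half = box_size // 2
        boxes.append((max(0, x - half), max(0, y - half),
                      min(w, x + half), min(h, y + half), float(score)))
        # Suppress the neighbourhood so each peak yields exactly one box.
        hm[max(0, y - half):y + half, max(0, x - half):x + half] = 0.0
    return boxes

# Toy heatmap with two tool-tip responses.
hm = np.zeros((16, 16))
hm[4, 4] = 0.9
hm[12, 10] = 0.8
boxes = heatmap_to_boxes(hm)
```

On the toy heatmap this yields two boxes, one per tool tip, decoded in descending score order, which is the multi-tool behaviour the cascade is designed to provide.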
This book shows you how to adopt data-driven techniques for the problem of radar detection, both per se and in combination with model-based approaches. In particular, the focus is on space-time adaptive target detection against a background of interference consisting of clutter, possible jammers, and noise. It is a handy, concise reference for many classic (model-based) adaptive radar detection schemes as well as the most popular machine learning techniques (including deep neural networks), and it helps you identify suitable data-driven approaches for radar detection and the main related issues. You'll learn how data-driven tools relate to, and can be coupled or hybridized with, traditional adaptive detection statistics, and you'll come to understand fundamental concepts, schemes, and algorithms from the statistical learning, classification, and neural network domains. The book also walks you through how these concepts and schemes have been adapted to the problem of radar detection in the literature and provides a methodological guide for the design, illustrating different possible strategies. You'll be equipped to develop a unified view under which you can exploit the new possibilities of the data-driven approach, even using simulated data. This book is an excellent resource for radar professionals, industrial researchers, postgraduate students in electrical engineering, and the academic community.
Deep learning approaches to anomaly detection (AD) have recently improved the state of the art in detection performance on complex data sets, such as large collections of images or text. These results have sparked a renewed interest in the AD problem and led to the introduction of a great variety of new methods. With the emergence of numerous such methods, including approaches based on generative models, one-class classification, and reconstruction, there is a growing need to bring methods of this field into a systematic and unified perspective. In this review, we aim to identify the common underlying principles and the assumptions that are often made implicitly by various methods. In particular, we draw connections between classic "shallow" and novel deep approaches and show how this relation might cross-fertilize or extend both directions. We further provide an empirical assessment of major existing methods that are enriched by the use of recent explainability techniques and present specific worked-through examples together with practical advice. Finally, we outline critical open challenges and identify specific paths for future research in AD.
Classifying concrete defects during a bridge inspection remains a subjective and laborious task: the risk of getting a false result is approximately 50% when different inspectors assess the same concrete defect. This is significant in light of an over-aging bridge stock, decreasing infrastructure maintenance budgets, and catastrophic bridge collapses such as the one in Genoa, Italy, in 2018. To support automated inspection and objective bridge defect classification, we propose a three-staged concrete defect classifier that can multi-classify potentially unhealthy bridge areas into their specific defect types in conformity with existing bridge inspection guidelines. Three separate pre-trained deep neural networks are fine-tuned on a multi-source dataset consisting of self-collected image samples plus several Departments of Transportation inspection databases. We show that this approach can reliably classify multiple defect types with an average mean score of 85%. The presented multi-classifier is a contribution towards a mostly or fully automated inspection scheme for a more cost-effective and more objective bridge inspection.
• The presented method can automatically multi-classify concrete bridge defects on image patches in accordance with existing inspection guidelines.
• The cross-learning strategy of taking a pre-trained network and refining it with domain-specific knowledge can be successfully applied to concrete bridge surface defects.
• The multi-classifier can adapt to local variations and consider possible (and impossible) defect combinations.