Receiver operating characteristic (ROC) analysis evaluates signal detection by means of the ROC curve, a plot of detection probability, <inline-formula> <tex-math notation="LaTeX">P_{\text {D}} </tex-math></inline-formula>, versus false alarm probability, <inline-formula> <tex-math notation="LaTeX">P_{\text {F}} </tex-math></inline-formula>, and has been widely used as an evaluation tool for signal detection. In particular, the area under an ROC curve (AUC) is calculated and used as a detection measure. Unfortunately, finding the distributions of <inline-formula> <tex-math notation="LaTeX">P_{\text {D}} </tex-math></inline-formula> and <inline-formula> <tex-math notation="LaTeX">P_{\text {F}} </tex-math></inline-formula> required to generate a continuous ROC curve is practically infeasible. This article investigates approaches to generating a discrete 2D ROC curve of (<inline-formula> <tex-math notation="LaTeX">P_{\text {D}} </tex-math></inline-formula>,<inline-formula> <tex-math notation="LaTeX">P_{\text {F}} </tex-math></inline-formula>) without resorting to probability distributions. Since <inline-formula> <tex-math notation="LaTeX">P_{\text {D}} </tex-math></inline-formula> and <inline-formula> <tex-math notation="LaTeX">P_{\text {F}} </tex-math></inline-formula> are determined by the same threshold <inline-formula> <tex-math notation="LaTeX">\tau </tex-math></inline-formula> that specifies a detector, an ROC curve of (<inline-formula> <tex-math notation="LaTeX">P_{\text {D}} </tex-math></inline-formula>,<inline-formula> <tex-math notation="LaTeX">P_{\text {F}} </tex-math></inline-formula>) can only be used to evaluate the effectiveness of a detector, not its target detectability (TD) or background suppressibility (BS).
To address this issue, a 3D ROC curve is generated as a function of (<inline-formula> <tex-math notation="LaTeX">P_{\text {D}} </tex-math></inline-formula>,<inline-formula> <tex-math notation="LaTeX">P_{\text {F}} </tex-math></inline-formula>, <inline-formula> <tex-math notation="LaTeX">\tau </tex-math></inline-formula>) by introducing the threshold parameter <inline-formula> <tex-math notation="LaTeX">\tau </tex-math></inline-formula> as a third independent variable. As a result, a 3D ROC curve, along with its three derived 2D ROC curves of (<inline-formula> <tex-math notation="LaTeX">P_{\text {D}} </tex-math></inline-formula>,<inline-formula> <tex-math notation="LaTeX">P_{\text {F}} </tex-math></inline-formula>), (<inline-formula> <tex-math notation="LaTeX">P_{\text {D}} </tex-math></inline-formula>,<inline-formula> <tex-math notation="LaTeX">\tau </tex-math></inline-formula>), and (<inline-formula> <tex-math notation="LaTeX">P_{\text {F}} </tex-math></inline-formula>,<inline-formula> <tex-math notation="LaTeX">\tau </tex-math></inline-formula>), can further be used to design new quantitative measures that evaluate the effectiveness of a detector as well as its TD and BS. To demonstrate the full utility of 3D ROC analysis in target detection, extensive experiments are performed on two types of targets, targets with prior knowledge and anomalies, to conduct a comprehensive analysis of 3D ROC curves using newly designed detection measures to evaluate target/anomaly detection performance.
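The discrete ROC construction described in this abstract can be illustrated with a short sketch. The code below is a minimal, hypothetical illustration (the function names and the trapezoidal AUC are our assumptions, not the authors' implementation): it sweeps a threshold over detector scores to produce the (P_D, P_F, tau) samples of a 3D ROC curve, from which the three 2D curves and their AUCs follow.

```python
import numpy as np

def roc_3d(scores, labels, n_thresholds=100):
    """Empirical 3D ROC curve (P_D, P_F, tau) from detector output.

    scores: detector output per pixel; labels: 1 = target, 0 = background.
    Returns arrays tau, P_D(tau), P_F(tau); no probability distributions needed.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    taus = np.linspace(scores.min(), scores.max(), n_thresholds)
    # A pixel is declared a detection when its score meets the threshold.
    pd = np.array([(scores[labels == 1] >= t).mean() for t in taus])
    pf = np.array([(scores[labels == 0] >= t).mean() for t in taus])
    return taus, pd, pf

def auc(x, y):
    """Area under a sampled curve via the trapezoidal rule (sorted by x)."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    return float(np.sum(0.5 * (ys[1:] + ys[:-1]) * np.diff(xs)))
```

Pairing the returned arrays two at a time gives the (P_D, P_F), (P_D, tau), and (P_F, tau) curves mentioned above, each with its own AUC.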
Anomaly detection, a.k.a. outlier detection or novelty detection, has been a long-standing yet active research area in various research communities for several decades. There are still some unique problem complexities and challenges that require advanced approaches. In recent years, deep-learning-enabled anomaly detection, i.e.,
deep anomaly detection
, has emerged as a critical direction. This article surveys the research on deep anomaly detection with a comprehensive taxonomy, covering advancements in three high-level categories and 11 fine-grained categories of methods. We review their key intuitions, objective functions, underlying assumptions, advantages, and disadvantages, and discuss how they address the aforementioned challenges. We further discuss a set of possible future opportunities and new perspectives on addressing the challenges.
Since their discovery almost three decades ago, DNAzymes have been used extensively in biosensing. Depending on the type of DNAzyme being used, these functional oligonucleotides can act as molecular recognition elements within biosensors, offering high specificity to their target analyte, or as reporters capable of transducing a detectable signal. Several parameters need to be considered when designing a DNAzyme-based biosensor. In particular, given that many of these biosensors immobilize DNAzymes onto a sensing surface, selecting an appropriate immobilization strategy is vital. Suboptimal immobilization can result in both DNAzyme detachment and poor accessibility toward the target, leading to low sensing accuracy and sensitivity. Various approaches have been employed for DNAzyme immobilization within biosensors, ranging from amine- and thiol-based covalent attachment to non-covalent strategies involving biotin–streptavidin interactions, DNA hybridization, electrostatic interactions, and physical entrapment. While the properties of each strategy inform its applicability within a proposed sensor, the selection of an appropriate strategy is largely dependent on the desired application. This is especially true given the diverse use of DNAzyme-based biosensors for the detection of pathogens, metal ions, and clinical biomarkers. In an effort to make the development of such sensors easier to navigate, this paper provides a comprehensive review of existing immobilization strategies, with a focus on their respective advantages, drawbacks, and optimal conditions for use. Next, common applications of existing DNAzyme-based biosensors are discussed. Last, emerging and future trends in the development of DNAzyme-based biosensors are outlined, and gaps in existing research worthy of exploration are identified.
Directly benefiting from deep learning methods, object detection has witnessed a great performance boost in recent years. However, drone-view object detection remains challenging for two main reasons: (1) tiny-scale objects, which are blurrier than their ground-view counterparts, offer less valuable information for accurate and robust detection; (2) unevenly distributed objects make detection inefficient, especially in regions occupied by crowded objects. Confronting these challenges, we propose an end-to-end global-local self-adaptive network (GLSAN) in this paper. The key components of GLSAN are a global-local detection network (GLDN), a simple yet efficient self-adaptive region selecting algorithm (SARSA), and a local super-resolution network (LSRN). We integrate a global-local fusion strategy into a progressive scale-varying network to perform more precise detection, where the local fine detector adaptively refines the bounding boxes produced by the global coarse detector by cropping the original images for higher-resolution detection. SARSA dynamically crops the crowded regions in the input images; it is unsupervised and can be easily plugged into the networks. Additionally, we train the LSRN to enlarge the cropped images, providing more detailed information for finer-scale feature extraction and helping the detector distinguish foreground from background more easily. SARSA and the LSRN also serve as data augmentation during network training, which makes the detector more robust. Extensive experiments and comprehensive evaluations on the VisDrone2019-DET benchmark dataset and the UAVDT dataset demonstrate the effectiveness and adaptivity of our method. Toward an industrial application, our network is also applied to a DroneBolts dataset, with demonstrated advantages. Our source code is available at https://github.com/dengsutao/glsan .
Band selection, as a special case of the feature selection problem, aims to remove redundant bands and select a few important bands to represent the whole image cube. It has attracted much attention, since the selected bands provide discriminative information for further applications while reducing the computational burden. Although hyperspectral band selection has developed rapidly in recent years, it remains a challenging task because of the following requirements: 1) an effective model must capture the underlying relations between different high-dimensional spectral bands; 2) a fast and robust measure function must adapt to general hyperspectral tasks; and 3) an efficient search strategy must find the desired bands in reasonable computational time. To satisfy these requirements, a multigraph determinantal point process (MDPP) model is proposed to capture the full structure between different bands and efficiently find the optimal band subset in extensive hyperspectral applications. There are three main contributions: 1) a graphical model is naturally adapted to the band selection problem through the proposed MDPP; 2) multiple graphs are designed to capture the intrinsic relationships between hyperspectral bands; and 3) a mixture DPP is proposed to model the multiple dependencies in these graphs and offers an efficient search strategy to select the optimal bands. To verify the superiority of the proposed method, experiments have been conducted on three hyperspectral applications: hyperspectral classification, anomaly detection, and target detection. The reliability of the proposed method in generic hyperspectral tasks is experimentally demonstrated on four real-world hyperspectral data sets.
As technology continues to develop, computer vision (CV) applications are becoming increasingly widespread in the intelligent transportation systems (ITS) context. These applications are developed to improve the efficiency of transportation systems, increase their level of intelligence, and enhance traffic safety. Advances in CV play an important role in solving problems in the fields of traffic monitoring and control, incident detection and management, road usage pricing, and road condition monitoring, among many others, by providing more effective methods. This survey examines CV applications in the literature, the machine learning and deep learning methods used in ITS applications, the applicability of CV applications in ITS contexts, the advantages these technologies offer and the difficulties they present, and future research areas and trends, with the goal of increasing the effectiveness, efficiency, and safety of ITS. The present review, which brings together research from various sources, aims to show how CV techniques can help transportation systems become smarter by presenting a holistic picture of the literature on different CV applications in the ITS context.
The high dimensionality of a hyperspectral image (HSI) makes it possible to deeply capture the underlying and intrinsic characteristics of spectra, such that targets embedded in the background can be detected. However, redundant information, deteriorated bands, and other interference from the background challenge the target detection problem. In this article, an effective feature extraction method based on unsupervised networks is proposed to mine the intrinsic properties underlying HSIs. Our approach, called spectral regularized unsupervised networks (SRUN), imposes spectral regularization on the autoencoder (AE) and variational AE (VAE) to emphasize spectral consistency, making the hidden nodes more suitable for characterizing the spectral information of HSIs than the original AE and VAE models. Then, we apply a simple feature selection algorithm to the hidden nodes in the deepest code to select the nodes that discriminate between target and background, based on the spectral angular difference between a known target spectrum and the spectra of the other pixels in the input. The selected nodes are further weighted adaptively to obtain a discriminative map, based on the observation that each selected node contributes to target detection at a different rate. Experimental results on several data sets illustrate that the proposed SRUN-based target detection algorithm is suitable for targets at the subpixel level and those with structural information.
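The spectral-angle criterion mentioned in this abstract can be sketched briefly. The toy code below (hypothetical names, not the SRUN implementation) scores every pixel of a cube by its spectral angle to a known target signature; smaller angles indicate more target-like pixels, which is the kind of target/background separation the node selection relies on.

```python
import numpy as np

def spectral_angle(t, s):
    """Spectral angle (radians) between target signature t and spectrum s."""
    cos = np.dot(t, s) / (np.linalg.norm(t) * np.linalg.norm(s) + 1e-12)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def sam_detection_map(cube, target):
    """Per-pixel spectral angle to the target for an (h, w, bands) cube.

    A simple stand-in for the angular-difference criterion: low values
    flag target-like pixels, high values flag background.
    """
    h, w, b = cube.shape
    flat = cube.reshape(-1, b)
    angles = np.array([spectral_angle(target, s) for s in flat])
    return angles.reshape(h, w)
```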
Detecting anomalies in data is a vital task, with numerous high-impact applications in areas such as security, finance, health care, and law enforcement. While numerous techniques have been developed in past years for spotting outliers and anomalies in unstructured collections of multi-dimensional points, with graph data becoming ubiquitous, techniques for structured
graph
data have recently become a focus. Because objects in graphs have long-range correlations, a suite of novel techniques has been developed for anomaly detection in graph data. This survey aims to provide a general, comprehensive, and structured overview of the state-of-the-art methods for anomaly detection in data represented as graphs. As a key contribution, we give a general framework for the algorithms categorized under various settings: unsupervised versus (semi-)supervised approaches, for static versus dynamic graphs, for attributed versus plain graphs. We highlight the effectiveness, scalability, generality, and robustness aspects of the methods. What is more, we stress the importance of anomaly
attribution
and highlight the major techniques that facilitate digging out the root cause, or the ‘why’, of the detected anomalies for further analysis and sense-making. Finally, we present several real-world applications of graph-based anomaly detection in diverse domains, including financial, auction, computer traffic, and social networks. We conclude our survey with a discussion on open theoretical and practical challenges in the field.
Salient object detection aims to locate objects that capture human attention within images. Previous approaches often pose this as a problem of image contrast analysis. In this work, we model an image as a hypergraph that uses a set of hyperedges to capture the contextual properties of image pixels or regions. As a result, the problem of salient object detection becomes one of finding salient vertices and hyperedges in the hypergraph. The main advantage of hypergraph modeling is that it takes into account each pixel's (or region's) affinity with its neighborhood as well as its separation from the image background. Furthermore, we propose an alternative approach based on center-versus-surround contextual contrast analysis, which performs salient object detection by optimizing a cost-sensitive support vector machine (SVM) objective function. Experimental results on four challenging datasets demonstrate the effectiveness of the proposed approaches against state-of-the-art approaches to salient object detection.
We developed spin valve tunneling magnetoresistance devices based on an MgO barrier and two compositions of CoFeB electrodes, capable of sensing magnetic fields in tunable ranges with high sensitivity and low nonlinearity. The tunable field ranges result from varying the strength of the perpendicular anisotropy in the sensing electrode, induced by changing its thickness. The sensing field ranges span from ±0.1 mT to ±100 mT. In the narrowest field range, devices showed sensitivity up to 91%/mT and nonlinearity below 1.5% of full scale; in the widest field range, sensitivity up to 0.076%/mT and nonlinearity below 2% of full scale. The sensing characteristics and their dependence on the electrode thickness suggest that these device structures are useful for designing low- to medium-field magnetic sensors.
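The sensitivity and nonlinearity figures quoted above can be extracted from a measured transfer curve in a standard way. The sketch below (a hypothetical helper, assuming field in mT and output as % resistance change) fits a line to the curve, takes its slope as sensitivity, and reports the maximum deviation from the fit as nonlinearity in % of full scale.

```python
import numpy as np

def sensor_metrics(B, R):
    """Sensitivity (%/mT) and nonlinearity (% of full scale) of a transfer curve.

    B: applied field samples (mT); R: corresponding output (% resistance change).
    Sensitivity is the slope of the least-squares linear fit; nonlinearity is
    the maximum deviation from that fit relative to the full-scale span.
    """
    slope, intercept = np.polyfit(B, R, 1)
    fit = slope * B + intercept
    full_scale = R.max() - R.min()
    nonlinearity = 100.0 * np.max(np.abs(R - fit)) / full_scale
    return slope, nonlinearity
```

For a perfectly linear 91%/mT response, the helper returns a slope of 91 and nonlinearity near zero, matching how the figures of merit in the abstract are defined.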