The observation of gravitational waves from compact binary coalescences by LIGO and Virgo has begun a new era in astronomy. A critical challenge in making detections is determining whether loud transient features in the data are caused by gravitational waves or by instrumental or environmental sources. The citizen-science project Gravity Spy has been demonstrated as an efficient infrastructure for classifying known types of noise transients (glitches) through a combination of data analysis performed by both citizen volunteers and machine learning. We present the next iteration of this project, which uses similarity indices to empower citizen scientists to create large data sets of unknown transients that can then be used for supervised machine-learning characterization. This new evolution aims to alleviate a persistent challenge that affects both citizen-science and instrumental detector work: building large samples of relatively rare events. Using two families of transient noise that appeared unexpectedly during LIGO's second observing run, we demonstrate the impact the similarity indices could have had on finding these new glitch types in the Gravity Spy program.
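As a rough illustration of how a similarity index can surface candidate members of a rare glitch class, the sketch below retrieves the nearest neighbors of a labeled seed glitch by cosine similarity over per-glitch feature vectors. The feature representation, names, and parameters are assumptions for illustration, not the Gravity Spy implementation.

```python
# Hedged sketch: nearest-neighbor retrieval by cosine similarity.
# The features, names, and parameters are illustrative assumptions;
# this is not the Gravity Spy implementation.
import numpy as np

def most_similar(features: np.ndarray, seed_index: int, top_k: int = 50):
    """Return indices of the top_k glitches most similar to a seed.

    features: (n_glitches, d) array of per-glitch feature vectors,
    e.g., embeddings of time-frequency (spectrogram) images.
    """
    # Normalize rows so the dot product equals cosine similarity.
    unit = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = unit @ unit[seed_index]
    order = np.argsort(-sims)
    return [int(i) for i in order if i != seed_index][:top_k]
```

Volunteers could then vet the retrieved candidates, quickly growing a labeled sample of an otherwise rare morphology.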
In this paper, we present novel algorithms for total variation (TV) based blind deconvolution and parameter estimation utilizing a variational framework. Using a hierarchical Bayesian model, the unknown image, blur, and hyperparameters for the image, blur, and noise priors are estimated simultaneously. A variational inference approach is utilized to obtain approximations of the posterior distributions of the unknowns, thus providing a measure of the uncertainty of the estimates. Experimental results demonstrate that the proposed approaches provide higher restoration performance than non-TV-based methods without any assumptions about the unknown hyperparameters.
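For concreteness, a hierarchical model of this kind can be sketched as follows; the notation is a generic assumption for illustration rather than the paper's exact formulation.

```latex
% Illustrative hierarchical TV blind-deconvolution model (notation assumed).
\begin{align*}
p(\mathbf{y}\mid\mathbf{x},\mathbf{h},\beta)
  &= \mathcal{N}\!\big(\mathbf{y}\mid\mathbf{H}\mathbf{x},\,\beta^{-1}\mathbf{I}\big)
  && \text{(observation model, } \mathbf{y}=\mathbf{H}\mathbf{x}+\mathbf{n}\text{)}\\
p(\mathbf{x}\mid\alpha)
  &\propto \exp\!\Big(-\alpha\sum_i
     \sqrt{(\Delta^{h}_i\mathbf{x})^2+(\Delta^{v}_i\mathbf{x})^2}\Big)
  && \text{(TV image prior)}\\
q^{\star}
  &= \operatorname*{arg\,min}_{q=q(\mathbf{x})\,q(\mathbf{h})\,q(\alpha)\,q(\beta)\,q(\gamma)}
     \operatorname{KL}\!\big(q \,\|\, p(\mathbf{x},\mathbf{h},\alpha,\beta,\gamma\mid\mathbf{y})\big)
  && \text{(mean-field variational fit)}
\end{align*}
```

Here the blur carries its own prior $p(\mathbf{h}\mid\gamma)$, and the spread of each variational factor is what supplies the uncertainty measure mentioned in the abstract.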
With the first direct detection of gravitational waves, the Advanced Laser Interferometer Gravitational-Wave Observatory (LIGO) has initiated a new field of astronomy by providing an alternative means of sensing the universe. The extreme sensitivity required to make such detections is achieved through exquisite isolation of all sensitive components of LIGO from non-gravitational-wave disturbances. Nonetheless, LIGO is still susceptible to a variety of instrumental and environmental sources of noise that contaminate the data. Of particular concern are noise features known as glitches, which are transient and non-Gaussian in nature and occur at a high enough rate that accidental coincidence between the two LIGO detectors is non-negligible. Glitches come in a wide range of time-frequency-amplitude morphologies, with new morphologies appearing as the detector evolves. Since they can obscure or mimic true gravitational-wave signals, a robust characterization of glitches is paramount in the effort to achieve the gravitational-wave detection rates predicted by the design sensitivity of LIGO. This proves a daunting task for members of the LIGO Scientific Collaboration alone, due to the sheer volume of data. In this paper, we describe an innovative project that combines crowdsourcing with machine learning to aid in the challenging task of categorizing all of the glitches recorded by the LIGO detectors. Through the Zooniverse platform, we engage and recruit volunteers from the public to categorize images of time-frequency representations of glitches into pre-identified morphological classes and to discover new classes that appear as the detectors evolve. In addition, machine-learning algorithms are used to categorize images after being trained on human-classified examples of the morphological classes. Leveraging the strengths of both classification methods, we create a combined method with the aim of improving the efficiency and accuracy of each individual classifier. The resulting classification and characterization should help LIGO scientists identify the causes of glitches and subsequently eliminate them from the data or the detector entirely, thereby improving the rate and accuracy of gravitational-wave observations. We demonstrate these methods using a small subset of data from LIGO's first observing run.
The performance of an automatic facial expression recognition system can be significantly improved by modeling the reliability of different streams of facial expression information using multistream hidden Markov models (HMMs). In this paper, we present an automatic multistream HMM facial expression recognition system and analyze its performance. The proposed system uses facial animation parameters (FAPs), supported by the MPEG-4 standard, as features for facial expression classification. Specifically, the FAPs describing the movement of the outer-lip contours and eyebrows are used as observations. Experiments are first performed with single-stream HMMs under several different scenarios, using outer-lip and eyebrow FAPs individually and jointly. A multistream HMM approach is then proposed that introduces facial-expression- and FAP-group-dependent stream reliability weights. The stream weights are determined from the facial expression recognition results obtained when the FAP streams are used individually. The proposed multistream HMM facial expression system, which uses stream reliability weights, achieves a 44% relative reduction in facial expression recognition error compared to the single-stream HMM system.
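In the standard multistream HMM formulation (a common convention, assumed here rather than quoted from the paper), the streams enter the state observation likelihood through exponent weights:

```latex
% Standard multistream HMM observation likelihood with stream weights.
\[
  b_j(\mathbf{o}_t) \;=\; \prod_{s=1}^{S}\big[b_{js}(\mathbf{o}_{st})\big]^{w_s},
  \qquad
  \log b_j(\mathbf{o}_t) \;=\; \sum_{s=1}^{S} w_s \log b_{js}(\mathbf{o}_{st}),
\]
```

where $b_{js}$ is the emission density of stream $s$ (here the outer-lip or eyebrow FAP group) in state $j$. Making the weights $w_s$ depend on the expression class and FAP group, and setting them from each stream's single-stream recognition accuracy, lets the more reliable stream dominate the combined score.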
Using a stochastic framework, we propose two algorithms for the problem of obtaining a single high-resolution image from multiple noisy, blurred, and undersampled images. The first is based on a Bayesian formulation implemented via the expectation-maximization (EM) algorithm; the second is based on a maximum a posteriori (MAP) formulation. In both formulations, the registration, noise, and image statistics are treated as unknown parameters. These unknown parameters and the high-resolution image are estimated jointly from the available observations. We present an efficient frequency-domain implementation of these algorithms that allows their application to large images. Simulations are presented that test and compare the proposed algorithms.
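A common way to write this type of multi-frame observation model (notation assumed for illustration, not taken from the paper):

```latex
% Generic multi-frame super-resolution observation model (illustrative).
\[
  \mathbf{y}_k \;=\; \mathbf{D}\,\mathbf{H}\,\mathbf{C}(\mathbf{s}_k)\,\mathbf{x}
                   \;+\; \mathbf{n}_k, \qquad k = 1,\dots,K,
\]
```

where $\mathbf{x}$ is the high-resolution image, $\mathbf{C}(\mathbf{s}_k)$ warps it by the unknown registration parameters $\mathbf{s}_k$, $\mathbf{H}$ blurs, $\mathbf{D}$ downsamples, and $\mathbf{n}_k$ is noise. In a sketch of this form, EM treats $\mathbf{x}$ as the hidden variable while updating $(\mathbf{s}_k,\,\text{noise},\,\text{image statistics})$, whereas MAP maximizes the joint posterior over $\mathbf{x}$ and the parameters; working in the frequency domain diagonalizes the (circulant) blur and shift operators, which is what makes large images tractable.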
Following the hierarchical Bayesian framework for blind deconvolution problems, in this paper we propose the use of simultaneous autoregressions as prior distributions for both the image and blur, and gamma distributions for the unknown parameters (hyperparameters) of the priors and the image formation noise. We show how the gamma distributions on the unknown hyperparameters can be used to prevent the proposed blind deconvolution method from converging to undesirable image and blur estimates, and also how these distributions can be inferred in realistic situations. We apply variational methods to approximate the posterior probability of the unknown image, blur, and hyperparameters and propose two different approximations of the posterior distribution. One of these approximations coincides with a classical blind deconvolution method. The proposed algorithms are tested experimentally and compared with existing blind deconvolution methods.
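The priors in question can be sketched in generic form as follows (the exact parameterization is an assumption for illustration):

```latex
% Illustrative SAR priors with gamma hyperpriors (generic forms).
\begin{align*}
p(\mathbf{x}\mid\alpha_{\mathrm{im}})
  &\propto \alpha_{\mathrm{im}}^{N/2}
    \exp\!\big(-\tfrac{\alpha_{\mathrm{im}}}{2}\,\|\mathbf{C}\mathbf{x}\|^2\big),
  \qquad
  p(\mathbf{h}\mid\alpha_{\mathrm{bl}})
  \propto \alpha_{\mathrm{bl}}^{M/2}
    \exp\!\big(-\tfrac{\alpha_{\mathrm{bl}}}{2}\,\|\mathbf{C}\mathbf{h}\|^2\big),\\
p(\omega)
  &= \Gamma(\omega\mid a_{\omega}, b_{\omega})
  \;\propto\; \omega^{\,a_{\omega}-1}\, e^{-b_{\omega}\omega},
  \qquad \omega \in \{\alpha_{\mathrm{im}},\, \alpha_{\mathrm{bl}},\, \beta\},
\end{align*}
```

where $\mathbf{C}$ is a discrete Laplacian and $\beta$ is the noise precision. The gamma shape and scale parameters act as soft constraints that keep the hyperparameter estimates, and hence the image and blur estimates, away from the degenerate solutions (e.g., a delta-function blur) that blind deconvolution is prone to.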
In this paper, the application of image prior combinations to the Bayesian super-resolution (SR) image registration and reconstruction problem is studied. Two sparse image priors, a total variation (TV) prior and a prior based on the ℓ1 norm of horizontal and vertical first-order differences (f.o.d.), are combined with a non-sparse simultaneous autoregressive (SAR) prior. Since, for a given observation model, each prior produces a different posterior distribution of the underlying high-resolution (HR) image, the variational approximation produces as many posterior approximations as there are priors to combine. A unique approximation is obtained here by finding the distribution on the HR image, given the observations, that minimizes a linear convex combination of Kullback–Leibler (KL) divergences; we find this distribution in closed form. The estimated HR images are compared with those obtained by other SR reconstruction methods.
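The closed form follows from a short, standard computation. If $q_1,\dots,q_M$ are the posterior approximations induced by the individual priors and $\lambda_i \ge 0$ with $\sum_i \lambda_i = 1$, then, assuming divergences of the form $\mathrm{KL}(q\,\|\,q_i)$:

```latex
% Minimizing a convex combination of KL divergences has a
% geometric-mean closed form (derivation sketch):
\[
  \sum_{i} \lambda_i\, \mathrm{KL}\big(q \,\|\, q_i\big)
  \;=\; \mathrm{KL}\big(q \,\|\, \tilde{q}\big) \;+\; \text{const},
  \qquad
  \tilde{q}(\mathbf{x}) \;\propto\; \prod_{i} q_i(\mathbf{x})^{\lambda_i},
\]
```

so the minimizer is the normalized geometric mean of the individual posterior approximations. When the $q_i$ are Gaussian, $\tilde{q}$ is again Gaussian, with precision the $\lambda$-weighted sum of the individual precisions.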
We propose an unusual video event detection method based on unsupervised clustering of object trajectories, which are modeled by hidden Markov models (HMMs). The novelty of the method includes a dynamic hierarchical process incorporated in the trajectory clustering algorithm to prevent model overfitting and a 2-depth greedy search strategy for efficient clustering.
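A minimal sketch of likelihood-based trajectory clustering with HMMs (k-models style) is given below. It assumes the third-party `hmmlearn` package, and it omits the paper's actual contributions, the dynamic hierarchical process and the 2-depth greedy search; it only illustrates the underlying fit/reassign loop.

```python
# Hedged sketch: k-models clustering of trajectories with Gaussian HMMs.
# Assumes the `hmmlearn` package; the paper's dynamic hierarchical
# process and 2-depth greedy search are NOT reproduced here.
import numpy as np
from hmmlearn import hmm

def cluster_trajectories(trajs, n_clusters=3, n_states=4, n_rounds=5, seed=0):
    """trajs: list of (T_i, d) arrays of object positions/velocities."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(n_clusters, size=len(trajs))
    for _ in range(n_rounds):
        models = []
        for k in range(n_clusters):
            members = [t for t, l in zip(trajs, labels) if l == k]
            if not members:                      # keep empty clusters alive
                members = [trajs[rng.integers(len(trajs))]]
            X = np.vstack(members)               # concatenated observations
            lengths = [len(t) for t in members]  # per-sequence lengths
            m = hmm.GaussianHMM(n_components=n_states,
                                covariance_type="diag",
                                n_iter=20, random_state=seed)
            m.fit(X, lengths)
            models.append(m)
        # reassign each trajectory to the HMM that scores it highest
        labels = np.array([np.argmax([m.score(t) for m in models])
                           for t in trajs])
    return labels, models
```

Trajectories that score poorly under every cluster model are then natural candidates for "unusual" events.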
Designing effective image priors is of great interest for image super-resolution (SR), which is a severely under-determined problem. An edge smoothness prior is favored because it effectively suppresses jagged edge artifacts. However, for soft image edges with gradual intensity transitions, it is generally difficult to obtain analytical forms for evaluating their smoothness. This paper characterizes soft edge smoothness based on a novel SoftCuts metric, obtained by generalizing the Geocuts method. The proposed soft edge smoothness measure can approximate the average length of all level lines in an intensity image, so the total length of all level lines can be minimized effectively by integrating this new form of prior. In addition, this paper presents a novel combination of the soft edge smoothness prior and the alpha-matting technique for color image SR, in which image edges are adaptively normalized according to their alpha-channel description. This leads to the adaptive SoftCuts algorithm, which provides a unified treatment of edges with different contrasts and scales. Experimental results demonstrate the effectiveness of the proposed method.
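The connection between edge smoothness and level-line length rests on the classical co-area formula: for a smooth image $u$ on a domain $\Omega$,

```latex
% Co-area formula: total variation equals the total length of level lines.
\[
  \int_{\Omega} \lvert \nabla u \rvert \, d\mathbf{x}
  \;=\; \int_{-\infty}^{\infty}
        \mathrm{length}\big(\{\,\mathbf{x} : u(\mathbf{x}) = \lambda\,\}\big)\, d\lambda,
\]
```

so any discrete measure that approximates the left-hand side, as a SoftCuts-style metric does for soft edges, simultaneously penalizes the total (and hence average) length of all level lines, which is what straightens jagged edges.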
Multi-user video streaming over wireless channels is a challenging problem in which the demand for better video quality and small transmission delays must be reconciled with limited and often time-varying communication resources. This paper presents a framework for joint network optimization, source adaptation, and deadline-driven scheduling for multi-user video streaming over wireless networks. We develop a joint adaptation, resource allocation and scheduling (JARS) algorithm, which allocates communication resources based on the video users' quality of service, adapts video sources through smart summarization, and schedules transmissions to meet frame delivery deadlines. The proposed algorithm leads to near-full utilization of the network resources and satisfies the delivery deadlines for all video frames. Substantial performance improvements are achieved compared with heuristic schemes that do not take the interactions among multiple users into consideration.
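To make the deadline-driven scheduling component concrete, here is a minimal earliest-deadline-first sketch. The function, its signature, and the slot/budget model are illustrative assumptions; this is not the JARS algorithm, which additionally couples scheduling with resource allocation and source adaptation.

```python
# Hedged sketch: earliest-deadline-first (EDF) frame scheduling.
# Names, signature, and the slot/budget model are assumptions for
# illustration; this is not the paper's JARS algorithm.
import heapq

def edf_schedule(frames, capacity_per_slot):
    """frames: iterable of (deadline_slot, size_bits, user_id) tuples."""
    pending = list(frames)
    heapq.heapify(pending)                 # min-heap keyed on deadline
    slot, delivered, dropped = 0, [], []
    while pending:
        budget = capacity_per_slot
        while pending:
            deadline, size, user = pending[0]
            if deadline < slot:            # deadline already missed: drop
                dropped.append(heapq.heappop(pending))
            elif size <= budget:           # frame fits in this slot's budget
                budget -= size
                delivered.append((slot, heapq.heappop(pending)))
            else:                          # head frame must wait a slot
                break
        slot += 1
    return delivered, dropped
```

For example, `edf_schedule([(3, 4000, 0), (1, 2500, 1)], capacity_per_slot=5000)` sends user 1's tighter-deadline frame in slot 0 and user 0's frame in slot 1, with nothing dropped.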