The separation of image content into semantic parts plays a vital role in applications such as compression, enhancement, restoration, and more. In recent years, several pioneering works suggested that such a separation be based on a variational formulation, while others relied on independent component analysis and sparsity. This paper presents a novel method for separating images into texture and piecewise smooth (cartoon) parts, exploiting both the variational and the sparsity mechanisms. The method combines the basis pursuit denoising (BPDN) algorithm and the total-variation (TV) regularization scheme. The basic idea presented in this paper is the use of two appropriate dictionaries, one for the representation of textures and the other for the natural scene parts, assumed to be piecewise smooth. Both dictionaries are chosen such that they lead to sparse representations over one type of image content (either texture or piecewise smooth). The use of BPDN with the two amalgamated dictionaries leads to the desired separation, along with noise removal as a by-product. As the choice of proper dictionaries is generally hard, TV regularization is employed to better direct the separation process and reduce ringing artifacts. We present a highly efficient numerical scheme to solve the combined optimization problem posed by our model and show several experimental results that validate the algorithm's performance.
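As an illustration of the amalgamated-dictionary idea, the following toy sketch separates a 1-D signal into an oscillatory (texture) part and a piecewise-constant (cartoon) part by alternating soft thresholding in two orthonormal dictionaries, a DCT for texture and a Haar wavelet for the cartoon. This is a minimal MCA-style sketch under simplifying assumptions, not the paper's exact BPDN+TV algorithm: the TV term is omitted and both dictionaries are simple stand-ins.

```python
import numpy as np
from scipy.fft import dct, idct

def soft(c, t):
    """Soft thresholding, the proximal operator of the l1 norm."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def haar_fwd(x):
    """Full orthonormal Haar decomposition of a power-of-two-length signal."""
    coeffs, a = [], x.copy()
    while a.size > 1:
        coeffs.append((a[0::2] - a[1::2]) / np.sqrt(2))  # detail coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2)             # approximation
    coeffs.append(a)
    return coeffs

def haar_inv(coeffs):
    a = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        out = np.empty(2 * a.size)
        out[0::2], out[1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
        a = out
    return a

n = 256
t = np.arange(n)
cartoon = (t >= n // 2).astype(float)            # piecewise-constant part
texture = 0.7 * np.cos(2 * np.pi * 32 * t / n)   # oscillatory part
y = cartoon + texture

x_t = np.zeros(n)  # texture estimate, sparse in the DCT dictionary
x_c = np.zeros(n)  # cartoon estimate, sparse in the Haar dictionary
n_iter = 100
lam_max = np.abs(dct(y, norm='ortho')).max()
for k in range(n_iter):
    lam = lam_max * (1.0 - k / n_iter)           # linearly decreasing threshold
    # texture update: sparsify the current residual in the DCT dictionary
    x_t = idct(soft(dct(y - x_c, norm='ortho'), lam), norm='ortho')
    # cartoon update: sparsify the current residual in the Haar dictionary
    c = haar_fwd(y - x_t)
    x_c = haar_inv([soft(d, lam) for d in c[:-1]] + [c[-1]])
```

Because the two morphologies are sparse in mutually incoherent dictionaries, the decreasing threshold lets each component capture "its" residual first, which is the driving mechanism behind the separation.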
Representing the image to be inpainted in an appropriate sparse representation dictionary, and combining elements from Bayesian statistics and modern harmonic analysis, we introduce an expectation-maximization (EM) algorithm for image inpainting and interpolation. From a statistical point of view, the inpainting/interpolation can be viewed as an estimation problem with missing data. Toward this goal, we propose the idea of using the EM mechanism in a Bayesian framework, where a sparsity promoting prior penalty is imposed on the reconstructed coefficients. The EM framework gives a principled way to establish formally the idea that missing samples can be recovered/interpolated based on sparse representations. We first introduce a simple and efficient sparse-representation-based iterative algorithm for image inpainting. Additionally, we derive its theoretical convergence properties. Compared to its competitors, this algorithm allows a high degree of flexibility to recover different structural components in the image (piecewise smooth, curvilinear, texture, etc.). We also suggest some guidelines to automatically tune the regularization parameter.
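The E-step/M-step alternation described above can be sketched in a few lines: missing samples are imputed from the current estimate (E-step), and the coefficients are then soft-thresholded in a sparsifying dictionary (M-step). Below is a minimal sketch under simplifying assumptions (a 1-D signal that is exactly sparse in an orthonormal DCT dictionary, a linearly decreasing threshold), not the paper's exact algorithm.

```python
import numpy as np
from scipy.fft import dct, idct

def soft(c, t):
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

rng = np.random.default_rng(0)
n = 256
# ground truth: exactly 3-sparse in an orthonormal DCT dictionary
c0 = np.zeros(n)
c0[[7, 41, 90]] = [3.0, -2.0, 1.5]
x0 = idct(c0, norm='ortho')

mask = rng.random(n) < 0.7          # 70% of the samples are observed
y = np.where(mask, x0, 0.0)

x = y.copy()
n_iter = 150
lam0 = np.abs(dct(y, norm='ortho')).max()
for k in range(n_iter):
    # E-step: keep the observed samples, impute the missing ones from the model
    z = np.where(mask, y, x)
    # M-step: promote sparsity of the coefficients (decreasing threshold)
    lam = lam0 * (1.0 - k / n_iter)
    x = idct(soft(dct(z, norm='ortho'), lam), norm='ortho')
```

The missing 30% of samples are recovered because the signal is sparse in the dictionary, which is exactly the principle the EM framework formalises.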
Aims. We study the relationship between the local environment of galaxies and their star formation rate (SFR) in the Great Observatories Origins Deep Survey, GOODS, at $z\sim1$. Methods. We use ultradeep imaging at 24 μm with the MIPS camera onboard ${\it Spitzer}$ to determine the contribution of obscured light to the SFR of galaxies over the redshift range $0.8\leq z \leq1.2$. Accurate galaxy densities are measured thanks to the large sample of ~1200 spectroscopic redshifts with high (~70%) spectroscopic completeness. Morphology and stellar masses are derived from deep HST-ACS imaging, supplemented by ground-based imaging programs and photometry from the IRAC camera onboard ${\it Spitzer}$. Results. We show that the star formation-density relation observed locally was reversed at $z\sim 1$: the average SFR of an individual galaxy increased with local galaxy density when the universe was less than half its present age. Hierarchical galaxy formation models (simulated lightcones from the Millennium model) predicted such a reversal to occur only at earlier epochs ($z>2$) and at a lower level. We present a remarkable structure at $z\sim 1.016$, containing X-ray traced galaxy concentrations, which will eventually merge into a Virgo-like cluster. This structure illustrates how the individual SFR of galaxies increases with density and shows that it is the ~1-2 Mpc scale that most affects star formation in galaxies at $z\sim1$. The SFR of $z\sim1$ galaxies is found to correlate with stellar mass, suggesting that mass plays a role in the observed star formation-density trend. However, the specific SFR (=SFR/$M_{\star}$) decreases with stellar mass while it increases with galaxy density, which implies that the environment does directly affect the star formation activity of galaxies.
Major mergers do not appear to be the unique or even major cause for this effect since nearly half (46%) of the luminous infrared galaxies (LIRGs) at $z\sim 1$ present the HST-ACS morphology of spirals, while only a third present a clear signature of major mergers. The remaining galaxies are divided into compact (9%) and irregular (14%) galaxies. Moreover, the specific SFR of major mergers is only marginally stronger than that of spirals. Conclusions. These findings constrain the influence of the growth of large-scale structures on the star formation history of galaxies. Reproducing the SFR-density relation at $z\sim1$ is a new challenge for models, requiring a correct balance between mass assembly through mergers and in-situ star formation at early epochs.
Aims. We propose a new mass mapping algorithm, specifically designed to recover small-scale information from a combination of gravitational shear and flexion. Including flexion allows us to supplement the shear on small scales in order to increase the sensitivity to substructures and the overall resolution of the convergence map without relying on strong lensing constraints. Methods. To preserve all available small scale information, we avoid any binning of the irregularly sampled input shear and flexion fields and treat the mass mapping problem as a general ill-posed inverse problem, which is regularised using a robust multi-scale wavelet sparsity prior. The resulting algorithm incorporates redshift, reduced shear, and reduced flexion measurements for individual galaxies and is made highly efficient by the use of fast Fourier estimators. Results. We tested our reconstruction method on a set of realistic weak lensing simulations corresponding to typical HST/ACS cluster observations and demonstrated our ability to recover substructures with the inclusion of flexion, which are otherwise lost if only shear information is used. In particular, we can detect substructures on the 15′′ scale well outside of the critical region of the clusters. In addition, flexion also helps to constrain the shape of the central regions of the main dark matter halos.
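For context, the fast Fourier estimator at the core of such mass mapping is the Kaiser-Squires relation between shear and convergence, a unit-modulus Fourier multiplier. The sketch below implements only that noiseless, gridded round trip; the actual contributions of the method (the wavelet sparsity prior, irregular sampling, reduced shear and flexion) are not reproduced here.

```python
import numpy as np

def ks_multiplier(n):
    """Fourier multiplier D such that gamma_hat = D * kappa_hat (Kaiser-Squires)."""
    k1, k2 = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing='ij')
    ksq = k1**2 + k2**2
    D = np.zeros((n, n), dtype=complex)
    nz = ksq > 0
    D[nz] = ((k1**2 - k2**2) + 2j * k1 * k2)[nz] / ksq[nz]
    return D

n = 64
D = ks_multiplier(n)
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
# a smooth, zero-mean test convergence field
kappa = np.cos(2 * np.pi * 3 * i / n) * np.sin(2 * np.pi * 2 * j / n)
# forward relation: complex shear gamma1 + i*gamma2 from convergence
gamma = np.fft.ifft2(D * np.fft.fft2(kappa))
# Kaiser-Squires inversion: kappa_hat = conj(D) * gamma_hat
kappa_rec = np.real(np.fft.ifft2(np.conj(D) * np.fft.fft2(gamma)))
```

Because |D| = 1 away from the zero mode, the noiseless inversion is exact up to the unconstrained mean; noise, masks, and irregular sampling are what make the real problem ill-posed and call for the regularisation described above.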
The deconvolution of large survey images with millions of galaxies requires developing a new generation of methods that can take a space-variant point spread function into account. These methods must also be accurate and fast. We investigate how deep learning might be used to perform this task. We employed a U-net deep neural network architecture to learn parameters that were adapted for galaxy image processing in a supervised setting and studied two deconvolution strategies. The first approach is a post-processing of a mere Tikhonov deconvolution with closed-form solution, and the second approach is an iterative deconvolution framework based on the alternating direction method of multipliers (ADMM). Our numerical results based on GREAT3 simulations with realistic galaxy images and point spread functions show that our two approaches outperform standard techniques that are based on convex optimization, whether assessed in galaxy image reconstruction or shape recovery. The approach based on a Tikhonov deconvolution leads to the most accurate results, except for ellipticity errors at high signal-to-noise ratio. The ADMM approach performs slightly better in this case. Considering that the Tikhonov approach is also more computation-time efficient in processing a large number of galaxies, we recommend this approach in this scenario.
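The first strategy builds on a closed-form Tikhonov deconvolution, which in the space-invariant case is a single Fourier-domain division. Below is a minimal sketch of that baseline step only (no U-net post-processing), with an assumed Gaussian PSF and synthetic compact sources standing in for galaxies.

```python
import numpy as np

n = 64
rng = np.random.default_rng(1)
ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
# ground truth: three compact Gaussian "galaxies" on an empty background
x = np.zeros((n, n))
for (ci, cj, a) in [(16, 20, 5.0), (40, 44, 3.0), (30, 10, 4.0)]:
    x += a * np.exp(-((ii - ci)**2 + (jj - cj)**2) / (2 * 1.0**2))

# space-invariant Gaussian PSF, periodically centred so H = fft2(psf) is real
u = np.fft.fftfreq(n) * n         # pixel offsets 0, 1, ..., -1
uu, vv = np.meshgrid(u, u, indexing='ij')
psf = np.exp(-(uu**2 + vv**2) / (2 * 2.0**2))
psf /= psf.sum()
H = np.fft.fft2(psf)

# simulated observation: blur plus white Gaussian noise
y = np.real(np.fft.ifft2(np.fft.fft2(x) * H)) + 0.001 * rng.standard_normal((n, n))

# closed-form Tikhonov deconvolution in the Fourier domain
lam = 1e-3
x_hat = np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(y) / (np.abs(H)**2 + lam)))
```

The regularisation weight `lam` trades noise amplification against residual blur; in the paper's first strategy, the U-net then learns to correct the artefacts this closed-form solution leaves behind.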
We describe a new estimate of the cosmic microwave background (CMB) intensity map reconstructed by a joint analysis of the full Planck 2015 data (PR2) and nine years of WMAP data. The proposed map provides more than a mere update of the CMB map introduced in a previous paper since it benefits from an improvement of the component separation method L-GMCA (Local-Generalized Morphological Component Analysis), which facilitates efficient separation of correlated components. Based on the most recent CMB data, we further confirm previous results showing that the proposed CMB map estimate exhibits appealing characteristics for astrophysical and cosmological applications: i) it is a full-sky map as it did not require any inpainting or interpolation postprocessing; ii) foreground contamination is very low even on the galactic center; and iii) the map does not exhibit any detectable trace of thermal Sunyaev-Zel'dovich contamination. We show that its power spectrum is in good agreement with the Planck PR2 official theoretical best-fit power spectrum. Finally, following the principle of reproducible research, we provide the codes to reproduce the L-GMCA map, which makes it the only reproducible CMB map.
Context.
Weak lensing mass-mapping is a useful tool for accessing the full distribution of dark matter on the sky, but because of intrinsic galaxy ellipticities, finite fields, and missing data, the recovery of dark matter maps constitutes a challenging, ill-posed inverse problem.
Aims.
We introduce a novel methodology that enables the efficient sampling of the high-dimensional Bayesian posterior of the weak lensing mass-mapping problem, relying on simulations to define a fully non-Gaussian prior. We aim to demonstrate the accuracy of the method on simulated fields, and then apply it to the mass reconstruction of the HST/ACS COSMOS field.
Methods.
The proposed methodology combines elements of Bayesian statistics, analytic theory, and a recent class of deep generative models based on neural score matching. This approach allows us to make full use of analytic cosmological theory to constrain the two-point statistics of the solution, to understand any differences between this analytic prior and full cosmological simulations, and to obtain samples from the full Bayesian posterior of the problem for robust uncertainty quantification.
Results.
We demonstrate the method on the κTNG simulations and find that the posterior mean significantly outperforms previous methods (Kaiser–Squires, Wiener filter, sparsity priors) both in root-mean-square error and in terms of the Pearson correlation. We further illustrate the interpretability of the recovered posterior by establishing a close correlation between posterior convergence values and the S/N of the clusters artificially introduced into a field. Finally, we apply the method to the reconstruction of the HST/ACS COSMOS field, which yields the highest-quality convergence map of this field to date.
Conclusions.
We find the proposed approach superior to previous algorithms: it is scalable, provides uncertainties, and relies on a fully non-Gaussian prior.
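The posterior-sampling machinery underlying such methods can be sketched with an unadjusted Langevin iteration, which needs only the score (gradient of the log-density) of the prior and the likelihood. In the sketch below an analytic Gaussian prior score stands in for a learned neural score, so the empirical posterior mean can be checked against its closed form; this illustrates the sampling mechanics only, not the simulation-based prior of the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
npix = 4                 # a toy 4-pixel "map"
tau, sigma = 1.0, 0.5    # prior std and noise std (illustrative values)
y = np.array([1.2, -0.8, 0.3, 0.0])   # noisy observation of the map

def score_prior(x):
    # stand-in for a learned (neural score matching) prior score: N(0, tau^2 I)
    return -x / tau**2

def score_lik(x):
    # Gaussian likelihood y = x + n, n ~ N(0, sigma^2 I)
    return (y - x) / sigma**2

# unadjusted Langevin dynamics on the posterior log-density
eps = 0.01
x = np.zeros(npix)
burn, keep = 2000, 20000
samples = []
for k in range(burn + keep):
    grad = score_prior(x) + score_lik(x)
    x = x + 0.5 * eps * grad + np.sqrt(eps) * rng.standard_normal(npix)
    if k >= burn:
        samples.append(x.copy())
post_mean = np.mean(samples, axis=0)

# closed-form posterior mean of this conjugate Gaussian model
analytic_mean = tau**2 / (tau**2 + sigma**2) * y
```

The spread of the retained samples directly provides the per-pixel uncertainty quantification; replacing `score_prior` with a score learned from simulations is what makes the prior fully non-Gaussian.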
Strong gravitational lensing provides a wealth of astrophysical information on the baryonic and dark matter content of galaxies. It also serves as a valuable cosmological probe by allowing us to measure the Hubble constant independently of other methods. These applications all require the difficult task of inverting the lens equation and simultaneously reconstructing the mass profile of the lens along with the original light profile of the unlensed source. As there is no reason for either the lens or the source to be simple, we need methods that both invert the lens equation with a large number of degrees of freedom and also enforce a well-controlled regularisation that avoids the appearance of spurious structures. This can be beautifully accomplished by representing signals in wavelet space. Building on the Sparse Lens Inversion Technique (SLIT), we present an improved sparsity-based method that describes lensed sources using wavelets and optimises over the parameters given an analytical lens mass profile. We applied our technique on simulated HST and E-ELT data, as well as on real HST images of lenses from the Sloan Lens ACS sample, assuming a lens model. We show that wavelets allowed us to reconstruct lensed sources containing detailed substructures when using both present-day data and very high-resolution images expected from future thirty-metre-class telescopes. In the latter case, wavelets moreover provide a much more tractable solution in terms of quality and computation time compared to using a source model that combines smooth analytical profiles and shapelets. Requiring very little human interaction, our flexible pixel-based technique fits into the ongoing effort to devise automated modelling schemes. It can be incorporated in the standard workflow of sampling analytical lens model parameters while modelling the source on a pixelated grid. The method, which we call SLITronomy, is freely available as a new plug-in to the modelling software Lenstronomy.
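Pixelated source reconstruction of this kind relies on the lensing operator being linear in the source, so that sparse convex optimisation applies once the analytical lens parameters are fixed. The sketch below builds a toy version of such an operator by bilinear ray-shooting through an assumed singular isothermal sphere (SIS) deflection field and checks its linearity; it is purely illustrative and unrelated to the actual SLITronomy implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def lens(source, theta_E=6.0):
    """Lensed image I(theta) = S(theta - alpha(theta)) for an SIS deflector,
    alpha(theta) = theta_E * theta/|theta|, sampled by bilinear interpolation."""
    n = source.shape[0]
    c = (n - 1) / 2.0
    i, j = np.meshgrid(np.arange(n) - c, np.arange(n) - c, indexing='ij')
    r = np.maximum(np.hypot(i, j), 1e-9)
    beta_i = i - theta_E * i / r + c   # source-plane coordinates (pixels)
    beta_j = j - theta_E * j / r + c
    return map_coordinates(source, [beta_i, beta_j], order=1, mode='constant')

n = 64
rng = np.random.default_rng(3)
s1, s2 = rng.random((n, n)), rng.random((n, n))
# bilinear ray-shooting is linear in the source, as required by the inversion
lhs = lens(2.0 * s1 - 3.0 * s2)
rhs = 2.0 * lens(s1) - 3.0 * lens(s2)
```

Since the operator is linear, the inverse problem reduces to minimising a data-fidelity term plus a wavelet-sparsity penalty on the pixelated source.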
We present a new method for contrast enhancement based on the curvelet transform. The curvelet transform represents edges better than wavelets, and is therefore well-suited for multiscale edge enhancement. We compare this approach with enhancement based on the wavelet transform, and the multiscale retinex. In a range of examples, we use edge detection and segmentation, among other processing applications, to provide quantitative comparative evaluation. Our findings are that curvelet-based enhancement outperforms other enhancement methods on noisy images, but on noiseless or near noiseless images curvelet-based enhancement is not remarkably better than wavelet-based enhancement.
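A simplified version of such multiscale enhancement can be sketched with an isotropic undecimated wavelet ("à trous", starlet) transform standing in for curvelets: decompose the image, amplify faint coefficients with a bounded gain, and resum. The gain function and its parameters below are illustrative stand-ins, not the ones used in the paper.

```python
import numpy as np
from scipy.ndimage import convolve1d

def starlet(img, n_scales):
    """Isotropic undecimated (a trous) wavelet transform: detail bands + coarse."""
    h = np.array([1., 4., 6., 4., 1.]) / 16.
    c = img.astype(float)
    bands = []
    for j in range(n_scales):
        hj = np.zeros(4 * 2**j + 1)          # dilated filter at scale j
        hj[::2**j] = h
        s = convolve1d(convolve1d(c, hj, axis=0, mode='wrap'),
                       hj, axis=1, mode='wrap')
        bands.append(c - s)                  # detail band
        c = s                                # next approximation
    return bands, c                          # sum of bands + coarse == img

def enhance(img, n_scales=3, m=0.1, p=0.5, gmax=5.0):
    """Amplify faint multiscale coefficients with a bounded gain >= 1."""
    bands, coarse = starlet(img, n_scales)
    out = coarse.copy()
    for w in bands:
        g = np.clip((m / np.maximum(np.abs(w), 1e-12))**p, 1.0, gmax)
        out += g * w
    return out

n = 64
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
img = ((i - n / 2)**2 + (j - n / 2)**2 < 15**2).astype(float)  # disc test image

bands, coarse = starlet(img, 3)
recon = sum(bands) + coarse                  # exact by construction
enh = enhance(img)
tv = lambda a: np.abs(np.diff(a, axis=0)).sum() + np.abs(np.diff(a, axis=1)).sum()
```

The same pipeline applies with curvelet coefficients in place of starlet bands; the directional selectivity of curvelets is what improves edge enhancement on noisy images.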
The cosmic microwave background (CMB) is of premier importance for cosmologists in studying the birth of our universe. Unfortunately, most CMB experiments, such as COBE, WMAP, or Planck, do not directly measure the cosmological signal, because the CMB is mixed up with galactic foregrounds and point sources. For the sake of scientific exploitation, measuring the CMB requires extracting several different astrophysical components (CMB, Sunyaev-Zel'dovich clusters, galactic dust) from multiwavelength observations. Mathematically speaking, the problem of disentangling the CMB map from the galactic foregrounds amounts to a component or source separation problem. In the field of CMB studies, a wide range of source separation methods have been applied that all differ in the way they model the data and in the criteria they rely on to separate components. Two main difficulties are i) that the instrument's beam varies across frequencies and ii) that the emission laws of most astrophysical components vary across pixels. This paper aims to introduce a very accurate modeling of CMB data, based on sparsity, to account for beams' variability across frequencies as well as for spatial variations of the components' spectral characteristics. Based on this new sparse modeling of the data, a sparsity-based component separation method coined local-generalized morphological component analysis (L-GMCA) is described. Extensive numerical experiments have been carried out with simulated Planck data. These experiments show the high efficiency of the proposed component separation method for estimating a clean CMB map with a very low foreground contamination, which makes L-GMCA of prime interest for CMB studies.