The cosmological utility of galaxy cluster catalogues is primarily limited by our ability to calibrate the relation between halo mass and observable mass proxies such as cluster richness, X-ray luminosity, or the Sunyaev-Zel'dovich signal. Projection effects are a particularly pernicious systematic: structure along the line of sight can both bias and increase the scatter of the observable mass proxies used in cluster abundance studies. In this work, we develop an empirical method to characterize the impact of projection effects on redMaPPer cluster catalogues. We use numerical simulations to validate our method and illustrate its robustness. We demonstrate that modeling of projection effects is a necessary component for cluster abundance studies capable of reaching $\approx 5\%$ mass calibration uncertainties (e.g. the Dark Energy Survey Year 1 sample). Specifically, ignoring the impact of projection effects in the observable--mass relation --- i.e. marginalizing over a log-normal model only --- biases the posterior of the cluster normalization condition $S_8 \equiv \sigma_8 (\Omega_{\rm m}/0.3)^{1/2}$ by $\Delta S_8 = 0.05$, more than twice the uncertainty in the posterior for such an analysis.
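The quoted shift in the normalization condition can be illustrated with a short numerical sketch. The $\Delta S_8 = 0.05$ bias is taken from the abstract; the fiducial $\sigma_8$ and $\Omega_{\rm m}$ values below are assumed for illustration only, not taken from the paper:

```python
import math

def s8(sigma_8, omega_m):
    """Cluster normalization condition S_8 = sigma_8 * (Omega_m / 0.3)**0.5."""
    return sigma_8 * math.sqrt(omega_m / 0.3)

# Illustrative fiducial values (assumed, not from the paper).
sigma_8 = 0.80
omega_m = 0.30

baseline = s8(sigma_8, omega_m)   # equals sigma_8 exactly when Omega_m = 0.3
biased = baseline + 0.05          # the Delta S_8 = 0.05 posterior bias quoted above
print(baseline, biased)
```

At $\Omega_{\rm m} = 0.3$ the pivot makes $S_8 = \sigma_8$ by construction, which is why the 0.05 bias is directly comparable to typical $\sigma_8$ uncertainties.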
ABSTRACT
We present the v1.0 release of CLMM, an open source python library for the estimation of the weak lensing masses of clusters of galaxies. CLMM is designed as a stand-alone toolkit of building blocks to enable end-to-end analysis pipeline validation for upcoming cluster cosmology analyses, such as the ones that will be performed by the Vera C. Rubin Legacy Survey of Space and Time Dark Energy Science Collaboration (LSST-DESC). Its purpose is to serve as a flexible, easy-to-install, and easy-to-use interface for both weak lensing simulators and observers; it can be applied to real and mock data to study the systematics affecting weak lensing mass reconstruction. At the core of CLMM are routines to model the weak lensing shear signal given the underlying mass distribution of galaxy clusters, together with a set of data operations to prepare the corresponding data vectors. The theoretical predictions rely on existing software, used as backends in the code, that has been thoroughly tested and cross-checked. Combined, the theoretical predictions and data operations can be used to constrain the mass distribution of galaxy clusters, as demonstrated in a suite of example Jupyter Notebooks shipped with the software and also available in the extensive online documentation.
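To give a flavour of the kind of quantity such a toolkit models, here is a minimal plain-Python sketch of the weak-lensing tangential shear of a point-mass lens, $\gamma_t = \Delta\Sigma / \Sigma_{\rm cr}$ with $\Delta\Sigma(R) = M/\pi R^2$ and $\Sigma_{\rm cr} = c^2 D_s / (4\pi G D_l D_{ls})$. This is a generic textbook illustration, not CLMM's API; the masses and distances are assumed values chosen only to show the calculation:

```python
import math

# Physical constants and units in SI.
G = 6.674e-11      # m^3 kg^-1 s^-2
C = 2.998e8        # m s^-1
MPC = 3.086e22     # metres per megaparsec
MSUN = 1.989e30    # kg per solar mass

def sigma_crit(d_l, d_s, d_ls):
    """Critical surface density Sigma_cr = c^2 D_s / (4 pi G D_l D_ls).
    All distances are angular-diameter distances in metres."""
    return C**2 * d_s / (4.0 * math.pi * G * d_l * d_ls)

def gamma_t_point_mass(mass, r, sig_cr):
    """Tangential shear of a point lens: gamma_t = Delta Sigma / Sigma_cr,
    with excess surface density Delta Sigma(R) = M / (pi R^2)."""
    return mass / (math.pi * r**2) / sig_cr

# Assumed illustrative configuration: a 1e14 Msun cluster lens at ~1 Gpc,
# sources at ~2 Gpc, probed at 1 Mpc projected radius.
sc = sigma_crit(1.0e3 * MPC, 2.0e3 * MPC, 1.2e3 * MPC)
g = gamma_t_point_mass(1.0e14 * MSUN, 1.0 * MPC, sc)
print(g)  # a percent-level shear, typical of cluster weak lensing
```

A real analysis would replace the point-mass profile with an NFW or Einasto halo, which is exactly the modeling layer a library like CLMM provides through its backends.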
The existence of multiple subclasses of Type Ia supernovae (SNe Ia) has been the subject of great debate over the last decade. One major challenge inevitably met when trying to infer the existence of one or more subclasses is the time-consuming, and subjective, process of subclass definition. In this work, we show how machine learning tools facilitate the identification of SN Ia subtypes through the establishment of a hierarchical group structure in the continuous space of spectral diversity formed by these objects. Using deep learning, we were able to perform this identification in a four-dimensional feature space (+1 dimension for time evolution), whereas standard principal component analysis barely achieves similar results using 15 principal components. This is evidence that the progenitor system and the explosion mechanism can be described by a small number of initial physical parameters. As a proof of concept, we show that our results are in close agreement with a previously suggested classification scheme and that our proposed method can grasp the main spectral features behind the definition of such subtypes. This allows the confirmation of line velocity as a first-order effect in the determination of SN Ia subtypes, followed by 91bg-like events. Given the expected data deluge in the forthcoming years, our proposed approach is essential to allow a quick and statistically coherent identification of SN Ia subtypes (and outliers). All tools used in this work are publicly available in the python package Dimensionality Reduction And Clustering for Unsupervised Learning in Astronomy (DRACULA) and can be found within COINtoolbox (https://github.com/COINtoolbox/DRACULA).
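The PCA baseline mentioned above can be sketched in a few lines of numpy. This toy example builds a synthetic "spectral" matrix with exactly four latent dimensions (an assumption standing in for real SN Ia spectra) and shows how SVD-based PCA recovers the variance structure; on real, nonlinearly varying spectra the paper finds PCA needs ~15 components to match a 4-dimensional deep-learning embedding:

```python
import numpy as np

# Synthetic stand-in for a spectral data matrix: n_spectra x n_wavelength_bins.
# Four latent components mimic a low-dimensional physical parameter space.
rng = np.random.default_rng(0)
true_components = rng.normal(size=(4, 100))
weights = rng.normal(size=(300, 4))
spectra = weights @ true_components + 0.01 * rng.normal(size=(300, 100))

# PCA via SVD of the mean-centred data matrix.
centred = spectra - spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
explained = s**2 / np.sum(s**2)

# For this linear toy model the first 4 PCs capture essentially all variance.
print(np.cumsum(explained)[:5])
```

The gap between this linear ideal and real spectra is precisely why a nonlinear (deep-learning) embedding can compress the same diversity into far fewer dimensions.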
The pst operon of Escherichia coli, which encodes the phosphate-specific transport system, is composed of five genes, pstS, pstC, pstA, pstB and phoU, whose transcription is induced by phosphate starvation. A phosphate-regulated promoter located upstream of the most proximal gene (pstS) controls the transcription of the entire operon. Though the full-length pst mRNA could be detected by an improved RT-PCR protocol, Northern analysis using several pst-specific probes failed to reveal this transcript. Instead, smaller but distinct pst mRNA species were evident. Primer-extension experiments localized the 5' ends of pst mRNAs within the operon. The data suggest that the full-length mRNA is rapidly processed post-transcriptionally.
Stochastic field distortions caused by atmospheric turbulence are a fundamental limitation to the astrometric accuracy of ground-based imaging. This distortion field is measurable at the locations of stars with accurate positions provided by the Gaia DR2 catalog; we develop the use of Gaussian process regression (GPR) to interpolate the distortion field to arbitrary locations in each exposure. We introduce an extension to standard GPR techniques that exploits the knowledge that the 2D distortion field is curl-free. Applied to several hundred 90 s exposures from the Dark Energy Survey as a test bed, we find that the GPR correction reduces the variance of the turbulent astrometric distortions by a factor of ≈12, on average, with better performance in denser regions of the Gaia catalog. The rms per-coordinate distortion in the riz bands is typically ≈7 mas before any correction and ≈2 mas after application of the GPR model. The GPR astrometric corrections are validated by the observation that their use reduces, from 10 to 5 mas rms, the residuals of an orbit fit to riz-band observations, spanning 5 yr, of the r = 18.5 trans-Neptunian object Eris. We also propose a GPR method, not yet implemented, for simultaneously estimating the turbulence fields and the 5D stellar solutions in a stack of overlapping exposures, which should yield further turbulence reductions in future deep surveys.
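The core interpolation step can be sketched with a minimal GP regression in numpy. This is a 1D toy with a squared-exponential kernel, standing in for the paper's curl-free 2D turbulence kernel; the "stars", noise level, and smooth field below are all assumed test values:

```python
import numpy as np

def rbf(x1, x2, length=0.5, amp=1.0):
    """Squared-exponential covariance k(x, x') = amp * exp(-(x - x')^2 / 2 l^2),
    a simple stand-in for the curl-free turbulence kernel used in the paper."""
    d = x1[:, None] - x2[None, :]
    return amp * np.exp(-0.5 * (d / length) ** 2)

rng = np.random.default_rng(1)
# Reference "stars" where the smooth distortion field is measured with noise.
x_ref = np.sort(rng.uniform(0.0, 5.0, 40))
noise = 0.05
y_ref = np.sin(x_ref) + noise * rng.normal(size=x_ref.size)

# GP posterior mean at arbitrary positions x_new:
#   m(x_new) = K(x_new, x_ref) [K(x_ref, x_ref) + sigma^2 I]^{-1} y_ref
x_new = np.linspace(0.0, 5.0, 100)
K = rbf(x_ref, x_ref) + noise**2 * np.eye(x_ref.size)
mean = rbf(x_new, x_ref) @ np.linalg.solve(K, y_ref)

# Residual distortion after GP correction vs. no correction at all.
resid_gp = np.std(mean - np.sin(x_new))
resid_none = np.std(np.sin(x_new))
print(resid_gp, resid_none)
```

As in the paper, denser reference coverage (more "stars" per correlation length) directly improves the interpolation, which is why the correction performs best in dense regions of the Gaia catalog.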
ABSTRACT The residuals of the distance moduli of Type Ia supernovae (SNe Ia) relative to a Hubble diagram fit contain information about the inhomogeneity of the Universe, due to weak lensing magnification by foreground matter. By correlating the residuals of the Dark Energy Survey Year 5 SN Ia sample (DES-SN5YR) with extragalactic foregrounds from the DES Y3 Gold catalogue, we detect the presence of lensing at $6.0 \sigma$ significance. This is the first detection with a significance level above $5\sigma$. Constraints on the effective mass-to-light ratios and radial profiles of dark matter haloes surrounding individual galaxies are also obtained. We show that the scatter of SNe Ia around the Hubble diagram is reduced by modifying the standardization of the distance moduli to include an easily calculable de-lensing (i.e. environmental) term. We use the de-lensed distance moduli to recompute cosmological parameters derived from SNe Ia, finding in flat $w$CDM a difference of $\Delta \Omega _{\rm M} = +0.036$ and $\Delta w = -0.056$ compared to the unmodified distance moduli, a change of $\sim 0.3\sigma$. We argue that our modelling of SN Ia lensing will lower systematics in future surveys with higher statistical power. We use the observed dispersion of lensing in DES-SN5YR to constrain $\sigma _8$, but caution that the fit is sensitive to uncertainties at small scales. Nevertheless, our detection of SN Ia lensing opens a new pathway to study matter inhomogeneity that complements galaxy–galaxy lensing surveys and has unrelated systematics.
Context.
Type Ia supernovae (SNe Ia) are useful distance indicators in cosmology, provided their luminosity is standardized by applying empirical corrections based on light-curve properties. One factor behind these corrections is dust extinction, which is accounted for in the color–luminosity relation of the standardization. This relation is usually assumed to be universal, which can potentially introduce systematics into the standardization. The “mass step” observed for SN Ia Hubble residuals has been suggested as one such systematic.
Aims.
We seek to obtain a more complete view of dust attenuation properties for a sample of 162 SN Ia host galaxies and to probe their link to the mass step.
Methods.
We inferred attenuation laws toward hosts from both global and local (4 kpc) Dark Energy Survey photometry and composite stellar population model fits.
Results.
We recover a relation between optical depth and attenuation slope that is best explained by differing star-to-dust geometry for different galaxy orientations, and that differs significantly from the optical depth–extinction slope relation observed directly for SNe. We obtain a large variation of attenuation slopes and confirm that these change with host properties, such as stellar mass and age, meaning a universal SN Ia correction should ideally not be assumed. Analyzing the cosmological standardization, we find evidence for a mass step and a two-dimensional “dust step”, both more pronounced for red SNe. Although comparable, the two steps are not found to be completely analogous.
Conclusions.
We conclude that host galaxy dust data cannot fully account for the mass step, using either an alternative SN standardization with extinction proxied by host attenuation or a dust-step approach.
Recent cosmological analyses rely on the ability to accurately sample from high-dimensional posterior distributions. A variety of algorithms have been applied in the field, but justification of the particular sampler choice and settings is often lacking. Here we investigate three such samplers to motivate and validate the algorithm and settings used for the Dark Energy Survey (DES) analyses of the first 3 years (Y3) of data from combined measurements of weak lensing and galaxy clustering. We employ the full DES Year 1 likelihood alongside a much faster approximate likelihood, which enables us to assess the outcomes from each sampler choice and demonstrate the robustness of our full results. We find that the ellipsoidal nested sampling algorithm MULTINEST reports inconsistent estimates of the Bayesian evidence and somewhat narrower parameter credible intervals than the sliced nested sampling implemented in POLYCHORD. We compare the findings from MULTINEST and POLYCHORD with parameter inference from the Metropolis-Hastings algorithm, finding good agreement. We determine that POLYCHORD provides a good balance of speed and robustness, and recommend different settings for testing purposes and for the final chains of DES Y3 analyses. Our methodology can readily be reproduced to obtain suitable sampler settings for future surveys.
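The Metropolis-Hastings cross-check referenced above is simple enough to sketch in full. This is a generic 1D random-walk implementation on an assumed toy Gaussian target, not the DES sampling pipeline:

```python
import math
import random

def metropolis_hastings(log_post, x0, n_steps, step=1.0, seed=0):
    """Minimal random-walk Metropolis-Hastings sampler.
    log_post: log of the (unnormalized) posterior density.
    Proposals are Gaussian perturbations of width `step`."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        x_prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(x_prop)
        # Accept with probability min(1, p(x') / p(x)), in log space.
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = x_prop, lp_prop
        chain.append(x)
    return chain

# Toy target: a standard normal posterior, log p(x) = -x^2/2 + const.
chain = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000)
burned = chain[2000:]  # discard burn-in
print(sum(burned) / len(burned))  # sample mean, close to the true mean of 0
```

Nested samplers such as MULTINEST and POLYCHORD additionally estimate the Bayesian evidence, which plain Metropolis-Hastings does not; that is why the paper can use MH only as a parameter-inference cross-check, not an evidence cross-check.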