ABSTRACT
As the statistical power of galaxy weak lensing reaches per cent level precision, large, realistic, and robust simulations are required to calibrate observational systematics, especially given the increased importance of object blending as survey depths increase. To capture the coupled effects of blending in both shear and photometric redshift calibration, we define the effective redshift distribution for lensing, nγ(z), and describe how to estimate it using image simulations. We use an extensive suite of tailored image simulations to characterize the performance of the shear estimation pipeline applied to the Dark Energy Survey (DES) Year 3 data set. We describe the multiband, multi-epoch simulations, and demonstrate their high level of realism through comparisons to the real DES data. We isolate the effects that generate shear calibration biases by running variations on our fiducial simulation, and find that blending-related effects are the dominant contribution to the mean multiplicative bias of approximately −2 per cent. By generating simulations with input shear signals that vary with redshift, we calibrate biases in our estimation of the effective redshift distribution, and demonstrate the importance of this approach when blending is present. We provide corrected effective redshift distributions that incorporate statistical and systematic uncertainties, ready for use in DES Year 3 weak lensing analyses.
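As an aside, the calibration idea described above can be sketched with the standard linear shear-bias model, e_obs = (1 + m) g_true + c: apply known shears in a simulation and fit for the multiplicative bias m and additive bias c. This is a toy illustration, not the DES metacalibration pipeline; the input bias, shear values, and noise level below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(5)
g_true = np.repeat([-0.02, 0.02], 50_000)   # known input shears applied in the "simulation"
m_in, c_in = -0.02, 0.0                     # hidden biases the fit should recover
e_obs = (1 + m_in) * g_true + c_in + rng.normal(0.0, 0.05, g_true.size)

# Linear least-squares fit for the response (1 + m) and the additive bias c.
A = np.column_stack([g_true, np.ones_like(g_true)])
(slope, c_hat), *_ = np.linalg.lstsq(A, e_obs, rcond=None)
m_hat = slope - 1.0                         # recovered multiplicative bias
```

With enough simulated galaxies, m_hat converges to the input bias; real calibrations additionally cancel shape noise by simulating shear pairs with opposite signs.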
Shear peak statistics has gained a lot of attention recently as a practical alternative to the two-point statistics for constraining cosmological parameters. We perform a shear peak statistics analysis of the Dark Energy Survey (DES) Science Verification (SV) data, using weak gravitational lensing measurements from a 139 deg² field. We measure the abundance of peaks identified in aperture mass maps, as a function of their signal-to-noise ratio, in the signal-to-noise range ... To predict the peak counts as a function of cosmological parameters, we use a suite of N-body simulations spanning 158 models with varying ... and ..., fixing ..., to which we have applied the DES SV mask and redshift distribution. In our fiducial analysis we measure ..., after marginalizing over the shear multiplicative bias and the error on the mean redshift of the galaxy sample. We introduce models of intrinsic alignments, blending and source contamination by cluster members. These models indicate that peaks with ... would require significant corrections, which is why we do not include them in our analysis. We compare our results to the cosmological constraints from the two-point analysis on the SV field and find them to be in good agreement in both the central value and its uncertainty. We discuss prospects for future peak statistics analysis with upcoming DES data. (ProQuest: ... denotes formulae/symbols omitted.)
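The core measurement above, counting map peaks as a function of signal-to-noise, can be sketched by finding local maxima in a noise-normalized map and histogramming them. This is only a toy stand-in for an aperture-mass analysis; the map here is pure smoothed noise and the S/N bins are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

rng = np.random.default_rng(6)
# Toy S/N map: smoothed white noise, renormalized to unit standard deviation.
snr = gaussian_filter(rng.normal(0.0, 1.0, (128, 128)), sigma=2.0)
snr /= snr.std()

# A pixel is a peak if it equals the maximum of its 3x3 neighbourhood
# and exceeds the S/N threshold.
is_peak = (snr == maximum_filter(snr, size=3)) & (snr > 1.0)

# Peak counts per S/N bin: the data vector used for cosmological inference.
counts, edges = np.histogram(snr[is_peak], bins=[1, 2, 3, 4, 5])
```

In a real analysis the map would be an aperture-mass map built from the shear catalogue, and the predicted counts per bin would come from the N-body simulation suite.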
We report the results from a search for z > 6.5 quasars using the Dark Energy Survey (DES) Year 3 data set combined with the VISTA Hemisphere Survey (VHS) and WISE All-Sky Survey. Our photometric selection method is shown to be highly efficient in identifying clean samples of high-redshift quasars, leading to spectroscopic confirmation of three new quasars – VDES J0244-5008 (z = 6.724), VDES J0020-3653 (z = 6.834), and VDES J0246-5219 (z = 6.90) – which were selected as the highest priority candidates in the survey data without any need for additional follow-up observations. Here, we have obtained spectroscopic observations in the near-infrared for VDES J0244-5008 and VDES J0020-3653 as well as our previously identified quasar, VDES J0224-4711 at z = 6.50 from Reed et al. We use the near-infrared spectra to derive virial black hole masses from the full width at half-maximum of the Mg ii line. These black hole masses are ≃1–2 × 10⁹ M⊙. Combined with the bolometric luminosities of these quasars of L_bol ≃ 1–3 × 10⁴⁷, these imply that the Eddington ratios are high, ≃0.6–1.1. We consider the C iv emission line properties of the sample and demonstrate that our high-redshift quasars do not have unusual C iv line properties when compared to carefully matched low-redshift samples. Our new DES + VHS z > 6.5 quasars now add to the growing census of luminous, rapidly accreting supermassive black holes seen well into the epoch of reionization.
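The quoted Eddington ratios follow directly from the standard Eddington luminosity, L_Edd ≈ 1.26 × 10³⁸ (M_BH/M⊙) erg s⁻¹. A quick arithmetic check with illustrative mid-range values (assumptions, not measurements of any specific quasar in the sample):

```python
# Illustrative mid-range values from the abstract, not a specific object.
M_bh = 1.5e9                        # black hole mass in solar masses
L_bol = 2.0e47                      # bolometric luminosity in erg/s

L_edd = 1.26e38 * M_bh              # Eddington luminosity in erg/s
eddington_ratio = L_bol / L_edd     # of order unity, as quoted (0.6-1.1)
```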
ABSTRACT We describe an algorithm for identifying point-source transients and moving objects on reference-subtracted optical images containing artifacts of processing and instrumentation. The algorithm makes use of the supervised machine learning technique known as Random Forest. We present results from its use in the Dark Energy Survey Supernova program (DES-SN), where it was trained using a sample of 898,963 signal and background events generated by the transient detection pipeline. After reprocessing the data collected during the first DES-SN observing season (2013 September through 2014 February) using the algorithm, the number of transient candidates eligible for human scanning decreased by a factor of 13.4, while only 1.0% of the artificial Type Ia supernovae (SNe) injected into search images to monitor survey efficiency were lost, most of which were very faint events. Here we characterize the algorithm's performance in detail, and we discuss how it can inform pipeline design decisions for future time-domain imaging surveys, such as the Large Synoptic Survey Telescope and the Zwicky Transient Facility. An implementation of the algorithm and the training data used in this paper are available at http://portal.nersc.gov/project/dessn/autoscan.
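The Random Forest real/bogus classification described above can be sketched in a few lines with scikit-learn. This is a minimal toy, not the autoscan implementation: the two features (a PSF-fit quality statistic and a flux ratio) and their distributions are invented for illustration, whereas the real pipeline uses a much larger engineered feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 2000
# Toy features: real detections (label 1) cluster at low chi2 and flux_ratio ~ 1;
# subtraction artifacts (label 0) sit at high chi2 with scattered flux ratios.
chi2 = np.concatenate([rng.normal(1.0, 0.3, n), rng.normal(4.0, 1.0, n)])
flux_ratio = np.concatenate([rng.normal(1.0, 0.1, n), rng.normal(0.5, 0.3, n)])
X = np.column_stack([chi2, flux_ratio])
y = np.concatenate([np.ones(n), np.zeros(n)])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# A clean, point-like detection should receive a high "real" score;
# candidates below a score threshold are dropped before human scanning.
score = clf.predict_proba([[1.0, 1.0]])[0, 1]
```

The operating threshold on the score is what trades the factor-of-13.4 reduction in scanning load against the 1.0% loss of injected SNe quoted above.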
We present results from a study of the photometric redshift performance of the Dark Energy Survey (DES), using the early data from a Science Verification period of observations in late 2012 and early 2013 that provided science-quality images for almost 200 sq. deg. at the nominal depth of the survey. We assess the photometric redshift (photo-z) performance using about 15 000 galaxies with spectroscopic redshifts available from other surveys. These galaxies are used, in different configurations, as a calibration sample, and photo-z's are obtained and studied using most of the existing photo-z codes. A weighting method in a multidimensional colour-magnitude space is applied to the spectroscopic sample in order to evaluate the photo-z performance with sets that mimic the full DES photometric sample, which is on average significantly deeper than the calibration sample due to the limited depth of spectroscopic surveys. Empirical photo-z methods using, for instance, artificial neural networks or random forests, yield the best performance in the tests, achieving core photo-z resolutions ... ~ 0.08. Moreover, the results from most of the codes, including template-fitting methods, comfortably meet the DES requirements on photo-z performance, thus providing an excellent precedent for future DES data sets. (ProQuest: ... denotes formulae/symbols omitted.)
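A common way to implement the colour-magnitude weighting described above is to weight each spectroscopic galaxy by the local density ratio of the photometric to the spectroscopic sample, with densities estimated from kth-nearest-neighbour distances. The sketch below is a generic illustration of that idea in a 2D feature space, not the specific DES procedure; the Gaussian "samples" are invented.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(7)
spec = rng.normal(0.0, 1.0, (2000, 2))     # shallow spectroscopic training sample
phot = rng.normal(0.5, 1.2, (20_000, 2))   # deeper target sample, offset in feature space

k = 20
# kth-neighbour distances: density ~ k / r_k^d in d dimensions (d = 2 here).
# For the spec-on-spec query, skip the zero self-distance by asking for k + 1.
d_spec = NearestNeighbors(n_neighbors=k + 1).fit(spec).kneighbors(spec)[0][:, -1]
d_phot = NearestNeighbors(n_neighbors=k).fit(phot).kneighbors(spec)[0][:, -1]

w = (d_spec / d_phot) ** 2                 # density ratio phot/spec at each spec point
w /= w.sum()

# Weighted statistics of the spec sample then mimic the deeper photometric sample.
mean_reweighted = (w[:, None] * spec).sum(axis=0)
```

After reweighting, photo-z metrics computed on the spectroscopic sample approximate what would be obtained on the full photometric sample.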
It is well known that the probability distribution function (PDF) of galaxy density contrast is approximately lognormal; whether the PDF of mass fluctuations derived from weak lensing convergence (...WL) is lognormal is less well established. We derive PDFs of the galaxy and projected matter density distributions via the counts-in-cells (CiC) method. We use maps of galaxies and weak lensing convergence produced from the Dark Energy Survey Science Verification data over 139 deg². We test whether the underlying density contrast is well described by a lognormal distribution for the galaxies, the convergence and their joint PDF. We confirm that the galaxy density contrast distribution is well modelled by a lognormal PDF convolved with Poisson noise at angular scales from 10 to 40 arcmin (corresponding to physical scales of 3–10 Mpc). We note that as ...WL is a weighted sum of the mass fluctuations along the line of sight, its PDF is expected to be only approximately lognormal. We find that the ...WL distribution is well modelled by a lognormal PDF convolved with Gaussian shape noise at scales between 10 and 20 arcmin, with a best-fitting χ²/dof of 1.11 compared to 1.84 for a Gaussian model, corresponding to p-values 0.35 and 0.07, respectively, at a scale of 10 arcmin. Above 20 arcmin a simple Gaussian model is sufficient. The joint PDF is also reasonably fitted by a bivariate lognormal. As a consistency check, we compare the variances derived from the lognormal modelling with those directly measured via CiC. Our methods are validated against maps from the MICE Grand Challenge N-body simulation. (ProQuest: ... denotes formulae/symbols omitted.)
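The lognormal model tested above has a simple generative form: 1 + δ is lognormal, so log(1 + δ) is Gaussian, and the mean-zero constraint on the density contrast δ fixes the offset. A minimal sketch (the variance value is illustrative, not fitted to any survey):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.5                                   # width of the underlying Gaussian (illustrative)
g = rng.normal(0.0, sigma, 100_000)

# Lognormal density contrast with <delta> = 0 by construction:
# E[exp(g)] = exp(sigma^2 / 2), so subtract sigma^2 / 2 in the exponent.
delta = np.exp(g - sigma**2 / 2.0) - 1.0

mean_delta = delta.mean()                     # ~ 0
# Positive skewness: the hallmark that distinguishes lognormal from Gaussian.
skew = ((delta - mean_delta) ** 3).mean() / delta.std() ** 3
```

In the analysis above this shape is additionally convolved with Poisson noise (galaxy counts) or Gaussian shape noise (convergence) before comparison to the CiC histograms.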
Blinding multiprobe cosmological experiments
Muir, J.; Bernstein, G. M.; Huterer, D.; ...
Monthly Notices of the Royal Astronomical Society, 04/2020, Volume 494, Issue 3
Journal Article · Peer-reviewed · Open access
ABSTRACT
The goal of blinding is to hide an experiment’s critical results – here the inferred cosmological parameters – until all decisions affecting its analysis have been finalized. This is especially important in the current era of precision cosmology, when the results of any new experiment are closely scrutinized for consistency or tension with previous results. In analyses that combine multiple observational probes, like the combination of galaxy clustering and weak lensing in the Dark Energy Survey (DES), it is challenging to blind the results while retaining the ability to check for (in)consistency between different parts of the data. We propose a simple new blinding transformation, which works by modifying the summary statistics that are input to parameter estimation, such as two-point correlation functions. The transformation shifts the measured statistics to new values that are consistent with (blindly) shifted cosmological parameters while preserving internal (in)consistency. We apply the blinding transformation to simulated data for the projected DES Year 3 galaxy clustering and weak lensing analysis, demonstrating that practical blinding is achieved without significant perturbation of internal-consistency checks, as measured here by degradation of the χ² between the data and best-fitting model. Our blinding method’s performance is expected to improve as experiments evolve to higher precision and accuracy.
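The essence of this kind of summary-statistic blinding is an additive shift of the data vector, d_blind = d + [m(θ_blind) − m(θ_ref)], which moves the apparent best fit to the blinded parameters while leaving the residuals, and hence the goodness of fit, unchanged. A toy one-parameter illustration (not the DES implementation, and with an invented linear model):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 50)

def model(theta):
    """Toy one-parameter model for the 'summary statistic'."""
    return theta * x

theta_true, sigma = 1.0, 0.05
d = model(theta_true) + rng.normal(0.0, sigma, x.size)

theta_ref = np.sum(d * x) / np.sum(x * x)     # least-squares best fit to the raw data
theta_blind = theta_ref + 0.3                 # secret parameter shift

# The blinding transformation: shift the data vector by the model difference.
d_blind = d + model(theta_blind) - model(theta_ref)

chi2_raw = np.sum((d - model(theta_ref)) ** 2) / sigma**2
chi2_blind = np.sum((d_blind - model(theta_blind)) ** 2) / sigma**2
# chi2_blind equals chi2_raw: internal consistency survives blinding.
```

For this additive shift the equality is exact by construction; in the multiprobe case the same idea is applied with the full theory model evaluated at reference and blinded cosmologies.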
Mapping the underlying density field, including non-visible dark matter, using weak gravitational lensing measurements is now a standard tool in cosmology. Due to its importance to the science results of current and upcoming surveys, the quality of the convergence reconstruction methods should be well understood. We compare three methods: Kaiser–Squires (KS), Wiener filter, and Glimpse. Kaiser–Squires is a direct inversion, not accounting for survey masks or noise. The Wiener filter is well-motivated for Gaussian density fields in a Bayesian framework. Glimpse uses sparsity, aiming to reconstruct non-linearities in the density field. We compare these methods with several tests using public Dark Energy Survey (DES) Science Verification (SV) data and realistic DES simulations. The Wiener filter and Glimpse offer substantial improvements over smoothed Kaiser–Squires with a range of metrics. Both the Wiener filter and Glimpse convergence reconstructions show a 12 percent improvement in Pearson correlation with the underlying truth from simulations. To compare the mapping methods’ abilities to find mass peaks, we measure the difference between peak counts from simulated ΛCDM shear catalogues and catalogues with no mass fluctuations (a standard data vector when inferring cosmology from peak statistics); the maximum signal-to-noise of these peak statistics is increased by a factor of 3.5 for the Wiener filter and 9 for Glimpse. With simulations, we measure the reconstruction of the harmonic phases; the phase residuals’ concentration is improved 17 percent by Glimpse and 18 percent by the Wiener filter. The correlation between reconstructions from data and foreground redMaPPer clusters is increased 18 percent by the Wiener filter and 32 percent by Glimpse.
ABSTRACT
The Cold Spot is a puzzling large-scale feature in the Cosmic Microwave Background temperature maps and its origin has been subject to active debate. As an important foreground structure at low redshift, the Eridanus supervoid was recently detected, but it was subsequently determined that, assuming the standard ΛCDM model, only about 10–20 per cent of the observed temperature depression can be accounted for via its Integrated Sachs–Wolfe imprint. However, R ≳ 100 h⁻¹ Mpc supervoids elsewhere in the sky have shown ISW imprints A_ISW ≈ 5.2 ± 1.6 times stronger than expected from ΛCDM (A_ISW = 1), which warrants further inspection. Using the Year-3 redMaGiC catalogue of luminous red galaxies from the Dark Energy Survey, here we confirm the detection of the Eridanus supervoid as a significant underdensity in the Cold Spot’s direction at z < 0.2. We also show, with S/N ≳ 5 significance, that the Eridanus supervoid appears as the most prominent large-scale underdensity in the dark matter mass maps that we reconstructed from DES Year-3 gravitational lensing data. While we report no significant anomalies, an interesting aspect is that the amplitude of the lensing signal from the Eridanus supervoid at the Cold Spot centre is about 30 per cent lower than expected from similar peaks found in N-body simulations based on the standard ΛCDM model with parameters Ωm = 0.279 and σ8 = 0.82. Overall, our results confirm the causal relation between these individually rare structures in the cosmic web and in the CMB, motivating more detailed future surveys in the Cold Spot region.
Context.
We study astrometric residuals from a simultaneous fit of Hyper Suprime-Cam images.
Aims.
We aim to characterize these residuals and study the extent to which they are dominated by atmospheric contributions for bright sources.
Methods.
We used Gaussian process interpolation with a correlation function (kernel) measured from the data to smooth and correct the observed astrometric residual field.
Results.
We find that a Gaussian process interpolation with a von Kármán kernel allows us to reduce the covariances of astrometric residuals for nearby sources by about one order of magnitude, from 30 mas² to 3 mas² at angular scales of ∼1 arcmin. This also allows us to halve the rms residuals. These reductions using Gaussian process interpolation are similar to recent results published with the Dark Energy Survey data set. We are then able to detect the small static astrometric residuals due to the Hyper Suprime-Cam sensor effects. We discuss how the Gaussian process interpolation of astrometric residuals impacts galaxy shape measurements, particularly in the context of cosmic shear analyses at the Rubin Observatory Legacy Survey of Space and Time.
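The correction scheme above, fitting a Gaussian process to the astrometric residual field at star positions and subtracting its prediction, can be sketched with scikit-learn. This is a generic illustration, not the paper's pipeline: scikit-learn has no von Kármán kernel, so a Matérn kernel stands in, and the field, scales, and noise level are all invented.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

rng = np.random.default_rng(4)
xy = rng.uniform(0.0, 10.0, (300, 2))          # star positions (arcmin, say)

# Toy "atmospheric" residual field: a smooth large-scale pattern plus
# per-star measurement noise.
true_field = np.sin(xy[:, 0] / 2.0) + np.cos(xy[:, 1] / 2.0)
resid = true_field + rng.normal(0.0, 0.1, 300)

# Fixed illustrative hyperparameters (a real analysis would measure the
# kernel from the data, as the paper does).
kernel = Matern(length_scale=2.0, nu=1.5) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, optimizer=None, normalize_y=True)
gp.fit(xy, resid)

# Subtract the smooth GP model: what remains is mostly measurement noise.
corrected = resid - gp.predict(xy)
rms_before, rms_after = resid.std(), corrected.std()
```

The rms of the corrected residuals is far below that of the raw residuals, mirroring (qualitatively) the reductions reported above.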