ABSTRACT
We present a sample of 21 hydrogen-free superluminous supernovae (SLSNe-I) and one hydrogen-rich SLSN (SLSN-II) detected during the five-year Dark Energy Survey (DES). These SNe, located in the redshift range 0.220 < z < 1.998, represent the largest homogeneously selected sample of SLSN events at high redshift. We present the observed g, r, i, z light curves for these SNe, which we interpolate using Gaussian processes. The resulting light curves are analysed to determine the luminosity function of SLSNe-I and their evolutionary time-scales. The DES SLSN-I sample significantly broadens the distribution of SLSN-I light-curve properties when combined with existing samples from the literature. We fit a magnetar model to our SLSNe and find that this model alone is unable to replicate the behaviour of many of the bolometric light curves. We search the DES SLSN-I light curves for the presence of initial peaks prior to the main light-curve peak. Using a shock-breakout model, our Monte Carlo search finds that 3 of our 14 events with pre-maximum data display such initial peaks. However, 10 events show no evidence for such peaks, in some cases down to an absolute magnitude of <−16, suggesting that such features are not ubiquitous among SLSN-I events. We also identify a red pre-peak feature within the light curve of one SLSN, comparable to that observed in SN 2018bsz.
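As a sketch of the interpolation step, the following fits a Gaussian process to a synthetic single-band light curve and reads off the peak epoch. The kernel choice, hyperparameters, and data are illustrative assumptions (via scikit-learn), not the paper's actual configuration:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

# Hypothetical, irregularly sampled single-band light curve (days, flux);
# the cadence and fluxes are synthetic, not DES data.
rng = np.random.default_rng(0)
t_obs = np.sort(rng.uniform(0.0, 100.0, 25))
flux = np.exp(-0.5 * ((t_obs - 40.0) / 20.0) ** 2) + rng.normal(0, 0.02, t_obs.size)

# A Matern kernel is a common choice for SN light curves; the length-scale
# starting value here is an assumption, not the paper's setup.
kernel = Matern(length_scale=20.0, nu=1.5) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(t_obs[:, None], flux)

# Interpolate onto a uniform grid and read off the peak epoch.
t_grid = np.linspace(0.0, 100.0, 1000)
mean, std = gp.predict(t_grid[:, None], return_std=True)
t_peak = t_grid[np.argmax(mean)]
```

Quantities such as rise and decline time-scales can then be measured from the interpolated curve rather than the sparse observations.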
We use a recently introduced statistic called the Integrated Bispectrum (IB) to probe the gravity-induced non-Gaussianity at the level of the bispectrum from weak lensing convergence or κ maps. We generalize the concept of the IB to spherical coordinates and connect this result to the response function approach. Finally, we use the Euclid Flagship simulations to compute the IB as a function of redshift and wave number. We also outline how the IB can be computed using a variety of analytical approaches, including ones based on Effective Field Theory (EFT), halo models, and models based on the Separate Universe approach in projection, i.e. in two dimensions (2D). Comparing these results against simulations, we find that the existing theoretical models tend to over-predict the numerical value of the IB. We emphasize the role of the finite-volume effect in numerical estimation of the IB. We introduce the concept of the squeezed and collapsed trispectrum for 2D κ maps. We derive the IB for many parameterized theories of modified gravity, including the Horndeski and beyond-Horndeski theories, specifically for the non-degenerate scenarios also known as the Gleyzes-Langlois-Piazza-Vernizzi or GLPV theories. In addition, we derive the IB for cosmological models with clustering quintessence and for models involving massive neutrinos.
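The IB is commonly estimated as a position-dependent power spectrum: the map is divided into patches, and each patch's mean is correlated with its local power. A flat-sky toy version, with all normalisation conventions omitted and a synthetic Gaussian map as input:

```python
import numpy as np

def integrated_bispectrum(kappa, n_patches=4):
    """Position-dependent power spectrum estimate of the (squeezed) IB:
    correlate each patch's mean with its local band power, then subtract
    the disconnected piece. Flat-sky toy; normalisations omitted."""
    n = kappa.shape[0] // n_patches
    means, powers = [], []
    for i in range(n_patches):
        for j in range(n_patches):
            patch = kappa[i * n:(i + 1) * n, j * n:(j + 1) * n]
            means.append(patch.mean())
            fk = np.fft.fft2(patch - patch.mean())
            powers.append((np.abs(fk) ** 2).mean() / patch.size)
    means, powers = np.array(means), np.array(powers)
    return np.mean(means * powers) - means.mean() * powers.mean()

# A purely Gaussian map has no connected bispectrum, so its IB estimate
# should scatter around zero.
rng = np.random.default_rng(1)
ib_gauss = integrated_bispectrum(rng.normal(size=(128, 128)))
```

A real estimator would bin the power in multipole ℓ rather than averaging over all modes, which is where the wave-number dependence quoted above comes from.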
Context. The data from the Euclid mission will enable the measurement of the angular positions and weak lensing shapes of over a billion galaxies, with their photometric redshifts obtained together with ground-based observations. This large dataset, with well-controlled systematic effects, will allow for cosmological analyses using the angular clustering of galaxies (GCph) and cosmic shear (WL). For Euclid, these two cosmological probes will not be independent because they will probe the same volume of the Universe. The cross-correlation (XC) between these probes can tighten constraints, and it is therefore important to quantify its impact for Euclid.
Aims. In this study, we therefore extend the recently published Euclid forecasts by carefully quantifying the impact of XC not only on the final parameter constraints for different cosmological models, but also on the nuisance parameters. In particular, we aim to decipher the amount of additional information that XC can provide for parameters encoding systematic effects, such as galaxy bias, intrinsic alignments (IAs), and knowledge of the redshift distributions.
Methods. We follow the Fisher matrix formalism and make use of previously validated codes. We also investigate a different galaxy bias model, obtained from the Flagship simulation, as well as additional photometric-redshift uncertainties, and we elucidate the impact of including the XC terms on constraining the latter.
Results. Starting with a baseline model, we show that the XC terms reduce the uncertainties on galaxy bias by ∼17% and the uncertainties on IA by a factor of about four. The XC terms also help in constraining the γ parameter for minimal modified gravity models. Concerning galaxy bias, we observe that the role of the XC terms on the final parameter constraints is qualitatively the same irrespective of the specific galaxy-bias model used. For IA, we show that the XC terms can help in distinguishing between different models, and that if IA terms are neglected then this can lead to significant biases on the cosmological parameters. Finally, we show that the XC terms can lead to a better determination of the mean of the photometric galaxy distributions.
Conclusions. We find that the XC between GCph and WL within the Euclid survey is necessary to extract the full information content from the data in future analyses. These terms help in better constraining the cosmological model, and also lead to a better understanding of the systematic effects that contaminate these probes. Furthermore, we find that XC significantly helps in constraining the mean of the photometric-redshift distributions but, at the same time, requires more precise knowledge of this mean with respect to single probes in order not to degrade the final “figure of merit”.
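To illustrate how combining probes tightens marginalized constraints, here is a toy Fisher-matrix sketch. The 2 × 2 matrices and the parameter pair are entirely hypothetical, and simply adding Fisher matrices treats the probes as independent; the full analysis described above instead uses a joint data vector whose covariance contains the XC terms:

```python
import numpy as np

# Toy 2x2 Fisher matrices for two probes over parameters (Omega_m, sigma_8);
# the numbers are illustrative placeholders, not Euclid forecast values.
F_gc = np.array([[4000.0, -1500.0], [-1500.0, 900.0]])
F_wl = np.array([[2500.0,  1200.0], [ 1200.0, 800.0]])

def marginalized_sigma(F):
    """1-sigma marginalized errors: sqrt of the diagonal of the inverse Fisher."""
    return np.sqrt(np.diag(np.linalg.inv(F)))

sig_gc = marginalized_sigma(F_gc)
sig_combined = marginalized_sigma(F_gc + F_wl)  # probes added at the Fisher level
```

Adding information can only shrink the marginalized errors, which is why even this independent-probe approximation already tightens both parameters.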
Measurements of the physical properties of accretion disks in active galactic nuclei are important for better understanding the growth and evolution of supermassive black holes. We present the accretion disk sizes of 22 quasars from continuum reverberation mapping with data from the Dark Energy Survey (DES) standard-star fields and the supernova C fields. We construct continuum light curves with the griz photometry that span five seasons of DES observations. These data sample the time variability of the quasars with a cadence as short as 1 day, which corresponds to a rest-frame cadence that is a factor of a few higher than most previous work. We derive time lags between bands with both JAVELIN and the interpolated cross-correlation function method and fit for accretion disk sizes using the JAVELIN thin-disk model. These new measurements include disks around black holes with masses as small as ∼10^7 M⊙, which have equivalent sizes at 2500 Å as small as ∼0.1 lt-day in the rest frame. We find that most objects have accretion disk sizes consistent with the prediction of the standard thin-disk model when we take disk variability into account. We have also simulated the expected yield of accretion disk measurements under various observational scenarios for the Large Synoptic Survey Telescope Deep Drilling Fields. We find that the number of disk measurements would increase significantly if the default cadence is changed from 3 days to 2 days or 1 day.
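A minimal version of the interpolated cross-correlation step can be sketched as follows. The light curves, cadence, and lag grid are synthetic, and real analyses add flux-randomization and subset-sampling to estimate lag uncertainties:

```python
import numpy as np

def iccf_lag(t1, f1, t2, f2, lags):
    """Interpolated cross-correlation: shift series 2 by each trial lag,
    interpolate onto series 1's epochs, and return the lag maximizing r."""
    r = []
    for lag in lags:
        f2_interp = np.interp(t1, t2 + lag, f2)
        r.append(np.corrcoef(f1, f2_interp)[0, 1])
    return lags[int(np.argmax(r))]

# Synthetic driving light curve and a copy delayed by 5 days.
t = np.linspace(0.0, 200.0, 400)
rng = np.random.default_rng(2)
drive = np.sin(2 * np.pi * t / 60.0) + 0.05 * rng.normal(size=t.size)
echo = np.interp(t - 5.0, t, drive)  # lagged echo of the same variability

lags = np.arange(-20.0, 20.5, 0.5)
best_lag = iccf_lag(t, echo, t, drive, lags)
```

In continuum reverberation mapping, the recovered inter-band lags scale with wavelength and are fit for a disk size, as done here with the JAVELIN thin-disk model.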
ABSTRACT
Black hole mass measurements outside the local Universe are critically important to derive the growth of supermassive black holes over cosmic time, and to study the interplay between black hole growth and galaxy evolution. In this paper, we present two measurements of supermassive black hole masses from reverberation mapping (RM) of the broad C iv emission line. These measurements are based on multiyear photometry and spectroscopy from the Dark Energy Survey Supernova Program (DES-SN) and the Australian Dark Energy Survey (OzDES), which together constitute the OzDES RM Program. The observed reverberation lag between the DES continuum photometry and the OzDES emission line fluxes is measured to be $358^{+126}_{-123}$ and $343^{+58}_{-84}$ d for two quasars at redshifts of 1.905 and 2.593, respectively. The corresponding masses of the two supermassive black holes are 4.4 × 10^9 and 3.3 × 10^9 M⊙, which are among the highest redshift and highest mass black holes measured to date with RM studies. We use these new measurements to better determine the C iv radius−luminosity relationship for high-luminosity quasars, which is fundamental to many quasar black hole mass estimates and demographic studies.
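For context, the mass follows from the lag through the virial relation M = f c τ ΔV² / G. The sketch below uses the 358 d lag quoted above, but the velocity width and virial factor are hypothetical placeholders (the abstract does not give the measured line widths), and the (1 + z) time-dilation correction to the lag is omitted for simplicity:

```python
# Virial black hole mass from a reverberation lag: M = f * c * tau * dV^2 / G.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m s^-1
M_sun = 1.989e30       # solar mass, kg

tau = 358.0 * 86400.0  # observed-frame lag from the abstract, in seconds
dV = 4.0e6             # hypothetical C iv velocity width (4000 km/s), m s^-1
f = 1.0                # hypothetical virial factor

M_bh = f * c * tau * dV**2 / G / M_sun  # mass in solar masses
```

With these placeholder inputs the mass lands around 10^9 M⊙, the same order as the measurements above; the actual values depend on the measured ΔV and adopted f.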
Euclid preparation. Barnett, R.; Warren, S. J.; Mortlock, D. J. ...
Astronomy and Astrophysics (Berlin), 11/2019, Volume 631
Journal Article · Peer reviewed · Open access
We provide predictions of the yield of 7 < z < 9 quasars from the Euclid wide survey, updating the calculation presented in the Euclid Red Book in several ways. We account for revisions to the Euclid near-infrared filter wavelengths; we adopt steeper rates of decline of the quasar luminosity function (QLF; Φ) with redshift, Φ ∝ 10^{k(z−6)}, with k = −0.72, and a further steeper rate of decline, k = −0.92; we use better models of the contaminating populations (MLT dwarfs and compact early-type galaxies); and we make use of an improved Bayesian selection method, compared to the colour cuts used for the Red Book calculation, allowing the identification of fainter quasars, down to J_AB ∼ 23. Quasars at z > 8 may be selected from Euclid OYJH photometry alone, but selection over the redshift interval 7 < z < 8 is greatly improved by the addition of z-band data from, e.g., Pan-STARRS and LSST. We calculate predicted quasar yields for the assumed values of the rate of decline of the QLF beyond z = 6. If the decline of the QLF accelerates beyond z = 6, with k = −0.92, Euclid should nevertheless find over 100 quasars with 7.0 < z < 7.5, and ∼25 quasars beyond the current record of z = 7.5, including ∼8 beyond z = 8.0. The first Euclid quasars at z > 7.5 should be found in the DR1 data release, expected in 2024. It will be possible to determine the bright-end slope of the QLF, 7 < z < 8, M_1450 < −25, using 8 m class telescopes to confirm candidates, but follow-up with JWST or E-ELT will be required to measure the faint-end slope. Contamination of the candidate lists is predicted to be modest even at J_AB ∼ 23. The precision with which k can be determined over 7 < z < 8 depends on the value of k, but assuming k = −0.72 it can be measured to a 1σ uncertainty of 0.07.
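The assumed decline law makes relative space densities easy to evaluate: under Φ ∝ 10^{k(z−6)}, the density ratio between two redshifts depends only on k and the redshift difference. A minimal sketch comparing the two decline rates considered above:

```python
# Relative quasar space density implied by Phi ∝ 10^{k (z - 6)}.
def density_ratio(z, z_ref=6.0, k=-0.72):
    """Space density at redshift z relative to z_ref."""
    return 10.0 ** (k * (z - z_ref))

# Density at z = 7.5 relative to z = 6 for the two decline rates.
r_slow = density_ratio(7.5, k=-0.72)   # 10^(-1.08), roughly a factor of 12 down
r_fast = density_ratio(7.5, k=-0.92)   # 10^(-1.38), roughly a factor of 24 down
```

The factor-of-two difference between the two scenarios at z = 7.5 is what makes the measured value of k sensitive to the Euclid quasar counts in this interval.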
ABSTRACT
We present clustering redshift measurements for Dark Energy Survey (DES) lens sample galaxies used in weak gravitational lensing and galaxy clustering studies. To perform these measurements, ...we cross-correlate with spectroscopic galaxies from the Baryon Acoustic Oscillation Survey (BOSS) and its extension, eBOSS. We validate our methodology in simulations, including a new technique to calibrate systematic errors that result from the galaxy clustering bias, and we find that our method is generally unbiased in calibrating the mean redshift. We apply our method to the data, and estimate the redshift distribution for 11 different photometrically selected bins. We find general agreement between clustering redshift and photometric redshift estimates, with differences on the inferred mean redshift found to be below |Δz| = 0.01 in most of the bins. We also test a method to calibrate a width parameter for redshift distributions, which we found necessary to use for some of our samples. Our typical uncertainties on the mean redshift ranged from 0.003 to 0.008, while our uncertainties on the width ranged from 4 to 9 per cent. We discuss how these results calibrate the photometric redshift distributions used in companion papers for DES Year 3 results.
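The core of a clustering-redshift estimate can be sketched with synthetic amplitudes: the cross-correlation with narrow spectroscopic bins traces n(z) up to bias factors, and dividing by the square root of the reference auto-correlation removes the reference-sample bias. All inputs below are toy stand-ins, not BOSS/eBOSS measurements:

```python
import numpy as np

# Toy clustering-redshift recovery on a redshift grid.
z = np.linspace(0.2, 1.0, 17)
n_true = np.exp(-0.5 * ((z - 0.6) / 0.1) ** 2)
n_true /= n_true.sum()

b_r = 1.0 + 0.8 * z       # hypothetical reference-sample bias evolution
w_rr = b_r ** 2           # reference auto-correlation amplitude per bin
w_ur = n_true * b_r       # cross-correlation amplitude (unit unknown-sample bias)

# Dividing by sqrt(w_rr) cancels the reference bias, leaving n(z) up to
# the (here constant) bias of the unknown sample.
n_est = w_ur / np.sqrt(w_rr)
n_est /= n_est.sum()

mean_z_true = (z * n_true).sum()
mean_z_est = (z * n_est).sum()
```

In practice the unknown sample's bias can also evolve with redshift, which is the systematic the simulation-based calibration described above is designed to control.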
Abstract
We study the galaxy populations in 74 Sunyaev–Zeldovich effect selected clusters from the South Pole Telescope survey, which have been imaged in the science verification phase of the Dark Energy Survey. The sample extends up to z ∼ 1.1 with 4 × 10^14 M⊙ ≤ M200 ≤ 3 × 10^15 M⊙. Using the band containing the 4000 Å break and its redward neighbour, we study the colour–magnitude distributions of cluster galaxies to ∼m* + 2, finding that: (1) the intrinsic rest-frame g − r colour width of the red sequence (RS) population is ∼0.03 out to z ∼ 0.85 with a preference for an increase to ∼0.07 at z = 1, and (2) the prominence of the RS declines beyond z ∼ 0.6. The spatial distribution of cluster galaxies is well described by the NFW profile out to 4R200 with a concentration of $c_{\mathrm{g}} = 3.59^{+0.20}_{-0.18}$, $5.37^{+0.27}_{-0.24}$ and $1.38^{+0.21}_{-0.19}$ for the full, the RS and the blue non-RS populations, respectively, but with ∼40 per cent to 55 per cent cluster-to-cluster variation and no statistically significant redshift or mass trends. The number of galaxies within the virial region N200 exhibits a mass trend indicating that the number of galaxies per unit total mass is lower in the most massive clusters, and shows no significant redshift trend. The RS fraction within R200 is (68 ± 3) per cent at z = 0.46, varies from ∼55 per cent at z = 1 to ∼80 per cent at z = 0.1 and exhibits intrinsic variation among clusters of ∼14 per cent. We discuss a model that suggests that the observed redshift trend in RS fraction favours a transformation time-scale for infalling field galaxies to become RS galaxies of 2–3 Gyr.
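The quoted concentrations translate directly into how centrally peaked each population is. Below is the standard (unnormalized) NFW profile shape evaluated at the abstract's red-sequence and blue-population concentrations; the comparison radii are illustrative choices:

```python
import numpy as np

def nfw_profile(r_over_r200, c):
    """Unnormalized NFW density shape rho ∝ 1 / (x (1 + x)^2) with x = c r/R200."""
    x = c * np.asarray(r_over_r200)
    return 1.0 / (x * (1.0 + x) ** 2)

# Ratio of the profile at 0.1 R200 to that at R200: a higher concentration
# (RS population, c ≈ 5.37) implies a more centrally peaked distribution
# than the blue non-RS population (c ≈ 1.38).
r = np.array([0.1, 1.0])
rs_ratio = nfw_profile(r, 5.37)[0] / nfw_profile(r, 5.37)[1]
blue_ratio = nfw_profile(r, 1.38)[0] / nfw_profile(r, 1.38)[1]
```

This central-to-outskirt contrast is one way the stronger clustering of RS galaxies shows up in the measured concentrations.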
ABSTRACT
Cosmological information from weak lensing surveys is maximized by sorting source galaxies into tomographic redshift subsamples. Any uncertainties on these redshift distributions must be ...correctly propagated into the cosmological results. We present hyperrank, a new method for marginalizing over redshift distribution uncertainties, using discrete samples from the space of all possible redshift distributions, improving over simple parametrized models. In hyperrank, the set of proposed redshift distributions is ranked according to a small (between one and four) number of summary values, which are then sampled, along with other nuisance parameters and cosmological parameters in the Monte Carlo chain used for inference. This approach can be regarded as a general method for marginalizing over discrete realizations of data vector variation with nuisance parameters, which can consequently be sampled separately from the main parameters of interest, allowing for increased computational efficiency. We focus on the case of weak lensing cosmic shear analyses and demonstrate our method using simulations made for the Dark Energy Survey (DES). We show that the method can correctly and efficiently marginalize over a wide range of models for the redshift distribution uncertainty. Finally, we compare hyperrank to the common mean-shifting method of marginalizing over redshift uncertainty, validating that this simpler model is sufficient for use in the DES Year 3 cosmology results presented in companion papers.
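The ranking idea can be sketched as follows: candidate n(z) realizations are ordered by a summary value (here the mean redshift, one of the one-to-four summary values mentioned above), and a continuous nuisance parameter sampled by the chain indexes into that ordering. The realizations below are synthetic, not DES ones:

```python
import numpy as np

# Generate synthetic candidate redshift distributions with scattered means.
rng = np.random.default_rng(3)
z = np.linspace(0.0, 2.0, 200)
realizations = [np.exp(-0.5 * ((z - m) / 0.3) ** 2)
                for m in rng.normal(0.8, 0.05, 50)]

# Rank every realization by its summary value (mean redshift).
mean_z = np.array([(z * nz).sum() / nz.sum() for nz in realizations])
order = np.argsort(mean_z)

def draw_realization(u):
    """Map a nuisance parameter u in [0, 1) onto a ranked realization."""
    idx = order[int(u * len(order))]
    return realizations[idx]

# Nearby u values select realizations with similar mean redshift, which
# keeps the likelihood surface smooth for the sampler.
nz_a = draw_realization(0.50)
nz_b = draw_realization(0.52)
m_a = (z * nz_a).sum() / nz_a.sum()
m_b = (z * nz_b).sum() / nz_b.sum()
```

Ranking before sampling is what turns an unordered set of discrete realizations into a quantity a standard MCMC nuisance parameter can explore continuously.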
It is well known that the probability distribution function (PDF) of galaxy density contrast is approximately lognormal; whether the PDF of mass fluctuations derived from weak lensing convergence (...WL) is lognormal is less well established. We derive PDFs of the galaxy and projected matter density distributions via the counts-in-cells (CiC) method. We use maps of galaxies and weak lensing convergence produced from the Dark Energy Survey Science Verification data over 139 deg2. We test whether the underlying density contrast is well described by a lognormal distribution for the galaxies, the convergence and their joint PDF. We confirm that the galaxy density contrast distribution is well modelled by a lognormal PDF convolved with Poisson noise at angular scales from 10 to 40 arcmin (corresponding to physical scales of 3-10 Mpc). We note that as ...WL is a weighted sum of the mass fluctuations along the line of sight, its PDF is expected to be only approximately lognormal. We find that the ...WL distribution is well modelled by a lognormal PDF convolved with Gaussian shape noise at scales between 10 and 20 arcmin, with a best-fitting χ²/dof of 1.11 compared to 1.84 for a Gaussian model, corresponding to p-values 0.35 and 0.07, respectively, at a scale of 10 arcmin. Above 20 arcmin a simple Gaussian model is sufficient. The joint PDF is also reasonably fitted by a bivariate lognormal. As a consistency check, we compare the variances derived from the lognormal modelling with those directly measured via CiC. Our methods are validated against maps from the MICE Grand Challenge N-body simulation. (ProQuest: ... denotes formulae/symbols omitted.)
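The lognormal hypothesis is easy to state operationally: if δ = e^g − 1 with g Gaussian, then ln(1 + δ) is Gaussian by construction, so its skewness should vanish even when δ itself is strongly skewed. A synthetic check (the field parameters are illustrative, not the DES SV values):

```python
import numpy as np

# Draw a lognormal density contrast: delta = exp(g) - 1 with g Gaussian.
# The mean of g is chosen so that <delta> = 0, as for a density contrast.
rng = np.random.default_rng(4)
sigma_g = 0.5
g = rng.normal(-0.5 * sigma_g**2, sigma_g, 100_000)
delta = np.expm1(g)

# ln(1 + delta) recovers g, so its skewness should be ~0 while delta's is large.
log_field = np.log1p(delta)
skew_log = ((log_field - log_field.mean()) ** 3).mean() / log_field.std() ** 3
skew_delta = ((delta - delta.mean()) ** 3).mean() / delta.std() ** 3
```

A CiC analysis applies the same logic to cell-averaged maps, after convolving the lognormal model with the appropriate noise (Poisson for galaxy counts, Gaussian shape noise for convergence).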