Context. Future weak lensing surveys, such as the Euclid mission, will attempt to measure the shapes of billions of galaxies in order to derive cosmological information. These surveys will attain very low levels of statistical error, and systematic errors must therefore be extremely well controlled. In particular, the point spread function (PSF) must be estimated using stars in the field, and recovered with high accuracy. Aims. The aims of this paper are twofold. Firstly, we took steps toward a nonparametric method, applicable to Euclid, to address the issue of recovering the PSF field, namely that of finding the correct PSF at the position of any galaxy in the field. Our approach relies solely on the data, as opposed to parametric methods that make use of our knowledge of the instrument. Secondly, we studied the impact of imperfect PSF models on the shape measurement of the galaxies themselves, and whether common assumptions about this impact hold true in a Euclid scenario. Methods. We extended the recently proposed resolved components analysis approach, which performs super-resolution on a field of under-sampled observations of a spatially varying, image-valued function. We added a spatial interpolation component to the method, making it a true 2-dimensional PSF model. We compared our approach to PSFEx, then quantified the impact of PSF recovery errors on galaxy shape measurements through image simulations. Results. Our approach yields an improvement over PSFEx both in terms of the PSF model and of the observed galaxy shape errors, though it is at present far from reaching the required Euclid accuracy. We also find that the usual formalism used for the propagation of PSF model errors to weak lensing quantities no longer holds in the case of a Euclid-like PSF. In particular, different shape measurement approaches can react differently to the same PSF modeling errors.
Candidate von Willebrand factor (VWF) mutations were identified in 70% of index cases in the European study 'Molecular and Clinical Markers for the Diagnosis and Management of type 1 von Willebrand Disease'. The majority of these were missense mutations.
To assess whether 14 representative missense mutations are the cause of the phenotype observed in the patients and to examine their mode of pathogenicity.
Transfection experiments were performed with full-length wild-type or mutant VWF cDNA for these 14 missense mutations. VWF antigen levels were measured, and VWF multimer analysis was performed on secreted and intracellular VWF.
For seven of the missense mutations (G160W, N166I, L2207P, C2257S, C2304Y, G2441C, and C2477Y), we found marked intracellular retention and impaired secretion of VWF, major loss of high molecular weight multimers in transfections of mutant constructs alone, and virtually normal multimers in cotransfections with wild-type VWF, establishing the pathogenicity of these mutations. Four of the mutations (R2287W, R2464C, G2518S, and Q2520P) were established as being very probably causative, on the basis of a mild reduction in the secreted VWF or on characteristic faster-running multimeric bands. For three candidate changes (G19R, P2063S, and R2313H), the transfection results were indistinguishable from wild-type recombinant VWF and we could not prove these changes to be pathogenic. Other mechanisms not explored using this in vitro expression system may be responsible for pathogenicity.
The pathogenic nature of 11 of 14 candidate missense mutations identified in patients with type 1 VWD was confirmed. Intracellular retention of mutant VWF is the predominant responsible mechanism.
Context. In metric theories of gravity with photon number conservation, the luminosity and angular diameter distances are related via the Etherington relation, also known as the distance duality relation (DDR). A violation of this relation would rule out the standard cosmological paradigm and point to the presence of new physics.
Aims. We quantify the ability of Euclid, in combination with contemporary surveys, to improve the current constraints on deviations from the DDR in the redshift range 0 < z < 1.6.
Methods. We start with an analysis of the latest available data, improving previously reported constraints by a factor of 2.5. We then present a detailed analysis of simulated Euclid and external data products, using both standard parametric methods (relying on phenomenological descriptions of possible DDR violations) and a machine learning reconstruction using genetic algorithms.
Results. We find that for parametric methods Euclid can, in combination with external probes, improve current constraints by approximately a factor of six, while for non-parametric methods Euclid can improve current constraints by a factor of three.
Conclusions. Our results highlight the importance of surveys like Euclid in accurately testing the pillars of the current cosmological paradigm and constraining physics beyond the standard cosmological model.
Upcoming surveys will map the growth of large-scale structure with unprecedented precision, improving our understanding of the dark sector of the Universe. Unfortunately, much of the cosmological information is encoded on small scales, where the clustering of dark matter and the effects of astrophysical feedback processes are not fully understood. This can bias the estimates of cosmological parameters, which we study here for a joint analysis of mock Euclid cosmic shear and Planck cosmic microwave background data. We use different implementations for the modelling of the signal on small scales and find that they result in significantly different predictions. Moreover, the different non-linear corrections lead to biased parameter estimates, especially when the analysis is extended into the highly non-linear regime, with the Hubble constant, H0, and the clustering amplitude, σ8, affected the most. Improvements in the modelling of non-linear scales will therefore be needed if we are to resolve the current tension with more and better data. For a given prescription for the non-linear power spectrum, using different corrections for baryon physics does not significantly impact the precision of Euclid, but neglecting these corrections does lead to large biases in the cosmological parameters. In order to extract precise and unbiased constraints on cosmological parameters from Euclid cosmic shear data, it is therefore essential to improve the accuracy of the recipes that account for non-linear structure formation, as well as the modelling of the impact of astrophysical processes that redistribute the baryons.
Context. The data from the Euclid mission will enable the measurement of the angular positions and weak lensing shapes of over a billion galaxies, with their photometric redshifts obtained together with ground-based observations. This large dataset, with well-controlled systematic effects, will allow for cosmological analyses using the angular clustering of galaxies (GCph) and cosmic shear (WL). For Euclid, these two cosmological probes will not be independent because they will probe the same volume of the Universe. The cross-correlation (XC) between these probes can tighten constraints, and it is therefore important to quantify its impact for Euclid.
Aims. In this study, we therefore extend the recently published Euclid forecasts by carefully quantifying the impact of XC not only on the final parameter constraints for different cosmological models, but also on the nuisance parameters. In particular, we aim to decipher the amount of additional information that XC can provide for parameters encoding systematic effects, such as galaxy bias, intrinsic alignments (IAs), and knowledge of the redshift distributions.
Methods. We follow the Fisher matrix formalism and make use of previously validated codes. We also investigate a different galaxy bias model, obtained from the Flagship simulation, as well as additional photometric-redshift uncertainties, and we elucidate the impact of including the XC terms on constraining the latter.
Results. Starting with a baseline model, we show that the XC terms reduce the uncertainties on galaxy bias by ∼17% and the uncertainties on IAs by a factor of about four. The XC terms also help in constraining the γ parameter for minimal modified gravity models. Concerning galaxy bias, we observe that the role of the XC terms on the final parameter constraints is qualitatively the same irrespective of the specific galaxy-bias model used. For IAs, we show that the XC terms can help in distinguishing between different models, and that neglecting the IA terms can lead to significant biases on the cosmological parameters. Finally, we show that the XC terms can lead to a better determination of the mean of the photometric galaxy distributions.
Conclusions. We find that the XC between GCph and WL within the Euclid survey is necessary to extract the full information content from the data in future analyses. These terms help in better constraining the cosmological model, and also lead to a better understanding of the systematic effects that contaminate these probes. Furthermore, we find that XC significantly helps in constraining the mean of the photometric-redshift distributions but, at the same time, requires more precise knowledge of this mean with respect to single probes in order not to degrade the final “figure of merit”.
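The Fisher matrix bookkeeping behind such forecasts can be sketched in a few lines. The matrices below are toy numbers, not Euclid Fisher matrices: in a real XC analysis the probes share a joint data vector and covariance, while here a second positive-definite matrix simply stands in for the extra information contributed by the cross-correlation terms. The sketch shows how marginalised 1σ uncertainties, read off as the square roots of the diagonal of the inverse Fisher matrix, shrink once that information is folded in:

```python
import numpy as np

# Toy 2-parameter Fisher forecast (illustrative numbers only, not Euclid values).
F_single = np.array([[40.0, 12.0],
                     [12.0,  9.0]])   # single-probe information
F_xc     = np.array([[25.0, -5.0],
                     [-5.0, 16.0]])   # stand-in for the added XC information

def marginalised_errors(F):
    """Marginalised 1-sigma errors: sqrt of the diagonal of the inverse Fisher matrix."""
    return np.sqrt(np.diag(np.linalg.inv(F)))

err_before = marginalised_errors(F_single)
err_after  = marginalised_errors(F_single + F_xc)
print(err_before, err_after)  # every marginalised error shrinks
```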
We present a tomographic weak lensing analysis of the Kilo Degree Survey Data Release 4 (KiDS-1000), using a new pseudo angular power spectrum estimator (pseudo-Cℓ) under development for the ESA Euclid mission. Over 21 million galaxies with shape information are divided into five tomographic redshift bins, ranging from 0.1 to 1.2 in photometric redshift. We measured pseudo-Cℓ using eight bands in the multipole range 76 < ℓ < 1500 for auto- and cross-power spectra between the tomographic bins. A series of tests were carried out to check for systematic contamination from a variety of observational sources, including stellar number density, variations in survey depth, and point spread function properties. While some marginal correlations with these systematic tracers were observed, there is no evidence of bias in the cosmological inference. B-mode power spectra are consistent with zero signal, with no significant residual contamination from E/B-mode leakage. We performed a Bayesian analysis of the pseudo-Cℓ estimates by forward modelling the effects of the mask. Assuming a spatially flat ΛCDM cosmology, we constrained the structure growth parameter S8 = σ8(Ωm/0.3)^1/2 = 0.754^+0.027_−0.029. When combining cosmic shear from KiDS-1000 with baryon acoustic oscillation and redshift space distortion data from recent Sloan Digital Sky Survey (SDSS) measurements of luminous red galaxies, as well as the Lyman-α forest and its cross-correlation with quasars, we tightened these constraints to S8 = 0.771^+0.006_−0.032. These results are in very good agreement with previous KiDS-1000 and SDSS analyses and confirm a ∼3σ tension with early-Universe constraints from cosmic microwave background experiments.
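The structure growth parameter quoted above is a simple function of σ8 and Ωm; a minimal sketch, with illustrative inputs rather than the KiDS-1000 posterior values:

```python
import numpy as np

def S8(sigma8, Omega_m):
    """Structure growth parameter S8 = sigma8 * (Omega_m / 0.3)**0.5."""
    return sigma8 * np.sqrt(Omega_m / 0.3)

# Illustrative values only (not the KiDS-1000 posterior means):
print(S8(0.76, 0.3))  # -> 0.76, since the normalisation factor is 1 at Omega_m = 0.3
```

The 0.3 pivot is chosen so that S8 is the combination of σ8 and Ωm best constrained by cosmic shear, decorrelating the two parameters along the lensing degeneracy direction.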
Context. Stage IV weak lensing experiments will offer more than an order of magnitude leap in precision. We must therefore ensure that our analyses remain accurate in this new era. Accordingly, previously ignored systematic effects must be addressed.
Aims. In this work, we evaluate the impact of the reduced shear approximation and magnification bias on information obtained from the angular power spectrum. To first order, the statistics of reduced shear, a combination of shear and convergence, are taken to be equal to those of shear. However, this approximation can induce a bias in the cosmological parameters that can no longer be neglected. A separate bias arises from the statistics of shear being altered by the preferential selection of galaxies and the dilution of their surface densities in high-magnification regions.
Methods. The corrections for these systematic effects take similar forms, allowing them to be treated together. We calculated the impact of neglecting these effects on the cosmological parameters that would be determined from Euclid, using cosmic shear tomography. To do so, we employed the Fisher matrix formalism and included the impact of the super-sample covariance. We also demonstrate how the reduced shear correction can be calculated using a lognormal field forward modelling approach.
Results. These effects cause significant biases in Ωm, σ8, ns, ΩDE, w0, and wa of −0.53σ, 0.43σ, −0.34σ, 1.36σ, −0.68σ, and 1.21σ, respectively. We then show that these lensing biases interact with another systematic effect: the intrinsic alignment of galaxies. Accordingly, we have developed the formalism for an intrinsic alignment-enhanced lensing bias correction. Applying this to Euclid, we find that the additional terms introduced by this correction are sub-dominant.
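The size of the reduced shear approximation can be illustrated numerically; the shear and convergence values below are purely illustrative. The observable is g = γ/(1 − κ), approximated to first order as γ(1 + κ), so the neglected residual is of order κ²:

```python
# Reduced-shear sketch with illustrative (weak-lensing-sized) values.
gamma = 0.02 + 0.01j   # complex shear, gamma1 + i*gamma2
kappa = 0.05           # convergence

g_exact = gamma / (1 - kappa)      # the actual observable
g_first_order = gamma * (1 + kappa)  # first-order expansion in kappa

# The relative residual of the expansion is kappa^2 / (1 - kappa) ~ kappa^2:
print(abs(g_exact - g_first_order) / abs(gamma))  # ~ 0.0026, i.e. of order kappa**2
```

At the sub-percent level targeted by Stage IV surveys, these O(κ²) residuals are no longer negligible, which is why the correction is computed explicitly in the analysis above.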
Context. The standard cosmological model is based on the fundamental assumptions of a spatially homogeneous and isotropic universe on large scales. An observational detection of a violation of these assumptions at any redshift would immediately indicate the presence of new physics.
Aims. We quantify the ability of the Euclid mission, together with contemporary surveys, to improve the current sensitivity of null tests of the canonical cosmological constant and cold dark matter (ΛCDM) model in the redshift range 0 < z < 1.8.
Methods. We considered both currently available data and simulated Euclid and external data products, based on a ΛCDM fiducial model, an evolving dark energy model assuming the Chevallier-Polarski-Linder parameterization, or an inhomogeneous Lemaître-Tolman-Bondi model with a cosmological constant Λ, and carried out two separate but complementary analyses: a machine learning reconstruction of the null tests based on genetic algorithms, and a theory-agnostic parametric approach based on Taylor expansion and binning of the data, in order to avoid assumptions about any particular model.
Results. We find that in combination with external probes, Euclid can improve current constraints on null tests of ΛCDM by approximately a factor of three when using the machine learning approach, and by a further factor of two in the case of the parametric approach. However, we also find that in certain cases the parametric approach may be biased against, or may miss, some features of models far from ΛCDM.
Conclusions. Our analysis highlights the importance of synergies between Euclid and other surveys. These synergies are crucial for providing tighter constraints over an extended redshift range for a plethora of different consistency tests of some of the main assumptions of the current cosmological paradigm.
Pair-instability supernovae are theorized supernovae that have not yet been observationally confirmed. They are predicted to exist in low-metallicity environments. Because overall metallicity becomes lower at higher redshifts, deep near-infrared transient surveys probing high-redshift supernovae are well suited to discovering pair-instability supernovae. The Euclid satellite, whose launch is planned for 2023, has a near-infrared wide-field instrument that is suitable for a high-redshift supernova survey. The Euclid Deep Survey is planned to make regular observations of three Euclid Deep Fields (40 deg² in total) spanning Euclid's six-year primary mission period. While the observations of the Euclid Deep Fields are not frequent, we show that the predicted long duration of pair-instability supernovae would allow us to search for high-redshift pair-instability supernovae with the Euclid Deep Survey. Based on the current observational plan of the Euclid mission, we conduct survey simulations in order to estimate the expected numbers of pair-instability supernova discoveries. We find that up to several hundred pair-instability supernovae at z ≲ 3.5 can be discovered within the Euclid Deep Survey. We also show that pair-instability supernova candidates can be efficiently identified by their duration and color, which can be determined with the current Euclid Deep Survey plan. We conclude that the Euclid mission can lead to the first confirmation of pair-instability supernovae if their event rates are as high as those predicted by recent theoretical studies. We also update the expected numbers of superluminous supernova discoveries in the Euclid Deep Survey based on the latest observational plan.
Weak lensing, the deflection of light by matter along the line of sight, has proven to be an efficient method for constraining models of structure formation and revealing the nature of dark energy. So far, most weak-lensing studies have focused on the shear field, which can be measured directly from the ellipticity of background galaxies. However, within the context of forthcoming full-sky weak-lensing surveys such as Euclid, convergence maps (mass maps) offer an important advantage over shear fields in terms of cosmological exploitation. While they carry the same information, the lensing signal is more compressed in convergence maps than in the shear field. This simplifies otherwise computationally expensive analyses, for instance non-Gaussianity studies. However, the inversion of the non-local shear field requires accurate control of systematic effects caused by holes in the data field, field borders, shape noise, and the fact that the shear is not a direct observable (reduced shear). We present the two mass-inversion methods that are included in the official Euclid data-processing pipeline: the standard Kaiser & Squires method (KS), and a new mass-inversion method (KS+) that aims to reduce the information loss during the mass inversion. This new method is based on the KS method and includes corrections for mass-mapping systematic effects. The results of the KS+ method are compared to the original implementation of the KS method in its simplest form, using the Euclid Flagship mock galaxy catalogue. In particular, we estimate the quality of the reconstruction by comparing the two-point correlation functions and the third- and fourth-order moments obtained from shear and convergence maps, and we analyse each systematic effect both independently and simultaneously. We show that the KS+ method substantially reduces the errors on the two-point correlation function and on the moments compared to the KS method. In particular, the errors introduced by the mass inversion on the two-point correlation of the convergence maps are reduced by a factor of about 5, while the errors on the third- and fourth-order moments are reduced by factors of about 2 and 10, respectively.
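A minimal sketch of the standard KS inversion, assuming a periodic, mask-free grid so that none of the systematic effects KS+ is designed to handle (holes, borders, reduced shear) are present; on such a grid the inversion is exact up to the unconstrained mean:

```python
import numpy as np

def kaiser_squires(gamma1, gamma2):
    """Basic Kaiser & Squires inversion: shear -> (E-mode, B-mode) convergence.

    Works on a periodic grid; no treatment of masks, borders, shape noise,
    or reduced shear (the systematics that motivate KS+)."""
    ny, nx = gamma1.shape
    l1 = np.fft.fftfreq(nx)[np.newaxis, :]
    l2 = np.fft.fftfreq(ny)[:, np.newaxis]
    l_sq = l1**2 + l2**2
    l_sq[0, 0] = 1.0  # avoid 0/0; the mean convergence is unconstrained by shear
    kernel = ((l1**2 - l2**2) - 2j * l1 * l2) / l_sq  # inverse spin-2 rotation
    kappa = np.fft.ifft2(kernel * np.fft.fft2(gamma1 + 1j * gamma2))
    return kappa.real, kappa.imag  # E-mode (mass map) and B-mode

# Round trip: forward-model shear from a known, mean-zero convergence field.
rng = np.random.default_rng(0)
kappa_true = rng.normal(size=(64, 64))
kappa_true -= kappa_true.mean()
l1 = np.fft.fftfreq(64)[np.newaxis, :]
l2 = np.fft.fftfreq(64)[:, np.newaxis]
l_sq = l1**2 + l2**2
l_sq[0, 0] = 1.0
forward = ((l1**2 - l2**2) + 2j * l1 * l2) / l_sq
gamma = np.fft.ifft2(forward * np.fft.fft2(kappa_true))
kappa_E, kappa_B = kaiser_squires(gamma.real, gamma.imag)
print(np.allclose(kappa_E, kappa_true))  # True: exact recovery on a periodic grid
```

On real data the mask, borders, and reduced shear break this exactness, which is precisely the information loss that the KS+ corrections above are designed to mitigate.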