Euclid preparation. Adam, R.; Vannier, M.; Maurogordato, S. ...
Astronomy and astrophysics (Berlin), 07/2019, Volume 627
Journal Article | Peer reviewed | Open access
Galaxy cluster counts in bins of mass and redshift have been shown to be a competitive probe for testing cosmological models. This method requires an efficient blind detection of clusters from surveys with a well-known selection function and robust mass estimates, which is particularly challenging at high redshift. The Euclid wide survey will cover 15 000 deg² of the sky, avoiding contamination by light from our Galaxy and our Solar System in the optical and near-infrared bands, down to magnitude 24 in the H band. The resulting data will make it possible to detect a large number of galaxy clusters spanning a wide range of masses up to redshift ∼2 and possibly higher. This paper presents the final results of the Euclid Cluster Finder Challenge (CFC), the fourth in a series of similar challenges. The objective of these challenges was to select the cluster detection algorithms that best meet the requirements of the Euclid mission. The final CFC included six independent detection algorithms based on different techniques, such as photometric redshift tomography, optimal filtering, a hierarchical approach, wavelets, and friends-of-friends algorithms. These algorithms were blindly applied to a mock galaxy catalog with representative Euclid-like properties. The relative performance of the algorithms was assessed by matching the resulting detections to known clusters in the simulations down to masses of M_200 ∼ 10^13.25 M_⊙. Several matching procedures were tested, making it possible to estimate the associated systematic effects on completeness to < 3%. All the tested algorithms are very competitive in terms of performance, with three of them reaching > 80% completeness for a mean purity of 80% down to masses of 10^14 M_⊙ and up to redshift z = 2. Based on these results, two algorithms were selected for implementation in the Euclid pipeline: the Adaptive Matched Identifier of Clustered Objects (AMICO) code, based on matched filtering, and the PZWav code, based on an adaptive wavelet approach.
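The completeness and purity figures quoted above reduce to simple ratios once detections have been matched to the true clusters. The sketch below illustrates that bookkeeping with invented toy numbers; the actual CFC matching (in position and redshift, with several alternative procedures) is considerably more involved.

```python
# Minimal sketch of the completeness/purity metrics used to rank cluster
# finders. The counts below are hypothetical, for illustration only.

def completeness(n_true_matched, n_true):
    """Fraction of true clusters recovered by the detection algorithm."""
    return n_true_matched / n_true

def purity(n_det_matched, n_detections):
    """Fraction of detections that correspond to a true cluster."""
    return n_det_matched / n_detections

# Toy numbers: 1000 true clusters, 1000 detections after matching.
c = completeness(820, 1000)   # 0.82 -> 82% of true clusters recovered
p = purity(800, 1000)         # 0.80 -> 80% of detections are real
```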
In physically realistic, scalar-field-based dynamical dark energy models (including, e.g., quintessence), one naturally expects the scalar field to couple to the rest of the model's degrees of freedom. In particular, a coupling to the electromagnetic sector leads to a time (redshift) dependence of the fine-structure constant and to a violation of the weak equivalence principle. Here we extend previous Euclid forecast constraints on dark energy models to this enlarged (but physically more realistic) parameter space, and forecast how well Euclid, together with high-resolution spectroscopic data and local experiments, can constrain these models. Our analysis combines simulated Euclid data products with astrophysical measurements of the fine-structure constant, α, and with local experimental constraints, and it includes both parametric and non-parametric methods. For the astrophysical measurements of α, we consider both the currently available data and a simulated dataset representative of the Extremely Large Telescope measurements that are expected to become available in the 2030s. Our parametric analysis shows that in the latter case, the inclusion of astrophysical and local data improves the Euclid dark energy figure of merit by between 8% and 26%, depending on the fiducial model, with the improvements being larger in the null case, where the fiducial coupling to the electromagnetic sector vanishes. These improvements would be smaller with the current astrophysical data. Moreover, we illustrate how a reconstruction based on genetic algorithms provides a null test for the presence of the coupling. Our results highlight the importance of complementing surveys such as Euclid with external data products in order to accurately test the wider parameter spaces of physically motivated paradigms.
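For orientation, the coupling discussed above is commonly parametrised in a Bekenstein-type fashion; a minimal sketch (assuming a linear gauge-kinetic function, which is the standard choice in this class of analyses, not a statement of this paper's exact parametrisation):

```latex
% Linear gauge-kinetic coupling of the scalar field to electromagnetism,
% with dimensionless coupling \zeta and \kappa^2 = 8\pi G:
%   B_F(\phi) = 1 - \zeta \kappa (\phi - \phi_0)
% which induces a redshift dependence of the fine-structure constant:
\frac{\Delta\alpha}{\alpha}(z)
  \equiv \frac{\alpha(z) - \alpha_0}{\alpha_0}
  = \zeta \,\kappa \left[\phi(z) - \phi_0\right]
```

In this picture a non-zero ζ simultaneously produces the α(z) variation probed by spectroscopic data and the weak-equivalence-principle violation probed by local experiments, which is why combining the three datasets tightens the constraints.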
Context.
Stage IV weak lensing experiments will offer more than an order of magnitude leap in precision. We must therefore ensure that our analyses remain accurate in this new era. Accordingly, previously ignored systematic effects must be addressed.
Aims.
In this work, we evaluate the impact of the reduced shear approximation and magnification bias on the information obtained from the angular power spectrum. To first order, the statistics of reduced shear, a combination of shear and convergence, are taken to be equal to those of shear. However, this approximation can induce a bias in the cosmological parameters that can no longer be neglected. A separate bias arises because the statistics of shear are altered by the preferential selection of galaxies, and the dilution of their surface density, in high-magnification regions.
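For reference, the reduced shear and the first-order approximation in question can be written as follows (standard weak-lensing definitions, not specific to this paper):

```latex
% Reduced shear g in terms of shear (\gamma) and convergence (\kappa).
% The observable is g; the approximation g \approx \gamma drops terms
% of order \kappa\gamma, whose neglect induces the biases quantified here:
g = \frac{\gamma}{1 - \kappa}
  \simeq \gamma\,(1 + \kappa)
  \approx \gamma
  \qquad \text{for } |\kappa| \ll 1
```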
Methods.
The corrections for these systematic effects take similar forms, allowing them to be treated together. We calculated the impact of neglecting these effects on the cosmological parameters that would be determined from Euclid, using cosmic shear tomography. To do so, we employed the Fisher matrix formalism and included the impact of the super-sample covariance. We also demonstrate how the reduced shear correction can be calculated using a lognormal-field forward-modelling approach.
Results.
These effects cause significant biases in Ω_m, σ_8, n_s, Ω_DE, w_0, and w_a of −0.53σ, 0.43σ, −0.34σ, 1.36σ, −0.68σ, and 1.21σ, respectively. We then show that these lensing biases interact with another systematic effect: the intrinsic alignment of galaxies. Accordingly, we have developed the formalism for an intrinsic-alignment-enhanced lensing bias correction. Applying this to Euclid, we find that the additional terms introduced by this correction are sub-dominant.
Weak lensing, the deflection of light by matter along the line of sight, has proven to be an efficient method for constraining models of structure formation and for revealing the nature of dark energy. So far, most weak-lensing studies have focused on the shear field, which can be measured directly from the ellipticities of background galaxies. However, in the context of forthcoming full-sky weak-lensing surveys such as Euclid, convergence maps (mass maps) offer an important advantage over shear fields in terms of cosmological exploitation. While convergence maps carry the same information, the lensing signal is more compressed in them than in the shear field. This simplifies otherwise computationally expensive analyses, for instance non-Gaussianity studies. However, the inversion of the non-local shear field requires accurate control of systematic effects caused by holes in the data field, field borders, shape noise, and the fact that the shear is not a direct observable (reduced shear). We present the two mass-inversion methods included in the official Euclid data-processing pipeline: the standard Kaiser & Squires (KS) method, and a new mass-inversion method (KS+) that aims to reduce the information loss during the mass inversion. This new method is based on the KS method and includes corrections for mass-mapping systematic effects. The results of the KS+ method are compared to the original implementation of the KS method in its simplest form, using the Euclid Flagship mock galaxy catalogue. In particular, we estimate the quality of the reconstruction by comparing the two-point correlation functions and the third- and fourth-order moments obtained from shear and convergence maps, and we analyse each systematic effect both independently and simultaneously. We show that the KS+ method substantially reduces the errors on the two-point correlation function and on the moments compared to the KS method. In particular, the errors introduced by the mass inversion on the two-point correlation of the convergence maps are reduced by a factor of about 5, while the errors on the third- and fourth-order moments are reduced by factors of about 2 and 10, respectively.
Context.
Future weak lensing surveys, such as the Euclid mission, will attempt to measure the shapes of billions of galaxies in order to derive cosmological information. These surveys will attain very low levels of statistical error, and systematic errors must be extremely well controlled. In particular, the point spread function (PSF) must be estimated using stars in the field, and recovered with high accuracy.
Aims.
The aims of this paper are twofold. Firstly, we took steps toward a non-parametric method for recovering the PSF field, namely finding the correct PSF at the position of any galaxy in the field, applicable to Euclid. Our approach relies solely on the data, as opposed to parametric methods that make use of our knowledge of the instrument. Secondly, we studied the impact of imperfect PSF models on the shape measurement of the galaxies themselves, and whether common assumptions about this impact hold true in a Euclid scenario.
Methods.
We extended the recently proposed resolved components analysis approach, which performs super-resolution on a field of under-sampled observations of a spatially varying, image-valued function. We added a spatial interpolation component to the method, making it a true two-dimensional PSF model. We compared our approach to PSFEx, then quantified the impact of PSF recovery errors on galaxy shape measurements through image simulations.
Results.
Our approach yields an improvement over PSFEx in terms of both the PSF model and the observed galaxy shape errors, though it is at present far from reaching the required Euclid accuracy. We also find that the usual formalism for the propagation of PSF model errors to weak lensing quantities no longer holds in the case of a Euclid-like PSF. In particular, different shape measurement approaches can react differently to the same PSF modeling errors.
Purpose
Malignancy prediction in indeterminate thyroid nodules is still challenging. We prospectively evaluated whether the combination of ultrasound (US) risk stratification and molecular testing improves the assessment of malignancy risk in Bethesda Category IV thyroid nodules.
Methods
Ninety-one consecutively diagnosed Bethesda Category IV thyroid nodules were prospectively evaluated before surgery by both the ACR- and EU-TIRADS US risk-stratification systems and by a further US-guided fine-needle aspiration cytology (FNAC) for the following molecular testing: BRAFV600E, N-RAS codons 12/13, N-RAS codon 61, H-RAS codons 12/13, H-RAS codon 61, K-RAS codons 12/13, and K-RAS codon 61 point mutations, as well as PAX8/PPARγ, RET/PTC1, and RET/PTC3 rearrangements.
Results
At histology, 37% of the nodules were malignant. No significant association was found between malignancy and either EU- or ACR-TIRADS. In total, 58 somatic mutations were identified, including 3 BRAFV600E (5%), 5 N-RAS 12/13 (9%), 13 N-RAS 61 (22%), 7 H-RAS 12/13 (12%), 11 H-RAS 61 (19%), 6 K-RAS 12/13 (10%), and 8 K-RAS 61 (14%) mutations, and 2 RET/PTC1 (4%), 0 RET/PTC3 (0%), and 3 PAX8/PPARγ (5%) rearrangements. At least one somatic mutation was found in 28% of benign and 44% of malignant nodules, although malignancy was not statistically associated with the outcome of the mutational test. However, the combination of ACR-, but not EU-, TIRADS with the presence of at least one somatic mutation was significantly associated with malignant histology (P = 0.03).
Conclusion
US risk stratification and FNAC molecular testing may synergistically contribute to improving the malignancy risk estimate of Bethesda Category IV thyroid nodules.
ABSTRACT
The Euclid mission will observe well over a billion galaxies out to z ∼ 6 and beyond. This will offer an unrivalled opportunity to investigate several key questions for understanding galaxy formation and evolution. The first step for many of these studies will be the selection of a sample of quiescent and star-forming galaxies, as is often done in the literature using well-known colour techniques such as the 'UVJ' diagram. However, given the limited number of filters available to the Euclid telescope, the recovery of such rest-frame colours will be challenging. We therefore investigate the use of observed Euclid colours, on their own and together with ground-based u-band observations, for selecting quiescent and star-forming galaxies. The most efficient colour combination among those tested in this work consists of the (u − VIS) and (VIS − J) colours. We find that this combination allows us to select a sample of quiescent galaxies that is more than ∼70 per cent complete, with less than 15 per cent contamination, at redshifts in the range 0.75 < z < 1. For galaxies at higher redshift or without complementary u-band observations, the (VIS − Y) and (J − H) colours represent a valid alternative, with > 65 per cent completeness and contamination below 20 per cent at 1 < z < 2 for finding quiescent galaxies. In comparison, the sample of quiescent galaxies selected with the traditional UVJ technique is only ∼20 per cent complete at z < 3 when the rest-frame colours are recovered from mock Euclid observations. This shows that our new methodology is the most suitable one when only the Euclid bands, along with u-band imaging, are available.
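A two-colour selection of this kind amounts to a pair of cuts in colour-colour space. The sketch below illustrates the mechanics with placeholder thresholds; the cut values and the exact shape of the boundary are NOT the calibrated (u − VIS)/(VIS − J) selection derived in the paper.

```python
import numpy as np

def select_quiescent(u, vis, j, u_vis_cut=2.0, vis_j_cut=1.0):
    """Boolean mask for candidate quiescent galaxies: red in (u - VIS),
    comparatively blue in (VIS - J). Threshold values are hypothetical
    placeholders, not the paper's calibrated boundaries."""
    u_vis = u - vis
    vis_j = vis - j
    return (u_vis > u_vis_cut) & (vis_j < vis_j_cut)

# Toy magnitudes for three galaxies (invented numbers):
u   = np.array([24.5, 22.0, 25.1])
vis = np.array([22.0, 21.5, 22.6])
j   = np.array([21.5, 20.0, 22.1])
mask = select_quiescent(u, vis, j)  # galaxies 1 and 3 pass both cuts
```

Completeness and contamination of the resulting sample are then measured against a reference classification, which is how the per-cent figures quoted above are defined.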
Context.
The ESA Euclid space telescope could observe up to 150 000 asteroids as a side product of its primary cosmological mission. Asteroids appear as trailed sources, that is, streaks, in the images. Owing to the survey area of 15 000 square degrees and the number of sources, automated methods have to be used to find them. Euclid is equipped with a visible camera, VIS (VISual imager), and a near-infrared camera, NISP (Near-Infrared Spectrometer and Photometer), with three filters.
Aims.
We aim to develop a pipeline to detect fast-moving objects in Euclid images, with both high completeness and high purity.
Methods.
We tested the StreakDet software for finding asteroids in simulated Euclid images. We optimized the parameters of StreakDet to maximize completeness, and we developed a post-processing algorithm to improve the purity of the sample of detected sources by removing false-positive detections.
Results.
StreakDet finds 96.9% of the synthetic asteroid streaks with apparent magnitudes brighter than 23rd magnitude and streak lengths longer than 15 pixels (10 arcsec h⁻¹), but this comes at the cost of a high number of false positives. The number of false positives can be radically reduced with multi-streak analysis, which utilizes all four dithers obtained by Euclid.
Conclusions.
StreakDet is a good tool for identifying asteroids in Euclid images, but there is still room for improvement, in particular for finding short streaks (less than 13 pixels, corresponding to 8 arcsec h⁻¹) and/or faint streaks (fainter than apparent magnitude 23).
Euclid preparation. Desprez, G.; Paltani, S.; Alvarez-Ayllon, A. ...
Astronomy and astrophysics (Berlin), 12/2020, Volume 644
Journal Article | Peer reviewed | Open access
Forthcoming large photometric surveys for cosmology require precise and accurate photometric redshift (photo-z) measurements for the success of their main science objectives. However, to date, no method has been able to produce photo-zs at the required accuracy using only the broad-band photometry that those surveys will provide. An assessment of the strengths and weaknesses of current methods is a crucial step in the eventual development of an approach to meet this challenge. We report on the performance of 13 photometric redshift codes, in terms of both single-value redshift estimates and redshift probability distributions (PDZs), on a common set of data, focusing particularly on the 0.2 − 2.6 redshift range that the Euclid mission will probe. We designed a challenge using emulated Euclid data drawn from three photometric surveys of the COSMOS field. The data were divided into two samples: a calibration sample, for which both photometry and redshifts were provided to the participants, and a validation sample, containing only the photometry, to ensure a blinded test of the methods. Participants were invited to provide a single-value redshift estimate and a PDZ for each source in the validation sample, along with a rejection flag indicating the sources they consider unfit for use in cosmological analyses. The performance of each method was assessed through a set of informative metrics, using cross-matched spectroscopic and highly accurate photometric redshifts as the ground truth. We show that the rejection criteria set by the participants are efficient in removing strong outliers, that is to say, sources for which the photo-z deviates by more than 0.15(1 + z) from the spectroscopic redshift (spec-z). We also show that, while all methods are able to provide reliable single-value estimates, several machine-learning methods do not manage to produce useful PDZs.
We find that no machine-learning method provides good results in the regions of galaxy color-space that are sparsely populated by spectroscopic redshifts, for example at z > 1. However, they generally perform better than template-fitting methods at low redshift (z < 0.7), indicating that template-fitting methods do not use all of the information contained in the photometry. We introduce metrics that quantify both the photo-z precision and the completeness of the samples (post-rejection), since both contribute to the final figure of merit of the science goals of the survey (e.g., cosmic shear from Euclid). Template-fitting methods provide the best results in these metrics, but we show that a combination of template-fitting results and machine-learning results with rejection criteria can outperform any individual method. On this basis, we argue that further work on how best to select between machine-learning and template-fitting approaches for each individual galaxy should be pursued as a priority.
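The strong-outlier criterion quoted above, |z_phot − z_spec| > 0.15(1 + z_spec), translates directly into an outlier-fraction metric. A minimal sketch (variable names are ours, not the challenge's):

```python
import numpy as np

def outlier_fraction(z_phot, z_spec):
    """Fraction of sources whose photo-z deviates from the spec-z by
    more than 0.15 (1 + z_spec), the strong-outlier threshold."""
    dz = np.abs(z_phot - z_spec)
    return (dz > 0.15 * (1.0 + z_spec)).mean()

# Toy catalogue of four sources (invented values):
z_spec = np.array([0.30, 0.80, 1.50, 2.20])
z_phot = np.array([0.32, 0.78, 2.10, 2.25])
f = outlier_fraction(z_phot, z_spec)  # only the third source is an outlier
```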
Euclid preparation. Blanchard, A.; Camera, S.; Carbone, C. ...
Astronomy and astrophysics (Berlin), 10/2020, Volume 642
Journal Article | Peer reviewed | Open access
Aims. The Euclid space telescope will measure the shapes and redshifts of galaxies to reconstruct the expansion history of the Universe and the growth of cosmic structures. The estimation of the expected performance of the experiment, in terms of predicted constraints on cosmological parameters, has so far relied on various individual methodologies and numerical implementations, which were developed for different observational probes and for the combination thereof. In this paper we present validated forecasts, which combine both theoretical and observational ingredients for different cosmological probes. This work is presented to provide the community with reliable numerical codes and methods for Euclid cosmological forecasts. Methods. We describe in detail the methods adopted for the Fisher matrix forecasts, which were applied to galaxy clustering, weak lensing, and their combination. We estimated the required accuracy for Euclid forecasts and outline a methodology for their development. We then compared and improved different numerical implementations, reaching uncertainties on the errors of cosmological parameters that are less than the required precision in all cases. Furthermore, we provide details on the validated implementations, some of which are made publicly available in different programming languages, together with a reference training set of input and output matrices for a set of specific models. These can be used by the reader to validate their own implementations if required. Results. We present new cosmological forecasts for Euclid. We find that the results depend on the specific cosmological model and on the remaining freedom in each setting, for example flat or non-flat spatial cosmologies, or different cuts at non-linear scales. The numerical implementations are now reliable for these settings. We present results for both an optimistic and a pessimistic choice of these settings.
We demonstrate that the impact of cross-correlations is particularly relevant for models beyond a cosmological constant and may allow us to increase the dark energy figure of merit by at least a factor of three.