The two-point correlation function of the galaxy distribution is a key cosmological observable that allows us to constrain the dynamical and geometrical state of our Universe. To measure the correlation function we need to know both the galaxy positions and the expected galaxy density field. The expected field is commonly specified using a Monte-Carlo sampling of the volume covered by the survey and, to minimize additional sampling errors, this random catalog has to be much larger than the data catalog. Correlation function estimators compare data–data pair counts to data–random and random–random pair counts, where random–random pairs usually dominate the computational cost. Future redshift surveys will deliver spectroscopic catalogs of tens of millions of galaxies. Given the large number of random objects required to guarantee sub-percent accuracy, it is of paramount importance to improve the efficiency of the algorithm without degrading its precision. We show both analytically and numerically that splitting the random catalog into a number of subcatalogs of the same size as the data catalog when calculating random–random pairs and excluding pairs across different subcatalogs provides the optimal error at fixed computational cost. For a random catalog fifty times larger than the data catalog, this reduces the computation time by a factor of more than ten without affecting estimator variance or bias.
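The splitting scheme described above can be sketched in a few lines. This is an illustrative brute-force toy, not the authors' implementation; `pair_count`, `rr_split`, and the single separation threshold are simplifying assumptions (a real estimator bins pairs by separation and uses tree-based counting):

```python
import numpy as np

def pair_count(a, b=None, r_max=0.1):
    """Brute-force count of pairs separated by less than r_max.
    Counts distinct pairs within catalog a, or cross pairs between a and b."""
    if b is None:
        d = np.linalg.norm(a[:, None, :] - a[None, :, :], axis=-1)
        return int(np.triu(d < r_max, k=1).sum())  # upper triangle: each pair once
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return int((d < r_max).sum())

def rr_split(randoms, n_split, r_max=0.1):
    """RR estimate from a random catalog split into n_split subcatalogs:
    count only pairs *within* each subcatalog (never across subcatalogs),
    then rescale to the number of distinct pairs in the full catalog."""
    chunks = np.array_split(randoms, n_split)
    rr_within = sum(pair_count(c, r_max=r_max) for c in chunks)
    n = len(randoms)
    # distinct pairs in the full catalog vs. pairs actually examined
    counted = sum(len(c) * (len(c) - 1) for c in chunks)
    return rr_within * (n * (n - 1)) / counted
```

Counting only within-subcatalog pairs reduces the number of RR pairs examined by roughly the split factor (for a random catalog fifty times the data size, about fifty times fewer pairs), which is the source of the quoted speed-up; the final rescaling restores the normalization of the full catalog.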
The COnstrain Dark Energy with X-ray clusters (CODEX) sample contains the largest flux-limited sample of X-ray clusters at 0.35 < z < 0.65. It was selected from ROSAT data in the 10 000 square degrees of overlap with BOSS, mapping a total of 2770 high-z galaxy clusters. We present here the full results of the CFHT CODEX programme on cluster mass measurement, including a reanalysis of CFHTLS Wide data, with 25 individual lensing-constrained cluster masses. We employ lensfit shape measurement and perform a conservative colour-space selection and weighting of background galaxies. Using the combination of shape noise and an analytic covariance for intrinsic variations of cluster profiles at fixed mass due to large-scale structure, miscentring, and variations in concentration and ellipticity, we determine the likelihood of the observed shear signal as a function of true mass for each cluster. We combine the 25 individual cluster mass likelihoods in a Bayesian hierarchical scheme with the inclusion of optical and X-ray selection functions to derive constraints on the slope α, normalization β, and scatter $\sigma_{\ln\lambda|\mu}$ of our richness–mass scaling relation model in log-space: $\langle \ln \lambda \mid \mu \rangle = \alpha\mu + \beta$, with $\mu = \ln(M_{200\mathrm{c}}/M_\mathrm{piv})$ and $M_\mathrm{piv} = 10^{14.81}\,\mathrm{M}_\odot$. We find a slope $\alpha = 0.49^{+0.20}_{-0.15}$, normalization $\exp(\beta) = 84.0^{+9.2}_{-14.8}$, and scatter $\sigma_{\ln \lambda | \mu} = 0.17^{+0.13}_{-0.09}$ using CFHT richness estimates. In comparison to other weak-lensing richness–mass relations, we find that our normalization statistically agrees with the normalizations of scaling relations derived over a broad redshift range (0.0 < z < 0.65) and with different cluster selections (X-ray, Sunyaev–Zel'dovich, and optical).
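As a quick numerical reading of this scaling relation (a sketch using only the central values quoted above; `median_richness` is a name chosen here for illustration, not from the paper):

```python
import math

# Central values quoted in the abstract (uncertainties omitted)
ALPHA = 0.49            # slope alpha
BETA = math.log(84.0)   # normalization beta, since exp(beta) = 84.0
M_PIV = 10 ** 14.81     # pivot mass M_piv in solar masses

def median_richness(m200c):
    """Richness implied by <ln lambda | mu> = alpha * mu + beta at mass
    m200c (solar masses), with mu = ln(M200c / M_piv).  Exponentiating a
    mean-in-log gives the *median* richness under lognormal scatter."""
    mu = math.log(m200c / M_PIV)
    return math.exp(ALPHA * mu + BETA)
```

At the pivot mass, μ = 0 and the relation returns exp(β) = 84.0; doubling the mass multiplies the richness by 2^0.49 ≈ 1.40, a direct reading of the slope.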
The scientific performance of the Planck Low Frequency Instrument (LFI) after one year of in-orbit operation is presented. We describe the main optical parameters and discuss photometric calibration, white noise sensitivity, and noise properties. A preliminary evaluation of the impact of the main systematic effects is presented. For each of the performance parameters, we outline the methods used to obtain them from the flight data and provide a comparison with pre-launch ground assessments, which are essentially confirmed in flight.
Euclid preparation. Ilbert, O.; de la Torre, S.; Wright, A. H.; et al.
Astronomy and Astrophysics (Berlin), 03/2021, Volume 647. Journal article, peer-reviewed, open access.
The analysis of weak gravitational lensing in wide-field imaging surveys is considered to be a major cosmological probe of dark energy. Our capacity to constrain the dark energy equation of state relies on an accurate knowledge of the galaxy mean redshift ⟨z⟩. We investigate the possibility of measuring ⟨z⟩ with an accuracy better than 0.002(1 + z) in ten tomographic bins spanning the redshift interval 0.2 < z < 2.2, the requirements for the cosmic shear analysis of Euclid. We implement a sufficiently realistic simulation in order to understand the advantages and complementarity, as well as the shortcomings, of two standard approaches: the direct calibration of ⟨z⟩ with a dedicated spectroscopic sample and the combination of the photometric redshift probability distribution functions (zPDFs) of individual galaxies. We base our study on the Horizon-AGN hydrodynamical simulation, which we analyse with a standard galaxy spectral energy distribution template-fitting code. Such a procedure produces photometric redshifts with realistic biases, precisions, and failure rates. We find that the current Euclid design for direct calibration is sufficiently robust to reach the requirement on the mean redshift, provided that the purity level of the spectroscopic sample is maintained at an extremely high level of > 99.8%. The zPDF approach can also be successful if the zPDF is de-biased using a spectroscopic training sample. This approach requires deep imaging data but is weakly sensitive to spectroscopic redshift failures in the training sample. We improve the de-biasing method and confirm our finding by applying it to real-world weak-lensing datasets (COSMOS and KiDS+VIKING-450).
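The stacked-zPDF route to ⟨z⟩ and the tolerance check it must pass can be illustrated with a toy sketch. The Gaussian per-galaxy PDFs and the function names below are assumptions for illustration only; this is not the de-biasing method of the paper:

```python
import numpy as np

def mean_z_from_pdfs(z_grid, pdfs):
    """Mean redshift of a tomographic bin from stacked per-galaxy zPDFs.
    z_grid: uniform redshift grid; pdfs: (n_gal, n_z) array, one PDF per row
    (rows need not be normalized if they all share the same norm)."""
    dz = z_grid[1] - z_grid[0]
    stacked = pdfs.sum(axis=0)
    stacked /= stacked.sum() * dz          # normalize the stacked PDF
    return float((z_grid * stacked).sum() * dz)

def meets_requirement(z_est, z_true, tol=2e-3):
    """Euclid-style requirement: |<z>_est - <z>_true| < tol * (1 + <z>_true)."""
    return abs(z_est - z_true) < tol * (1.0 + z_true)
```

With unbiased, symmetric zPDFs the stacked mean recovers the true bin mean; the point of the de-biasing step in the text is that real template-fitting zPDFs are not unbiased, so a spectroscopic training sample is needed to correct them before this estimate meets the 0.002(1 + z) tolerance.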