The cosmological surveys planned for the current decade will provide us with unparalleled observations of the distribution of galaxies on cosmic scales, with which we can probe the underlying large-scale structure (LSS) of the Universe. This will allow us to test the concordance cosmological model and its extensions. However, this precision demands a correspondingly high accuracy in the theoretical modelling of the LSS observables, so that no biases are introduced into the estimation of the cosmological parameters. In particular, effects such as redshift-space distortions (RSD) can become relevant in the computation of harmonic-space power spectra even for the clustering of photometrically selected galaxies, as has previously been shown in the literature. In this work, we investigate the contribution of linear RSD, as formulated in the Limber approximation by a previous work, in forecast cosmological analyses with the photometric galaxy sample of the survey. We aim to assess their impact and to quantify the bias in the measurement of cosmological parameters that would be caused if this effect were neglected. We performed this task by producing mock power spectra for photometric galaxy clustering and weak lensing, as expected to be obtained from the survey. We then used a Markov chain Monte Carlo approach to obtain the posterior distributions of cosmological parameters from these simulated observations. When linear RSD is neglected, significant biases arise both when galaxy correlations are used alone and when they are combined with cosmic shear in the so-called 3$\times$2pt approach. These biases can be as large as $5\sigma$ when an underlying $\Lambda$CDM cosmology is assumed. When the cosmological model is extended to include the equation-of-state parameters of dark energy, the extension parameters can be shifted by more than $1\sigma$.
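The parameter bias described above can be illustrated with a toy one-parameter fit. The sketch below is not the Euclid likelihood: all spectra, amplitudes, and error bars are hypothetical stand-ins. The mock "observed" spectrum contains a small additive RSD-like term; fitting it with a model that omits this term shifts the recovered amplitude by several times its statistical error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multipole range, clustering template, and RSD-like term.
ell = np.arange(10, 300)
tmpl = 1e-4 * (ell / 100.0) ** -1.5    # toy clustering spectrum template
rsd = 5e-6 * (ell / 100.0) ** -2.0     # toy linear-RSD contribution
sigma = 1e-4                           # assumed Gaussian error per multipole

A_true = 1.0
data = A_true * tmpl + rsd + rng.normal(0.0, sigma, ell.size)

def fit_amplitude(include_rsd: bool) -> float:
    """Least-squares template amplitude, with or without the RSD term."""
    resid = data - (rsd if include_rsd else 0.0)
    return float(np.sum(tmpl * resid) / np.sum(tmpl**2))

A_full = fit_amplitude(True)    # model includes RSD: unbiased estimate
A_norsd = fit_amplitude(False)  # RSD neglected: amplitude absorbs the extra power
sigma_A = sigma / np.sqrt(np.sum(tmpl**2))
bias_in_sigma = (A_norsd - A_full) / sigma_A
print(f"parameter shift from neglecting RSD: {bias_in_sigma:.1f} sigma")
```

Because the same noise realization enters both fits, the shift between the two estimates is deterministic: it is the projection of the neglected RSD term onto the template, in units of the statistical error.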
MADAM is a CMB map-making code designed to make temperature and polarization maps from the time-ordered data of total-power experiments such as Planck. The algorithm is based on the destriping technique, but it also makes use of known noise properties in the form of a noise prior. The method in its early form was presented by Keihänen et al. (2005, MNRAS, 360, 390). In this paper we present an update of the method, extended to non-averaged data and including polarization. In this method the baseline length is a freely adjustable parameter, and destriping can be performed at a different map resolution than that of the final maps. We show results obtained with simulated data. This study is related to Planck LFI activities.
The material composition of asteroids is an essential piece of knowledge in the quest to understand the formation and evolution of the Solar System. Visual to near-infrared spectra or multiband photometry is required to constrain the material composition of asteroids, but we currently have such data, especially at near-infrared wavelengths, for only a limited number of asteroids. This is a significant limitation considering the complex orbital structures of the asteroid populations. Up to 150 000 asteroids will be visible in the images of the upcoming ESA Euclid space telescope, and the instruments of Euclid will offer multiband visual to near-infrared photometry and slitless near-infrared spectra of these objects. Most of the asteroids will appear as streaks in the images. Due to the large number of images and asteroids, automated detection methods are needed. A non-machine-learning approach based on the StreakDet software was previously tested, but the results were not optimal for short and/or faint streaks. We set out to improve the capability to detect asteroid streaks in Euclid images by using deep learning. We built, trained, and tested a three-step machine-learning pipeline with simulated Euclid images. First, a convolutional neural network (CNN) detected streaks and their coordinates in full images, aiming to maximize the completeness (recall) of detections. Then, a recurrent neural network (RNN) merged snippets of long streaks detected in several parts by the CNN. Lastly, gradient-boosted trees (XGBoost) linked detected streaks between different Euclid exposures to reduce the number of false positives and improve the purity (precision) of the sample. The deep-learning pipeline surpasses the completeness of, and reaches a similar level of purity as, a non-machine-learning pipeline based on the StreakDet software. Additionally, the deep-learning pipeline can detect asteroids 0.25–0.5 magnitudes fainter than StreakDet. The deep-learning pipeline could result in a 50% increase in the number of detected asteroids compared to the StreakDet software. There is still scope for further refinement, particularly in improving the accuracy of streak coordinates and enhancing the completeness of the final stage of the pipeline, which involves linking detections across multiple exposures.
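The data flow of the last two pipeline stages can be sketched without any ML libraries. Below, plain geometric rules stand in for the RNN (merging snippets of one long streak) and for the XGBoost classifier (linking streaks across exposures); all thresholds and coordinates are hypothetical, chosen only to illustrate the structure.

```python
from dataclasses import dataclass

@dataclass
class Snippet:
    exposure: int
    x: float
    y: float
    angle: float  # streak orientation in degrees

# Stand-in for the RNN stage: merge snippets that are nearby and aligned,
# i.e. fragments of one long streak detected in pieces.
def merge_snippets(snippets, max_gap=60.0, max_dangle=5.0):
    groups, used = [], set()
    for i, a in enumerate(snippets):
        if i in used:
            continue
        group = [a]
        used.add(i)
        for j, b in enumerate(snippets):
            if j in used:
                continue
            close = ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5 < max_gap
            aligned = abs(a.angle - b.angle) < max_dangle
            if close and aligned:
                group.append(b)
                used.add(j)
        groups.append(group)
    return groups

def centroid(group):
    n = len(group)
    return (sum(s.x for s in group) / n, sum(s.y for s in group) / n)

# Stand-in for the XGBoost stage: keep only streaks whose centroid has a
# counterpart in the other exposure within the expected apparent motion,
# which removes isolated false positives.
def link_exposures(groups_a, groups_b, tol=40.0):
    linked = []
    for ga in groups_a:
        xa, ya = centroid(ga)
        for gb in groups_b:
            xb, yb = centroid(gb)
            if ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5 < tol:
                linked.append((ga, gb))
    return linked
```

A single-exposure detection with no counterpart is dropped by `link_exposures`, which is exactly how the real classifier stage trades a little completeness for much higher purity.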
Planck intermediate results — Aghanim, N.; Ashdown, M.; Aumont, J.; et al.
Astronomy & Astrophysics, 03/2017, Volume 599
Journal article · Peer-reviewed · Open access
The characterization of the Galactic foregrounds has been shown to be the main obstacle in the challenging quest to detect primordial B-modes in the polarized microwave sky. We make use of the Planck-HFI 2015 data release at high frequencies to place new constraints on the properties of the polarized thermal dust emission at high Galactic latitudes. Here, we specifically study the spatial variability of the dust polarized spectral energy distribution (SED) and its potential impact on the determination of the tensor-to-scalar ratio, r. We use the correlation ratio of the $C_\ell^{BB}$ angular power spectra between the 217 and 353 GHz channels as a tracer of these potential variations, computed on different high-Galactic-latitude regions, ranging from 80% to 20% of the sky. The new insight from the Planck data is a departure of the correlation ratio from unity that cannot be attributed to a spurious decorrelation due to the cosmic microwave background, instrumental noise, or instrumental systematics. The effect is marginally detected in each region, but the statistical combination of all the regions gives more than 99% confidence for this variation in polarized dust properties. In addition, we show that the decorrelation increases as the mean column density of the region of the sky being considered decreases, and we propose a simple power-law empirical model for this dependence that matches what is seen in the Planck data. We explore the effect that this measured decorrelation has on simulations of the BICEP2-Keck Array/Planck analysis and show that the 2015 constraints from these data still allow a decorrelation between the dust at 150 and 353 GHz that is compatible with our measured value. Finally, using simplified models, we show that either spatial variation of the dust SED or of the dust polarization angle is able to produce decorrelations between the 217 and 353 GHz data similar to the values we observe in the data.
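The correlation-ratio diagnostic used here, $R_\ell = C_\ell^{217\times353} / \sqrt{C_\ell^{217} C_\ell^{353}}$, can be illustrated with a toy calculation. The sketch below uses hypothetical one-dimensional "maps" rather than Planck data: two frequency channels share a common dust template with true correlation ρ < 1, and the estimated ratio falls below unity accordingly.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical decorrelation: the 217 GHz "map" is only partially
# correlated with the 353 GHz "map" (rho < 1 mimics SED variation).
rho = 0.95
N = 200_000
common = rng.standard_normal(N)   # shared dust template
indep = rng.standard_normal(N)    # frequency-specific component

m353 = common
m217 = rho * common + np.sqrt(1.0 - rho**2) * indep

cross = np.mean(m217 * m353)
auto217 = np.mean(m217**2)
auto353 = np.mean(m353**2)
R = cross / np.sqrt(auto217 * auto353)
print(f"correlation ratio: {R:.4f}")  # below unity -> decorrelation
```

The same estimator applied to perfectly correlated maps (ρ = 1) returns unity up to noise, which is why a significant departure of the measured ratio from one is evidence for spatial variation of the dust SED or polarization angle.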
I review standard big bang nucleosynthesis (BBN) and some versions of nonstandard BBN. The abundances of the primordial isotopes D, He-3, and Li-7 produced in standard BBN can be calculated as a function of the baryon density with an accuracy of about 10%. For He-4 the accuracy is better than 1%. The calculated abundances agree fairly well with observations, but the baryon density of the universe cannot be determined with high precision. Possibilities for nonstandard BBN include inhomogeneous and antimatter BBN and nonzero neutrino chemical potentials.
The Euclid satellite, to be launched by ESA in 2022, will be a major instrument for cosmology in the coming decades. Euclid is composed of two instruments: the Visible instrument and the Near Infrared Spectrometer and Photometer (NISP). In this work, we estimate the implications of correlated readout noise in the NISP detectors for the final in-flight flux measurements. Considering the multiple accumulated readout mode, in which the up-the-ramp (UTR) exposure frames are averaged in groups, we derive an analytical expression for the noise covariance matrix between groups in the presence of correlated noise. We also characterize the correlated readout noise properties of the NISP engineering-grade detectors using long dark integrations. For this purpose, we assume a \(1/f^{\alpha}\)-like noise model and fit the model parameters to the data, obtaining typical values of \(\sigma = 19.7^{+1.1}_{-0.8}\,\mathrm{e^{-}\,Hz^{-0.5}}\), \(f_{\mathrm{knee}} = (5.2^{+1.8}_{-1.3}) \times 10^{-3}\,\mathrm{Hz}\), and \(\alpha = 1.24^{+0.26}_{-0.21}\). Furthermore, via realistic simulations and using a maximum-likelihood flux estimator, we derive the bias between the input flux and the recovered one. We find that using our analytical expression for the covariance matrix of the correlated readout noise, we diminish this bias by up to a factor of four with respect to the white-noise approximation for the covariance matrix. Finally, we conclude that the final bias on the in-flight NISP flux measurements should still be negligible even in the white readout noise approximation, which is taken as the baseline for the Euclid on-board processing to estimate the on-sky flux.
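The fitted noise model can be evaluated directly. The sketch below assumes one common parameterization of a \(1/f^{\alpha}\) spectrum, \(P(f) = \sigma^2 \left[1 + (f_{\mathrm{knee}}/f)^{\alpha}\right]\); the paper's exact convention may differ, but the best-fit values are the ones quoted in the abstract.

```python
import numpy as np

# Best-fit values from the abstract; the parameterization below is a
# common convention and is an assumption, not the paper's exact form.
sigma = 19.7      # white-noise level, e- Hz^-0.5
f_knee = 5.2e-3   # knee frequency, Hz
alpha = 1.24      # low-frequency slope

def psd(f):
    """Noise power spectral density P(f) = sigma^2 * (1 + (f_knee/f)^alpha)."""
    f = np.asarray(f, dtype=float)
    return sigma**2 * (1.0 + (f_knee / f) ** alpha)

# At f = f_knee the correlated and white components are equal, so the
# PSD is exactly twice the white-noise level; well above f_knee it
# flattens to sigma^2, and below f_knee the 1/f^alpha term dominates.
print(psd(np.array([1e-4, f_knee, 1.0])))
```

With a knee of a few mHz, a typical science exposure of tens of seconds samples frequencies around and above \(f_{\mathrm{knee}}\), which is why group-to-group correlations, rather than raw low-frequency drifts, drive the covariance-matrix correction.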
In the original version, the bounds given in Eqs. (87a) and (87b) on the contribution to the early-time optical depth, τ(15, 30), contained a numerical error in deriving the 95th percentile from the Monte Carlo samples. The corrected 95% upper bounds are

τ(15, 30) < 0.018 (lowE, flat τ(15, 30), FlexKnot), (1)

τ(15, 30) < 0.023 (lowE, flat knot, FlexKnot). (2)

These bounds are a factor of ∼3 larger than the originally reported results. Consequently, the new bounds do not significantly improve upon previous results from Planck data presented in Millea & Bouchet (2018), as was stated, but are instead comparable. Equations (1) and (2) give results that are now similar to those of Heinrich & Hu (2021), who used the same Planck 2018 data to derive a 95% upper bound of 0.020 using the principal component analysis (PCA) model and uniform priors on the PCA mode amplitudes.
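The corrected quantity is a one-sided bound: the 95th percentile of the Monte Carlo posterior samples. The sketch below illustrates the operation on a hypothetical half-Gaussian chain, not the actual Planck samples.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for a tau(15, 30) posterior chain: a positive
# parameter with a half-Gaussian distribution of scale 0.01.
samples = np.abs(rng.normal(0.0, 0.01, size=200_000))

# One-sided 95% upper bound = 95th percentile of the chain.
upper95 = np.percentile(samples, 95.0)
print(f"95% upper bound: {upper95:.4f}")
```

For a half-Gaussian of scale 0.01 this percentile sits near 0.0196; the erratum's point is that computing this percentile incorrectly understated the bound by a factor of about three.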
We study destriping as a map-making method for temperature and polarization data from cosmic microwave background observations. We present a particular implementation of destriping and study the residual error in the output maps, using simulated data corresponding to the 70 GHz channel of the Planck satellite, but assuming idealized detector and beam properties. The relevant residual map is the difference between the output map and a binned map obtained from the signal + white noise part of the data stream. For destriping it can be divided into six components: unmodeled correlated noise, white-noise reference baselines, reference baselines of the pixelization noise from the signal, and baseline errors from correlated noise, white noise, and signal. These six components contribute differently at different angular scales in the maps. We derive analytical results for the first three components. This study is related to Planck LFI activities.
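The core destriping idea, fitting one offset per baseline chunk jointly with the sky map, can be shown in a minimal form. The sketch below is far simpler than the implementation studied here (offsets only, no noise prior, toy pointing), and solves the joint least-squares problem by alternating between map binning and baseline estimation.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy time-ordered data: TOD = sky(pixel) + one offset per chunk + white noise.
npix, nchunk, chunklen = 20, 40, 50
pix = rng.integers(0, npix, nchunk * chunklen)   # hypothetical pointing
sky = rng.standard_normal(npix)                  # input map
offsets = rng.standard_normal(nchunk)            # correlated-noise baselines
tod = (sky[pix] + np.repeat(offsets, chunklen)
       + rng.normal(0.0, 0.1, nchunk * chunklen))

# Alternating least squares: bin a map from baseline-subtracted data,
# then re-estimate each chunk's offset from the map-subtracted residual.
base = np.zeros(nchunk)
hits = np.bincount(pix, minlength=npix)
for _ in range(50):
    clean = tod - np.repeat(base, chunklen)
    m = np.bincount(pix, weights=clean, minlength=npix) / hits
    resid = tod - m[pix]
    base = resid.reshape(nchunk, chunklen).mean(axis=1)

# The absolute zero level is degenerate between map and baselines,
# so compare input and output after removing the mean.
err = (m - sky) - np.mean(m - sky)
print(f"destriped map residual std: {float(np.std(err)):.3f}")
```

The residual that survives this procedure is exactly what the abstract decomposes: baseline errors sourced by white noise and signal remain even when the baseline model is otherwise correct.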
Euclid preparation — Ilbert, O.; de la Torre, S.; Martinet, N.; et al.
Astronomy & Astrophysics, 03/2021, Volume 647
Journal article · Peer-reviewed · Open access
The analysis of weak gravitational lensing in wide-field imaging surveys is considered a major cosmological probe of dark energy. Our capacity to constrain the dark energy equation of state relies on an accurate knowledge of the galaxy mean redshift ⟨z⟩. We investigate the possibility of measuring ⟨z⟩ with an accuracy better than 0.002(1 + z) in ten tomographic bins spanning the redshift interval 0.2 < z < 2.2, as required for the cosmic shear analysis of Euclid. We implement a sufficiently realistic simulation in order to understand the advantages and complementarity, as well as the shortcomings, of two standard approaches: the direct calibration of ⟨z⟩ with a dedicated spectroscopic sample, and the combination of the photometric redshift probability distribution functions (zPDFs) of individual galaxies. We base our study on the Horizon-AGN hydrodynamical simulation, which we analyse with a standard galaxy spectral energy distribution template-fitting code. Such a procedure produces photometric redshifts with realistic biases, precisions, and failure rates. We find that the current Euclid design for direct calibration is sufficiently robust to reach the requirement on the mean redshift, provided that the purity of the spectroscopic sample is maintained at an extremely high level of > 99.8%. The zPDF approach can also be successful if the zPDFs are de-biased using a spectroscopic training sample. This approach requires deep imaging data but is weakly sensitive to spectroscopic redshift failures in the training sample. We improve the de-biasing method and confirm our findings by applying it to real-world weak-lensing datasets (COSMOS and KiDS+VIKING-450).
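The zPDF de-biasing idea can be sketched in a few lines. The toy below is much simpler than the paper's method: every zPDF is a Gaussian carrying a hypothetical constant systematic offset, the raw ⟨z⟩ is read off the stacked zPDF, and a small spec-z training subsample measures and removes the offset.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical setup: a tomographic-bin-like sample with a constant
# photo-z systematic; all numbers are illustrative, not Euclid values.
zgrid = np.linspace(0.0, 3.0, 601)
dz = zgrid[1] - zgrid[0]
photoz_bias = 0.01          # systematic offset baked into every zPDF

def zpdf(z_true):
    """Gaussian zPDF centred off the true redshift by the systematic."""
    p = np.exp(-0.5 * ((zgrid - z_true - photoz_bias) / 0.05) ** 2)
    return p / (p.sum() * dz)

def pdf_mean(p):
    return (zgrid * p).sum() * dz

z_true = rng.uniform(0.4, 0.6, 2000)
stack = np.mean([zpdf(z) for z in z_true], axis=0)
zmean_raw = pdf_mean(stack)           # biased by photoz_bias

# De-bias: measure the mean offset on a spec-z training subsample.
train = z_true[:200]
offset = np.mean([pdf_mean(zpdf(z)) - z for z in train])
zmean_debiased = zmean_raw - offset

print(zmean_raw - z_true.mean(), zmean_debiased - z_true.mean())
```

The raw stacked estimate misses ⟨z⟩ by the full systematic, well outside a 0.002(1 + z) budget at these redshifts, while the calibrated estimate recovers it; the paper's contribution is making this calibration robust to realistic zPDF shapes and training-sample failures.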