A buyer’s guide to the Hubble constant
Shah, Paul; Lemos, Pablo; Lahav, Ofer
The Astronomy and Astrophysics Review, 12/2021, Volume 29, Issue 1
Journal Article · Peer-reviewed · Open access
Since the expansion of the universe was first established by Edwin Hubble and Georges Lemaître about a century ago, the Hubble constant $H_0$, which measures its rate, has been of great interest to astronomers. Besides being interesting in its own right, few properties of the universe can be deduced without it. In the last decade, a significant gap has emerged between different methods of measuring it, some anchored in the nearby universe, others at cosmological distances. The SH0ES team has found $H_0 = 73.2 \pm 1.3~{\rm km\, s^{-1}\, Mpc^{-1}}$ locally, whereas the value found for the early universe by the Planck Collaboration is $H_0 = 67.4 \pm 0.5~{\rm km\, s^{-1}\, Mpc^{-1}}$ from measurements of the cosmic microwave background. Is this gap a sign that the well-established $\Lambda$CDM cosmological model is somehow incomplete? Or are there unknown systematics? And more practically, how should humble astronomers pick between competing claims if they need to assume a value for a certain purpose? In this article, we review the results and what changes to the cosmological model could be needed to accommodate them all. For astronomers in a hurry, we provide a buyer’s guide to the results and make recommendations.
ABSTRACT
In this work, we investigate the systematic uncertainties that arise from the calculation of the peculiar velocity when estimating the Hubble constant ($H_0$) from gravitational wave standard sirens. We study the GW170817 event and the estimation of the peculiar velocity of its host galaxy, NGC 4993, when using Gaussian smoothing over nearby galaxies. As a relatively nearby galaxy, at ∼40 Mpc, NGC 4993 is subject to a significant peculiar-velocity effect. We demonstrate a direct dependence of the estimated peculiar velocity value on the choice of smoothing scale. We show that when this systematic is not accounted for, a bias of ${\sim }200~{\rm km\, s^{-1}}$ in the peculiar velocity incurs a bias of ${\sim }4~{\rm km\, s^{-1}\, Mpc^{-1}}$ on the Hubble constant. We formulate a Bayesian model that accounts for the dependence of the peculiar velocity on the smoothing scale, and by marginalizing over this parameter we remove the need to choose a smoothing scale. The proposed model yields $H_0 = 68.6 ^{+14.0} _{-8.5}~{\rm km\, s^{-1}\, Mpc^{-1}}$. We demonstrate that this model gives a more robust, unbiased estimate of the Hubble constant from nearby GW sources.
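The scale of the quoted bias can be sketched from the leading-order relation $H_0 = (cz_{\rm obs} - v_{\rm pec})/d$: an error $\delta v$ in the peculiar velocity shifts $H_0$ by $\delta v / d$. A minimal illustration (all numbers other than the ~40 Mpc distance are hypothetical):

```python
# Leading-order Hubble law with a peculiar-velocity correction:
# H0 = (cz_obs - v_pec) / d, so a shift dv in v_pec moves H0 by dv / d.
d_mpc = 40.0                              # approximate distance to NGC 4993
cz_obs = 3017.0                           # hypothetical observed recession velocity, km/s
v_pec_small, v_pec_large = 310.0, 510.0   # hypothetical smoothing-scale-dependent estimates

h0_small = (cz_obs - v_pec_small) / d_mpc
h0_large = (cz_obs - v_pec_large) / d_mpc
print(h0_small - h0_large)  # a 200 km/s shift in v_pec moves H0 by 5 km/s/Mpc here
```

At ∼40 Mpc a 200 km/s peculiar-velocity shift translates to a few km/s/Mpc on $H_0$, consistent in magnitude with the abstract's quoted figure.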
ABSTRACT
We present the first reconstruction of dark matter maps from weak lensing observational data using deep learning. We train a convolutional neural network with a U-Net-based architecture on over $3.6 \times 10^5$ simulated data realizations with non-Gaussian shape noise and with cosmological parameters varying over a broad prior distribution. We interpret our newly created Dark Energy Survey Science Verification (DES SV) map as an approximation of the posterior mean of $P(\kappa|\gamma)$, the probability of the convergence given the observed shear. Our DeepMass method is substantially more accurate than existing mass-mapping methods. With a validation set of 8000 simulated DES SV data realizations, the DeepMass method improved the mean square error (MSE) by 11 per cent compared to Wiener filtering with a fixed power spectrum. With N-body simulated MICE mock data, we show that Wiener filtering with the optimal known power spectrum still gives a worse MSE than our generalized method with no input cosmological parameters; we show that the improvement is driven by the non-linear structures in the convergence. With higher galaxy density in future weak lensing data unveiling more non-linear scales, it is likely that deep learning will be a leading approach for mass mapping with Euclid and LSST.
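The quoted 11 per cent figure refers to the standard mean-square-error metric between a reconstructed convergence map and the truth. A minimal sketch of that comparison, with toy pixel values rather than DES data:

```python
def mse(recon, truth):
    """Mean square error between two flattened maps (lists of pixel values)."""
    n = len(truth)
    return sum((r - t) ** 2 for r, t in zip(recon, truth)) / n

# Toy pixel values standing in for true and reconstructed convergence maps.
truth = [0.1, -0.2, 0.05, 0.3]
wiener = [0.0, -0.1, 0.0, 0.2]    # hypothetical Wiener-filtered reconstruction
deep = [0.08, -0.17, 0.03, 0.27]  # hypothetical deep-learning reconstruction

# Fractional MSE improvement of one method over the other.
improvement = 1.0 - mse(deep, truth) / mse(wiener, truth)
print(f"MSE improvement: {improvement:.0%}")
```

The toy numbers are invented; the point is only the definition of the metric being compared.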
ABSTRACT
We propose a principled Bayesian method for quantifying tension between correlated data sets with wide uninformative parameter priors. This is achieved by extending the Suspiciousness statistic, which is insensitive to priors. Our method uses global summary statistics, and as such it can be used as a diagnostic for internal consistency. We show how our approach can be combined with methods that use parameter space and data space to identify existing internal discrepancies. As an example, we use it to test the internal consistency of the KiDS-450 data in four photometric redshift bins, and to recover controlled internal discrepancies in simulated KiDS data. We propose this as a diagnostic of internal consistency for present and future cosmological surveys, and as a tension metric for data sets that have non-negligible correlation, such as the Large Synoptic Survey Telescope (LSST) and Euclid.
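A sketch of how a Suspiciousness value is commonly turned into a tension probability, assuming the calibration in which $d - 2\log S$ follows a $\chi^2$ distribution with $d$ degrees of freedom ($d$ being the shared Bayesian model dimensionality); both numbers below are hypothetical:

```python
import math

def chi2_sf(x, d):
    """Survival function of a chi-squared distribution for even integer d."""
    assert d % 2 == 0 and d > 0
    term, total = 1.0, 1.0
    for k in range(1, d // 2):
        term *= (x / 2) / k
        total += term
    return math.exp(-x / 2) * total

log_S = -3.0   # hypothetical log-Suspiciousness (negative = data sets pull apart)
d = 4          # hypothetical shared model dimensionality
p = chi2_sf(d - 2 * log_S, d)  # probability of tension at least this large
print(f"p = {p:.4f}")
```

A small $p$ (here ~0.04, roughly 2 sigma) flags tension; the closed-form survival function above only holds for even integer $d$, which keeps the sketch dependency-free.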
We investigate the impact of prior models on the upper bound of the sum of neutrino masses, $\sum m_\nu$. Using data from the large-scale structure of galaxies, the cosmic microwave background, Type Ia supernovae, and big bang nucleosynthesis, we argue that cosmological neutrino mass and hierarchy determination should be pursued using exact models, since approximations might lead to incorrect and nonphysical bounds. We compare constraints from physically motivated neutrino mass models (i.e., ones respecting oscillation experiments) to those from models using standard cosmological approximations. The former give a consistent upper bound of $\sum m_\nu \lesssim 0.26$ eV (95% CI) and yield the first approximation-independent upper bound for the lightest neutrino mass species, $m_0^\nu < 0.086$ eV (95% CI). By contrast, one of the approximations, which is inconsistent with the known lower bounds from oscillation experiments, yields an upper bound of $\sum m_\nu \lesssim 0.15$ eV (95% CI); this differs substantially from the physically motivated upper bound.
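For context on the oscillation lower bounds mentioned above (not from the abstract itself): the measured mass-squared splittings imply a minimum $\sum m_\nu$. A sketch for the normal hierarchy with a massless lightest state, using approximate splitting values:

```python
import math

# Approximate mass-squared splittings from oscillation experiments (eV^2);
# exact values depend on the global fit used.
dm21_sq = 7.5e-5   # solar splitting
dm31_sq = 2.5e-3   # atmospheric splitting (normal hierarchy)

m1 = 0.0                   # lightest state assumed massless
m2 = math.sqrt(dm21_sq)    # next state set by the solar splitting
m3 = math.sqrt(dm31_sq)    # heaviest state set by the atmospheric splitting
sum_mnu = m1 + m2 + m3
print(f"minimum sum ~ {sum_mnu:.3f} eV")
```

This minimum of roughly 0.06 eV is why cosmological bounds approaching ~0.1 eV begin to probe, and can conflict with, the allowed oscillation parameter space.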
We present the results of the 2MASS Redshift Survey (2MRS), a ten-year project to map the full three-dimensional distribution of galaxies in the nearby universe. The Two Micron All Sky Survey (2MASS) was completed in 2003 and its final data products, including an extended source catalog (XSC), are available online. The 2MASS XSC contains nearly a million galaxies with $K_s \le 13.5$ mag and is essentially complete, and mostly unaffected by interstellar extinction and stellar confusion, down to a galactic latitude of $|b| = 5^\circ$ for bright galaxies. Near-infrared wavelengths are sensitive to the old stellar populations that dominate galaxy masses, making 2MASS an excellent starting point for studying the distribution of matter in the nearby universe. We selected a sample of 44,599 2MASS galaxies with $K_s \le 11.75$ mag and $|b| \ge 5^\circ$ ($\ge 8^\circ$ toward the Galactic bulge) as the input catalog for our survey. We obtained spectroscopic observations for 11,000 galaxies and used previously obtained velocities for the remainder of the sample to generate a redshift catalog that is 97.6% complete to well-defined limits and covers 91% of the sky. This provides an unprecedented census of galaxy (baryonic mass) concentrations within 300 Mpc. Earlier versions of our survey have been used in a number of publications that have studied the bulk motion of the Local Group, mapped the density and peculiar velocity fields out to $50\,h^{-1}$ Mpc, detected galaxy groups, and estimated the values of several cosmological parameters. Additionally, we present morphological types for a nearly complete sub-sample of 20,860 galaxies with $K_s \le 11.25$ mag and $|b| \ge 10^\circ$.
We present a new limit of $\sum m_\nu \le 0.28$ eV (95% CL) on the sum of the neutrino masses, assuming a flat ΛCDM cosmology. This relaxes slightly to $\sum m_\nu \le 0.34$ eV and $\sum m_\nu \le 0.47$ eV when quasi-nonlinear scales are removed and when $w \neq -1$, respectively. These limits are derived from a new photometric catalogue of over 700,000 luminous red galaxies (MegaZ DR7) with a volume of $3.3\,({\rm Gpc}\,h^{-1})^3$ and redshift range $0.45 < z < 0.65$. The data are combined with WMAP 5-year CMB, baryon acoustic oscillation, and supernova data, and a Hubble Space Telescope prior on $h$. When combined with WMAP, these data are as constraining as adding all available supernova and baryon oscillation data. The upper limit is one of the tightest constraints on the neutrino mass from cosmology or particle physics. Further, if these bounds hold, they predict that current-to-next-generation neutrino experiments, such as KATRIN, are unlikely to obtain a detection.
We introduce ANNz, a freely available software package for photometric redshift estimation using artificial neural networks. ANNz learns the relation between photometry and redshift from an appropriate training set of galaxies for which the redshift is already known. Where a large and representative training set is available, ANNz is a highly competitive tool when compared with traditional template-fitting methods. The ANNz package is demonstrated on the Sloan Digital Sky Survey Data Release 1, and for this particular data set the rms redshift error in the range $0 \lesssim z \lesssim 0.7$ is $\sigma_{\rm rms} = 0.023$. Nonideal conditions (spectroscopic sets that are small or brighter than the photometric set for which redshifts are required) are simulated, and the impact on the photometric redshift accuracy is assessed.
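The $\sigma_{\rm rms}$ figure of merit is simply the root-mean-square of the photometric-minus-spectroscopic redshift residuals over a validation sample. A minimal sketch with toy redshift values:

```python
import math

def sigma_rms(z_phot, z_spec):
    """RMS of photo-z residuals (z_phot - z_spec) over a validation set."""
    n = len(z_spec)
    return math.sqrt(sum((p - s) ** 2 for p, s in zip(z_phot, z_spec)) / n)

# Toy redshifts standing in for a validation sample with known spectra.
z_spec = [0.10, 0.25, 0.40, 0.55]
z_phot = [0.12, 0.22, 0.44, 0.53]
print(f"sigma_rms = {sigma_rms(z_phot, z_spec):.3f}")
```

The toy numbers here are invented; the metric itself is the one quoted in the abstract.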
ABSTRACT
Cosmological studies of large-scale structure have relied on two-point statistics, not fully exploiting the rich structure of the cosmic web. In this paper we show how to capture some of this cosmic web information by using the minimum spanning tree (MST), using it for the first time to estimate cosmological parameters in simulations. Discrete tracers of dark matter such as galaxies, N-body particles, or haloes are used as nodes to construct a unique graph, the MST, that traces skeletal structure. We study the dependence of the MST on cosmological parameters using haloes from a suite of COmoving Lagrangian Acceleration (COLA) simulations with a box size of $250\ h^{-1}\, {\rm Mpc}$, varying the amplitude of scalar fluctuations ($A_{\rm s}$), matter density ($\Omega_{\rm m}$), and neutrino mass ($\sum m_\nu$). The power spectrum $P$ and bispectrum $B$ are measured for wavenumbers between 0.125 and 0.5 $h\, {\rm Mpc}^{-1}$, while a corresponding lower cut of ∼12.6 $h^{-1}\, {\rm Mpc}$ is applied to the MST. The constraints from the individual methods are fairly similar, but when they are combined we see improved 1σ constraints of $\sim 17{{\ \rm per\ cent}}$ ($\sim 12{{\ \rm per\ cent}}$) on $\Omega_{\rm m}$ and $\sim 12{{\ \rm per\ cent}}$ ($\sim 10{{\ \rm per\ cent}}$) on $A_{\rm s}$ with respect to $P$ ($P + B$), showing that the MST provides additional information. The MST can be applied to current and future spectroscopic surveys (BOSS, DESI, Euclid, PFS, WFIRST, and 4MOST) in 3D and to photometric surveys (DES and LSST) in tomographic shells to constrain parameters and/or test systematics.
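The MST construction itself is standard graph machinery; a minimal sketch using Prim's algorithm on toy tracer positions (the cosmological analysis then works with statistics of the resulting edges and branches, which is beyond this sketch):

```python
import math

def minimum_spanning_tree(points):
    """Prim's algorithm: return MST edges (i, j) over 2D points, Euclidean weights."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        # Pick the cheapest edge crossing from the tree to a new node.
        i, j = min(((a, b) for a in in_tree for b in range(n) if b not in in_tree),
                   key=lambda e: dist(*e))
        edges.append((i, j))
        in_tree.add(j)
    return edges

# Toy halo positions standing in for discrete tracers of dark matter.
halos = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
mst = minimum_spanning_tree(halos)
print(mst)  # always n - 1 edges, here tracing the skeletal structure of 4 points
```

An O(n²)-per-step search like this is fine for illustration; production codes use k-d trees or dedicated graph libraries for survey-sized catalogues.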