Abstract
We present a blind time-delay strong lensing (TDSL) cosmographic analysis of the doubly imaged quasar SDSS 1206+4332. We combine the relative time delay between the quasar images, Hubble Space Telescope imaging, the Keck stellar velocity dispersion of the lensing galaxy, and wide-field photometric and spectroscopic data of the field to constrain two angular diameter distance relations. The combined analysis is performed by forward modelling the individual data sets through a Bayesian hierarchical framework, and it is kept blind until the very end to prevent experimenter bias. After unblinding, the inferred distances imply a Hubble constant H0 = 68.8$^{+5.4}_{-5.1}$ km s−1 Mpc−1, assuming a flat Λ cold dark matter cosmology with a uniform prior on Ωm in the range [0.05, 0.5]. The precision of our cosmographic measurement with the doubly imaged quasar SDSS 1206+4332 is comparable to that of quadruply imaged quasars, and it opens the path to performing on selected doubles the same analysis anticipated for quads. Our analysis is based on a lensing code completely independent of the one used for our previous three H0LiCOW systems, and the new measurement is fully consistent with those earlier results. We provide the analysis scripts paired with the publicly available software to facilitate independent analysis (see www.h0licow.org). The consistency between blind measurements obtained with independent codes provides an important sanity check on lens-modelling systematics. By combining the likelihoods of the four systems under the same prior, we obtain H0 = 72.5$^{+2.1}_{-2.3}$ km s−1 Mpc−1. This measurement is independent of the distance ladder and of other cosmological probes.
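At fixed Ωm, the time-delay distance $D_{\Delta t} = (1+z_{\rm d})\,D_{\rm d} D_{\rm s}/D_{\rm ds}$ scales exactly as 1/H0 in flat ΛCDM, which is what makes it a direct probe of the Hubble constant. A minimal numerical sketch of that scaling (the redshifts and distance used in the example are illustrative placeholders, not the measured values for SDSS 1206+4332):

```python
import numpy as np

C_KM_S = 299792.458  # speed of light [km/s]

def comoving_distance(z, h0, om, n=2048):
    """Comoving distance [Mpc] in flat LambdaCDM, trapezoidal integration."""
    zs = np.linspace(0.0, z, n)
    integrand = 1.0 / np.sqrt(om * (1.0 + zs) ** 3 + (1.0 - om))
    dz = zs[1] - zs[0]
    integral = dz * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
    return (C_KM_S / h0) * integral

def time_delay_distance(zd, zs, h0, om):
    """D_dt = (1+z_d) * D_d * D_s / D_ds with flat-space angular diameter distances."""
    dc_d = comoving_distance(zd, h0, om)
    dc_s = comoving_distance(zs, h0, om)
    d_d = dc_d / (1.0 + zd)
    d_s = dc_s / (1.0 + zs)
    d_ds = (dc_s - dc_d) / (1.0 + zs)  # valid only for zero curvature
    return (1.0 + zd) * d_d * d_s / d_ds

def h0_from_ddt(ddt_measured, zd, zs, om):
    """Invert the exact 1/H0 scaling of D_dt using an arbitrary fiducial H0."""
    h0_fid = 70.0
    return h0_fid * time_delay_distance(zd, zs, h0_fid, om) / ddt_measured
```

Because every distance carries a 1/H0 prefactor, a measured D_dt maps onto H0 with only a weak residual dependence on Ωm, which is why the abstract quotes H0 under a simple uniform Ωm prior.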
Measuring time delays between the multiple images of gravitationally lensed quasars is now recognized as a competitive way to constrain cosmological parameters, complementary to other cosmological probes. It requires long and well-sampled optical light curves of numerous lensed quasars, such as those obtained by the COSMOGRAIL collaboration. High-quality data from our monitoring campaign call for novel numerical techniques to robustly measure the delays, as well as the associated random and systematic uncertainties, even in the presence of microlensing variations. We propose three different point estimators of the time delay, explicitly designed to handle light curves with extrinsic variability. These methods share a common formalism that enables them to process data from n-image lenses. Since the estimators rely on significantly contrasting ideas, we expect them to be sensitive to different sources of bias. For each method and data set, we empirically estimate both the precision and the accuracy (bias) of the time-delay measurement using simulated light curves with known time delays that closely mimic the observations. Finally, we test the self-consistency of our approach and demonstrate that our bias estimation is serviceable. These new methods, including the empirical uncertainty estimator, will represent the standard benchmark for analyzing the COSMOGRAIL light curves.
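As a toy illustration of such a point estimator (a deliberately simplified stand-in, not the actual COSMOGRAIL implementation), one can shift one light curve over a grid of trial delays, absorb the constant magnitude offset between images, and minimize the residual mismatch:

```python
import numpy as np

def delay_mismatch(t, a, b, delay, n_grid=500):
    """Mean squared mismatch between curve a and curve b shifted by `delay`.

    Both curves are linearly interpolated onto the overlapping time range;
    the mean magnitude offset between the images is absorbed first.
    """
    lo = max(t.min(), t.min() - delay)
    hi = min(t.max(), t.max() - delay)
    grid = np.linspace(lo, hi, n_grid)
    a_i = np.interp(grid, t, a)
    b_i = np.interp(grid + delay, t, b)
    b_i -= b_i.mean() - a_i.mean()  # absorb constant magnitude offset
    return float(np.mean((a_i - b_i) ** 2))

def estimate_delay(t, a, b, trial_delays):
    """Point estimate: the trial delay minimizing the mismatch."""
    scores = [delay_mismatch(t, a, b, d) for d in trial_delays]
    return float(trial_delays[int(np.argmin(scores))])
```

On real data, one would replace this mean-square statistic with one of the estimators described above and calibrate precision and bias with simulated light curves that include microlensing.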
Context. The precise determination of the present-day expansion rate of the Universe, expressed through the Hubble constant H0, is one of the most pressing challenges in modern cosmology. Assuming flat ΛCDM, the high-redshift inference of H0 from cosmic microwave background data taken by Planck disagrees at the 4.4σ level with measurements based on the local distance ladder built from parallaxes, Cepheids, and Type Ia supernovae (SNe Ia), a discrepancy often referred to as the Hubble tension. Independent, cosmological-model-insensitive ways to infer H0 are therefore of critical importance. Aims. We apply an inverse distance ladder approach, combining strong-lensing time-delay distance measurements with SN Ia data. By themselves, SNe Ia are only good indicators of relative distance, but by anchoring them to strong gravitational lenses we can obtain an H0 measurement that is relatively insensitive to other cosmological parameters. Methods. A cosmological parameter estimate was performed for different cosmological background models, both for strong-lensing data alone and for the combined lensing + SNe Ia data sets. Results. The cosmological-model dependence of strong-lensing H0 measurements is significantly mitigated by the inverse distance ladder. In combination with SN Ia data, the inferred H0 consistently lies around 73–74 km s−1 Mpc−1, regardless of the assumed cosmological background model. Our results agree closely with those from the local distance ladder, but there is a > 2σ tension with Planck results, and a ∼1.5σ discrepancy with results from an inverse distance ladder that includes Planck, baryon acoustic oscillations, and SNe Ia. Future strong-lensing distance measurements will reduce the uncertainties on H0 from our inverse distance ladder.
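The core of the inverse distance ladder can be sketched in a few lines: SNe Ia provide distance moduli known only up to an additive constant, and a single absolute lensing distance fixes that constant. The function and variable names below are our own illustration; the real analysis fits all lenses and SNe jointly within each background model.

```python
import numpy as np

def anchor_sne(z_sne, mu_rel, z_lens, d_lens_mpc):
    """Anchor relative SN Ia distance moduli to one absolute lens distance.

    mu_rel : distance moduli with an unknown additive offset (relative only).
    d_lens_mpc : absolute luminosity distance [Mpc] implied by the lens
        (for an angular diameter distance D_A this is D_A * (1+z)^2).
    The SN closest in redshift to the lens fixes the offset; all other
    luminosity distances, and hence H0 from the low-z SNe, follow.
    """
    z_sne = np.asarray(z_sne, dtype=float)
    mu_rel = np.asarray(mu_rel, dtype=float)
    i = int(np.argmin(np.abs(z_sne - z_lens)))
    offset = 5.0 * np.log10(d_lens_mpc) + 25.0 - mu_rel[i]
    return 10.0 ** ((mu_rel + offset - 25.0) / 5.0)  # luminosity distances [Mpc]
```

Because only the anchor carries absolute calibration, the resulting H0 inherits the lens distance's insensitivity to the assumed background cosmology.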
Abstract
We report the spectroscopic follow-up of 175 lensed quasar candidates selected using Gaia Data Release 2 observations following Paper III of this series. The systems include 86 confirmed lensed quasars and a further 17 likely lensed quasars based on imaging and/or similar spectra. We also confirm 11 projected quasar pairs and 11 physical quasar pairs, while 25 systems are left as unclassified quasar pairs – pairs of quasars at the same redshift, which could be either distinct quasars or potential lensed quasars. Especially interesting objects include eight quadruply imaged quasars, of which two have BAL sources, an apparent triple, and a doubly lensed LoBaL quasar. The source redshifts and image separations of these new lenses range between 0.65–3.59 and 0.78–6.23 arcsec, respectively. We compare the known population of lensed quasars to an updated mock catalogue at image separations between 1 and 4 arcsec, finding a very good match at z < 1.5. At z > 1.5, only 47 per cent of the predicted number are known, with 56 per cent of these missing lenses at image separations below 1.5 arcsec. The missing higher-redshift, small-separation systems will have fainter lensing galaxies, and are partially accounted for by the unclassified quasar pairs and likely lenses presented in this work, which require deeper imaging. Of the 11 new reported projected quasar pairs, 5 have impact parameters below 10 kpc, almost tripling the number of such systems, which can probe the innermost regions of quasar host galaxies through absorption studies. We also report four new lensed galaxies discovered through our searches, with source redshifts ranging from 0.62 to 2.79.
The upcoming Large Synoptic Survey Telescope (LSST) will detect many strongly lensed Type Ia supernovae (LSNe Ia) for time-delay cosmography. This will provide an independent and direct way of measuring the Hubble constant H0, which is necessary to address the current 4.4σ tension in H0 between the local distance ladder and early-Universe measurements. We present a detailed analysis of different observing strategies (also referred to as cadence strategies) for the LSST and quantify their impact on the measurement of time delays between the multiple images of LSNe Ia. For this, we simulated observations using mock LSNe Ia, for which we produced mock LSST light curves that account for microlensing. Furthermore, we used the free-knot spline estimator from the software PyCS to measure the time delays from the simulated observations. We find that using only LSST data for time-delay cosmography is not ideal. Instead, we advocate using LSST as a discovery machine for LSNe Ia, enabling time-delay measurements from follow-up observations with other instruments, which increases the number of systems by a factor of 2–16 depending on the observing strategy. Furthermore, we find that LSST observing strategies providing a good sampling frequency (a mean inter-night gap of around two days) and a high cumulative season length (ten seasons with a season length of around 170 days each) are favored. Rolling cadences subdivide the survey and focus on different parts of the sky in different years; these observing strategies trade the number of seasons for better sampling frequency. In our investigation, this leads to half the number of systems compared with the best observing strategy. Rolling cadences are therefore disfavored, because the gain from the increased sampling frequency cannot compensate for the shortened cumulative season length.
We anticipate that the sample of lensed SNe Ia from our preferred LSST cadence strategies with rapid follow-up observations would yield an independent percent-level constraint on H0.
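The two cadence diagnostics highlighted above, the mean inter-night gap and the cumulative season length, can be computed directly from a list of observation epochs. A sketch follows; the 90-day season-break threshold is our own illustrative choice, not an LSST definition:

```python
import numpy as np

def cadence_metrics(mjds, season_gap=90.0):
    """Split observation epochs (days, e.g. MJD) into seasons and summarize.

    A new season starts whenever the gap between consecutive epochs
    exceeds `season_gap` days (an assumed threshold for this sketch).
    """
    mjds = np.sort(np.asarray(mjds, dtype=float))
    gaps = np.diff(mjds)
    breaks = np.where(gaps > season_gap)[0]
    seasons = np.split(mjds, breaks + 1)
    season_lengths = [s[-1] - s[0] for s in seasons]
    intra = np.concatenate([np.diff(s) for s in seasons if len(s) > 1])
    return {
        "n_seasons": len(seasons),
        "mean_internight_gap": float(intra.mean()),
        "cumulative_season_length": float(sum(season_lengths)),
    }
```

Ranking candidate cadences then reduces to comparing these numbers against the favored regime quoted above (inter-night gap near two days, cumulative season length near 1700 days).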
We present a new measurement of the Hubble constant H0 and other cosmological parameters based on the joint analysis of three multiply imaged quasar systems with measured gravitational time delays. First, we measure the time delay of HE 0435-1223 from 13-yr light curves obtained as part of the COSMOGRAIL project. Companion papers detail the modelling of the main deflectors and line-of-sight effects, and how these data are combined to determine the time-delay distance of HE 0435-1223. Crucially, the measurements are carried out blindly with respect to cosmological parameters in order to avoid confirmation bias. We then combine the time-delay distance of HE 0435-1223 with previous measurements from the systems B1608+656 and RXJ1131-1231 to create a Time Delay Strong Lensing (TDSL) probe. In flat Λ cold dark matter (ΛCDM) with free matter and energy density, we find H0 = 71.9… km s−1 Mpc−1 and ΩΛ = 0.62… This measurement is completely independent of, and in agreement with, the local distance ladder measurements of H0. We explore more general cosmological models combining TDSL with other probes, illustrating its power to break degeneracies inherent to other methods. The joint constraints from TDSL and Planck are H0 = 69.2… km s−1 Mpc−1, ΩΛ = 0.70… and Ωk = 0.003+0.004−0.006 in open ΛCDM, and H0 = 79.0… km s−1 Mpc−1, Ωde = 0.77… and w = −1.38… in flat wCDM. In combination with Planck and baryon acoustic oscillation data, when relaxing the constraint on the number of relativistic species we find Neff = 3.34… in NeffΛCDM, and when relaxing the total mass of neutrinos we find Σmν ≤ 0.182 eV in mνΛCDM. Finally, in open wCDM in combination with Planck and cosmic microwave background lensing, we find H0 = 77.9… km s−1 Mpc−1, Ωde = 0.77…, Ωk = −0.003… and w = −1.37… (Ellipses denote values omitted in the source record.)
Abstract
A striking signal of dark matter beyond the standard model is the existence of cores in the centres of galaxy clusters. Recent simulations predict that a brightest cluster galaxy (BCG) inside a cored galaxy cluster will exhibit residual wobbling due to previous major mergers, long after the relaxation of the overall cluster. This phenomenon is absent with standard cold dark matter, where a cuspy density profile keeps the BCG tightly bound at the centre. We test this hypothesis using cosmological simulations and deep observations of 10 galaxy clusters acting as strong gravitational lenses. Modelling the BCG wobble as a simple harmonic oscillator, we measure the wobble amplitude, Aw, in the BAHAMAS suite of cosmological hydrodynamical simulations, finding an upper limit for the cold dark matter paradigm of Aw < 2 kpc at the 95 per cent confidence limit. Carrying out the same test on the data, we find a non-zero amplitude of $A_{\rm w}=11.82^{+7.3}_{-3.0}$ kpc, with the observations disfavouring Aw = 0 at the 3σ confidence level. This detection of BCG wobbling is evidence for a dark matter core at the heart of galaxy clusters. It also shows that strong-lensing models of clusters cannot assume that the BCG is exactly coincident with the large-scale halo. While our small sample of galaxy clusters already indicates a non-zero Aw, larger surveys, e.g. Euclid, will allow us not only to confirm the effect but also to use it to determine whether the wobbling originates in new fundamental physics or in astrophysical processes.
COSMOGRAIL is a long-term photometric monitoring program of gravitationally lensed quasars aimed at implementing Refsdal's time-delay method to measure cosmological parameters, in particular H0. Given the long and well-sampled light curves of strongly lensed quasars, time-delay measurements require numerical techniques whose quality must be assessed. To this end, and also in view of future monitoring programs or surveys such as the LSST, a blind signal-processing competition named Time Delay Challenge 1 (TDC1) was held in 2014. The aim of the present paper, which is based on the simulated light curves of TDC1, is twofold. First, we test the performance of the time-delay measurement techniques currently used in COSMOGRAIL. Second, we analyse the quantity and quality of the harvest of time delays obtained from the TDC1 simulations. To achieve these goals, we first discover time delays through a careful inspection of the light curves via a dedicated visual interface. Our measurement algorithms can then be applied to the data in an automated way. We show that our techniques have no significant biases and yield adequate uncertainty estimates, resulting in reduced χ² values between 0.5 and 1.0. We provide estimates for the number and precision of time-delay measurements that can be expected from future time-delay monitoring campaigns as a function of the photometric signal-to-noise ratio and of the true time delay. We make our blind measurements on the TDC1 data publicly available.
Gravitational microlensing is a powerful tool for probing the inner structure of strongly lensed quasars and for constraining parameters of the stellar mass function of lens galaxies. This is achieved by analysing microlensing light curves between the multiple images of strongly lensed quasars and accounting for the effects of three main variable components: (1) the continuum flux of the source, (2) microlensing by stars in the lens galaxy, and (3) reverberation of the continuum by the broad line region (BLR). The latter, ignored by state-of-the-art microlensing techniques, can introduce high-frequency variations which we show carry information on the BLR size. We present a new method that includes all these components simultaneously and fits the power spectrum of the data in Fourier space rather than the observed light curve itself. In this new framework, we analyse COSMOGRAIL light curves of the two-image system QJ 0158-4325, known to display high-frequency variations. Using exclusively the low-frequency part of the power spectrum, our constraint on the accretion disk radius agrees with the thin-disk model estimate and with the results of previous work where the microlensing light curves were fit in real space. However, if we also take into account the high-frequency variations, the data favour significantly smaller disk sizes than previous microlensing measurements. In this case, our results agree with the thin-disk model prediction only if we assume very low mean masses for the microlens population, i.e. $\langle M \rangle = 0.01\,M_{\odot}$. At the same time, including the differentially microlensed continuum reverberation by the BLR successfully explains the high frequencies without requiring such low-mass microlenses. This allows us to measure, for the first time, the size of the BLR using single-band photometric monitoring; we obtain $R_{\rm BLR} = 1.6^{+1.5}_{-0.8} \times 10^{17}$ cm, in good agreement with estimates using the BLR size–luminosity relation.
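A power-spectrum fit of the kind described above starts from a periodogram of the light curve. A minimal sketch for a regularly sampled curve (real COSMOGRAIL data are irregularly sampled, so the actual method must handle gaps; this is a simplification):

```python
import numpy as np

def power_spectrum(flux, dt=1.0):
    """One-sided periodogram of a regularly sampled, mean-subtracted light curve.

    Returns (frequencies, power), with the zero-frequency bin dropped.
    Regular sampling with step `dt` is an assumption of this sketch.
    """
    flux = np.asarray(flux, dtype=float)
    flux = flux - flux.mean()
    fft = np.fft.rfft(flux)
    freqs = np.fft.rfftfreq(len(flux), d=dt)
    power = (np.abs(fft) ** 2) / len(flux)
    return freqs[1:], power[1:]
```

The low-frequency bins of such a spectrum constrain the disk size, while the high-frequency bins carry the BLR reverberation signal exploited above.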
Planned wide-field weak lensing surveys are expected to reduce the statistical errors on the shear field to unprecedented levels. In contrast, systematic errors like those induced by the convolution with the point spread function (PSF) will not benefit from that scaling effect and will require very accurate modeling and correction. While numerous methods have been devised to carry out the PSF correction itself, modeling of the PSF shape and its spatial variations across the instrument field of view has so far attracted much less attention. This step is nevertheless crucial because the PSF is only known at star positions, while the correction has to be performed at any position on the sky. A reliable interpolation scheme is therefore mandatory, and a popular approach has been to use low-order bivariate polynomials. In the present paper, we evaluate four other classical spatial interpolation methods based on B-splines, inverse distance weighting (IDW), radial basis functions (RBF), and ordinary Kriging (OK). These methods are tested on the Star-challenge part of the GRavitational lEnsing Accuracy Testing 2010 (GREAT10) simulated data and are compared with classical polynomial fitting (Polyfit). In all our methods we model the PSF using a single Moffat profile and interpolate the fitted parameters at the required positions. This approach allowed us to win the GREAT10 Star-challenge with the B-splines method. However, we also test all our interpolation methods independently of the way the PSF is modeled, by interpolating the GREAT10 star fields themselves (i.e., the PSF parameters are known exactly at star positions). In that case we find RBF to be the clear winner, closely followed by the other local methods, IDW and OK. The global methods, Polyfit and B-splines, lag far behind, especially in fields with (ground-based) turbulent PSFs. In fields with non-turbulent PSFs, all interpolators reach a variance on PSF systematics $\sigma^2_{\rm sys}$ better than the $1 \times 10^{-7}$ upper bound expected for future space-based surveys, with the local interpolators performing better than the global ones.
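Of the local schemes compared above, inverse distance weighting is the simplest to write down. A minimal sketch interpolating a fitted Moffat parameter from star positions to arbitrary field positions (the power-2 weighting is the standard IDW choice, not a value tuned for GREAT10):

```python
import numpy as np

def idw_interpolate(star_xy, star_vals, query_xy, power=2.0, eps=1e-12):
    """Inverse distance weighting of a scalar PSF parameter field.

    star_xy   : (n, 2) star positions where the parameter was measured.
    star_vals : (n,) fitted parameter (e.g. a Moffat FWHM) at those stars.
    query_xy  : (m, 2) positions where the PSF model is needed.
    `eps` regularizes the weight when a query coincides with a star.
    """
    d = np.linalg.norm(query_xy[:, None, :] - star_xy[None, :, :], axis=-1)
    w = 1.0 / (d ** power + eps)
    return (w * star_vals[None, :]).sum(axis=1) / w.sum(axis=1)
```

Being purely local, such a scheme adapts to turbulence-induced small-scale PSF variations in a way that global polynomial fits cannot, consistent with the ranking reported above.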