ABSTRACT
Searches for space–time variations of fundamental constants have entered an era of unprecedented precision. New, high-quality quasar spectra require increasingly refined analytical methods. In this paper, the latest in a series aimed at establishing robust and unbiased methodologies, we explore how convergence criteria in non-linear least-squares optimization affect quasar absorption-system measurements of the fine structure constant α. Given previous claims of high-precision constraints, we critically examine the validity of a so-called blinding approach, in which α is fixed at the terrestrial value during the model building process and released as a free parameter only after the ‘final’ absorption-system kinematic structure has been obtained. We show that this approach results in such small consecutive parameter steps during minimization that convergence is unlikely to be reached, even after as many as 1000 iterations. The fix is straightforward: α must be treated as a free parameter from the earliest possible stages of absorption-system model building. The implication of the results presented here is that all previous measurements that initially fixed α should be reworked.
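The stalling behaviour described above can be illustrated with a toy minimization (a sketch only, not vpfit's actual algorithm): a heavily damped Newton update on a one-parameter χ², standing in for a parameter freed only after the model was frozen, takes steps so small that a step-size convergence test never fires within 1000 iterations.

```python
# Toy illustration (a sketch, not vpfit): minimize chi2(a) = (a - 5)^2
# with a damped Newton update. Strong damping mimics the tiny steps
# taken by a late-freed parameter; the step-size convergence test then
# never fires, even after 1000 iterations.

def minimise(a0, damping, step_tol=1e-3, max_iter=1000):
    a = a0
    for it in range(max_iter):
        grad = 2.0 * (a - 5.0)            # d(chi2)/da
        hess = 2.0                        # d2(chi2)/da2
        step = -grad / (hess + damping)   # damped Newton step
        a += step
        if abs(step) < step_tol:          # step-size convergence test
            return a, it + 1              # declared converged
    return a, max_iter                    # ran out of iterations

a_stalled, n_stalled = minimise(0.0, damping=1000.0)  # heavily damped
a_free, n_free = minimise(0.0, damping=0.0)           # undamped
```

With heavy damping the routine exhausts all 1000 iterations while still far from the true minimum at a = 5; undamped, it reaches the minimum and satisfies the test after two iterations.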
Precision and consistency of astrocombs
Milaković, Dinko; Pasquini, Luca; Webb, John K ...
Monthly Notices of the Royal Astronomical Society, 04/2020, Volume 493, Issue 3
Journal Article · Peer reviewed · Open access
ABSTRACT
Astrocombs are ideal spectrograph calibrators whose limiting precision can be derived using a second, independent, astrocomb system. We therefore analyse data from two astrocombs (one 18 GHz and one 25 GHz) used simultaneously on the HARPS (High Accuracy Radial velocity Planet Searcher) spectrograph at the European Southern Observatory. The first aim of this paper is to quantify the wavelength repeatability achieved by a particular astrocomb. The second aim is to measure wavelength calibration consistency between independent astrocombs, that is, to place limits on, or measure, any possible zero-point offsets. We present three main findings, each with important implications for exoplanet detection, varying fundamental constant, and redshift drift measurements. First, wavelength calibration procedures are important: using multiple segmented polynomials within one echelle order results in significantly better wavelength calibration than using a single higher-order polynomial. Segmented polynomials should be used in all applications aimed at precise spectral line position measurements. Secondly, we found that changing astrocombs causes significant zero-point offsets (${\approx}60\, {\rm cm\, s}^{-1}$ in our raw data), which we removed. Thirdly, astrocombs achieve a precision of ${\lesssim }4\, {\rm cm\, s}^{-1}$ in a single exposure (${\approx }10{{\,\rm per\,cent}}$ above the measured photon-limited precision) and ${\approx}1\, {\rm cm\, s}^{-1}$ when time-averaged over a few hours, confirming previous results. Astrocombs therefore meet the technological requirements for detecting Earth–Sun analogues, measuring variations of fundamental constants, and detecting the redshift drift.
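The segmented-versus-global polynomial point can be illustrated numerically. This is a toy sketch: the pixel grid, distortion shape, segment size, and polynomial degrees below are all assumptions, not the HARPS calibration itself.

```python
import numpy as np

# Toy "echelle order": 512 pixels whose true wavelength solution carries
# low-amplitude oscillatory structure (a stand-in for real calibration
# distortions). Pixel coordinate normalized to [-1, 1] for conditioning.
x = (np.arange(512) - 255.5) / 255.5
wave = 5000.0 + 10.0 * x + 0.005 * np.sin(8.0 * x)

# One global degree-7 polynomial across the whole order:
resid_global = wave - np.polyval(np.polyfit(x, wave, deg=7), x)

# Eight independent low-order (cubic) polynomials on sub-segments:
resid_seg = np.empty_like(wave)
for lo in range(0, 512, 64):
    s = slice(lo, lo + 64)
    c = np.polyfit(x[s], wave[s], deg=3)
    resid_seg[s] = wave[s] - np.polyval(c, x[s])

rms_global = np.sqrt(np.mean(resid_global**2))
rms_seg = np.sqrt(np.mean(resid_seg**2))
```

Despite using fewer total coefficients per segment, the segmented fit tracks the local structure and leaves residuals well over an order of magnitude smaller than the single global polynomial.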
Abstract
A new and automated method is presented for the analysis of high-resolution absorption spectra. Three established numerical methods are unified into one ‘artificial intelligence’ process: a genetic algorithm (Genetic Voigt Profile FIT, gvpfit); non-linear least-squares with parameter constraints (vpfit); and Bayesian model averaging (BMA). The method has broad application, but here we apply it specifically to the problem of measuring the fine structure constant at high redshift, for which objectivity and reproducibility are essential. gvpfit is also motivated by the importance of obtaining a large statistical sample of measurements of Δα/α: interactive analyses are both time consuming and complex, and automation makes obtaining a large sample feasible. In contrast to previous methodologies, we use BMA to derive results from a large set of models, and show that this procedure is more robust than a human picking a single preferred model, since BMA avoids the systematic uncertainties associated with model choice. Numerical simulations provide stringent tests of the whole process, and we show using both real and simulated spectra that the unified automated fitting procedure outperforms a human interactive analysis. The method should be invaluable in the context of future instrumentation like ESPRESSO on the VLT and, indeed, future ELTs. We apply the method to the zabs = 1.8389 absorber towards the zem = 2.145 quasar J110325−264515. The derived constraint of Δα/α = (3.3 ± 2.9) × 10−6 is consistent with no variation, and also consistent with the tentative spatial variation reported in Webb et al. and King et al.
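Model averaging of the kind described above can be sketched with Akaike weights, a common approximation to BMA (gvpfit's actual weighting scheme may differ; the χ² values, parameter counts, and Δα/α values below are hypothetical).

```python
import math

# Hypothetical candidate models for one absorber:
# (chi2, n_params, n_data, measured dalpha/alpha).
models = [
    (1010.0, 12, 1000, 3.1e-6),
    (1004.0, 15, 1000, 3.6e-6),
    (1003.0, 18, 1000, 2.8e-6),
]

def aicc(chi2, k, n):
    # Corrected Akaike information criterion.
    return chi2 + 2 * k + 2 * k * (k + 1) / (n - k - 1)

scores = [aicc(c, k, n) for c, k, n, _ in models]
best = min(scores)
raw = [math.exp(-0.5 * (s - best)) for s in scores]
weights = [w / sum(raw) for w in raw]       # Akaike weights, sum to 1

# Model-averaged estimate, instead of trusting one "preferred" model:
da_bma = sum(w * m[3] for w, m in zip(models and weights, models))
```

Because several models are nearly as good as the best one, no single model dominates the weights, and the averaged Δα/α lies between the individual values rather than inheriting the idiosyncrasy of any one model choice.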
ABSTRACT
The cosmological principle, the combined assumptions of cosmological isotropy and homogeneity, underpins the standard model of big bang cosmology with which we interpret astronomical observations. A new test of isotropy over the redshift range 2 < z < 4 and across large angular scales on the sky is presented. We use the cosmological distribution of neutral hydrogen, as probed by the Ly α forest seen towards distant quasars. The Sloan Digital Sky Survey provides the largest data set of quasar spectra available to date. We use combined information from Data Releases 12 and 14 to select a sample of 142 661 quasars most suitable for this purpose. The scales covered by the data extend beyond post-inflation causality scales, thus probing initial conditions in the early Universe. We identify significant spatially correlated systematic effects that can emulate cosmological anisotropy. Once these systematics have been accounted for, the data are found to be consistent with isotropy, providing an important independent check on the standard model, consistent with results from cosmic microwave background data.
Quasar absorption lines provide a precise test of whether the fine-structure constant, α, is the same in different places and through cosmological time. We present a new analysis of a large sample of quasar absorption-line spectra obtained using the Ultraviolet and Visual Echelle Spectrograph (UVES) on the Very Large Telescope (VLT) in Chile. We apply the many-multiplet method to derive values of Δα/α ≡ (αz − α0)/α0 from 154 absorbers, and combine these values with 141 values from previous observations at the Keck Observatory in Hawaii. In the VLT sample, we find evidence that α increases with increasing cosmological distance from Earth. However, as previously shown, the Keck sample provided evidence for a smaller α in the distant absorption clouds. Upon combining the samples, an apparent variation of α across the sky emerges which is well represented by an angular dipole model pointing in the direction RA = 17.3 ± 1.0 h and Dec. = −61° ± 10°. The dipole model is required at the 4.1σ statistical significance level over a simple monopole model where α is the same across the sky (but possibly different from the current laboratory value). The data sets reveal remarkable consistencies: (i) the directions of dipoles fitted to the VLT and Keck samples separately agree; (ii) the directions of dipoles fitted to z < 1.6 and z > 1.6 cuts of the combined VLT+Keck samples agree; and (iii) in the equatorial region of the dipole, where both the Keck and VLT samples contribute a significant number of absorbers, there is no evidence for inconsistency between Keck and VLT. The amplitude of the dipole is clearly larger at higher redshift. Assuming a dipole-only (i.e. no-monopole) model whose amplitude grows proportionally with 'lookback-time distance' (r = ct, where t is the lookback time), the amplitude is (1.1 ± 0.2) × 10−6 GLyr−1 and the model is significant at the 4.2σ confidence level over the null model (Δα/α ≡ 0). We apply robustness checks and demonstrate that the dipole effect does not originate from a small subset of the absorbers or spectra. We present an analysis of systematic effects, and are unable to identify any single systematic effect which can emulate the observed variation in α. To the best of our knowledge, this result is not in conflict with any other observational or experimental result.
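A dipole-plus-monopole fit of this kind can be sketched on synthetic data. Everything below is an assumption for illustration (sightline directions, dipole axis, amplitude, and noise level), not the published measurement; the point is only that the model Δα/α = m + A cos θ is linear in (m, A) once the axis is fixed.

```python
import numpy as np

# Synthetic all-sky sample of absorber sightlines (unit vectors).
rng = np.random.default_rng(1)
n = 295                                   # ~ combined Keck + VLT size
sightlines = rng.normal(size=(n, 3))
sightlines /= np.linalg.norm(sightlines, axis=1, keepdims=True)

axis = np.array([0.1, -0.3, -0.95])       # hypothetical dipole axis
axis /= np.linalg.norm(axis)
cos_theta = sightlines @ axis             # cos(angle to dipole axis)

A_true, m_true = 1.0e-5, -2.0e-6          # assumed amplitude, monopole
y = m_true + A_true * cos_theta + rng.normal(scale=1.0e-5, size=n)

# The model dalpha/alpha = m + A*cos(theta) is linear in (m, A),
# so ordinary least squares recovers both parameters:
X = np.column_stack([np.ones(n), cos_theta])
(m_fit, A_fit), *_ = np.linalg.lstsq(X, y, rcond=None)
```

In a full analysis the axis direction is itself fitted (a non-linear search over RA/Dec), with the linear solve above nested inside it.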
ABSTRACT
High resolution spectra of quasar absorption systems provide the best constraints on temporal or spatial changes of fundamental constants in the early Universe. An important systematic that has never before been quantified concerns model non-uniqueness. The absorption structure is generally complicated, comprising many blended lines. This characteristic means any given system can be fitted equally well by many slightly different models, each having a different value of α, the fine structure constant. We use AI Monte Carlo modelling to quantify non-uniqueness. Extensive supercomputer calculations are reported, revealing new systematic effects that guide future analyses: (i) whilst higher signal to noise and improved spectral resolution produce a smaller statistical uncertainty for α, model non-uniqueness adds a significant additional uncertainty; (ii) non-uniqueness depends on the line broadening mechanism used, and we show that modelling the spectral data using turbulent line broadening results in far greater non-uniqueness, hence this should no longer be done — for varying-α studies, it is important to use the more physically appropriate compound broadening; (iii) we have studied two absorption systems in detail, so generalising requires caution. Nevertheless, if non-uniqueness is present in all or most quasar absorption systems, it seems unavoidable that attempts to determine the existence (or non-existence) of spacetime variations of fundamental constants are best approached using a statistical sample.
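The notion of non-uniqueness can be sketched in miniature (this is a toy, not the AI Monte Carlo method of the paper): a blend of two Gaussian "components" admits a whole family of parameter combinations that fit the noisy data comparably well, and the spread of the line-centre parameter across that family (standing in for α) is an extra, non-statistical uncertainty.

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 200)

def model(shift, split):
    # Two blended Gaussian components (width 2), symmetric about
    # `shift` and separated by `split` -- a stand-in for degenerate
    # velocity structure in an absorption system.
    return (np.exp(-0.5 * ((x - shift - split / 2) / 2.0) ** 2)
            + np.exp(-0.5 * ((x - shift + split / 2) / 2.0) ** 2))

rng = np.random.default_rng(0)
sigma = 0.05
data = model(0.3, 1.0) + rng.normal(scale=sigma, size=x.size)

# Grid of candidate models; keep every one within delta-chi2 < 2.3
# (the 68 per cent joint region for two parameters) of the best fit.
shifts = np.linspace(-0.5, 1.0, 61)
splits = np.linspace(0.0, 3.0, 61)
chi2 = np.array([[np.sum((data - model(s, p)) ** 2) / sigma**2
                  for p in splits] for s in shifts])
S, P = np.meshgrid(shifts, splits, indexing="ij")
ok = chi2 < chi2.min() + 2.3

# Spread of the centre parameter among acceptable models = the extra
# uncertainty contributed by model non-uniqueness.
spread = S[ok].max() - S[ok].min()
```

A statistical error bar derived from the single best-fitting model ignores this spread, which is the systematic the paper sets out to quantify.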
Abstract
The “Condor Array Telescope” or “Condor” is a high-performance “array telescope” comprising six apochromatic refracting telescopes of objective diameter 180 mm, each equipped with a large-format, very low-read-noise (≈1.2 e−), very rapid-read-time (<1 s) CMOS camera. Condor is located at a very dark astronomical site in the southwest corner of New Mexico, at the Dark Sky New Mexico observatory near Animas, roughly midway between (and more than 150 km from either) Tucson and El Paso. Condor enjoys a wide field of view (2.29 × 1.53 deg² or 3.50 deg²), is optimized for measuring both point sources and extended, very low-surface-brightness features, and for broad-band images can operate at a cadence of 60 s (or even less) while remaining sky-noise limited with a duty cycle near 100%. In its normal mode of operation, Condor obtains broad-band exposures of exposure time 60 s over dwell times spanning dozens or hundreds of hours. In this way, Condor builds up deep, sensitive images while simultaneously monitoring tens or hundreds of thousands of point sources per field at a cadence of 60 s. Condor is also equipped with diffraction gratings and with a set of He II 468.6 nm, O III 500.7 nm, He I 587.5 nm, Hα 656.3 nm, N II 658.4 nm, and S II 671.6 nm narrow-band filters, allowing it to address a variety of broad- and narrow-band science issues. Given its unique capabilities, Condor can access regions of “astronomical discovery space” that have never before been studied. Here we introduce Condor and describe various aspects of its performance.
ABSTRACT
New observations of the quasar HE0515−4414 have been made using the HARPS spectrograph on the ESO 3.6 m telescope, aided by the Laser Frequency Comb (LFC). We present three important advances for α measurements in quasar absorption spectra from these observations. First, the data have been wavelength calibrated using both LFC and ThAr methods. The LFC wavelength calibration residuals are six times smaller than those of the standard ThAr calibration, and we give a direct comparison between α measurements made using the two methods. Secondly, spectral modelling was performed using artificial intelligence (fully automated, eliminating human bias), including a temperature parameter for each absorption component. Thirdly, in contrast to previous work, additional model parameters were assigned to measure α for each individual absorption component. The increase in statistical uncertainty from the larger number of model parameters is small, and the method affords a substantial advantage: outliers that would otherwise contribute a significant systematic, possibly corrupting the entire measurement, are identified and removed, permitting a more robust overall result. The zabs = 1.15 absorption system along the HE0515−4414 sightline yields 40 new α measurements. We constrain spatial fluctuations in α to be Δα/α ≤ 9 × 10−5 on scales ${\approx}20\, {\rm km\, s}^{-1}$, corresponding to ≈25 kpc if the zabs = 1.15 system arises in a 1 Mpc cluster. Collectively, the 40 measurements yield Δα/α = (−0.27 ± 2.41) × 10−6, consistent with no variation.
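Combining many per-component measurements while rejecting outliers can be sketched with an inverse-variance weighted mean plus iterative sigma-clipping. This is illustrative only (the published analysis uses its own robust procedure, and the values below are hypothetical, in units of 10⁻⁶).

```python
# Sketch: outlier-clipped inverse-variance weighted mean for combining
# per-component dalpha/alpha values (hypothetical data, units of 1e-6).

def weighted_mean(vals, errs):
    w = [1.0 / e**2 for e in errs]
    mean = sum(wi * v for wi, v in zip(w, vals)) / sum(w)
    return mean, (1.0 / sum(w)) ** 0.5

def clipped_mean(vals, errs, nsigma=3.0):
    vals, errs = list(vals), list(errs)
    while len(vals) > 2:
        mean, _ = weighted_mean(vals, errs)
        sig = [abs(v - mean) / e for v, e in zip(vals, errs)]
        worst = max(range(len(vals)), key=sig.__getitem__)
        if sig[worst] <= nsigma:          # no remaining outliers
            break
        vals.pop(worst); errs.pop(worst)  # drop the worst outlier
    return weighted_mean(vals, errs)

# Five consistent components and one corrupted measurement:
vals = [1.0, -2.0, 0.5, -1.5, 2.0, 40.0]
errs = [3.0, 3.0, 3.0, 3.0, 3.0, 3.0]
mean, err = clipped_mean(vals, errs)
```

Without clipping, the single corrupted value drags the weighted mean far from the consistent components; with clipping it is identified and removed, which is the robustness argument made in the abstract.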
ABSTRACT
Robust model-fitting to spectroscopic transitions is a requirement across many fields of science. The corrected Akaike and Bayesian information criteria (AICc and BIC) are the most frequently used means of selecting the optimal number of fitting parameters. In general, AICc is thought to overfit (too many model parameters) while BIC underfits. For spectroscopic modelling, both AICc and BIC fall short in two important respects: (a) no penalty distinction is made according to line strength, so that parameters of weak lines close to the detection threshold are treated with the same importance as those of strong lines, and (b) no account is taken of the way in which a narrow spectral line impacts only a very small section of the overall data. In this paper, we introduce a new information criterion that addresses these shortcomings, the Spectral Information Criterion (SpIC). Spectral simulations are used to compare performances. The main findings are: (i) SpIC clearly outperforms AICc for high signal-to-noise data; (ii) SpIC and AICc work equally well for lower signal-to-noise data, although SpIC achieves this with fewer parameters; and (iii) BIC does not perform well (for this application) and should be avoided. The new method should be of broader applicability (beyond spectroscopy) wherever different model parameters influence separated small ranges within a larger data set and/or have widely varying sensitivities.
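For reference, the two standard criteria compared above can be written down directly (SpIC itself is defined in the paper and is not reproduced here). The numbers below are hypothetical, but they show the qualitative AICc/BIC disagreement for a weak, three-parameter component added to a large spectrum.

```python
import math

def aicc(chi2, k, n):
    # Corrected Akaike information criterion for k parameters, n data.
    return chi2 + 2 * k + 2 * k * (k + 1) / (n - k - 1)

def bic(chi2, k, n):
    # Bayesian information criterion: penalty grows with log(n), even
    # though a narrow line only affects a few of the n pixels.
    return chi2 + k * math.log(n)

# Hypothetical case: adding one weak component (3 extra parameters)
# improves chi2 by 10 on a spectrum of n = 5000 pixels.
n = 5000
d_aicc = aicc(990.0, 15, n) - aicc(1000.0, 12, n)   # negative: accept
d_bic = bic(990.0, 15, n) - bic(1000.0, 12, n)      # positive: reject
```

The BIC penalty scales with the full data length n, which is one concrete instance of shortcoming (b): the weak line influences only a handful of pixels, yet pays a penalty set by the whole spectrum.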
ABSTRACT
This paper describes the optimization theory on which vpfit, a non-linear least-squares program for modelling absorption spectra, is based. Particular attention is paid to precision. Voigt function derivatives have previously been calculated using numerical finite-difference approximations. We show how these can instead be computed analytically using Taylor series expansions and look-up tables. We introduce a new optimization method for an efficient descent path to the best fit, combining the principles used in both the Gauss–Newton and Levenberg–Marquardt algorithms. A simple practical fix for ill-conditioning is described, a common problem when modelling quasar absorption systems. We also summarize how unbiased modelling depends on using an appropriate information criterion to guard against overfitting or underfitting. The methods and new implementations introduced in this paper are aimed at optimal usage of future data from facilities such as ESPRESSO/VLT and HIRES/ELT, particularly for the most demanding applications, such as searches for space–time variations in fundamental constants and attempts to detect cosmological redshift drift.
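A minimal damped least-squares step illustrates the family of methods discussed (a generic sketch, not vpfit's actual combined GN/LM scheme or its ill-conditioning fix). The same damping term λI that interpolates between Gauss–Newton (λ → 0) and gradient descent (λ large) also regularizes a nearly singular JᵀJ.

```python
import numpy as np

def lm_step(J, r, lam):
    # Solve the damped normal equations (J^T J + lam*I) delta = -J^T r.
    # lam -> 0 gives a Gauss-Newton step; large lam gives a short,
    # gradient-descent-like step and also conditions a singular J^T J.
    A = J.T @ J + lam * np.eye(J.shape[1])
    return np.linalg.solve(A, -J.T @ r)

# Toy problem: fit y = a*exp(b*x) to noiseless data with a standard
# accept/reject damping schedule.
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(1.5 * x)
p = np.array([1.0, 0.5])                  # starting guess (a, b)
lam = 1e-3
for _ in range(200):
    a, b = p
    r = a * np.exp(b * x) - y             # residuals
    J = np.column_stack([np.exp(b * x),   # d(model)/da
                         a * x * np.exp(b * x)])  # d(model)/db
    p_new = p + lm_step(J, r, lam)
    r_new = p_new[0] * np.exp(p_new[1] * x) - y
    if np.sum(r_new**2) < np.sum(r**2):
        p, lam = p_new, lam * 0.5         # accept: relax towards GN
    else:
        lam = min(lam * 10.0, 1e12)       # reject: damp harder
```

The loop recovers (a, b) = (2, 1.5). In production codes the Jacobian derivatives would come from the analytic Voigt-function expressions the paper advocates rather than finite differences.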