Model selection is the problem of distinguishing competing models, perhaps featuring different numbers of parameters. The statistics literature contains two distinct sets of tools, those based on information theory such as the Akaike Information Criterion (AIC), and those based on Bayesian inference such as the Bayesian evidence and Bayesian Information Criterion (BIC). The Deviance Information Criterion combines ideas from both heritages; it is readily computed from Monte Carlo posterior samples and, unlike the AIC and BIC, allows for parameter degeneracy. I describe the properties of the information criteria, and as an example compute them from Wilkinson Microwave Anisotropy Probe 3-yr data for several cosmological models. I find that at present the information theory and Bayesian approaches give significantly different conclusions from that data.
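The three criteria compared in this abstract have simple closed forms: AIC = -2 ln L_max + 2k, BIC = -2 ln L_max + k ln N, and DIC = mean deviance + p_D, where p_D = mean deviance minus the deviance at the posterior mean and the deviance is D = -2 ln L. A minimal sketch in Python (the toy Gaussian data set and the analytic posterior draws are illustrative assumptions, not anything from the paper):

```python
import numpy as np

def aic(max_loglike, k):
    """Akaike Information Criterion: AIC = -2 ln L_max + 2k."""
    return -2.0 * max_loglike + 2.0 * k

def bic(max_loglike, k, n):
    """Bayesian Information Criterion: BIC = -2 ln L_max + k ln N."""
    return -2.0 * max_loglike + k * np.log(n)

def dic(loglikes, loglike_at_mean):
    """Deviance Information Criterion from posterior samples:
    DIC = mean deviance + p_D, with p_D = mean deviance - deviance
    at the posterior mean (the effective number of parameters)."""
    dev = -2.0 * np.asarray(loglikes)      # deviance of each posterior sample
    mean_dev = dev.mean()
    p_d = mean_dev - (-2.0 * loglike_at_mean)
    return mean_dev + p_d

# Toy example: Gaussian data with unknown mean and known unit variance.
rng = np.random.default_rng(0)
data = rng.normal(0.3, 1.0, size=100)

def loglike(mu):
    return -0.5 * np.sum((data - mu) ** 2) - 0.5 * len(data) * np.log(2 * np.pi)

# "Posterior samples" of mu, drawn here from the known analytic posterior
# (in practice these would come from an MCMC chain).
samples = rng.normal(data.mean(), 1.0 / np.sqrt(len(data)), size=5000)
lls = np.array([loglike(m) for m in samples])

print("AIC:", aic(loglike(data.mean()), k=1))           # MLE of mu is the sample mean
print("BIC:", bic(loglike(data.mean()), k=1, n=len(data)))
print("DIC:", dic(lls, loglike(samples.mean())))
```

For this one-parameter model p_D comes out close to 1, illustrating how the DIC recovers the effective parameter count directly from the samples rather than from a nominal count k.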
Can new cosmic physics be uncovered through tensions amongst data sets? Tensions in parameter determinations amongst different types of cosmological observation, especially the 'Hubble tension' between probes of the expansion rate, have been invoked as possible indicators of new physics, requiring extension of the ΛCDM paradigm to resolve. Within a fully Bayesian framework, we show that the standard tension metric gives only part of the updating of model probabilities, supplying a data co-dependence term that must be combined with the Bayes factors of individual data sets. This shows that, on its own, a reduction of data set tension under an extension to ΛCDM is insufficient to demonstrate that the extended model is favoured. Any analysis that claims evidence for new physics solely on the basis of alleviating data set tensions should be considered incomplete and suspect. We describe the implications of our results for the interpretation of the Hubble tension.
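A widely used Bayesian tension statistic of the kind discussed above is the evidence ratio R = Z_AB / (Z_A Z_B), which compares the joint evidence of two data sets to the product of their separate evidences; R << 1 signals tension. A minimal numerical sketch for two direct measurements of a single parameter (the measurement values, uncertainties, and prior below are illustrative assumptions, not numbers from the paper):

```python
import numpy as np

def gauss(x, mean, sigma):
    """Normal density N(x; mean, sigma)."""
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def evidence_ratio(d_a, s_a, d_b, s_b, prior_mean=70.0, prior_sigma=20.0):
    """R = Z_AB / (Z_A * Z_B) for two Gaussian measurements of one
    parameter under a Gaussian prior; computed by brute-force quadrature."""
    mu = np.linspace(prior_mean - 7 * prior_sigma,
                     prior_mean + 7 * prior_sigma, 200001)
    dx = mu[1] - mu[0]
    prior = gauss(mu, prior_mean, prior_sigma)
    la, lb = gauss(mu, d_a, s_a), gauss(mu, d_b, s_b)
    z_a = np.sum(la * prior) * dx          # evidence of data set A alone
    z_b = np.sum(lb * prior) * dx          # evidence of data set B alone
    z_ab = np.sum(la * lb * prior) * dx    # joint evidence
    return z_ab / (z_a * z_b)

# Illustrative measurements of one parameter (hypothetical values).
print("consistent pair: R =", evidence_ratio(73.0, 1.0, 72.8, 0.6))
print("discrepant pair: R =", evidence_ratio(73.0, 1.0, 67.5, 0.6))
```

The abstract's point can be seen in this decomposition: R is only the co-dependence factor in the full model-probability update, so a model extension that raises R can still lose overall if it degrades the individual Bayes factors Z_A and Z_B.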
How many cosmological parameters?
Liddle, Andrew R.
Monthly Notices of the Royal Astronomical Society, July 2004, Volume 351, Issue 3.
Journal article, peer-reviewed, open access.
Constraints on cosmological parameters depend on the set of parameters chosen to define the model that is compared with observational data. I use the Akaike and Bayesian information criteria to carry out cosmological model selection, in order to determine the parameter set providing the preferred fit to the data. Applying the information criteria to the current cosmological data sets indicates, for example, that spatially flat models are statistically preferred to closed models, and that possible running of the spectral index has lower significance than inferred from its confidence limits. I also discuss some problems of statistical assessment arising from there being a large number of 'candidate' cosmological parameters that can be investigated for possible cosmological implications, and argue that 95 per cent confidence is too low a threshold to identify robustly the need for new parameters in model fitting. The best present description of cosmological data uses a scale-invariant (n = 1) spectrum of Gaussian adiabatic perturbations in a spatially flat Universe, with the cosmological model requiring only five fundamental parameters to specify it fully.
The abundance of cosmological data becoming available means that a wider range of cosmological models are testable than ever before. However, an important distinction must be made between parameter fitting and model selection. While parameter fitting simply determines how well a model fits the data, model selection statistics, such as the Bayesian evidence, are now necessary to choose between these different models, and in particular to assess the need for new parameters. We implement a new evidence algorithm known as nested sampling, which combines accuracy, generality of application, and computational feasibility, and we apply it to some cosmological data sets and models. We find that a five-parameter model with a Harrison-Zel'dovich initial spectrum is currently preferred.
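The core of nested sampling, the evidence algorithm described in this abstract, fits in a few dozen lines: repeatedly discard the lowest-likelihood "live point", credit its likelihood with the prior-volume shell it occupied, and replace it with a new prior draw above that likelihood threshold. The sketch below is only schematic (it uses inefficient rejection sampling from the full prior, and a toy one-parameter problem with illustrative numbers), not the implementation used in the paper:

```python
import numpy as np

def log_evidence_ns(loglike, prior_sample, nlive=200, tol=1e-2, rng=None):
    """Minimal nested-sampling estimate of ln Z = ln \u222b L dX, where X is
    the prior volume, using the standard X_i ~ exp(-i / nlive) shrinkage."""
    rng = rng or np.random.default_rng(1)
    live = [prior_sample(rng) for _ in range(nlive)]
    live_ll = np.array([loglike(p) for p in live])
    log_z, log_x = -np.inf, 0.0
    i = 0
    while True:
        worst = int(np.argmin(live_ll))
        log_x_new = -(i + 1) / nlive
        # width of the prior-volume shell assigned to the discarded point
        log_w = np.log(np.exp(log_x) - np.exp(log_x_new))
        log_z = np.logaddexp(log_z, live_ll[worst] + log_w)
        log_x = log_x_new
        # stop once the remaining live points cannot change Z appreciably
        if live_ll.max() + log_x < log_z + np.log(tol):
            break
        # replace the worst point: rejection-sample the prior above its L
        while True:
            cand = prior_sample(rng)
            cll = loglike(cand)
            if cll > live_ll[worst]:
                live[worst], live_ll[worst] = cand, cll
                break
        i += 1
    # the surviving live points fill the final prior volume
    log_z = np.logaddexp(log_z,
                         np.logaddexp.reduce(live_ll) - np.log(nlive) + log_x)
    return log_z

# Toy problem: Gaussian likelihood (sigma = 0.3) with a uniform prior on [-1, 1].
sigma = 0.3
loglike = lambda t: -0.5 * (t / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
prior_sample = lambda rng: rng.uniform(-1.0, 1.0)
# Analytic answer: ln Z is close to ln(0.5), since the prior density is 1/2
# and the Gaussian integrates to nearly 1 over [-1, 1].
print("ln Z =", log_evidence_ns(loglike, prior_sample))
```

Production codes replace the rejection-sampling step with a constrained sampler (e.g. ellipsoidal or slice sampling), which is what makes the method computationally feasible in many dimensions.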
The XMM Cluster Survey (XCS) is a serendipitous search for galaxy clusters using all publicly available data in the XMM-Newton Science Archive. Its main aims are to measure cosmological parameters and trace the evolution of X-ray scaling relations. In this paper we present the first data release from the XMM Cluster Survey (XCS-DR1). This consists of 503 optically confirmed, serendipitously detected, X-ray clusters. Of these clusters, 256 are new to the literature and 357 are new X-ray discoveries. We present 463 clusters with a redshift estimate (0.06 < z < 1.46), including 261 clusters with spectroscopic redshifts. The remainder have photometric redshifts. In addition, we have measured X-ray temperatures (T_X) for 401 clusters (0.4 < T_X < 14.7 keV). We highlight seven interesting subsamples of XCS-DR1 clusters: (i) 10 clusters at high redshift (z > 1.0, including a new spectroscopically confirmed cluster at z = 1.01); (ii) 66 clusters with high T_X (> 5 keV); (iii) 130 clusters/groups with low T_X (< 2 keV); (iv) 27 clusters with measured T_X values in the Sloan Digital Sky Survey (SDSS) 'Stripe 82' co-add region; (v) 77 clusters with measured T_X values in the Dark Energy Survey region; (vi) 40 clusters detected with sufficient counts to permit mass measurements (under the assumption of hydrostatic equilibrium); (vii) 104 clusters that can be used for applications such as the derivation of cosmological parameters and the measurement of cluster scaling relations. The X-ray analysis methodology used to construct and analyse the XCS-DR1 cluster sample has been presented in a companion paper, Lloyd-Davies et al.