The two-point correlation function of the galaxy distribution is a key cosmological observable that allows us to constrain the dynamical and geometrical state of our Universe. To measure the correlation function we need to know both the galaxy positions and the expected galaxy density field. The expected field is commonly specified using a Monte Carlo sampling of the volume covered by the survey and, to minimize additional sampling errors, this random catalog has to be much larger than the data catalog. Correlation function estimators compare data–data pair counts to data–random and random–random pair counts, where random–random pairs usually dominate the computational cost. Future redshift surveys will deliver spectroscopic catalogs of tens of millions of galaxies. Given the large number of random objects required to guarantee sub-percent accuracy, it is of paramount importance to improve the efficiency of the algorithm without degrading its precision. We show both analytically and numerically that splitting the random catalog into a number of subcatalogs of the same size as the data catalog when calculating random–random pairs and excluding pairs across different subcatalogs provides the optimal error at fixed computational cost. For a random catalog fifty times larger than the data catalog, this reduces the computation time by a factor of more than ten without affecting estimator variance or bias.
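The splitting scheme can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the brute-force counter, catalog sizes, and bin choices below are all assumptions made for demonstration.

```python
import numpy as np

def pair_counts(points, bins):
    """Naive brute-force histogram of pair separations (each pair counted once)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return np.histogram(d[np.triu_indices(len(points), k=1)], bins=bins)[0]

def rr_split(randoms, n_split, bins):
    """RR counts from n_split disjoint subcatalogs, skipping cross pairs.

    Cost drops from O(M^2) to roughly O(M^2 / n_split) for M random points,
    at the price of counting fewer (but still unbiased) pairs."""
    subs = np.array_split(randoms, n_split)
    counts = sum(pair_counts(s, bins) for s in subs)
    n_pairs = sum(len(s) * (len(s) - 1) // 2 for s in subs)
    return counts / n_pairs  # normalized RR, comparable to full-catalog RR

rng = np.random.default_rng(0)
randoms = rng.uniform(0.0, 1.0, size=(500, 3))  # toy random catalog in a unit box
bins = np.linspace(0.0, 0.5, 6)
rr = rr_split(randoms, 10, bins)
```

In the paper's setting, a random catalog fifty times the data size split into fifty data-sized subcatalogs cuts the RR cost by roughly the split factor, which is where the overall factor-of-ten speed-up comes from.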
We present two novel methods for the estimation of the angular power spectrum of cosmic microwave background (CMB) anisotropies. We assume an absolute CMB experiment with arbitrary asymmetric beams and arbitrary sky coverage. The methods differ from earlier ones in that the power spectrum is estimated directly from the time-ordered data, without first compressing the data into a sky map, and they take into account the effect of asymmetric beams. In particular, they correct the beam-induced leakage from temperature to polarization. The methods are applicable to a case where part of the sky has been masked out to remove foreground contamination, leaving a pure CMB signal but incomplete sky coverage. The first method (deconvolution quadratic maximum likelihood) is derived as the optimal quadratic estimator, which simultaneously yields an unbiased spectrum estimate and minimizes its variance. We successfully apply it to multipoles up to l = 200. The second method is derived as a weak-signal approximation from the first one. It yields an unbiased estimate for the full multipole range, but relaxes the requirement of minimal variance. We validate the methods with simulations for the 70 GHz channel of the Planck surveyor, and demonstrate that we are able to correct the beam effects in the TT, EE, BB and TE spectra up to multipole l = 1500. Together, the two methods cover the complete multipole range with no gap in between.
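The structure of a quadratic maximum-likelihood estimator can be illustrated in a low-dimensional toy analogue (this is not the deconvolution estimator of the paper; the 16-sample "sky", the two band templates, and all numbers are made up for demonstration). Band-power estimates are built from quadratics q_b = yᵀE_b y, debiased for noise, and decorrelated with the Fisher matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
n, n_real = 16, 3000
V = np.linalg.qr(rng.normal(size=(n, n)))[0]          # orthonormal mode basis
S = [V[:, :8] @ V[:, :8].T, V[:, 8:] @ V[:, 8:].T]    # band covariance templates
C_true = np.array([2.0, 0.5])                         # band powers to recover
N = 0.1 * np.eye(n)                                   # white-noise covariance

Ci = np.linalg.inv(S[0] + S[1] + N)                   # fiducial weighting (C_fid = 1)
E = [0.5 * Ci @ Sb @ Ci for Sb in S]                  # estimator matrices
F = np.array([[0.5 * np.trace(Ci @ Sa @ Ci @ Sb) for Sb in S] for Sa in S])
noise_bias = np.array([np.trace(Eb @ N) for Eb in E])

est = np.zeros(2)
for _ in range(n_real):
    y = (C_true[0] ** 0.5 * V[:, :8] @ rng.normal(size=8)
         + C_true[1] ** 0.5 * V[:, 8:] @ rng.normal(size=8)
         + 0.1 ** 0.5 * rng.normal(size=n))           # signal + white noise
    q = np.array([y @ Eb @ y for Eb in E])            # quadratic band estimates
    est += np.linalg.solve(F, q - noise_bias)         # debias and decorrelate
est /= n_real                                         # Monte Carlo average
```

The Monte Carlo average converges to the input band powers regardless of the fiducial weighting, which is the unbiasedness property the abstract refers to; minimal variance is achieved when the fiducial model matches the truth.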
Madam - a map-making method for CMB experiments
Keihänen, E.; Kurki-Suonio, H.; Poutanen, T.
Monthly Notices of the Royal Astronomical Society, 06/2005, Volume 360, Issue 1
Journal Article · Peer-reviewed · Open access
We present a new map-making method for cosmic microwave background (CMB) measurements. The method is based on the destriping technique, but it also utilizes information about the noise spectrum. The low-frequency component of the instrument noise stream is modelled as a superposition of a set of simple base functions, whose amplitudes are determined by means of maximum-likelihood analysis, involving the covariance matrix of the amplitudes. We present simulation results with 1/f noise and show a reduction in the residual noise with respect to ordinary destriping. This study is related to Planck Low Frequency Instrument (LFI) activities.
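A minimal destriping sketch follows (a toy setup with uniform baseline length and white-noise weighting, not the Madam code itself): the TOD is modelled as y = P m + F a + n, where P points samples at sky pixels, F holds one constant offset per chunk of samples (the simplest base function), and the amplitudes a are solved for by maximum likelihood after projecting out the sky.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_samp, base_len = 20, 400, 40
pix = rng.integers(0, n_pix, n_samp)                  # pointing: sample -> pixel
sky = rng.normal(size=n_pix)                          # toy sky map
a_true = rng.normal(size=n_samp // base_len)          # true baseline amplitudes
F = np.kron(np.eye(n_samp // base_len), np.ones((base_len, 1)))
y = sky[pix] + F @ a_true + 0.01 * rng.normal(size=n_samp)

# Project out the sky, then solve the ML equations for the amplitudes:
# (F^T Z F) a = F^T Z y with Z = I - P (P^T P)^{-1} P^T.
P = np.zeros((n_samp, n_pix))
P[np.arange(n_samp), pix] = 1.0
Z = np.eye(n_samp) - P @ np.linalg.inv(P.T @ P) @ P.T
a_hat = np.linalg.lstsq(Z @ F, Z @ y, rcond=None)[0]  # defined up to a constant

# Bin the baseline-cleaned TOD into the destriped map
hits = np.bincount(pix, minlength=n_pix)
m = np.bincount(pix, weights=y - F @ a_hat, minlength=n_pix) / hits
```

Because a constant baseline is indistinguishable from a shift of the whole map, the amplitudes (and the map) are recovered only up to an overall offset.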
The European Space Agency’s Planck satellite was launched on 14 May 2009, and surveyed the sky stably and continuously between August 2009 and October 2013. The scientific analysis of the Planck data requires understanding the optical response of its detectors, which originates partly from a physical model of the optical system. In this paper, we use in-flight measurements of planets within ∼1° of boresight to estimate the geometrical properties of the telescope and focal plane. First, we use observed grating lobes to measure the amplitude of mechanical dimpling of the reflectors, which is caused by the hexagonal honeycomb structure of the carbon fibre reflectors. We find that the dimpling amplitude on the two reflectors is larger than expected from the ground, by 20% on the secondary and at least a factor of 2 on the primary. Second, we use the main beam shapes of 26 detectors to investigate the alignment of the various elements of the optical system, as well as the large-scale deformations of the reflectors. We develop a metric to guide an iterative fitting scheme, and are able to determine a new geometric model that fits the in-flight measurements better than the pre-flight prediction according to this metric. The new alignment model is within the mechanical tolerances expected from the ground, with some specific but minor exceptions. We find that the reflectors contain large-scale sinusoidal deformations most probably related to the mechanical supports. In spite of the better overall fit, the new model still does not fit the beam measurements at a level compatible with the needs of cosmological analysis. Nonetheless, future analysis of the Planck data would benefit from taking into account some of the features of the new model. The analysis described here exemplifies some of the limitations of in-flight retrieval of the geometry of an optical system similar to that of Planck, and provides useful information for similar efforts in future experiments.
We present a system-level description of the Low Frequency Instrument (LFI) considered as a differencing polarimeter, and evaluate its expected performance. The LFI is one of the two instruments on board the ESA Planck mission to study the cosmic microwave background. It consists of a set of 22 radiometers sensitive to linear polarisation, arranged in orthogonally-oriented pairs connected to 11 feed horns operating at 30, 44 and 70 GHz. In our analysis, the generic Jones and Mueller-matrix formulations for polarimetry are adapted to the special case of the LFI. Laboratory measurements of flight components are combined with optical simulations of the telescope to investigate the values and uncertainties in the system parameters affecting polarisation response. Methods of correcting residual systematic errors are also briefly discussed. The LFI has beam-integrated polarisation efficiency >99% for all detectors, with uncertainties below 0.1%. Indirect assessment of polarisation position angles suggests that uncertainties are generally less than 0.5°, and this will be checked in flight using observations of the Crab nebula. Leakage of total intensity into the polarisation signal is generally well below the thermal noise level except for bright Galactic emission, where the dominant effect is likely to be spectral-dependent terms due to bandpass mismatch between the two detectors behind each feed, contributing typically 1–3% leakage of foreground total intensity. Comparable leakage from compact features occurs due to beam mismatch, but this averages to < 5 × 10⁻⁴ for large-scale emission. An inevitable feature of the LFI design is that the two components of the linear polarisation are recovered from elliptical beams which differ substantially in orientation. This distorts the recovered polarisation and its angular power spectrum, and several methods are being developed to correct the effect, both in the power spectrum and in the sky maps.
The LFI will return a high-quality measurement of the CMB polarisation, limited mainly by thermal noise. To meet our aspiration of measuring polarisation at the 1% level, further analysis of flight and ground data is required. We are still researching the most effective techniques for correcting subtle artefacts in polarisation; in particular the correction of bandpass mismatch effects is a formidable challenge, as it requires multi-band analysis to estimate the spectral indices that control the leakage.
Context. In the last decade, astronomers have found a new class of supernovae, called superluminous supernovae (SLSNe) due to their high peak luminosity and long light curves. These hydrogen-free explosions (SLSNe-I) can be seen to z ~ 4 and therefore offer the possibility of probing the distant Universe. Aims. We aim to investigate the possibility of detecting SLSNe-I using ESA’s Euclid satellite, scheduled for launch in 2020. In particular, we study the Euclid Deep Survey (EDS), which will provide a unique combination of area, depth and cadence over the mission. Methods. We estimated the redshift distribution of Euclid SLSNe-I using the latest information on their rates and spectral energy distributions, as well as known Euclid instrument and survey parameters, including the cadence and depth of the EDS. To estimate the uncertainties, we calculated their distribution with two different set-ups, namely optimistic and pessimistic, adopting different star-formation densities and rates. We also applied a standardization method to the peak magnitudes to create a simulated Hubble diagram to explore possible cosmological constraints. Results. We show that Euclid should detect approximately 140 high-quality SLSNe-I to z ~ 3.5 over the first five years of the mission (with an additional 70 if we lower our photometric classification criteria). This sample could revolutionize the study of SLSNe-I at z > 1 and open up their use as probes of star-formation rates, galaxy populations, and the interstellar and intergalactic medium. In addition, a sample of such SLSNe-I could improve constraints on a time-dependent dark energy equation of state, namely w(a), when combined with local SLSNe-I and the expected SN Ia sample from the Dark Energy Survey. Conclusions. We show that Euclid will observe hundreds of SLSNe-I for free. These luminous transients will be in the Euclid data stream, and we should prepare now to identify them, as they offer a new probe of the high-redshift Universe for both astrophysics and cosmology.
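The Hubble-diagram step can be illustrated with a toy calculation (not the paper's standardization method; the absolute magnitude M = -21.7 and the cosmological parameters below are illustrative assumptions): a standardized peak magnitude plus a flat ΛCDM distance modulus µ(z) = 5 log₁₀(D_L/Mpc) + 25 gives the apparent peak magnitude expected at each redshift.

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def lum_dist(z, H0=70.0, Om=0.3):
    """Luminosity distance in Mpc for flat LambdaCDM (trapezoid integral)."""
    zs = np.linspace(0.0, z, 2048)
    inv_E = 1.0 / np.sqrt(Om * (1.0 + zs) ** 3 + (1.0 - Om))
    Dc = C_KMS / H0 * np.sum(0.5 * (inv_E[1:] + inv_E[:-1]) * np.diff(zs))
    return (1.0 + z) * Dc  # comoving distance -> luminosity distance

def distance_modulus(z):
    return 5.0 * np.log10(lum_dist(z)) + 25.0

M = -21.7  # illustrative standardized absolute peak magnitude for a SLSN-I
for z in (0.5, 1.0, 2.0, 3.0):
    m_peak = M + distance_modulus(z)  # apparent peak magnitude on the Hubble diagram
    print(f"z = {z}: mu = {distance_modulus(z):.2f}, m_peak = {m_peak:.2f}")
```

Cosmological constraints then come from comparing observed standardized magnitudes against µ(z) curves for different dark energy equations of state.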
We present a new, updated version of the EuclidEmulator (called EuclidEmulator2), a fast and accurate predictor for the nonlinear correction of the matter power spectrum. Per-cent-level accurate emulation is now supported in the eight-dimensional parameter space of w0waCDM+∑mν models between redshift z = 0 and z = 3 for spatial scales within the range 0.01 h Mpc⁻¹ ≤ k ≤ 10 h Mpc⁻¹. In order to achieve this level of accuracy, we have had to improve the quality of the underlying N-body simulations used as training data: (i) we use self-consistent linear evolution of non-dark-matter species such as massive neutrinos, photons, dark energy, and the metric field; (ii) we perform the simulations in the so-called N-body gauge, which allows one to interpret the results in the framework of general relativity; (iii) we run over 250 high-resolution simulations with 3000³ particles in boxes of 1 (h⁻¹ Gpc)³ volume based on paired-and-fixed initial conditions; and (iv) we provide a resolution correction that can be applied to emulated results as a post-processing step in order to drastically reduce systematic biases on small scales due to residual resolution effects in the simulations. We find that the inclusion of the dynamical dark energy parameter wa significantly increases the complexity and expense of creating the emulator. The high fidelity of EuclidEmulator2 is tested in various comparisons against N-body simulations as well as alternative fast predictors such as HALOFIT, HMCode, and CosmicEmu. A blind test is successfully performed against the Euclid Flagship v2.0 simulation. Nonlinear correction factors emulated with EuclidEmulator2 are accurate at the level of 1 per cent or better for 0.01 h Mpc⁻¹ ≤ k ≤ 10 h Mpc⁻¹ and z ≤ 3 compared to high-resolution dark-matter-only simulations. EuclidEmulator2 is publicly available at https://github.com/miknab/EuclidEmulator2.
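The general idea behind spectrum emulation can be sketched generically (this is NOT the EuclidEmulator2 algorithm or its API; the toy boost model, the one-dimensional Ωm grid, and the PCA-plus-interpolation scheme are all illustrative stand-ins): compress a grid of training boost factors B(k) = P_nl/P_lin with PCA and interpolate the component weights across parameter space.

```python
import numpy as np

k = np.logspace(-2, 1, 64)             # wavenumbers in h/Mpc
om_train = np.linspace(0.25, 0.35, 9)  # toy Omega_m training grid

def boost(om):
    """Stand-in for an N-body-calibrated nonlinear boost factor (made up)."""
    return 1.0 + om * k / (1.0 + k) + om ** 2 * np.log1p(k)

train = np.array([boost(om) for om in om_train])
mean = train.mean(axis=0)
U, Sv, Vt = np.linalg.svd(train - mean, full_matrices=False)
n_pc = 2                               # keep two principal components
weights = U[:, :n_pc] * Sv[:n_pc]      # PC weights of each training model

def emulate(om):
    """Interpolate PC weights to a new parameter value, then reconstruct B(k)."""
    w = np.array([np.interp(om, om_train, weights[:, i]) for i in range(n_pc)])
    return mean + w @ Vt[:n_pc]

err = np.max(np.abs(emulate(0.301) / boost(0.301) - 1.0))  # off-grid test point
```

An emulator trades the cost of a new N-body run for a sub-millisecond reconstruction, which is why its accuracy budget is dominated by the quality of the training simulations, as the abstract emphasizes.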
The destriping technique is a viable tool for removing different kinds of systematic effects in CMB-related experiments. It has already been proven to work for gain instabilities that produce the so-called 1/f noise and for periodic fluctuations due to, e.g., thermal instability. Both effects, when coupled to the observing strategy, result in stripes on the observed sky region. Here we present a maximum-likelihood approach to this type of technique and also provide a useful generalization. As a working case we consider a data set similar to what the Planck satellite will produce in its Low Frequency Instrument (LFI). We compare our method to those presented in the literature and find some improvement in performance. Our approach is also more general and allows for different base functions to be used when fitting the systematic effect under consideration. We study the effect of increasing the number of these base functions on the quality of signal cleaning and reconstruction. This study is related to Planck LFI activities.
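The role of the base functions can be shown in isolation (an illustrative sketch with white-noise weighting and a synthetic drift, not the paper's pipeline): model the slow drift within one pointing period as a superposition of Legendre polynomials and fit the amplitudes by least squares. Increasing the number of base functions captures more of the low-frequency noise.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(-1.0, 1.0, 500)                   # time within one pointing period
drift = 0.8 * t + 0.5 * t ** 2 - 0.3 * t ** 3     # synthetic low-frequency noise
y = drift + 0.05 * rng.normal(size=t.size)        # drift + white noise

resids = []
for n_base in (1, 2, 4):
    F = np.polynomial.legendre.legvander(t, n_base - 1)  # base-function matrix
    amp, *_ = np.linalg.lstsq(F, y, rcond=None)          # ML amplitudes (white noise)
    resids.append((y - F @ amp).std())                   # residual after cleaning
```

With a single base function this reduces to ordinary destriping with constant offsets; the residual drops toward the white-noise floor as the base set grows.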
Making sky maps from Planck data
Ashdown, M. A. J.; Baccigalupi, C.; Balbi, A.; ...
Astronomy and Astrophysics (Berlin), 05/2007, Volume 467, Issue 2
Journal Article · Peer-reviewed · Open access
Aims. We compare the performance of multiple codes written by different groups for making polarized maps from Planck-sized, all-sky cosmic microwave background (CMB) data. Three of the codes are based on a destriping algorithm; the other three are implementations of an optimal maximum-likelihood algorithm. Methods. Time-ordered data (TOD) were simulated using the Planck Level-S simulation pipeline. Several cases of temperature-only data were run to test that the codes could handle large datasets, and to explore effects such as the precision of the pointing data. Based on these preliminary results, TOD were generated for a set of four 217 GHz detectors (the minimum number required to produce I, Q, and U maps) under two different scanning strategies, with and without noise. Results. Following correction of various problems revealed by the early simulations, all codes were able to handle the large data volume that Planck will produce. Differences in the maps produced are small but noticeable; differences in computing resources are large.
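The two algorithm families can be contrasted in a toy setting (purely illustrative; real codes never build these dense matrices, and the noise model below is an assumption): simple binning versus the generalized-least-squares, maximum-likelihood map m = (PᵀN⁻¹P)⁻¹PᵀN⁻¹y with a correlated noise covariance N.

```python
import numpy as np

rng = np.random.default_rng(3)
n_pix, n_samp = 10, 200
pix = rng.integers(0, n_pix, n_samp)              # pointing: sample -> pixel
P = np.zeros((n_samp, n_pix))
P[np.arange(n_samp), pix] = 1.0
sky = rng.normal(size=n_pix)                      # toy sky map

# Correlated ("1/f-like") noise: exponential covariance plus a white floor
t = np.arange(n_samp)
N = 0.05 * np.exp(-np.abs(t[:, None] - t[None, :]) / 50.0) + 0.01 * np.eye(n_samp)
y = P @ sky + np.linalg.cholesky(N) @ rng.normal(size=n_samp)

# Simple binned map: average the samples falling in each pixel
binned = np.bincount(pix, weights=y, minlength=n_pix) / np.bincount(pix, minlength=n_pix)

# Maximum-likelihood (GLS) map: down-weight the correlated noise modes
Ninv = np.linalg.inv(N)
ml = np.linalg.solve(P.T @ Ninv @ P, P.T @ Ninv @ y)
```

The computational gap the paper reports comes from how each family avoids the dense N⁻¹ above: destripers compress the correlated noise into a few baseline amplitudes, while ML codes solve the full GLS system iteratively.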