The two-point correlation function of the galaxy distribution is a key cosmological observable that allows us to constrain the dynamical and geometrical state of our Universe. To measure the correlation function we need to know both the galaxy positions and the expected galaxy density field. The expected field is commonly specified using a Monte-Carlo sampling of the volume covered by the survey and, to minimize additional sampling errors, this random catalog has to be much larger than the data catalog. Correlation function estimators compare data–data pair counts to data–random and random–random pair counts, where random–random pairs usually dominate the computational cost. Future redshift surveys will deliver spectroscopic catalogs of tens of millions of galaxies. Given the large number of random objects required to guarantee sub-percent accuracy, it is of paramount importance to improve the efficiency of the algorithm without degrading its precision. We show both analytically and numerically that splitting the random catalog into a number of subcatalogs of the same size as the data catalog when calculating random–random pairs and excluding pairs across different subcatalogs provides the optimal error at fixed computational cost. For a random catalog fifty times larger than the data catalog, this reduces the computation time by a factor of more than ten without affecting estimator variance or bias.
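As a concrete illustration of the split-random counting scheme described above, the sketch below implements a Landy-Szalay estimator in which RR pairs are accumulated only within each random subcatalog. This is a minimal sketch, assuming 3D Cartesian point sets and scipy's cKDTree for pair counting; the function names and normalization details are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def pair_counts(a, b, r_edges):
    """Ordered pair counts between point sets a and b, per radial bin.

    r_edges must start above zero so that zero-separation self-pairs
    (when a and b are the same set) fall outside the first bin.
    """
    cum = cKDTree(a).count_neighbors(cKDTree(b), r_edges)
    return np.diff(cum).astype(float)

def xi_ls_split(data, randoms, r_edges, n_split):
    """Landy-Szalay xi(r) with the random catalog split into n_split
    subcatalogs; RR pairs crossing subcatalogs are never computed."""
    nd, nr = len(data), len(randoms)
    dd = pair_counts(data, data, r_edges) / (nd * (nd - 1))
    dr = pair_counts(data, randoms, r_edges) / (nd * nr)
    rr = np.zeros(len(r_edges) - 1)
    for sub in np.array_split(randoms, n_split):
        ns = len(sub)
        # each subcatalog's normalized RR enters with equal weight
        rr += pair_counts(sub, sub, r_edges) / (ns * (ns - 1) * n_split)
    return (dd - 2.0 * dr + rr) / rr
```

With a random catalog fifty times larger than the data, choosing n_split = 50 makes each subcatalog comparable in size to the data catalog, which is the configuration the text identifies as optimal.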
Context.
Future weak lensing surveys, such as the Euclid mission, will attempt to measure the shapes of billions of galaxies in order to derive cosmological information. These surveys will attain very low levels of statistical error, and systematic errors must be extremely well controlled. In particular, the point spread function (PSF) must be estimated using stars in the field, and recovered with high accuracy.
Aims.
The aims of this paper are twofold. Firstly, we took steps toward a nonparametric method for recovering the PSF field, namely finding the correct PSF at the position of any galaxy in the field, applicable to Euclid. Our approach relies solely on the data, as opposed to parametric methods that make use of our knowledge of the instrument. Secondly, we studied the impact of imperfect PSF models on the shape measurement of galaxies themselves, and whether common assumptions about this impact hold true in a Euclid scenario.
Methods.
We extended the recently proposed resolved components analysis approach, which performs super-resolution on a field of under-sampled observations of a spatially varying, image-valued function. We added a spatial interpolation component to the method, making it a true 2-dimensional PSF model. We compared our approach to PSFEx, then quantified the impact of PSF recovery errors on galaxy shape measurements through image simulations.
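To make the spatial-interpolation component concrete, here is a hedged sketch of the generic idea: given a few learned PSF components (eigen-images) and their per-star coefficients, interpolate the coefficients to an arbitrary field position and resynthesize the PSF there. The RBF interpolant and all names are assumptions for illustration; they stand in for, and do not reproduce, the constrained matrix factorization used by resolved components analysis.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def psf_at_position(star_xy, star_coeffs, components, target_xy):
    """Estimate the PSF at target_xy.

    star_xy:      (n_star, 2) star positions in the field
    star_coeffs:  (n_star, n_comp) component weights fitted at each star
    components:   (n_comp, ny, nx) learned eigen-images
    target_xy:    (2,) position of, e.g., a galaxy
    """
    interp = RBFInterpolator(star_xy, star_coeffs)   # vector-valued interpolant
    w = interp(np.atleast_2d(target_xy))[0]          # (n_comp,) weights at target
    psf = np.tensordot(w, components, axes=1)        # weighted sum of eigen-images
    return psf / psf.sum()                           # normalize to unit flux
```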
Results.
Our approach yields an improvement over PSFEx in terms of the PSF model and of observed galaxy shape errors, though it is at present far from reaching the required Euclid accuracy. We also find that the usual formalism used for the propagation of PSF model errors to weak lensing quantities no longer holds in the case of a Euclid-like PSF. In particular, different shape measurement approaches can react differently to the same PSF modeling errors.
The Planck satellite in-orbit mission ended in October 2013. Between the end of Low Frequency Instrument (LFI) routine mission operations and the satellite decommissioning, a dedicated test was also performed to measure the Planck telescope emissivity. The scope of the test was twofold: i) to provide, for the first time in flight, a direct measurement of the telescope emissivity; and ii) to evaluate possible degradation of the emissivity by comparing data taken in flight at the end of the mission with those taken during the ground telescope characterization. The emissivity was determined by heating the Planck telescope and disentangling the resulting system temperature excess measured by the LFI radiometers. Results show End of Life (EOL) performance in good agreement with the results from the ground optical tests and from in-flight indirect estimations made during the Commissioning and Performance Verification (CPV) phase. Methods and results are presented and discussed.
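The measurement principle lends itself to a one-line illustration: if the telescope contributes ε·T_tel to the radiometer system temperature, then heating the mirrors raises T_sys roughly linearly with T_tel, and the emissivity ε is the slope of a straight-line fit. The toy numbers and the simple linear model below are assumptions for illustration, not the paper's full analysis.

```python
import numpy as np

def emissivity_from_heating(t_tel, t_sys):
    """Fit T_sys = eps * T_tel + const; the slope estimates the emissivity."""
    eps, _offset = np.polyfit(t_tel, t_sys, 1)
    return eps

# Synthetic readings (temperatures in K), generated with eps = 0.005:
t_tel = np.array([40.0, 42.0, 44.0, 46.0])
t_sys = 35.0 + 0.005 * t_tel
print(emissivity_from_heating(t_tel, t_sys))   # recovers ~0.005
```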
We present the calibration and scientific performance parameters of the Planck Low Frequency Instrument (LFI) measured during the ground cryogenic test campaign. These parameters characterise the instrument response and constitute our optimal pre-launch knowledge of the LFI scientific performance. The LFI shows excellent 1/f stability and rejection of instrumental systematic effects; its measured noise performance shows that LFI is the most sensitive instrument of its kind. The calibration parameters will be updated during flight operations until the end of the mission.
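The 1/f stability quoted above is usually characterized by fitting the standard radiometer noise model P(f) = σ²[1 + (f_knee/f)^α] to the power spectrum of a time stream; the knee frequency f_knee marks where the 1/f noise equals the white-noise level. The sketch below shows this generic fit under that assumed model; the helper names and starting values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.signal import welch

def noise_model(f, sigma2, f_knee, alpha):
    """White noise plus a 1/f component with knee frequency f_knee."""
    return sigma2 * (1.0 + (f_knee / f) ** alpha)

def fit_knee(timestream, fs):
    """Estimate (sigma^2, f_knee, alpha) from a radiometer time stream."""
    f, p = welch(timestream, fs=fs, nperseg=4096)
    keep = f > 0                                   # drop the DC bin
    popt, _ = curve_fit(noise_model, f[keep], p[keep],
                        p0=[np.median(p[keep]), 0.05, 1.0])
    return popt
```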
We present simultaneous Planck, Swift, Fermi, and ground-based data for 105 blazars belonging to three samples with flux limits in the soft X-ray, hard X-ray, and γ-ray bands, with additional 5 GHz flux-density limits to ensure a good probability of a Planck detection. We compare our results to those of a companion paper presenting simultaneous Planck and multi-frequency observations of 104 radio-loud northern active galactic nuclei selected at radio frequencies. While we confirm several previous results, our unique data set allows us to demonstrate that the selection method strongly influences the results, producing biases that cannot be ignored. Almost all the BL Lac objects have been detected by the Fermi Large Area Telescope (LAT), whereas 30% to 40% of the flat-spectrum radio quasars (FSRQs) in the radio, soft X-ray, and hard X-ray selected samples are still below the γ-ray detection limit even after integrating 27 months of Fermi-LAT data. The radio to sub-millimetre spectral slope of blazars is quite flat, with ⟨α⟩ ~ 0 up to about 70 GHz, above which it steepens to ⟨α⟩ ~ −0.65. The BL Lacs have significantly flatter spectra than FSRQs at higher frequencies. The distribution of the rest-frame synchrotron peak frequency (ν_peak^S) in the spectral energy distribution (SED) of FSRQs is the same in all the blazar samples, with ⟨ν_peak^S⟩ = 10^(13.1 ± 0.1) Hz, while the mean inverse-Compton peak frequency, ⟨ν_peak^IC⟩, ranges from 10^21 to 10^22 Hz. The distributions of ν_peak^S and ν_peak^IC of BL Lacs are much broader and are shifted to higher energies than those of FSRQs; their shapes strongly depend on the selection method. The Compton dominance of blazars, defined as the ratio of the inverse-Compton to synchrotron peak luminosities, ranges from less than 0.2 to nearly 100, with only FSRQs reaching values larger than about 3. Its distribution is broad and depends strongly on the selection method, with γ-ray selected blazars peaking at ~7 or more and radio-selected blazars at values close to 1, implying that the common assumption that the blazar power budget is largely dominated by high-energy emission is a selection effect. A comparison of our multi-frequency data with theoretical predictions shows that simple homogeneous synchrotron self-Compton (SSC) models cannot explain the simultaneous SEDs of most of the γ-ray detected blazars in all samples. The SEDs of the blazars that were not detected by Fermi-LAT may instead be consistent with SSC emission. Our data challenge the correlation between bolometric luminosity and ν_peak^S predicted by the blazar sequence.
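For readers unfamiliar with the two quantities that recur above, this small worked example computes a spectral index α defined by S_ν ∝ ν^α and the Compton dominance as defined in the text; the input numbers are hypothetical, chosen only to land near the values quoted above.

```python
import numpy as np

def spectral_index(s1, s2, nu1, nu2):
    """alpha such that S_nu scales as nu**alpha between (nu1, s1) and (nu2, s2)."""
    return np.log(s2 / s1) / np.log(nu2 / nu1)

def compton_dominance(l_peak_ic, l_peak_syn):
    """Ratio of inverse-Compton to synchrotron peak luminosities."""
    return l_peak_ic / l_peak_syn

print(spectral_index(1.0, 0.45, 70e9, 217e9))  # ~ -0.7: steepened slope above 70 GHz
print(compton_dominance(3e46, 4e45))           # ~ 7.5: a gamma-ray-selected-like value
```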
In this paper we discuss the Planck-LFI in-flight calibration campaign. After a brief overview of the ground test campaigns, we describe in detail the calibration and performance verification (CPV) phase, carried out in space during and just after the cool-down of LFI. We discuss the functionality verification, the tuning of the front-end and warm electronics, the preliminary performance assessment, and the thermal susceptibility tests. The logic, sequence, goals, and results of the in-flight tests are discussed. All the calibration activities were successfully carried out, and the instrument response was comparable to that observed on the ground. For some channels, the in-flight tuning activity allowed us to significantly improve the noise performance.
Weak lensing, which is the deflection of light by matter along the line of sight, has proven to be an efficient method for constraining models of structure formation and revealing the nature of dark energy. So far, most weak-lensing studies have focused on the shear field, which can be measured directly from the ellipticity of background galaxies. However, within the context of forthcoming full-sky weak-lensing surveys such as Euclid, convergence maps (mass maps) offer an important advantage over shear fields in terms of cosmological exploitation. While they carry the same information, the lensing signal is more compressed in the convergence maps than in the shear field. This simplifies otherwise computationally expensive analyses, for instance, non-Gaussianity studies. However, the inversion of the non-local shear field requires accurate control of systematic effects caused by holes in the data field, field borders, shape noise, and the fact that the shear is not a direct observable (reduced shear). We present the two mass-inversion methods that are included in the official Euclid data-processing pipeline: the standard Kaiser & Squires method (KS), and a new mass-inversion method (KS+) that aims to reduce the information loss during the mass inversion. This new method is based on the KS method and includes corrections for mass-mapping systematic effects. The results of the KS+ method are compared to the original implementation of the KS method in its simplest form, using the Euclid Flagship mock galaxy catalogue. In particular, we estimate the quality of the reconstruction by comparing the two-point correlation functions and third- and fourth-order moments obtained from shear and convergence maps, and we analyse each systematic effect independently and simultaneously. We show that the KS+ method substantially reduces the errors on the two-point correlation function and moments compared to the KS method. In particular, we show that the errors introduced by the mass inversion on the two-point correlation of the convergence maps are reduced by a factor of about 5, while the errors on the third- and fourth-order moments are reduced by factors of about 2 and 10, respectively.
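As background for the comparison above, the standard KS inversion is a direct Fourier-space operation: the convergence κ follows from the shear (γ1, γ2) through the kernel κ̂ = [(k1² − k2²)γ̂1 + 2 k1 k2 γ̂2] / |k|². Below is a minimal sketch on a regular periodic grid; border, hole, and reduced-shear handling (the point of KS+) are deliberately omitted, and this is not the Euclid pipeline code.

```python
import numpy as np

def ks_inversion(gamma1, gamma2):
    """Standard Kaiser & Squires: shear maps -> convergence map (E mode)."""
    ny, nx = gamma1.shape
    kx = np.fft.fftfreq(nx)[np.newaxis, :]
    ky = np.fft.fftfreq(ny)[:, np.newaxis]
    ksq = kx**2 + ky**2
    ksq[0, 0] = 1.0                      # avoid division by zero at k = 0
    g1_hat = np.fft.fft2(gamma1)
    g2_hat = np.fft.fft2(gamma2)
    kappa_hat = ((kx**2 - ky**2) * g1_hat + 2.0 * kx * ky * g2_hat) / ksq
    kappa_hat[0, 0] = 0.0                # the mean convergence is unconstrained
    return np.real(np.fft.ifft2(kappa_hat))
```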