Component separation is one of the key stages of any modern cosmic microwave background data analysis pipeline. It is an inherently nonlinear procedure and typically involves a series of sequential solutions of linear systems with similar but not identical system matrices, derived for different data models of the same data set. Sequences of this type arise, for instance, in the maximization of the data likelihood with respect to foreground parameters or in the sampling of their posterior distribution, but they are also common in many other contexts. In this work we consider solving the component separation problem directly in the measurement (time) domain. This can have a number of important benefits over the more standard pixel-based methods, in particular if non-negligible time-domain noise correlations are present, as is commonly the case. The time-domain approach, however, implies significant computational effort because the full volume of the time-domain data set needs to be manipulated. To address this challenge, we propose and study efficient solvers adapted to time-domain component separation systems and their sequences, capable of capitalizing on information derived from previous solutions. This is achieved either by adapting the initial guess of the subsequent system or through so-called subspace recycling, which allows constructing progressively more efficient two-level preconditioners. In our numerical experiments, inspired by the likelihood maximization and likelihood sampling procedures, respectively, we report overall speed-ups over solving the systems independently by factors of nearly 7 and 5.
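The initial-guess adaptation described above can be sketched in a few lines. This is a toy illustration with randomly generated symmetric positive-definite matrices, not the paper's actual component separation systems: for a sequence of nearby systems, seeding conjugate gradients with the previous solution cuts the iteration count relative to restarting from zero.

```python
import numpy as np

def cg(A, b, x0, rtol=1e-8, maxiter=1000):
    """Textbook conjugate gradients; returns (solution, iterations used)."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    bnorm = np.linalg.norm(b)
    for k in range(maxiter):
        if np.sqrt(rs) <= rtol * bnorm:
            return x, k
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, maxiter

rng = np.random.default_rng(0)
n = 200
G = rng.standard_normal((n, n))
A0 = G @ G.T + n * np.eye(n)          # well-conditioned SPD "system matrix"
b = rng.standard_normal(n)

iters_cold, iters_warm = [], []
x_prev = np.zeros(n)
for k in range(5):
    # the k-th system in the sequence: a small perturbation of the previous one
    A = A0 + 0.01 * k * np.eye(n)
    _, n_cold = cg(A, b, np.zeros(n))   # cold start from zero
    x_prev, n_warm = cg(A, b, x_prev)   # warm start from the previous solution
    iters_cold.append(n_cold)
    iters_warm.append(n_warm)
```

After the first solve, the warm-started runs begin with a residual set only by the small change in the system matrix, so they need far fewer iterations. Subspace recycling goes further by reusing approximate eigenvectors from earlier solves to build a two-level preconditioner, which this sketch does not attempt.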
Estimation of the sky signal from sequences of time-ordered data is one of the key steps in cosmic microwave background (CMB) data analysis, commonly referred to as the map-making problem. Some of the most popular and general methods proposed for this problem involve solving generalised least-squares (GLS) equations with non-diagonal noise weights given by a block-diagonal matrix with Toeplitz blocks. In this work, we study new map-making solvers potentially suitable for applications to the largest anticipated data sets. They are based on iterative conjugate gradient (CG) approaches enhanced with novel, parallel, two-level preconditioners. We apply the proposed solvers to examples of simulated non-polarised and polarised CMB observations and a set of idealised scanning strategies with sky coverage ranging from nearly full sky down to small sky patches. We discuss in detail their implementation for massively parallel computational platforms and their performance for a broad range of parameters characterising the simulated data sets. We find that our best new solver can outperform carefully optimised standard solvers used today by a factor of as much as five in terms of the convergence rate and a factor of up to four in terms of the time to solution, without significantly increasing the memory consumption or the volume of inter-processor communication. The performance of the new algorithms is also more stable and robust, and less dependent on the specific characteristics of the analysed data set. We therefore conclude that the proposed approaches are well suited to successfully address the challenges posed by new and forthcoming CMB data sets.
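The GLS map-making equation referred to above, (Pᵀ N⁻¹ P) m = Pᵀ N⁻¹ d, can be illustrated with a minimal sketch. All sizes, the pointing, and the noise model here are toy assumptions; real solvers use iterative PCG (the diagonal of the system matrix gives the standard Jacobi-type preconditioner) rather than the dense solve shown.

```python
import numpy as np

rng = np.random.default_rng(1)
n_t, n_pix = 6000, 50                              # toy sizes (assumptions)
pix = rng.integers(0, n_pix, size=n_t)             # pointing: sample -> pixel
P = np.zeros((n_t, n_pix))
P[np.arange(n_t), pix] = 1.0                       # sparse 0/1 pointing matrix
sky = rng.standard_normal(n_pix)                   # input sky map
sigma = rng.uniform(0.5, 2.0, n_t)                 # per-sample noise rms
d = P @ sky + sigma * rng.standard_normal(n_t)     # time-ordered data
Ninv = 1.0 / sigma**2                              # diagonal (white) noise weights

lhs = (P * Ninv[:, None]).T @ P                    # P^T N^-1 P, the GLS system matrix
rhs = P.T @ (Ninv * d)                             # P^T N^-1 d
m = np.linalg.solve(lhs, rhs)                      # GLS map estimate
```

With white noise the system matrix is diagonal and the solve is trivial; the correlated (Toeplitz) noise weights studied in the paper make it dense in the time domain, which is what motivates CG with good preconditioners.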
We discuss linear system solvers invoking a messenger field and compare them with (preconditioned) conjugate gradient approaches. We show that the messenger-field techniques correspond to fixed-point iterations of an appropriately preconditioned initial system of linear equations. We then argue that a conjugate gradient solver applied to the same preconditioned system, or equivalently a preconditioned conjugate gradient solver using the same preconditioner and applied to the original system, will in general ensure at least comparable and typically better performance in terms of the number of iterations to convergence and the time to solution. We illustrate our conclusions with two common examples drawn from cosmic microwave background (CMB) data analysis: Wiener filtering and map-making. In addition, and contrary to the standard lore in the CMB field, we show that the performance of the preconditioned conjugate gradient solver can depend significantly on the starting vector. This observation seems of particular importance for map-making of high signal-to-noise-ratio sky maps and should therefore be of relevance for the next generation of CMB experiments.
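The correspondence claimed above can be demonstrated on a generic SPD system: a fixed-point iteration x ← x + C⁻¹(b − A x) versus PCG with the same preconditioner C. This is a toy sketch using a diagonal C and a random diagonally dominant matrix, not the actual messenger-field splitting used for Wiener filtering, but the iteration-count comparison is the point of the paper's argument.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
Q = rng.standard_normal((n, n))
A = Q @ Q.T + 10 * n * np.eye(n)     # SPD and strongly diagonally dominant
b = rng.standard_normal(n)
dinv = 1.0 / np.diag(A)              # C^-1 for C = diag(A), one possible splitting
tol = 1e-8 * np.linalg.norm(b)

# fixed-point iteration of the preconditioned system (messenger-field-like)
x = np.zeros(n)
n_fp = 0
while np.linalg.norm(b - A @ x) > tol and n_fp < 500:
    x = x + dinv * (b - A @ x)
    n_fp += 1

# PCG applied to the original system with the same preconditioner C
y = np.zeros(n)
r = b.copy()
z = dinv * r
p = z.copy()
rz = r @ z
n_pcg = 0
while np.linalg.norm(r) > tol and n_pcg < 500:
    Ap = A @ p
    alpha = rz / (p @ Ap)
    y = y + alpha * p
    r = r - alpha * Ap
    z = dinv * r
    rz_new = r @ z
    p = z + (rz_new / rz) * p
    rz = rz_new
    n_pcg += 1
```

Both iterations use the same operator applications per step, so the iteration counts are directly comparable, and PCG should never need more of them than the fixed point.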
We present an overview of the design and status of the Polarbear-2 and the Simons Array experiments. Polarbear-2 is a cosmic microwave background polarimetry experiment which aims to characterize the arc-minute angular scale B-mode signal from weak gravitational lensing and search for the degree angular scale B-mode signal from inflationary gravitational waves. The receiver has a 365 mm diameter focal plane cooled to 270 mK. The focal plane is filled with 7588 dichroic lenslet–antenna-coupled polarization sensitive transition edge sensor (TES) bolometric pixels that are sensitive to the 95 and 150 GHz bands simultaneously. The TES bolometers are read out by SQUIDs with 40-channel frequency-domain multiplexing. Refractive optical elements are made with high-purity alumina to achieve high optical throughput. The receiver is designed to achieve a noise equivalent temperature of 5.8 μK_CMB√s in each frequency band. Polarbear-2 will deploy in 2016 in the Atacama desert in Chile. The Simons Array is a project to further increase sensitivity by deploying three Polarbear-2 type receivers. The Simons Array will cover the 95, 150, and 220 GHz frequency bands for foreground control. The Simons Array will be able to constrain the tensor-to-scalar ratio to σ(r) = 6 × 10⁻³ at r = 0.1, and the sum of the neutrino masses ∑m_ν to 40 meV (1σ).
The estimation of the B-mode angular power spectrum of the polarized anisotropies of the cosmic microwave background is a key step towards a full exploitation of the scientific potential of this probe. In the context of pseudo-spectrum methods, the major challenge is the contamination of the B-mode spectrum estimate by residual power from the much larger E-mode. This so-called E-to-B leakage is unavoidably present whenever only an incomplete sky map is available, as is the case for any realistic observation, and several estimators have been proposed to correct for it. We find that although all these methods significantly reduce the level of the E-to-B leakage, it is the method of Smith that at the same time ensures the smallest error bars in all experimental configurations studied here, owing to the fact that it straightforwardly permits an optimization of the sky apodization of the polarization maps used for the estimation. For a satellite-like experiment, this method enables a detection of the B-mode power spectrum at large angular scales, but only after appropriate binning. The method of Zhao and Baskaran is a close runner-up in the case of nearly full-sky coverage.
We extend a general maximum likelihood foreground estimation for cosmic microwave background (CMB) polarization data to include the estimation of instrumental systematic effects. We focus on two particular effects: frequency band measurement uncertainty and instrumentally induced frequency-dependent polarization rotation. We assess the bias induced on the estimation of the B-mode polarization signal by these two systematic effects in the presence of instrumental noise and uncertainties in the polarization and spectral index of Galactic dust. Degeneracies between uncertainties in the band and polarization angle calibration measurements and in the dust spectral index and polarization increase the uncertainty in the extracted CMB B-mode power and may give rise to a biased estimate. We provide a quantitative assessment of the potential bias and increased uncertainty in an example experimental configuration. For example, we find that with 10% polarized dust, a tensor-to-scalar ratio of r = 0.05, and the instrumental configuration of the E and B Experiment (EBEX) balloon payload, the estimated CMB B-mode power spectrum is recovered without bias when the frequency band measurement has 5% uncertainty or less and the polarization angle calibration has an uncertainty of up to 4°.
Context. The Planck satellite will map the full sky at nine frequencies from 30 to 857 GHz. The CMB intensity and polarization that are its prime targets are contaminated by foreground emission. Aims. The goal of this paper is to compare proposed methods for separating the CMB from foregrounds based on their different spectral and spatial characteristics, and to separate the foregrounds into "components" with different physical origins (Galactic synchrotron, free-free and dust emissions; extragalactic and far-IR point sources; Sunyaev-Zeldovich effect, etc.). Methods. A component separation challenge has been organised, based on a set of realistically complex simulations of sky emission. Several methods have been tested, including those based on internal template subtraction, the maximum entropy method, the parametric method, spatial and harmonic cross-correlation methods, and independent component analysis. Results. Different methods proved effective in cleaning the CMB maps of foreground contamination, in reconstructing maps of diffuse Galactic emission, and in detecting point sources and thermal Sunyaev-Zeldovich signals. The power spectrum of the residuals is, on the largest scales, four orders of magnitude lower than the input Galaxy power spectrum at the foreground minimum. The CMB power spectrum was accurately recovered up to the sixth acoustic peak. The point source detection limit reaches 100 mJy, and about 2300 clusters are detected via the thermal SZ effect on two thirds of the sky. We have found that no single method performs best for all scientific objectives. Conclusions. We foresee that the final component separation pipeline for Planck will involve a combination of methods and iterations between processing steps targeted at different objectives such as diffuse component separation, spectral estimation, and compact source extraction.
We present a measurement of the gravitational lensing deflection power spectrum reconstructed with two seasons of cosmic microwave background polarization data from the Polarbear experiment. Observations were taken at 150 GHz from 2012 to 2014 and surveyed three patches of sky totaling 30 square degrees. We test the consistency of the lensing spectrum with a Λ cold dark matter (ΛCDM) cosmology and reject the no-lensing hypothesis at a confidence of 10.9σ, including statistical and systematic uncertainties. We observe a value of A_L = 1.33 ± 0.32 (statistical) ± 0.02 (systematic) ± 0.07 (foreground) using all polarization lensing estimators, which corresponds to a 24% accurate measurement of the lensing amplitude. Compared to the analysis of the first-year data, we have improved the breadth of both the suite of null tests and the error terms included in the estimation of systematic contamination.
MADmap is a software application used to produce maximum likelihood images of the sky from time-ordered data which include correlated noise, such as those gathered by cosmic microwave background (CMB) experiments. It works efficiently on platforms ranging from small workstations to the most massively parallel supercomputers. Map-making is a critical step in the analysis of all CMB data sets, and the maximum likelihood approach is the most accurate and widely applicable algorithm; however, it is a computationally challenging task. This challenge will only increase with the next generation of ground-based, balloon-borne, and satellite CMB polarization experiments. The faintness of the B-mode signal that these experiments seek to measure requires them to gather enormous data sets. MADmap is already being run on up to O(10¹¹) time samples, O(10⁸) pixels, and O(10⁴) cores, with ongoing work to scale to the next generation of data sets and supercomputers. We describe MADmap's algorithm based around a preconditioned conjugate gradient solver, fast Fourier transforms, and sparse matrix operations. We highlight MADmap's ability to address problems typically encountered in the analysis of realistic CMB data sets and describe its application to simulations of the Planck and EBEX experiments. The massively parallel and distributed implementation is detailed and scaling complexities are given for the resources required. MADmap is capable of analyzing the largest data sets now being collected on computing resources currently available, and we argue that, given Moore's Law, MADmap will be capable of reducing the most massive projected data sets.
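The FFT-based core of such solvers is the application of a stationary inverse noise weight N⁻¹, a Toeplitz matrix, in O(n log n) operations. The sketch below (a toy correlation function and sizes, not MADmap's actual implementation) uses the idealization in which the Toeplitz weight is treated as exactly circulant, so its eigenvalues are the FFT of its first column, i.e. the noise power spectrum.

```python
import numpy as np

n = 512
lags = np.arange(n)
# toy symmetric noise correlation function with long tails (an assumed form)
corr = np.exp(-np.minimum(lags, n - lags) / 20.0)
corr[0] += 1.0                        # extra white-noise power on the diagonal

# circulant N: eigenvalues = FFT of the first column = noise power spectrum
psd = np.fft.rfft(corr).real

d = np.random.default_rng(3).standard_normal(n)   # a toy time-ordered data vector
# apply N^-1 by dividing by the power spectrum in Fourier space: O(n log n)
w = np.fft.irfft(np.fft.rfft(d) / psd, n)
```

In a real pipeline the noise is Toeplitz rather than circulant, and the circulant operator serves either as a controlled approximation on long data segments or as a preconditioner inside the CG iteration.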