In radio astronomy, obtaining a high dynamic range in synthesis imaging of wide fields requires a correction for time- and direction-dependent effects. A direction-dependent correction can be applied either by partitioning the image into facets and applying a direction-independent correction per facet, or by including the correction in the gridding kernel (AW-projection). An advantage of AW-projection over faceting is that the effectively applied beam is a sinc interpolation of the sampled beam, whereas the faceting approach applies a discontinuous, piecewise-constant beam. However, AW-projection quickly becomes prohibitively expensive when the corrections vary over short time scales. This occurs, for example, when ionospheric effects are included in the correction. The cost of frequently recomputing the oversampled convolution kernels then dominates the total cost of gridding. Image domain gridding is a new approach that avoids the costly step of computing oversampled convolution kernels. Instead, low-resolution images are made directly for small groups of visibilities, which are then transformed and added to the large uv grid. The computations have a simple, highly parallel structure that maps very well onto massively parallel hardware such as graphics processing units (GPUs). Despite requiring more operations in a pure computation count, the throughput is comparable to that of classical W-projection. The accuracy is close to that of classical gridding with a continuous convolution kernel, and the new method is more accurate than gridding methods that use a sampled convolution function. Hence, the new method is at least as fast and accurate as classical W-projection, while allowing correction for quickly varying direction-dependent effects.
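The core idea of the abstract — transform small groups of visibilities directly onto a low-resolution subgrid image, then FFT the subgrid and add it to the full uv grid — can be illustrated with a toy NumPy sketch. This is not the paper's implementation: all names are illustrative, the A-term and taper corrections are omitted, and the direct transform is written as a plain loop for clarity.

```python
import numpy as np

def image_domain_grid(uv, vis, subgrid_size, grid_size):
    """Toy image-domain gridding sketch (illustrative, not the paper's code).

    For one small group of visibilities: accumulate them by direct
    transform into a low-resolution subgrid image centred on the group's
    mean (u, v), then FFT the subgrid and add it to the full uv grid.
    """
    grid = np.zeros((grid_size, grid_size), complex)
    # centre the subgrid on the (rounded) mean (u, v) of the group
    u0, v0 = np.round(np.mean(uv, axis=0)).astype(int)
    # pixel coordinates (l, m) of the low-resolution subgrid image
    lm = np.fft.fftfreq(subgrid_size)
    l, m = np.meshgrid(lm, lm, indexing="ij")
    # direct transform: each visibility adds a complex fringe to the image
    # (a time/direction-dependent correction term could multiply here)
    img = np.zeros((subgrid_size, subgrid_size), complex)
    for (u, v), V in zip(uv, vis):
        img += V * np.exp(2j * np.pi * ((u - u0) * l + (v - v0) * m))
    # FFT back to the uv domain and scatter the subgrid onto the full grid
    sub = np.fft.fft2(img) / subgrid_size**2
    offs = (np.fft.fftfreq(subgrid_size) * subgrid_size).astype(int)
    for a, du in enumerate(offs):
        for b, dv in enumerate(offs):
            grid[(u0 + du) % grid_size, (v0 + dv) % grid_size] += sub[a, b]
    return grid
```

Because the subgrid image is built per group, a rapidly varying correction only has to be evaluated at the subgrid resolution, which is what removes the need for oversampled convolution kernels.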
The Low-Frequency Array (LOFAR) is under construction in the Netherlands and in several surrounding European countries. In this contribution, we describe the layout and design of the telescope, with particular emphasis on the imaging characteristics of the array when used in its ‘standard imaging’ mode. After briefly reviewing the calibration and imaging software used for LOFAR image processing, we show some recent results from the ongoing imaging commissioning efforts. We conclude by summarizing future prospects for the use of LOFAR in observing the little-explored low-frequency Universe.
LOFAR is a low-frequency radio astronomical array currently under development in The Netherlands. It is designed to produce synthesis images of the most distant celestial objects yet observed. Due to high redshift levels, observations must be at unusually low frequencies (30-240 MHz), over large apertures (100 km), using thousands of antennas. At these frequencies, Earth's ionosphere acts as a random refractive sheet which, over the large aperture, induces source-direction-dependent gain and phase errors that must be estimated and calibrated out. Current radio astronomy "self-calibration" algorithms do not address direction dependence and will not work in the LOFAR environment. This paper presents a formal study of the parameter estimation problem for LOFAR calibration. A data model is proposed, and a Cramer-Rao lower bound (CRB) analysis is developed with a new general formulation to easily incorporate a variety of constraining signal models. It is shown that although the unconstrained direction-dependent calibration problem is ambiguous, physically justifiable constraints can be applied in LOFAR to yield viable solutions. Use of a "compact core" of closely spaced array elements as part of the larger array is shown to significantly improve full-array direction-dependent calibration performance. Candidate algorithms are proposed and compared to the CRB.
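The constrained CRB machinery the abstract refers to can be sketched in its standard form; the notation below is generic and not taken from the paper. For zero-mean complex Gaussian data with covariance R(θ) and N independent samples, the Fisher information matrix and the constrained CRB are

```latex
F_{ij}(\boldsymbol{\theta})
  = N \operatorname{tr}\!\left(
      \mathbf{R}^{-1}\,\frac{\partial \mathbf{R}}{\partial \theta_i}\,
      \mathbf{R}^{-1}\,\frac{\partial \mathbf{R}}{\partial \theta_j}
    \right),
\qquad
\operatorname{CRB}_{\mathrm{c}}(\boldsymbol{\theta})
  = \mathbf{U}\left(\mathbf{U}^{H}\mathbf{F}\,\mathbf{U}\right)^{-1}\mathbf{U}^{H},
```

where U is an orthonormal basis for the null space of the Jacobian of the constraints g(θ) = 0. Adding physically justified constraints restricts the feasible parameter directions, which is how an otherwise ambiguous (singular-F) direction-dependent calibration problem can still yield a finite bound.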
Radio astronomical observations are increasingly contaminated by man-made RF interference (RFI). If these signals are continuously present, they cannot be removed by the usual techniques of detection and blanking. We have previously proposed a spatial filtering technique in which the impact of the interferer is projected out of the estimated covariance data. Assuming that the spatial signature of the interferer is time-varying, several such estimates can be combined to recover the missing dimensions. We give a detailed performance analysis of this algorithm. It is shown that the spatial filter introduces a small increase in the variance of the estimates (because of the loss of information), that the algorithm is unbiased when the true spatial signatures of the interferers are known, but that there may be a bias when the signatures are estimated from the same data. Some of the bias may be removed; moreover, the bias only affects the autocorrelations, whereas the astronomical information is mostly in the cross-correlations.
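The projection step described above is a standard orthogonal-subspace filter. A minimal NumPy sketch of the idea, for the case where the interferer's spatial signature is known (the function name and interface are illustrative, not the paper's):

```python
import numpy as np

def project_out_interferer(R, a):
    """Project the contribution of an interferer with known spatial
    signature vector `a` out of the array covariance matrix R.
    Illustrative sketch of the spatial-filtering idea."""
    a = np.asarray(a).reshape(-1, 1)
    # orthogonal projection onto the complement of span{a}
    P = np.eye(len(a)) - (a @ a.conj().T) / (a.conj().T @ a).real
    # filtered covariance: the interferer subspace is annihilated,
    # at the cost of losing one dimension of the astronomical signal
    return P @ R @ P
```

The lost dimension is what the abstract's combination of several time-varying estimates recovers: if the signature a rotates between integration intervals, the annihilated subspaces differ, and the union of the filtered estimates again spans the full space.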
LOFAR calibration and wide-field imaging. Tasse, Cyril; van Diepen, Ger; van der Tol, Sebastiaan ...
Comptes rendus. Physique, 01/2012, Volume 13, Issue 1. Journal article, peer reviewed, open access.
LOFAR is a revolutionary instrument, operating at low frequencies (ν ≲ 240 MHz). It will drive major breakthroughs in the area of observational cosmology, but its use requires the development of challenging techniques and algorithms. Since its field of view and sensitivity are increased by orders of magnitude compared to the older generation of instruments, new technical problems have to be addressed. The LOFAR survey team is in charge of commissioning the first LOFAR data produced in the imager mode as part of building the imaging pipeline. We are developing algorithms to tackle the problems associated with calibration (ionosphere, beam, etc.) and wide-field imaging for the achievement of deep extragalactic surveys. New types of problems arise in this context, and notions such as algorithmic complexity and parallelism become fundamental.
LOFAR is an instrument with a revolutionary design that operates at very low frequencies (∼10–240 MHz) in a nearly unexplored energy domain. It is built almost entirely in software and combines a phased array with an interferometer. Its use will lead to major scientific advances, but it represents a considerable technological challenge. Indeed, since the sensitivity and field of view of this instrument are increased by several orders of magnitude compared with the previous generation, subtle phenomena must be taken into account. Moreover, the new technologies employed, such as phased arrays, introduce substantial complications. Within the "LOFAR surveys" working group, we are developing calibration procedures (ionosphere, station beams). In the context of these problems, new difficulties emerge, and the concepts of algorithmic complexity and parallelizability become central considerations.
The data production rate of future telescopes like LOFAR2.0 and the SKA will require processing pipelines with high computational performance and a large degree of automation and configurability. In this contribution, we show how we are addressing these challenges with a combination of high-performance processing components, primarily written in C++, and a Python-based pipeline script orchestrating the work of the processing components. We present the current status and sketch future developments.
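The Python-orchestrates-C++-components pattern described above can be sketched generically: a thin Python wrapper invokes a compiled tool, checks its exit status, and surfaces errors to the pipeline. This is a generic sketch under the stated architecture, not the actual pipeline interface; the function name and error handling are illustrative.

```python
import subprocess

def run_step(executable, args):
    """Run one high-performance processing component (e.g. a compiled
    C++ tool) from a Python orchestration script. Illustrative sketch:
    raises on failure so the pipeline can stop or retry the step."""
    result = subprocess.run(
        [executable, *args], capture_output=True, text=True
    )
    if result.returncode != 0:
        raise RuntimeError(f"{executable} failed: {result.stderr.strip()}")
    return result.stdout
```

Keeping the orchestration layer in Python preserves configurability (steps and their parameters can be declared in plain data), while the heavy numerical work stays in the compiled components.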
For large aperture arrays like LOFAR and SKA-LOW, the point spread function (PSF) varies significantly across the wide field of view (FoV) of these instruments. This turns deconvolution into a computationally demanding and often labour-intensive task when using standard algorithms that assume the PSF is constant over the FoV. In this contribution, we describe an implementation of deconvolution using a direction-dependent (DD) PSF. We demonstrate the advantage of using a DD PSF on an extreme case of a LOFAR observation made with stations of two different sizes. The use of a DD PSF made the convergence of deconvolution more robust, thereby requiring fewer major cycles and less manual tuning to obtain a good deconvolved image.
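The change from a constant to a direction-dependent PSF fits naturally into a Högbom-style minor cycle: instead of shifting one fixed PSF to the peak position, the PSF is looked up for that position. A toy NumPy sketch (not the paper's implementation; `psf_for` is a hypothetical callback returning a full-size PSF image centred on the given pixel):

```python
import numpy as np

def clean_dd_psf(dirty, psf_for, gain=0.1, niter=100, threshold=0.0):
    """Minimal Hogbom-style CLEAN with a direction-dependent PSF.

    `psf_for(x, y)` returns the PSF (same shape as `dirty`) centred on
    pixel (x, y) — this callback is where the direction dependence enters.
    Illustrative sketch only.
    """
    residual = dirty.copy()
    model = np.zeros_like(dirty)
    for _ in range(niter):
        # find the current peak of the residual image
        x, y = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        peak = residual[x, y]
        if abs(peak) <= threshold:
            break
        # record a fraction of the peak in the model ...
        model[x, y] += gain * peak
        # ... and subtract it using the PSF appropriate for this direction
        residual -= gain * peak * psf_for(x, y)
    return model, residual
```

With a constant PSF, subtracting the wrong sidelobe pattern far from the image centre leaves systematic residuals that must be cleaned up in extra major cycles; looking the PSF up per position removes that mismatch, which is the robustness gain the abstract describes.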
In radio astronomy, images of astronomical objects as they appear at radio frequencies are made using a technique called aperture synthesis. Signals from several antennas are correlated and integrated over time. The data collected over several hours are further processed to calibrate the instrument and to form an image or intensity map. The calibration and imaging algorithms do not use the autocorrelations, because the receiver noise is unstable and hence considered unknown. In the literature, the Cramer-Rao bound (CRB) for the calibration problem has been derived assuming that the autocorrelations are part of the available data. If the assumption is correct that the autocorrelations contain no useful information when the receiver noise is unknown, then the CRB for the case in which the autocorrelations are not part of the data will be the same. In this paper we derive the CRB excluding the autocorrelations and show that it indeed does not matter whether the autocorrelations are included or not.