Abstract
We describe a new multiscale deconvolution algorithm that can also be used in a multifrequency mode. The algorithm only affects the minor clean loop. In single-frequency mode, the minor loop of our improved multiscale algorithm is over an order of magnitude faster than the CASA multiscale algorithm, and produces results of similar quality. For multifrequency deconvolution, a technique named joined-channel cleaning is used. In this mode, the minor loop of our algorithm is two to three orders of magnitude faster than CASA MSMFS. We extend the multiscale mode with automated scale-dependent masking, which allows structures to be cleaned below the noise. We describe a new scale-bias function for use in multiscale cleaning. We also test a second deconvolution method, a variant of the MORESANE technique, which uses convex optimization with isotropic undecimated wavelets as its dictionary. On simple, well-calibrated data, the convex optimization algorithm produces visually more representative models. On complex or imperfect data, it suffers from stability issues.
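The minor-loop structure described above can be sketched in a few lines. The code below is a deliberately simplified 1-D illustration, not the paper's implementation: it omits the scale-bias weighting, subtracts bare kernels rather than kernels convolved with the dirty beam, and all function names and parameter values are invented for the example.

```python
import numpy as np

def tapered_quadratic(scale):
    """A tapered-quadratic scale kernel (a common multiscale CLEAN choice),
    normalized to unit sum; scale=0 gives a delta function."""
    if scale == 0:
        return np.array([1.0])
    x = np.linspace(-1, 1, 2 * scale + 1)
    k = (1.0 - x**2) * (0.5 * np.cos(np.pi * x) + 0.5)
    return k / k.sum()

def multiscale_minor_loop(residual, scales=(0, 2, 4, 8), gain=0.1,
                          n_iter=200, threshold=0.0):
    """Minimal 1-D multiscale CLEAN minor loop: each iteration convolves
    the residual with every scale kernel, picks the (position, scale) with
    the largest peak, and subtracts a scaled kernel from the residual."""
    residual = residual.copy()
    model = np.zeros_like(residual)
    kernels = [tapered_quadratic(s) for s in scales]
    for _ in range(n_iter):
        best = None
        for k in kernels:
            smoothed = np.convolve(residual, k, mode='same')
            i = int(np.argmax(np.abs(smoothed)))
            if best is None or abs(smoothed[i]) > abs(best[0]):
                best = (smoothed[i], i, k)
        peak, pos, k = best
        if abs(peak) <= threshold:
            break
        half = len(k) // 2
        lo, hi = max(0, pos - half), min(len(residual), pos + half + 1)
        piece = gain * peak * k[half - (pos - lo): half + (hi - pos)]
        model[lo:hi] += piece       # accumulate the scale component
        residual[lo:hi] -= piece    # remove it from the residual
    return model, residual
```

In the real algorithm the loop would also apply a scale bias when comparing peaks across scales, which is exactly the function the abstract says the paper introduces.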
ABSTRACT
We derive constraints on the thermal and ionization states of the intergalactic medium (IGM) at redshift ≈ 9.1 using new upper limits on the 21-cm power spectrum measured by the LOFAR radio telescope and a prior on the ionized fraction at that redshift estimated from recent cosmic microwave background (CMB) observations. We have used results from the reionization simulation code GRIZZLY and a Bayesian inference framework to constrain the parameters that describe the physical state of the IGM. We find that, if the gas heating remains negligible, an IGM with ionized fraction ≳0.13 and a distribution of the ionized regions with a characteristic size ≳8 h−1 comoving megaparsec (Mpc) and a full width at half-maximum (FWHM) ≳16 h−1 Mpc is ruled out. For an IGM with a uniform spin temperature TS ≳ 3 K, no constraints on the ionized component can be computed. If the large-scale fluctuations of the signal are driven by spin temperature fluctuations, an IGM with a volume fraction ≲0.34 of heated regions with a temperature larger than that of the CMB, average gas temperature 7–160 K, and a distribution of the heated regions with characteristic size 3.5–70 h−1 Mpc and FWHM of ≲110 h−1 Mpc is ruled out. These constraints are within the 95 per cent credible intervals. With more stringent future upper limits from LOFAR at multiple redshifts, the constraints will become tighter and will exclude an increasingly large region of the parameter space.
Context. The volume of radio-astronomical data is a considerable burden in the processing and storing of radio observations that have high time and frequency resolutions and large bandwidths. For future telescopes such as the Square Kilometre Array (SKA), the data volume will be even larger. Aims. Lossy compression of interferometric radio-astronomical data is considered to reduce the volume of visibility data and to speed up processing. Methods. A new compression technique named “Dysco” is introduced that consists of two steps: a normalization step, in which grouped visibilities are normalized to have a similar distribution; and a quantization and encoding step, which rounds values to a given quantization scheme using a dithering scheme. Several non-linear quantization schemes are tested and combined with different methods for normalizing the data. Four data sets with observations from the LOFAR and MWA telescopes are processed with different processing strategies and different combinations of normalization and quantization. The effects of compression are measured in the image plane. Results. The noise added by the lossy compression technique acts similarly to normal system noise. The accuracy of Dysco depends on the signal-to-noise ratio (S/N) of the data: noisy data can be compressed with a smaller loss of image quality. Data with typical correlator time and frequency resolutions can be compressed by a factor of 6.4 for LOFAR and 5.3 for MWA observations with less than 1% added system noise. An implementation of the compression technique is released that provides a Casacore storage manager and allows transparent encoding and decoding. Encoding and decoding are faster than the read/write speed of typical disks. Conclusions. The technique can be used for LOFAR and MWA to reduce the archival space requirements for storing observed data. Data from SKA-low will likely be compressible by the same amount as LOFAR. The same technique can be used to compress data from other telescopes, but a different bit-rate might be required.
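The two-step scheme described in the Methods (normalization, then dithered quantization) can be illustrated with a toy encoder. This is not the Dysco format or API: it works on real-valued samples for simplicity, stores the dither explicitly rather than regenerating it from a shared PRNG seed, and all names, level counts, and clip ranges are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def compress(samples, n_levels=256):
    """Toy two-step lossy encoder in the spirit of the abstract:
    (1) normalize a group of samples by its RMS so groups share a similar
    distribution; (2) quantize to uniform levels with subtractive
    dithering, so the quantization error behaves like additive noise."""
    scale = np.sqrt(np.mean(samples ** 2))   # per-group normalization factor
    norm = samples / scale                   # roughly unit-RMS values
    lo, hi = -4.0, 4.0                       # clip range, in RMS units
    step = (hi - lo) / (n_levels - 1)
    dither = rng.uniform(-0.5, 0.5, size=norm.shape)
    idx = np.clip(np.round((norm - lo) / step + dither), 0, n_levels - 1)
    return idx.astype(np.uint8), scale, dither

def decompress(idx, scale, dither, n_levels=256):
    """Invert the encoder: undo the dither and the normalization."""
    lo, hi = -4.0, 4.0
    step = (hi - lo) / (n_levels - 1)
    return ((idx - dither) * step + lo) * scale
```

With subtractive dithering the round-trip error is bounded by half a quantization step per sample and is decorrelated from the signal, which is why the abstract can say the compression noise "acts similarly to normal system noise".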
ABSTRACT
The 21-cm absorption feature reported by the EDGES collaboration is several times stronger than that predicted by traditional astrophysical models. If genuine, a deeper absorption may lead to stronger fluctuations in the 21-cm signal on degree scales (up to 1 K in rms), allowing these fluctuations to be detected in integration times nearly 50 times shorter than previously predicted. We commenced the ‘AARTFAAC Cosmic Explorer’ (ACE) program, which employs the AARTFAAC wide-field imager, to measure or set limits on the power spectrum of the 21-cm fluctuations in the redshift range z = 17.9–18.6 (Δν = 72.36–75.09 MHz), corresponding to the deep part of the EDGES absorption feature. Here, we present first results from two LST bins: 23.5–23.75 and 23.75–24.00 h, each with 2 h of data, recorded in ‘semi drift-scan’ mode. We demonstrate the application of the new ACE data-processing pipeline (adapted from the LOFAR-EoR pipeline) on the AARTFAAC data. We observe that noise estimates from the channel- and time-differenced Stokes V visibilities agree with each other. After 2 h of integration and subtraction of bright foregrounds, we obtain 2σ upper limits on the 21-cm power spectrum of $\Delta _{21}^2 \lt (8139~\textrm {mK})^2$ and $\Delta _{21}^2 \lt (8549~\textrm {mK})^2$ at $k = 0.144~h\, \textrm {cMpc}^{-1}$ for the two LST bins. Incoherently averaging the noise bias-corrected power spectra for the two LST bins yields an upper limit of $\Delta _{21}^2 \lt (7388~\textrm {mK})^2$ at $k = 0.144~h\, \textrm {cMpc}^{-1}$. These are the deepest upper limits thus far at these redshifts.
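The incoherent averaging step mentioned above (averaging noise-bias-corrected power spectra rather than visibilities) amounts to an inverse-variance weighted mean; the function name and weighting below are illustrative, not the ACE pipeline's actual estimator. For two bins with equal errors, the combined error shrinks by √2, which is why the combined limit is deeper than either single-bin limit.

```python
import numpy as np

def incoherent_average(power, error):
    """Inverse-variance weighted average of noise-bias-corrected power
    spectrum estimates from independent LST bins (illustrative)."""
    w = 1.0 / np.asarray(error) ** 2           # weights from per-bin errors
    p = np.sum(w * np.asarray(power)) / np.sum(w)
    e = 1.0 / np.sqrt(np.sum(w))               # error of the weighted mean
    return p, e
```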
ABSTRACT
A new upper limit on the 21 cm signal power spectrum at a redshift of z ≈ 9.1 is presented, based on 141 h of data obtained with the Low-Frequency Array (LOFAR). The analysis includes significant improvements in spectrally smooth gain calibration, Gaussian Process Regression (GPR) foreground mitigation, and optimally weighted power spectrum inference. Previously seen ‘excess power’ due to spectral structure in the gain solutions has been markedly reduced, but some excess power remains with a spectral correlation distinct from thermal noise. This excess has a spectral coherence scale of 0.25–0.45 MHz and is partially correlated between nights, especially in the foreground wedge region. The correlation is stronger between nights covering similar local sidereal times. A best 2σ upper limit of $\Delta ^2_{21} \lt (73\, \mathrm{mK})^2$ at $k = 0.075\, \mathrm{h\, cMpc^{-1}}$ is found, an improvement by a factor ≈8 in power compared to the previously reported upper limit. The remaining excess power could be due to residual foreground emission from sources or diffuse emission far away from the phase centre, polarization leakage, chromatic calibration errors, the ionosphere, or low-level radio-frequency interference. We discuss future improvements to the signal processing chain that can further reduce or even eliminate these causes of excess power.
Context. New generation low-frequency telescopes are exploring a new parameter space in terms of depth and resolution. The data taken with these interferometers, for example with the LOw Frequency ARray (LOFAR), are often calibrated in a low signal-to-noise ratio regime, and the removal of critical systematic effects is challenging. The process requires an understanding of their origin and properties. Aims. In this paper we describe the major systematic effects inherent to next generation low-frequency telescopes, such as LOFAR. With this knowledge, we introduce a data processing pipeline that is able to isolate and correct these systematic effects. The pipeline will be used to calibrate calibrator observations as the first step of a full data reduction process. Methods. We processed two LOFAR observations of the calibrator 3C 196: the first using the Low Band Antenna (LBA) system at 42–66 MHz and the second using the High Band Antenna (HBA) system at 115–189 MHz. Results. We were able to isolate and correct for the effects of clock drift, polarisation misalignment, ionospheric delay, Faraday rotation, ionospheric scintillation, beam shape, and bandpass. The designed calibration strategy produced the deepest image to date at 54 MHz. The image has been used to confirm that the spectral energy distribution of the average radio source population tends to flatten at low frequencies. Conclusions. We prove that LOFAR systematic effects can be described by a relatively small number of parameters. Furthermore, the identification of these parameters is fundamental to reducing the degrees of freedom when the calibration is carried out on fields that are not dominated by a strong calibrator.
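Two of the effects listed in the Results, clock drift and ionospheric delay, can be separated because they have opposite frequency dependence: a clock error contributes a phase ∝ ν, while the dispersive ionospheric term contributes a phase ∝ 1/ν. A minimal least-squares separation (assuming unwrapped phase solutions; names and the column normalization are illustrative, not the pipeline's code) could look like:

```python
import numpy as np

def fit_clock_tec(freqs_hz, phases_rad):
    """Illustrative clock/ionosphere separation on unwrapped station phase
    solutions: fit a clock term (phase = 2*pi*nu*delay) plus a dispersive
    term (phase = disp/nu). Columns are normalized to keep the
    least-squares problem well conditioned."""
    c1 = 2 * np.pi * freqs_hz          # clock-delay basis, phase ∝ nu
    c2 = 1.0 / freqs_hz                # dispersive (TEC-like) basis, ∝ 1/nu
    n1, n2 = np.linalg.norm(c1), np.linalg.norm(c2)
    A = np.column_stack([c1 / n1, c2 / n2])
    coeff, *_ = np.linalg.lstsq(A, phases_rad, rcond=None)
    return coeff[0] / n1, coeff[1] / n2   # (delay in s, dispersive coefficient)
```

Fitting both terms jointly across the band is what allows a pipeline to assign each its own physical parameter instead of one free gain per channel.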
We present a sample of 1483 sources that display spectral peaks between 72 MHz and 1.4 GHz, selected from the GaLactic and Extragalactic All-sky Murchison Widefield Array (GLEAM) survey. The GLEAM survey is the widest fractional bandwidth all-sky survey to date, ideal for identifying peaked-spectrum sources at low radio frequencies. Our peaked-spectrum sources are the low-frequency analogs of gigahertz-peaked spectrum (GPS) and compact steep-spectrum (CSS) sources, which have been hypothesized to be the precursors of massive radio galaxies. Our sample more than doubles the number of known peaked-spectrum candidates, and 95% of our sample have a newly characterized spectral peak. We highlight that some GPS sources peaking above 5 GHz have had multiple epochs of nuclear activity, and we demonstrate the possibility of identifying high-redshift (z > 2) galaxies via steep optically thin spectral indices and low observed peak frequencies. The distribution of the optically thick spectral indices of our sample is consistent with past GPS/CSS samples but with a large dispersion, suggesting that the spectral peak is the product of an inhomogeneous environment specific to each source. We find no dependence of observed peak frequency on redshift, consistent with the peaked-spectrum sample comprising both local CSS sources and high-redshift GPS sources. The 5 GHz luminosity distribution lacks the brightest GPS and CSS sources of previous samples, implying that a convolution of source evolution and redshift influences the type of peaked-spectrum sources identified below 1 GHz. Finally, we discuss sources with optically thick spectral indices that exceed the synchrotron self-absorption limit.
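A generic peaked-spectrum model of the kind used for GPS/CSS candidates, an optically thick power law turning over to an optically thin one in a Snellen-style parameterization, can be fit in a few lines of SciPy. The parameterization, helper names, and starting values below are illustrative, not the survey's actual fitting code.

```python
import numpy as np
from scipy.optimize import curve_fit

def peaked_spectrum(nu, s_p, nu_p, alpha_thick, alpha_thin):
    """Generic peaked-spectrum model: flux rises as nu**alpha_thick below
    the turnover nu_p and falls as nu**alpha_thin above it; s_p is the
    flux density at the turnover."""
    x = nu / nu_p
    return (s_p / (1 - np.exp(-1)) * x ** alpha_thick
            * (1 - np.exp(-x ** (alpha_thin - alpha_thick))))

def fit_peak(nu, flux, p0=(1.0, 0.2, 1.0, -0.7)):
    """Least-squares fit returning (S_p, nu_p, alpha_thick, alpha_thin);
    p0 is an illustrative starting guess."""
    popt, _ = curve_fit(peaked_spectrum, nu, flux, p0=p0, maxfev=10000)
    return popt
```

With broadband low-frequency coverage like GLEAM's (72 MHz–1.4 GHz here, in GHz units), both the optically thick slope and the turnover frequency are constrained in a single fit.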
ABSTRACT
Observations of the redshifted 21-cm hyperfine line of neutral hydrogen from early phases of the Universe such as Cosmic Dawn and the Epoch of Reionization promise to open a new window onto the early formation of stars and galaxies. We present the first upper limits on the power spectrum of redshifted 21-cm brightness temperature fluctuations in the redshift range z = 19.8–25.2 (54–68 MHz frequency range) using 14 h of data obtained with the LOFAR Low Band Antenna (LBA) array. We also demonstrate the application of a multiple-pointing calibration technique to calibrate the LOFAR-LBA dual-pointing observations centred on the North Celestial Pole and the radio galaxy 3C220.3. We observe an unexplained excess of $\sim 30\!-\!50{{\ \rm per\ cent}}$ in Stokes I noise compared to Stokes V for the two observed fields, which decorrelates on time-scales ≳12 s and might have a physical origin. We show that enforcing smoothness of gain errors along the frequency direction during calibration reduces the additional variance in Stokes I relative to Stokes V introduced by the calibration at the sub-band level. After subtraction of smooth foregrounds, we achieve a 2σ upper limit on the 21-cm power spectrum of $\Delta _{21}^2 \lt (14561\, \text{mK})^2$ at $k\sim 0.038\, h\, \text{cMpc}^{-1}$ and $\Delta _{21}^2 \lt (14886\, \text{mK})^2$ at $k\sim 0.038 \, h\, \text{cMpc}^{-1}$ for the 3C220 and NCP fields, respectively; the two upper limits are consistent with each other. The upper limits for the two fields are still dominated by systematics on most k modes.
We describe and compare several post-correlation radio frequency interference (RFI) classification methods. As the data sizes of observations grow with new and improved telescopes, the need for completely automated, robust methods for RFI mitigation is pressing. We investigated several classification methods and find that, for the data sets we used, the most accurate among them is the SumThreshold method. This is a new method formed from a combination of existing techniques, including a new way of thresholding. This iterative method estimates the astronomical signal by carrying out a surface fit in the time-frequency plane. With a theoretical accuracy of 95 per cent recognition and an approximately 0.1 per cent false-positive rate in simple simulated cases, the method is in practice as good as the human eye at finding RFI. In addition, it is fast, robust, does not need a data model before it can be executed, and works in almost all configurations with its default parameters. The method has been compared using simulated data with several other mitigation techniques, including one based upon the singular value decomposition of the time-frequency matrix, and has shown better results than the rest.
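The core of the SumThreshold idea can be sketched in one dimension: exponentially growing windows are tested against exponentially decreasing thresholds, and samples flagged at earlier scales are replaced by the threshold value so that strong, already-flagged RFI does not cause runaway over-flagging. This is a simplified illustration that omits the iterative surface fit and the two-dimensional time-frequency passes of the full method; the threshold and parameter values are illustrative.

```python
import numpy as np

def sumthreshold_1d(data, chi_1=6.0, rho=1.5, max_window=64):
    """Simplified 1-D SumThreshold pass: for window sizes M = 1, 2, 4, ...,
    flag every window of M consecutive samples whose mean (with already
    flagged samples counted as the threshold value) exceeds
    chi_M = chi_1 / rho**log2(M)."""
    flags = np.zeros(len(data), dtype=bool)
    m = 1
    while m <= max_window:
        chi_m = chi_1 / rho ** np.log2(m)
        new_flags = flags.copy()
        for start in range(len(data) - m + 1):
            window = slice(start, start + m)
            # Replace previously flagged samples by chi_m in the sum.
            vals = np.where(flags[window], chi_m, data[window])
            if vals.mean() > chi_m:
                new_flags[window] = True
        flags = new_flags
        m *= 2
    return flags
```

Because the threshold shrinks as the window grows, the method catches both short strong bursts (small M, high threshold) and long faint contamination (large M, low threshold).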