Fast radio bursts (FRBs) are one of the most tantalizing mysteries of the radio sky; their progenitors and origins remain unknown, and until now no rapid multiwavelength follow-up of an FRB has been possible. New instrumentation has decreased the time between observation and discovery from years to seconds, and enables polarimetry to be performed on FRBs for the first time. We discovered an FRB (FRB 140514) in real time on 2014 May 14 at 17:14:11.06 UTC at the Parkes radio telescope and triggered follow-up at other wavelengths within hours of the event. FRB 140514 was found with a dispersion measure (DM) of 562.7(6) cm⁻³ pc, giving an upper limit on source redshift of z ≲ 0.5. FRB 140514 was found to be 21 ± 7 per cent (3σ) circularly polarized on the leading edge, with a 1σ upper limit on linear polarization of <10 per cent. We conclude that this polarization is intrinsic to the FRB. If there was any intrinsic linear polarization, as might be expected from coherent emission, it may have been depolarized by Faraday rotation caused by passing through strong magnetic fields and/or high-density environments. FRB 140514 was discovered during a campaign to re-observe known FRB fields, and lies close to a previous discovery, FRB 110220; based on the difference in DMs of these bursts and time-on-sky arguments, we attribute the proximity to sampling bias and conclude that they are distinct objects. Follow-up conducted by 12 telescopes observing from X-ray to radio wavelengths was unable to identify a variable multiwavelength counterpart, allowing us to rule out models in which FRBs originate from nearby (z < 0.3) supernovae and long-duration gamma-ray bursts.
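As a quick numerical illustration of the quoted DM, the cold-plasma dispersion relation gives the extra arrival delay at the bottom of the observing band. A minimal sketch in Python, where the 1.182 and 1.582 GHz band edges are assumed, Parkes-like values rather than figures from the abstract:

```python
# Cold-plasma dispersion delay across an observing band.
# The DM is the value quoted for FRB 140514; the band edges are
# illustrative assumptions, not taken from the abstract.
K_DM_MS = 4.149  # ms GHz^2 cm^3 / pc, standard dispersion constant

def dispersion_delay_ms(dm, f_lo_ghz, f_hi_ghz):
    """Extra arrival delay (ms) at f_lo relative to f_hi for a given DM."""
    return K_DM_MS * dm * (f_lo_ghz ** -2 - f_hi_ghz ** -2)

# dispersion_delay_ms(562.7, 1.182, 1.582) -> about 738 ms across the band
```

A sweep of this size across a few hundred MHz is what makes millisecond bursts detectable only after dedispersion.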
We have used millisecond pulsars (MSPs) from the southern High Time Resolution Universe (HTRU) intermediate-latitude survey area to simulate the distribution and total population of MSPs in the Galaxy. Our model makes use of the scalefactor method, which estimates the ratio of the total number of MSPs in the Galaxy to the known sample. Using our best-fitting value for the z-height, z = 500 pc, we find an underlying population of 8.3(±4.2) × 10⁴ MSPs down to a limiting luminosity of Lmin = 0.1 mJy kpc², and a luminosity distribution with a steep slope of d log N/d log L = −1.45 ± 0.14. However, at the low end of the luminosity distribution, the uncertainties introduced by small-number statistics are large. By omitting very low luminosity pulsars, we find a Galactic population above Lmin = 0.2 mJy kpc² of only 3.0(±0.7) × 10⁴ MSPs. We have also simulated pulsars with periods shorter than any known MSP, and estimate the maximum number of sub-MSPs in the Galaxy to be 7.8(±5.0) × 10⁴ pulsars at L = 0.1 mJy kpc². In addition, we estimate that the high- and low-latitude parts of the southern HTRU survey will detect 68 and 42 MSPs, respectively, including 78 new discoveries. Pulsar luminosity, and hence flux density, is an important input parameter in the model. Some of the published flux densities for the pulsars in our sample do not agree with the observed flux densities from our data set, and we have instead calculated average luminosities from archival data from the Parkes Telescope. We found many luminosities to be very different from their catalogue values, leading to very different population estimates. Large variations in flux density highlight the importance of including scintillation effects in MSP population studies.
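The scalefactor method mentioned above can be illustrated with a toy Monte Carlo: for each detected pulsar, scatter synthetic copies through a model Galaxy and count how many the survey would recover; the ratio of copies to recoveries is that pulsar's scale factor, and summing over the real sample estimates the underlying population. A minimal sketch, in which the disc scale lengths, sensitivity limit and flux law S = L/d² are all simplified placeholder assumptions, not the paper's model:

```python
import math
import random

def scale_factor(luminosity, s_min_mjy=0.2, n_synthetic=20000,
                 z_scale_kpc=0.5, r_scale_kpc=4.5, r_sun_kpc=8.5, seed=1):
    """Toy scale factor for one pulsar (luminosity in mJy kpc^2):
    synthetic copies divided by simulated survey detections."""
    rng = random.Random(seed)
    detected = 0
    for _ in range(n_synthetic):
        # crude axisymmetric disc, exponential in radius and |z|
        r = rng.expovariate(1.0 / r_scale_kpc)
        z = rng.expovariate(1.0 / z_scale_kpc) * rng.choice((-1, 1))
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x, y = r * math.cos(theta), r * math.sin(theta)
        d2 = (x - r_sun_kpc) ** 2 + y ** 2 + z ** 2  # distance^2 from the Sun
        if luminosity / d2 >= s_min_mjy:             # detectable if S = L/d^2 >= S_min
            detected += 1
    return n_synthetic / max(detected, 1)

# total Galactic population ~ sum(scale_factor(L) for L in detected_sample)
```

Faint pulsars are detectable only in a small volume around the Sun, so they receive large scale factors; this is also why the low-luminosity end of the distribution is dominated by small-number statistics, as the abstract notes.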
Accelerating incoherent dedispersion
Barsdell, B. R.; Bailes, M.; Barnes, D. G.; et al.
Monthly Notices of the Royal Astronomical Society, May 2012, Volume 422, Issue 1
Journal Article, Peer reviewed, Open access
Incoherent dedispersion is a computationally intensive problem that appears frequently in pulsar and transient astronomy. For current and future transient pipelines, dedispersion can dominate the total execution time, meaning its computational speed acts as a constraint on the quality and quantity of science results. It is thus critical that the algorithm be able to take advantage of trends in commodity computing hardware. With this goal in mind, we present an analysis of the 'direct', 'tree' and 'sub-band' dedispersion algorithms with respect to their potential for efficient execution on modern graphics processing units (GPUs). We find all three to be excellent candidates, and proceed to describe implementations in C for CUDA using insight gained from the analysis. Using recent CPU and GPU hardware, the transition to the GPU provides a speed-up of nine times for the direct algorithm when compared to an optimized quad-core CPU code. For realistic recent survey parameters, these speeds are high enough that further optimization is unnecessary to achieve real-time processing. Where further speed-ups are desirable, we find that the tree and sub-band algorithms are able to provide three to seven times better performance at the cost of certain smearing, memory-consumption and development-time trade-offs. We finish with a discussion of the implications of these results for future transient surveys. Our GPU dedispersion code is publicly available as a C library at http://dedisp.googlecode.com/.
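The 'direct' algorithm analysed above is a per-channel shift-and-sum. A minimal NumPy sketch, with illustrative channel frequencies and sampling time rather than any particular survey's parameters:

```python
import numpy as np

K_DM = 4.148808e3  # MHz^2 s cm^3 / pc, dispersion constant

def dedisperse_direct(data, freqs_mhz, dt_s, dm):
    """Direct incoherent dedispersion: shift each channel of a
    (n_chan, n_samp) filterbank block by its dispersion delay at trial
    DM `dm` (relative to the highest frequency) and sum across channels."""
    f_ref = freqs_mhz.max()
    delays_s = K_DM * dm * (freqs_mhz ** -2 - f_ref ** -2)
    shifts = np.round(delays_s / dt_s).astype(int)  # delay in samples
    n_out = data.shape[1] - shifts.max()
    out = np.zeros(n_out)
    for chan, s in enumerate(shifts):
        out += data[chan, s:s + n_out]
    return out
```

A search pipeline repeats this for every trial DM, so the total work scales as n_DM × n_chan × n_samp, which is why the GPU speed-ups reported in the abstract matter for real-time processing.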
A survey of FRB fields: limits on repeatability
Petroff, E.; Johnston, S.; Keane, E. F.; et al.
Monthly Notices of the Royal Astronomical Society, November 2015, Volume 454, Issue 1
Journal Article, Peer reviewed, Open access
Several theories exist to explain the source of the bright, millisecond-duration pulses known as fast radio bursts (FRBs). If the progenitors of FRBs are non-cataclysmic, such as giant pulses from pulsars, pulsar–planet binaries, or magnetar flares, FRB emission may be seen to repeat. We have undertaken a survey of the fields of eight known FRBs from the High Time Resolution Universe survey to search for repeating pulses. Although no repeat pulses were detected, the survey yielded the detection of a new FRB, described in Petroff et al. (2015a). From our observations we rule out periodic repeating sources with periods P ≤ 8.6 h, and rule out sources with periods 8.6 < P < 21 h at the 90 per cent confidence level. At P ≥ 21 h our limits fall off as ∼1/P. Dedicated and persistent observations of FRB source fields are needed to rule out repetition on longer time-scales, a task well suited to next-generation wide-field transient detectors.
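The period limits above follow from a phase-coverage argument: a strictly periodic source of period P is ruled out at a confidence equal to the fraction of pulse phases for which at least one pulse would have landed inside an observation window. A minimal sketch, with invented observation windows rather than the survey's pointing log:

```python
import numpy as np

def exclusion_confidence(windows_h, period_h, n_phase=10000):
    """Fraction of pulse phases excluded by the observation windows for a
    strictly periodic source with period `period_h` (all times in hours)."""
    phases = np.linspace(0.0, period_h, n_phase, endpoint=False)
    covered = np.zeros(n_phase, dtype=bool)
    for start, end in windows_h:
        # first pulse of phase phi at or after `start` is phi + k_lo * P
        k_lo = np.ceil((start - phases) / period_h)
        covered |= (phases + k_lo * period_h) <= end
    return covered.mean()
```

Once P exceeds the total time spanned by the observations, the excluded fraction scales roughly as (total observing time)/P, which reproduces the ∼1/P tail of the limits.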
We present 75 pulsars discovered in the mid-latitude portion of the High Time Resolution Universe survey, 54 of which have full timing solutions. All of the pulsars have spin periods greater than 100 ms, and none of those with timing solutions is in a binary. Two display particularly interesting behaviour: PSR J1054−5944 is found to be an intermittent pulsar, and PSR J1809−0119 has glitched twice since its discovery.
In the second half of the paper we discuss the development and application of an artificial neural network in the data-processing pipeline for the survey. We discuss the tests that were used to generate scores and find that our neural network was able to reject over 99 per cent of the candidates produced in the data processing, and able to blindly detect 85 per cent of pulsars. We suggest that improvements to the accuracy should be possible if further care is taken when training an artificial neural network; for example, ensuring that a representative sample of the pulsar population is used during the training process, or the use of different artificial neural networks for the detection of different types of pulsars.
Astronomy depends on ever-increasing computing power. Processor clock rates have plateaued, and increased performance is now appearing in the form of additional processor cores on a single chip. This poses significant challenges to the astronomy software community. Graphics processing units (GPUs), now capable of general-purpose computation, exemplify both the difficult learning curve and the significant speed-ups exhibited by massively parallel hardware architectures. We present a generalized approach to tackling this paradigm shift, based on the analysis of algorithms. We describe a small collection of foundation algorithms relevant to astronomy and explain how they may be used to ease the transition to massively parallel computing architectures. We demonstrate the effectiveness of our approach by applying it to four well-known astronomy problems: Högbom CLEAN, inverse ray-shooting for gravitational lensing, pulsar dedispersion and volume rendering. Algorithms with well-defined memory-access patterns and high arithmetic intensity stand to receive the greatest performance boost from massively parallel architectures, while those that involve a significant amount of decision-making may struggle to take advantage of the available processing power.
Searches for transient astrophysical sources often reveal unexpected classes of objects that are useful physical laboratories. In a recent survey for pulsars and fast transients, we have uncovered four millisecond-duration radio transients, all more than 40° from the Galactic plane. The bursts' properties indicate that they are of celestial rather than terrestrial origin. Host galaxy and intergalactic medium models suggest that they have cosmological redshifts of 0.5 to 1 and distances of up to 3 gigaparsecs. No temporally coincident X-ray or gamma-ray signature was identified in association with the bursts. Characterization of the source population and identification of host galaxies offer an opportunity to determine the baryonic content of the universe.
We extend the two-dimensional Cartesian shapelet formalism to d dimensions. Concentrating on the three-dimensional case, we derive shapelet-based equations for the mass, centroid, root-mean-square radius, and components of the quadrupole-moment and moment-of-inertia tensors. Using cosmological N-body simulations as an application domain, we show that three-dimensional shapelets can be used to replicate the complex substructure of dark matter haloes, and demonstrate the basis of an automated classification scheme for halo shapes. We investigate the shapelet decomposition process from an algorithmic viewpoint, and consider opportunities for accelerating the computation of shapelet-based representations using graphics processing units.
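The Cartesian shapelet basis underlying this formalism is separable: the d-dimensional basis functions are products of 1-D Gauss-Hermite functions. A minimal sketch with an arbitrary scale β = 1, using the standard dimensionless convention rather than necessarily the paper's exact normalization:

```python
import numpy as np
from math import factorial, pi, sqrt
from numpy.polynomial.hermite import hermval

def shapelet_1d(n, x, beta=1.0):
    """Dimensionless 1-D Cartesian shapelet: Hermite H_n times a Gaussian,
    normalized so the basis is orthonormal on the real line."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0                      # pick out H_n
    norm = (2.0 ** n * factorial(n) * sqrt(pi) * beta) ** -0.5
    u = np.asarray(x) / beta
    return norm * hermval(u, coeffs) * np.exp(-u ** 2 / 2.0)

def shapelet_3d(nx, ny, nz, x, y, z, beta=1.0):
    """Separable 3-D basis function: product of three 1-D shapelets."""
    return (shapelet_1d(nx, x, beta) *
            shapelet_1d(ny, y, beta) *
            shapelet_1d(nz, z, beta))
```

Because the basis is orthonormal, decomposition coefficients for a halo density field follow from simple overlap integrals of the data cube with these basis functions, which is what makes the GPU acceleration discussed above attractive.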
To assess how future progress in gravitational microlensing computation at high optical depth will rely on both hardware and software solutions, we compare a direct inverse ray-shooting code implemented on a graphics processing unit (GPU) with both a widely used hierarchical tree code on a single-core CPU and a recent implementation of a parallel tree code suitable for a CPU-based cluster supercomputer. We examine the accuracy of the tree codes through comparison with the direct code over a much wider range of parameter space than has been feasible before. We demonstrate that all three codes present comparable accuracy, and that the choice of approach depends on considerations relating to the scale and nature of the microlensing problem under investigation. On current hardware there is little difference in the processing speed of the single-core CPU tree code and the GPU direct code; however, the recent plateau in single-core CPU speeds means the existing tree code is no longer able to take advantage of Moore's-law-like increases in processing speed. Instead, we anticipate a rapid increase in GPU capabilities in the next few years, which is advantageous to the direct code. We suggest that progress in other areas of astrophysical computation may benefit from a transition to GPUs through the use of "brute force" algorithms, rather than attempting to port the current best solution directly to a GPU language; for certain classes of problems, the simple implementation on GPUs may already be no worse than an optimised single-core CPU version.
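The direct inverse ray-shooting approach benchmarked above can be sketched in a few lines: deflect a regular grid of image-plane rays by every point lens and histogram where they land on the source plane; pixel counts are then proportional to magnification. A plain NumPy sketch, with lens configuration and grid sizes invented for illustration (the real codes use vastly more rays and lenses):

```python
import numpy as np

def shoot_rays(lens_xy, lens_m, n_side=400, image_half=3.0,
               src_half=2.0, src_pix=100):
    """Brute-force inverse ray-shooting: returns a (src_pix, src_pix)
    map of ray counts per source-plane pixel, proportional to
    magnification. Positions and masses are in Einstein-radius units."""
    xs = np.linspace(-image_half, image_half, n_side)
    X, Y = np.meshgrid(xs, xs)
    ax = np.zeros_like(X)
    ay = np.zeros_like(Y)
    for (lx, ly), m in zip(lens_xy, lens_m):
        dx, dy = X - lx, Y - ly
        r2 = dx * dx + dy * dy
        ax += m * dx / r2          # point-lens deflection
        ay += m * dy / r2
    sx, sy = X - ax, Y - ay        # lens equation: source = image - deflection
    H, _, _ = np.histogram2d(sx.ravel(), sy.ravel(), bins=src_pix,
                             range=[[-src_half, src_half],
                                    [-src_half, src_half]])
    return H
```

Every ray is independent, which is why this "brute force" formulation maps so naturally onto a GPU: there is no tree to traverse and the memory-access pattern is regular.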