With large-scale integral field spectroscopy (IFS) surveys of thousands of galaxies currently underway or planned, the astronomical community is in need of methods, techniques and tools that will allow the analysis of huge amounts of data. We focus on the kinematic modelling of disc galaxies and investigate the potential use of massively parallel architectures, such as the graphics processing unit (GPU), as an accelerator for the computationally expensive model-fitting procedure. We review the algorithms involved in model-fitting and evaluate their suitability for GPU implementation. We employ different optimization techniques, including the Levenberg-Marquardt and nested sampling algorithms, but also a naive brute-force approach based on nested grids. We find that the GPU can accelerate the model-fitting procedure up to a factor of ~100 when compared to a single-threaded CPU, and up to a factor of ~10 when compared to a multithreaded dual CPU configuration. Our method's accuracy, precision and robustness are assessed by successfully recovering the kinematic properties of simulated data, and also by verifying the kinematic modelling results of galaxies from the GHASP and DYNAMO surveys as found in the literature. The resulting GBKFIT code is available for download from: http://supercomputing.swin.edu.au/gbkfit.
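The nested-grid (brute-force) optimizer mentioned above maps naturally onto a GPU because every trial parameter combination can be evaluated independently. The sketch below illustrates that mapping on a deliberately simplified problem, a two-parameter arctan rotation-curve fit to a one-dimensional velocity profile; the model, parameter names and grid ranges are our own illustrative choices and this is not the GBKFIT implementation.

```cuda
// Illustrative sketch only (not the GBKFIT implementation): a brute-force grid
// search on the GPU.  Each thread evaluates the chi-squared of one (vmax, rt)
// trial of an arctan rotation curve, v(r) = vmax * (2/pi) * atan(r/rt), against
// a synthetic one-dimensional velocity profile.
#include <cstdio>
#include <cstdlib>
#include <cmath>
#include <cuda_runtime.h>

#define N_DATA 64
#define N_VMAX 256
#define N_RT   256

__global__ void chi2_grid(const float *r, const float *v_obs, const float *v_err,
                          float vmax0, float dvmax, float rt0, float drt,
                          float *chi2)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= N_VMAX * N_RT) return;
    float vmax = vmax0 + dvmax * (idx / N_RT);          // this thread's trial parameters
    float rt   = rt0   + drt   * (idx % N_RT);

    float c2 = 0.0f;
    for (int i = 0; i < N_DATA; ++i) {
        float model = vmax * 0.63662f * atanf(r[i] / rt);   // 2/pi ~= 0.63662
        float d = (v_obs[i] - model) / v_err[i];
        c2 += d * d;
    }
    chi2[idx] = c2;
}

int main()
{
    // Synthetic "observed" rotation curve with vmax = 200 km/s, rt = 2 kpc.
    float r[N_DATA], v[N_DATA], e[N_DATA];
    for (int i = 0; i < N_DATA; ++i) {
        r[i] = 0.2f * (i + 1);
        v[i] = 200.0f * 0.63662f * atanf(r[i] / 2.0f);
        e[i] = 5.0f;
    }

    float *d_r, *d_v, *d_e, *d_chi2;
    const int n = N_VMAX * N_RT;
    cudaMalloc(&d_r, sizeof(r));  cudaMalloc(&d_v, sizeof(v));
    cudaMalloc(&d_e, sizeof(e));  cudaMalloc(&d_chi2, n * sizeof(float));
    cudaMemcpy(d_r, r, sizeof(r), cudaMemcpyHostToDevice);
    cudaMemcpy(d_v, v, sizeof(v), cudaMemcpyHostToDevice);
    cudaMemcpy(d_e, e, sizeof(e), cudaMemcpyHostToDevice);

    // Grid: vmax in [100, 355] km/s, rt in [0.5, 5.6] kpc.
    chi2_grid<<<(n + 255) / 256, 256>>>(d_r, d_v, d_e, 100.0f, 1.0f, 0.5f, 0.02f, d_chi2);

    float *h_chi2 = (float *)malloc(n * sizeof(float));
    cudaMemcpy(h_chi2, d_chi2, n * sizeof(float), cudaMemcpyDeviceToHost);
    int best = 0;
    for (int i = 1; i < n; ++i) if (h_chi2[i] < h_chi2[best]) best = i;
    printf("best fit: vmax = %.1f km/s, rt = %.2f kpc, chi2 = %.2f\n",
           100.0f + 1.0f * (best / N_RT), 0.5f + 0.02f * (best % N_RT), h_chi2[best]);
    // A nested-grid search would now re-centre a finer grid on this minimum.

    cudaFree(d_r); cudaFree(d_v); cudaFree(d_e); cudaFree(d_chi2); free(h_chi2);
    return 0;
}
```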
Accelerating incoherent dedispersion. Barsdell, B. R.; Bailes, M.; Barnes, D. G., et al. Monthly Notices of the Royal Astronomical Society, 20 May, Volume 422, Issue 1. Journal article, peer reviewed.
Incoherent dedispersion is a computationally intensive problem that appears frequently in pulsar and transient astronomy. For current and future transient pipelines, dedispersion can dominate the total execution time, meaning its computational speed acts as a constraint on the quality and quantity of science results. It is thus critical that the algorithm be able to take advantage of trends in commodity computing hardware. With this goal in mind, we present an analysis of the 'direct', 'tree' and 'sub-band' dedispersion algorithms with respect to their potential for efficient execution on modern graphics processing units (GPUs). We find all three to be excellent candidates, and proceed to describe implementations in C for CUDA using insight gained from the analysis. Using recent CPU and GPU hardware, the transition to the GPU provides a speed-up of nine times for the direct algorithm when compared to an optimized quad-core CPU code. For realistic recent survey parameters, these speeds are high enough that further optimization is unnecessary to achieve real-time processing. Where further speed-ups are desirable, we find that the tree and sub-band algorithms are able to provide three to seven times better performance at the cost of certain smearing, memory consumption and development time trade-offs. We finish with a discussion of the implications of these results for future transient surveys. Our GPU dedispersion code is publicly available as a C library at http://dedisp.googlecode.com/.
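To illustrate why the 'direct' algorithm is such a good GPU candidate, the sketch below assigns one thread per (DM trial, output sample) pair and sums every frequency channel at its precomputed dispersion delay. It is a simplified, hypothetical example rather than the dedisp library code; the data layout, delay-table construction and survey parameters are assumptions made for the sketch.

```cuda
// Illustrative sketch (not the dedisp library): 'direct' incoherent dedispersion.
// One thread computes one (DM trial, output time) sample by summing every
// frequency channel at its precomputed dispersion delay.
// Data layout assumed here: input[channel][time], output[dm][time].
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void dedisperse_direct(const float *input,   // [n_chan * n_samp]
                                  const int   *delay,   // [n_dm * n_chan], samples
                                  float       *output,  // [n_dm * n_out]
                                  int n_chan, int n_samp, int n_dm, int n_out)
{
    int t  = blockIdx.x * blockDim.x + threadIdx.x;      // output time sample
    int dm = blockIdx.y;                                  // DM trial
    if (t >= n_out || dm >= n_dm) return;

    float sum = 0.0f;
    for (int c = 0; c < n_chan; ++c) {
        int ts = t + delay[dm * n_chan + c];              // delayed input sample
        if (ts < n_samp) sum += input[c * n_samp + ts];
    }
    output[dm * n_out + t] = sum;
}

// Host-side delay table for the cold-plasma dispersion law,
// dt = 4.148808e3 s * DM * (f^-2 - f_ref^-2), with f in MHz and f_ref the
// highest-frequency channel, so all delays are non-negative.
void build_delays(int *delay, int n_dm, int n_chan,
                  float f_lo_mhz, float df_mhz, float dm_step, float t_samp_s)
{
    float f_ref = f_lo_mhz + df_mhz * (n_chan - 1);
    for (int d = 0; d < n_dm; ++d)
        for (int c = 0; c < n_chan; ++c) {
            float f  = f_lo_mhz + df_mhz * c;
            float dt = 4.148808e3f * (d * dm_step)
                       * (1.0f / (f * f) - 1.0f / (f_ref * f_ref));
            delay[d * n_chan + c] = (int)(dt / t_samp_s + 0.5f);
        }
}

int main()
{
    // Toy filterbank: 256 channels x 4096 samples, 128 DM trials (all zeros here).
    const int n_chan = 256, n_samp = 4096, n_dm = 128, n_out = n_samp;
    size_t in_sz  = (size_t)n_chan * n_samp * sizeof(float);
    size_t out_sz = (size_t)n_dm * n_out * sizeof(float);
    size_t dl_sz  = (size_t)n_dm * n_chan * sizeof(int);

    float *h_in = (float *)calloc((size_t)n_chan * n_samp, sizeof(float));
    int *h_delay = (int *)malloc(dl_sz);
    build_delays(h_delay, n_dm, n_chan, 1200.0f, 0.39f, 1.0f, 64e-6f);

    float *d_in, *d_out; int *d_delay;
    cudaMalloc(&d_in, in_sz); cudaMalloc(&d_out, out_sz); cudaMalloc(&d_delay, dl_sz);
    cudaMemcpy(d_in, h_in, in_sz, cudaMemcpyHostToDevice);
    cudaMemcpy(d_delay, h_delay, dl_sz, cudaMemcpyHostToDevice);

    dim3 grid((n_out + 255) / 256, n_dm);
    dedisperse_direct<<<grid, 256>>>(d_in, d_delay, d_out, n_chan, n_samp, n_dm, n_out);
    cudaDeviceSynchronize();
    printf("dedispersed %d DM trials of %d samples each\n", n_dm, n_out);

    cudaFree(d_in); cudaFree(d_out); cudaFree(d_delay); free(h_in); free(h_delay);
    return 0;
}
```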
Cosmological gravitational microlensing is a useful technique for understanding the structure of the inner parts of a quasar, especially the accretion disc and the central supermassive black hole. So far, most of the cosmological microlensing studies have focused on single objects from ∼90 currently known lensed quasars. However, present and planned all-sky surveys are expected to discover thousands of new lensed systems. Using a graphics processing unit (GPU) accelerated ray-shooting code, we have generated 2550 magnification maps uniformly across the convergence (κ) and shear (γ) parameter space of interest to microlensing. We examine the effect of random realizations of the microlens positions on map properties such as the magnification probability distribution (MPD). It is shown that for most of the parameter space a single map is representative of an average behaviour. All of the simulations have been carried out on the GPU Supercomputer for Theoretical Astrophysics Research.
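A minimal sketch of GPU inverse ray-shooting of the kind described above (not the actual code used to produce these maps): each thread deflects one ray by the smooth convergence and shear terms plus every point-mass microlens, then deposits it on the source-plane map with an atomic increment. The lens field, map dimensions and parameter values below are illustrative assumptions.

```cuda
// Illustrative sketch of GPU inverse ray-shooting (not the code used for the
// maps above): each thread shoots one ray from the image plane, deflects it by
// the smooth convergence and shear terms plus every point-mass microlens, and
// deposits it on the source-plane map with an atomic increment.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void shoot_rays(const float2 *lens_pos, const float *lens_mass, int n_lens,
                           float kappa_s, float gamma,
                           float2 ray0, float dray, int nray,
                           float2 map0, float dpix, int npix,
                           unsigned int *map)
{
    int ix = blockIdx.x * blockDim.x + threadIdx.x;
    int iy = blockIdx.y * blockDim.y + threadIdx.y;
    if (ix >= nray || iy >= nray) return;

    float x1 = ray0.x + ix * dray;                 // image-plane ray position
    float x2 = ray0.y + iy * dray;

    // smooth matter and external shear part of the lens equation
    float y1 = (1.0f - kappa_s - gamma) * x1;
    float y2 = (1.0f - kappa_s + gamma) * x2;

    // deflection by each microlens: alpha = m * (x - x_i) / |x - x_i|^2
    for (int i = 0; i < n_lens; ++i) {
        float d1 = x1 - lens_pos[i].x;
        float d2 = x2 - lens_pos[i].y;
        float r2 = d1 * d1 + d2 * d2 + 1e-12f;
        y1 -= lens_mass[i] * d1 / r2;
        y2 -= lens_mass[i] * d2 / r2;
    }

    // deposit on the source-plane map; counts per pixel trace the magnification
    int px = (int)((y1 - map0.x) / dpix);
    int py = (int)((y2 - map0.y) / dpix);
    if (px >= 0 && px < npix && py >= 0 && py < npix)
        atomicAdd(&map[py * npix + px], 1u);
}

int main()
{
    const int n_lens = 1000, nray = 2048, npix = 512;
    float2 *h_pos = (float2 *)malloc(n_lens * sizeof(float2));
    float  *h_m   = (float *)malloc(n_lens * sizeof(float));
    for (int i = 0; i < n_lens; ++i) {             // random unit-mass star field
        h_pos[i] = make_float2(40.0f * rand() / RAND_MAX - 20.0f,
                               40.0f * rand() / RAND_MAX - 20.0f);
        h_m[i] = 1.0f;
    }

    float2 *d_pos; float *d_m; unsigned int *d_map;
    cudaMalloc(&d_pos, n_lens * sizeof(float2));
    cudaMalloc(&d_m, n_lens * sizeof(float));
    cudaMalloc(&d_map, (size_t)npix * npix * sizeof(unsigned int));
    cudaMemset(d_map, 0, (size_t)npix * npix * sizeof(unsigned int));
    cudaMemcpy(d_pos, h_pos, n_lens * sizeof(float2), cudaMemcpyHostToDevice);
    cudaMemcpy(d_m, h_m, n_lens * sizeof(float), cudaMemcpyHostToDevice);

    dim3 block(16, 16), grid(nray / 16, nray / 16);
    shoot_rays<<<grid, block>>>(d_pos, d_m, n_lens, 0.2f, 0.2f,        // kappa_s, gamma
                                make_float2(-15.0f, -15.0f), 30.0f / nray, nray,
                                make_float2(-10.0f, -10.0f), 20.0f / npix, npix, d_map);
    cudaDeviceSynchronize();
    printf("shot %d rays through %d microlenses\n", nray * nray, n_lens);
    // Dividing the per-pixel counts by the unlensed ray density gives the
    // magnification map; its histogram is the magnification probability
    // distribution (MPD) discussed above.
    cudaFree(d_pos); cudaFree(d_m); cudaFree(d_map); free(h_pos); free(h_m);
    return 0;
}
```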
As synoptic all-sky surveys begin to discover new multiply lensed quasars, the flow of data will enable statistical cosmological microlensing studies of sufficient size to constrain quasar accretion disk and supermassive black hole properties. In preparation for this new era, we are undertaking the GPU-Enabled, High Resolution cosmological MicroLensing parameter survey (GERLUMPH). We present here the GERLUMPH Data Release 1, which consists of 12,342 high resolution cosmological microlensing magnification maps and provides the first uniform coverage of the convergence, shear, and smooth matter fraction parameter space. We use these maps to perform a comprehensive numerical investigation of the mass-sheet degeneracy, finding excellent agreement with its predictions. We study the effect of smooth matter on microlensing induced magnification fluctuations. In particular, in the minima and saddle-point regions, fluctuations are enhanced only along the critical line, while in the maxima region they are always enhanced for high smooth matter fractions (≳ 0.9). We describe our approach to data management, including the use of an SQL database with a Web interface for data access and online analysis, obviating the need for individuals to download large volumes of data. In combination with existing observational databases and online applications, the GERLUMPH archive represents a fundamental component of a new microlensing eResearch cloud. Our maps and tools are publicly available at http://gerlumph.swin.edu.au/.
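For context, the mass-sheet degeneracy tested above is the standard smooth-matter rescaling used in microlensing studies; our summary of that well-known result (not a quotation from the paper) is that a map with compact-matter convergence κ*, smooth convergence κs and shear γ is statistically equivalent to a compact-only map with

```latex
% Standard mass-sheet (smooth matter) rescaling; mu is the magnification.
\[
  \kappa_{\mathrm{eff}} = \frac{\kappa_{\ast}}{1 - \kappa_{\mathrm{s}}}, \qquad
  \gamma_{\mathrm{eff}} = \frac{\gamma}{1 - \kappa_{\mathrm{s}}}, \qquad
  \mu_{\mathrm{eff}} = (1 - \kappa_{\mathrm{s}})^{2}\,\mu .
\]
```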
Cosmological gravitational microlensing has proven to be a powerful tool to constrain the structure of multiply imaged quasars, especially the accretion disc and central supermassive black hole system. However, the derived constraints on models may be affected by large systematic errors introduced in the various stages of modelling, namely, the macromodels, the microlensing magnification maps, and the convolution with realistic disc profiles. In particular, it has been known that different macromodels of the galaxy lens that fit the observations equally well can lead to different values of the convergence, κ, and shear, γ, required to generate magnification maps. So far, ~25 microlensed quasars have been studied using microlensing techniques, where each system has been modelled and analysed individually, or in small samples. This is about to change due to the upcoming synoptic all-sky surveys, which are expected to discover thousands of quasars suitable for microlensing studies. In this study, we investigate the connection between macromodels of the galaxy lens and microlensing magnification maps throughout the parameter space in preparation for future studies of large statistical samples of systems displaying microlensing. In particular, we use 55,900 maps produced by the GERLUMPH parameter survey (available online at http://gerlumph.swin.edu.au) and identify regions of parameter space where macromodel uncertainties in κ and γ lead to statistically different magnification maps. Strategies for mitigating the effect of these uncertainties are discussed in order to understand and control this potential source of systematic errors in accretion disc constraints derived from microlensing.
In the upcoming synoptic all-sky survey era of astronomy, thousands of new multiply imaged quasars are expected to be discovered and monitored regularly. Light curves from the images of gravitationally lensed quasars are further affected by superimposed variability due to microlensing. In order to disentangle the microlensing from the intrinsic variability of the light curves, the time delays between the multiple images have to be accurately measured. The resulting microlensing light curves can then be analyzed to reveal information about the background source, such as the size of the quasar accretion disk. In this paper we present the most extensive and coherent collection of simulated microlensing light curves; we have generated billions of light curves using the GERLUMPH high resolution microlensing magnification maps. Our simulations can be used to train algorithms to measure lensed quasar time delays, plan future monitoring campaigns, and study light curve properties throughout parameter space. Our data are openly available to the community and are complemented by online eResearch tools, located at http://gerlumph.swin.edu.au.
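As a sketch of the basic light-curve extraction step described above (our simplified illustration, not the GERLUMPH tools), the kernel below samples a magnification map along a straight-line source track, one thread per epoch; a production pipeline would first convolve the map with a finite source profile and use sub-pixel interpolation.

```cuda
// Illustrative sketch (not the GERLUMPH tools): extracting a microlensing light
// curve by sampling a magnification map along a straight-line source track,
// one thread per epoch.  Nearest-pixel sampling of an unconvolved map is used
// here purely for brevity.
#include <cstdio>
#include <cstdlib>
#include <cmath>
#include <cuda_runtime.h>

__global__ void extract_lightcurve(const float *map, int npix,   // magnification map
                                   float x0, float y0,           // track start (pixels)
                                   float dx, float dy,           // step per epoch (pixels)
                                   float *lc, int n_epoch)
{
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= n_epoch) return;
    int px = (int)roundf(x0 + t * dx);
    int py = (int)roundf(y0 + t * dy);
    lc[t] = (px >= 0 && px < npix && py >= 0 && py < npix) ? map[py * npix + px] : 0.0f;
}

int main()
{
    const int npix = 1024, n_epoch = 500;
    float *h_map = (float *)malloc((size_t)npix * npix * sizeof(float));
    for (int i = 0; i < npix * npix; ++i) h_map[i] = 1.0f;   // flat dummy map

    float *d_map, *d_lc;
    cudaMalloc(&d_map, (size_t)npix * npix * sizeof(float));
    cudaMalloc(&d_lc, n_epoch * sizeof(float));
    cudaMemcpy(d_map, h_map, (size_t)npix * npix * sizeof(float), cudaMemcpyHostToDevice);

    // diagonal track across the map, one pixel per epoch in each direction
    extract_lightcurve<<<(n_epoch + 255) / 256, 256>>>(d_map, npix, 10.0f, 10.0f,
                                                       1.0f, 1.0f, d_lc, n_epoch);
    float lc[n_epoch];
    cudaMemcpy(lc, d_lc, n_epoch * sizeof(float), cudaMemcpyDeviceToHost);
    printf("magnification at first and last epoch: %.2f  %.2f\n", lc[0], lc[n_epoch - 1]);

    cudaFree(d_map); cudaFree(d_lc); free(h_map);
    return 0;
}
```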
Astronomy depends on ever-increasing computing power. Processor clock rates have plateaued, and increased performance is now appearing in the form of additional processor cores on a single chip. This poses significant challenges to the astronomy software community. Graphics processing units (GPUs), now capable of general-purpose computation, exemplify both the difficult learning curve and the significant speedups exhibited by massively parallel hardware architectures. We present a generalized approach to tackling this paradigm shift, based on the analysis of algorithms. We describe a small collection of foundation algorithms relevant to astronomy and explain how they may be used to ease the transition to massively parallel computing architectures. We demonstrate the effectiveness of our approach by applying it to four well-known astronomy problems: Högbom CLEAN, inverse ray-shooting for gravitational lensing, pulsar dedispersion and volume rendering. Algorithms with well-defined memory access patterns and high arithmetic intensity stand to receive the greatest performance boost from massively parallel architectures, while those that involve a significant amount of decision-making may struggle to take advantage of the available processing power.
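As a concrete example of a foundation algorithm in the sense used above, the sketch below implements the data-parallel core of Högbom CLEAN, locating the absolute peak of a residual image with a shared-memory tree reduction. This is our own illustrative example, not code from the paper.

```cuda
// Illustrative sketch (not code from the paper): the data-parallel core of
// Hogbom CLEAN is repeatedly locating the absolute peak of the residual image,
// a classic 'reduce' foundation algorithm.  Each block finds its local peak in
// shared memory; the small per-block results are combined on the host.
#include <cstdio>
#include <cstdlib>
#include <cmath>
#include <cuda_runtime.h>

__global__ void block_peak(const float *img, int n, float *blk_val, int *blk_idx)
{
    __shared__ float sval[256];
    __shared__ int   sidx[256];
    int tid = threadIdx.x;
    int i   = blockIdx.x * blockDim.x + tid;

    sval[tid] = (i < n) ? fabsf(img[i]) : -1.0f;
    sidx[tid] = (i < n) ? i : -1;
    __syncthreads();

    for (int s = blockDim.x / 2; s > 0; s >>= 1) {        // tree reduction
        if (tid < s && sval[tid + s] > sval[tid]) {
            sval[tid] = sval[tid + s];
            sidx[tid] = sidx[tid + s];
        }
        __syncthreads();
    }
    if (tid == 0) { blk_val[blockIdx.x] = sval[0]; blk_idx[blockIdx.x] = sidx[0]; }
}

int main()
{
    const int n = 1 << 20, threads = 256, blocks = (n + threads - 1) / threads;
    float *h_img = (float *)malloc(n * sizeof(float));
    for (int i = 0; i < n; ++i) h_img[i] = 0.01f * (i % 97);
    h_img[123456] = 50.0f;                                 // planted peak

    float *d_img, *d_bval; int *d_bidx;
    cudaMalloc(&d_img, n * sizeof(float));
    cudaMalloc(&d_bval, blocks * sizeof(float));
    cudaMalloc(&d_bidx, blocks * sizeof(int));
    cudaMemcpy(d_img, h_img, n * sizeof(float), cudaMemcpyHostToDevice);

    block_peak<<<blocks, threads>>>(d_img, n, d_bval, d_bidx);

    float *bval = (float *)malloc(blocks * sizeof(float));
    int   *bidx = (int *)malloc(blocks * sizeof(int));
    cudaMemcpy(bval, d_bval, blocks * sizeof(float), cudaMemcpyDeviceToHost);
    cudaMemcpy(bidx, d_bidx, blocks * sizeof(int), cudaMemcpyDeviceToHost);
    int best = 0;
    for (int b = 1; b < blocks; ++b) if (bval[b] > bval[best]) best = b;
    printf("peak |value| %.1f at pixel %d\n", bval[best], bidx[best]);
    // A full CLEAN iteration would subtract a scaled dirty beam at this pixel
    // and repeat; the decision-making around that step is what limits speed-ups.

    cudaFree(d_img); cudaFree(d_bval); cudaFree(d_bidx);
    free(h_img); free(bval); free(bidx);
    return 0;
}
```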
Flexion-based weak gravitational lensing analysis is proving to be a useful adjunct to traditional shear-based techniques. As flexion arises from gradients across an image, analytic and numerical techniques are required to investigate flexion predictions for extended image/source pairs. Using the Schwarzschild lens model, we demonstrate that the ray-bundle method for gravitational lensing can be used to accurately recover second flexion, and is consistent with recovery of zero first flexion. Using lens plane to source plane bundle propagation, we find that second flexion can be recovered with an error no worse than 1 per cent for bundle radii smaller than Δθ = 0.01θE and lens plane impact parameters greater than θE + Δθ, where θE is the angular Einstein radius. Using source plane to lens plane bundle propagation, we demonstrate the existence of a preferred flexion zone. For images at radii closer to the lens than the inner boundary of this zone, indicative of the true strong lensing regime, the flexion formalism should be used with caution (errors greater than 5 per cent for extended image/source pairs). We also define a shear-zone boundary, beyond which image shapes are essentially indistinguishable from ellipses (1 per cent error in ellipticity). While suggestive that a traditional weak lensing analysis is satisfactory beyond this boundary, a potentially detectable non-zero flexion signal remains.
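The basic operation of the ray-bundle method referred to above is propagating a small circle of rays from the lens plane to the source plane; the sketch below does this for a point-mass (Schwarzschild) lens in units of the Einstein radius. Extracting convergence, shear and flexion from the distorted bundle shape is omitted, and all numbers are illustrative assumptions.

```cuda
// Illustrative sketch: lens-plane to source-plane propagation of a circular ray
// bundle through a point-mass (Schwarzschild) lens, the basic operation of the
// ray-bundle method.  Angles are in units of the Einstein radius theta_E.
// Measuring shear and flexion from the distorted bundle shape is omitted.
#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

#define N_RAYS 32   // rays around the bundle perimeter

// Point-mass lens equation in Einstein-radius units: beta = theta - theta/|theta|^2
__global__ void propagate_bundle(float cx, float cy, float dtheta,
                                 float *bx, float *by)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= N_RAYS) return;
    float phi = 2.0f * 3.14159265f * i / N_RAYS;
    float tx = cx + dtheta * cosf(phi);        // image-plane ray position
    float ty = cy + dtheta * sinf(phi);
    float r2 = tx * tx + ty * ty;
    bx[i] = tx - tx / r2;                      // source-plane position
    by[i] = ty - ty / r2;
}

int main()
{
    // Central ray at 2 theta_E from the lens, bundle radius 0.01 theta_E
    // (comparable to the regime quoted in the abstract).
    float *d_bx, *d_by, bx[N_RAYS], by[N_RAYS];
    cudaMalloc(&d_bx, sizeof(bx));
    cudaMalloc(&d_by, sizeof(by));
    propagate_bundle<<<1, N_RAYS>>>(2.0f, 0.0f, 0.01f, d_bx, d_by);
    cudaMemcpy(bx, d_bx, sizeof(bx), cudaMemcpyDeviceToHost);
    cudaMemcpy(by, d_by, sizeof(by), cudaMemcpyDeviceToHost);
    for (int i = 0; i < 4; ++i)
        printf("ray %d -> source plane (%.5f, %.5f)\n", i, bx[i], by[i]);
    // The departure of the mapped bundle from an ellipse encodes the flexion
    // at this impact parameter.
    cudaFree(d_bx); cudaFree(d_by);
    return 0;
}
```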
High-redshift sources suffer from magnification or demagnification due to weak gravitational lensing by large-scale structure. One consequence of this is that the distance-redshift relation, in wide use for cosmological tests, suffers lensing-induced scatter which can be quantified by the magnification probability distribution. Predicting this distribution generally requires a method for ray tracing through cosmological N-body simulations. However, standard methods tend to apply the multiple-thin-lens approximation. In an effort to quantify the accuracy of these methods, we develop an innovative code that performs ray tracing without the use of this approximation. The efficiency and accuracy of this computationally challenging approach can be improved by careful choices of numerical parameters; therefore, the results are analysed for the behaviour of the ray-tracing code in the vicinity of Schwarzschild and Navarro-Frenk-White lenses. Preliminary comparisons are drawn with the multiple-lens-plane ray-bundle method in the context of cosmological mass distributions for a source redshift of zs = 0.5.
We present a high-performance, graphics processing unit (GPU) based framework for the efficient analysis and visualization of (nearly) terabyte (TB) sized 3D images. Using a cluster of 96 GPUs, we demonstrate for a 0.5 TB image (1) volume rendering using an arbitrary transfer function at 7-10 frames per second, (2) computation of basic global image statistics such as the mean intensity and standard deviation in 1.7 s, (3) evaluation of the image histogram in 4 s and (4) evaluation of the global image median intensity in just 45 s. Our measured results correspond to a raw computational throughput approaching 1 teravoxel per second, and are 10-100 times faster than the best possible performance with traditional single-node, multi-core CPU implementations. A scalability analysis shows that the framework will scale well to images sized 1 TB and beyond. Other parallel data analysis algorithms can be added to the framework with relative ease, and accordingly we present our framework as a possible solution to the image analysis and visualization requirements of next-generation telescopes, including the forthcoming Square Kilometre Array Pathfinder radio telescopes.
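As one example of the kind of data-parallel primitive behind the statistics quoted above, the sketch below builds an image histogram with per-block shared-memory counters merged into a global histogram via atomics. It is a single-GPU, hypothetical illustration rather than the framework's multi-GPU implementation; bin count, value range and volume size are assumptions.

```cuda
// Illustrative single-GPU sketch (not the framework's multi-GPU code): an image
// histogram built with per-block shared-memory counters, merged into the global
// histogram with atomics.  The same grid-stride pattern underlies the mean and
// standard deviation reductions quoted above.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

#define NBINS 256

__global__ void histogram(const float *vox, size_t n, float vmin, float vmax,
                          unsigned int *hist)
{
    __shared__ unsigned int local[NBINS];
    for (int b = threadIdx.x; b < NBINS; b += blockDim.x) local[b] = 0;
    __syncthreads();

    float scale = NBINS / (vmax - vmin);
    for (size_t i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += (size_t)gridDim.x * blockDim.x) {
        int b = (int)((vox[i] - vmin) * scale);
        if (b < 0) b = 0;
        if (b >= NBINS) b = NBINS - 1;
        atomicAdd(&local[b], 1u);              // cheap shared-memory atomic
    }
    __syncthreads();
    for (int b = threadIdx.x; b < NBINS; b += blockDim.x)
        atomicAdd(&hist[b], local[b]);         // one global atomic per bin per block
}

int main()
{
    const size_t n = (size_t)1 << 24;          // 16 Mvoxel test volume
    float *h = (float *)malloc(n * sizeof(float));
    for (size_t i = 0; i < n; ++i) h[i] = (float)(i % 1000) / 1000.0f;

    float *d_vox; unsigned int *d_hist;
    cudaMalloc(&d_vox, n * sizeof(float));
    cudaMalloc(&d_hist, NBINS * sizeof(unsigned int));
    cudaMemset(d_hist, 0, NBINS * sizeof(unsigned int));
    cudaMemcpy(d_vox, h, n * sizeof(float), cudaMemcpyHostToDevice);

    histogram<<<1024, 256>>>(d_vox, n, 0.0f, 1.0f, d_hist);

    unsigned int hist[NBINS];
    cudaMemcpy(hist, d_hist, sizeof(hist), cudaMemcpyDeviceToHost);
    unsigned long long total = 0;
    for (int b = 0; b < NBINS; ++b) total += hist[b];
    printf("histogram counts sum to %llu voxels\n", total);

    cudaFree(d_vox); cudaFree(d_hist); free(h);
    return 0;
}
```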