ABSTRACT
Many scientific investigations of photometric galaxy surveys require redshift estimates, whose uncertainty properties are best encapsulated by photometric redshift (photo-z) posterior probability density functions (PDFs). A plethora of photo-z PDF estimation methodologies abounds, producing discrepant results with no consensus on a preferred approach. We present the results of a comprehensive experiment comparing twelve photo-z algorithms applied to mock data produced for the Rubin Observatory Legacy Survey of Space and Time (LSST) Dark Energy Science Collaboration (DESC). By supplying perfect prior information, in the form of the complete template library and a representative training set, as inputs to each code, we demonstrate the impact of the assumptions underlying each technique on the output photo-z PDFs. In the absence of a notion of true, unbiased photo-z PDFs, we evaluate and interpret multiple metrics of the ensemble properties of the derived photo-z PDFs, as well as traditional reductions to photo-z point estimates. We report systematic biases and overall over/under-breadth of the photo-z PDFs of many popular codes, which may indicate avenues for improvement in the algorithms or implementations. Furthermore, we draw attention to the limitations of established metrics for assessing photo-z PDF accuracy; though we identify the conditional density estimate (CDE) loss as a promising metric of photo-z PDF performance in the case where true redshifts are available but true photo-z PDFs are not, we emphasize the need for science-specific performance metrics.
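The abstract singles out the conditional density estimate (CDE) loss as a metric that works when true redshifts are available but true PDFs are not. As a hedged illustration only (the function name, grid convention, and nearest-grid lookup below are our own assumptions, not part of any of the compared codes), the CDE loss for photo-z PDFs evaluated on a common redshift grid can be estimated as:

```python
import numpy as np

def cde_loss(pdf_grid, z_grid, z_true):
    """Estimate the CDE loss for gridded photo-z PDFs.

    pdf_grid : (n_gal, n_z) array of estimated densities f(z | x_i)
    z_grid   : (n_z,) uniform redshift grid the PDFs are evaluated on
    z_true   : (n_gal,) true (e.g. spectroscopic) redshifts

    Loss = E[ integral f(z|x)^2 dz ] - 2 E[ f(z_true|x) ];
    lower is better (the omitted true-PDF term is a constant offset).
    """
    dz = z_grid[1] - z_grid[0]
    # First term: mean over galaxies of the integral of f^2 dz (Riemann sum)
    term1 = (pdf_grid**2).sum(axis=1).mean() * dz
    # Second term: mean density evaluated at each galaxy's true redshift,
    # via nearest-grid-point lookup (interpolation would also work)
    idx = np.searchsorted(z_grid, z_true).clip(0, len(z_grid) - 1)
    term2 = pdf_grid[np.arange(len(z_true)), idx].mean()
    return term1 - 2.0 * term2
```

A PDF ensemble that concentrates density near the true redshifts scores a lower (more negative) loss than an over-broad ensemble, which is exactly the over/under-breadth behavior the experiment probes.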
Context. Over recent decades, astronomy has entered the era of massive data and real-time surveys. This is improving the study of transient objects, although they still comprise some of the most poorly understood phenomena in astrophysics, as it is inherently more difficult to obtain data to constrain the proposed models. Aims. In order to help detect these objects in their brightest state and build synergies with multi-wavelength real-time surveys, we have built a quasi-real-time automatic transient detection system for the XMM-Newton pipeline: the Search for Transient Objects in New detections using Known Sources (STONKS) pipeline. Methods. STONKS detects long-term X-ray transient events by automatically comparing new XMM-Newton detections to any available archival X-ray data at the same position, sending out an alert if the variability between observations (defined as the ratio between the maximum flux and the minimum flux or upper limit) is over 5. This required an initial careful cross-correlation and flux calibration of various X-ray catalogs from different observatories (XMM-Newton, Chandra, Swift, ROSAT, and eROSITA). A Bayesian framework was put in place to resolve any ambiguous associations. We also systematically computed the XMM-Newton upper limits at the position of any X-ray source covered by the XMM-Newton observational footprint, even without any XMM-Newton counterpart. The behavior of STONKS was then tested on all 483 observations performed in imaging mode in 2021. Results. Over the 2021 testing run, STONKS provided a rate of \(0.7^{+0.7}_{-0.5}\) alerts per day, about 80% of them corresponding to serendipitous sources. The detected variable serendipitous sources include several highly variable active galactic nuclei (AGNs) and flaring stars, as well as new X-ray binary and ultra-luminous X-ray source candidates, some of which are presented here.
STONKS also detected targeted tidal disruption events, confirming its ability to detect other serendipitous events. As a byproduct of our method, the archival multi-instrument catalog contains about one million X-ray sources, with 15% of them involving several catalogs and 60% of them having XMM-Newton (pointed or slew) upper limits. Conclusions. STONKS demonstrates great potential for revealing future serendipitous transient X-ray sources, providing the community with the ability to follow up on these objects a few days after their detection, with the goal of obtaining a better understanding of their nature. The underlying multi-instrument archival X-ray catalog will be made available to the community and kept up to date with future X-ray data releases.
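The Methods section states the alert criterion precisely: an alert fires when the ratio between the maximum flux and the minimum flux or upper limit exceeds 5. A minimal sketch of that decision rule (function names and data layout are illustrative assumptions, not the actual STONKS implementation, which compares cross-calibrated fluxes from multiple instruments):

```python
from typing import Optional, Sequence

ALERT_THRESHOLD = 5.0  # variability ratio quoted in the abstract

def variability_ratio(fluxes: Sequence[float],
                      upper_limits: Sequence[float] = ()) -> Optional[float]:
    """Ratio between the maximum detected flux and the minimum of the
    detected fluxes and upper limits, per the criterion in the abstract.

    Returns None when there is no detection (only upper limits).
    """
    if not fluxes:
        return None
    floor = min(list(fluxes) + list(upper_limits))
    return max(fluxes) / floor

def should_alert(fluxes: Sequence[float],
                 upper_limits: Sequence[float] = ()) -> bool:
    """True when the long-term variability exceeds the threshold."""
    ratio = variability_ratio(fluxes, upper_limits)
    return ratio is not None and ratio > ALERT_THRESHOLD
```

Including upper limits in the denominator is what lets a single bright detection over a historically quiet position trigger an alert even when the source was never previously detected.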
The main science aim of the BlackGEM array is to detect optical counterparts
to gravitational wave mergers. Additionally, the array will perform a set of
synoptic surveys to detect Local Universe ...transients and short time-scale
variability in stars and binaries, as well as a six-filter all-sky survey down
to ~22nd mag. The BlackGEM Phase-I array consists of three optical wide-field
unit telescopes. Each unit uses an f/5.5 modified Dall-Kirkham (Harmer-Wynne)
design with a triplet corrector lens and a 65 cm primary mirror, coupled with a
110 Mpix CCD detector that provides an instantaneous field-of-view of
2.7~square degrees, sampled at 0.564\arcsec/pixel. The total field-of-view for
the array is 8.2 square degrees. Each telescope is equipped with a six-slot
filter wheel containing an optimised Sloan set (BG-u, BG-g, BG-r, BG-i, BG-z)
and a wider-band 440-720 nm (BG-q) filter. Each unit telescope is independent
from the others. Cloud-based data processing is done in real time, and includes
a transient-detection routine as well as a full-source optimal-photometry
module. BlackGEM has been installed at the ESO La Silla observatory as of
October 2019. After a prolonged COVID-19 hiatus, science operations started on
April 1, 2023 and will run for five years. Aside from its core scientific
program, BlackGEM will give rise to a multitude of additional science cases in
multi-colour time-domain astronomy, to the benefit of a variety of topics in
astrophysics, such as infant supernovae, luminous red novae, asteroseismology
of post-main-sequence objects, (ultracompact) binary stars, and the relation
between gravitational wave counterparts and other classes of transients.