ABSTRACT
Fink is a broker designed to enable science with large time-domain alert streams such as the one from the upcoming Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST). It exhibits traditional astronomy broker features such as automated ingestion, annotation, selection, and redistribution of promising alerts for transient science. It is also designed to go beyond traditional broker features by providing real-time transient classification that is continuously improved using state-of-the-art deep learning and adaptive learning techniques. These evolving added values will enable more accurate scientific output from LSST photometric data for diverse science cases, while also leading to a higher incidence of new discoveries that will accompany the evolution of the survey. In this paper, we introduce Fink, its science motivation, architecture, and current status, including first science verification cases using the Zwicky Transient Facility alert stream.
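The ingest-annotate-select-redistribute pipeline described above can be sketched in a few lines. This is a toy illustration of broker-style alert selection, not Fink's actual API: the field names, the mock classifier, and the threshold are all assumptions made for the example.

```python
# Toy sketch of broker-style alert handling: annotate incoming alert
# dicts with a classification score, then keep only promising ones.
# Field names ("id", "mag", "score") and the threshold are illustrative.

def annotate(alert, classifier):
    """Return a copy of the alert with a transient-classification score attached."""
    alert = dict(alert)
    alert["score"] = classifier(alert)
    return alert

def select(alerts, classifier, threshold=0.5):
    """Redistribute only alerts whose score exceeds the threshold."""
    annotated = (annotate(a, classifier) for a in alerts)
    return [a for a in annotated if a["score"] > threshold]

# Mock classifier standing in for a real model: brighter alerts
# (lower magnitude) score higher, clipped to [0, 1].
mock = lambda a: max(0.0, min(1.0, (20.0 - a["mag"]) / 5.0))

stream = [{"id": "a1", "mag": 16.5}, {"id": "a2", "mag": 19.8}]
print(select(stream, mock))
```

In a real broker the classifier would be a trained model and the stream would arrive via a message queue; the selection logic itself stays this simple.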
The generation-defining Vera C. Rubin Observatory will make state-of-the-art measurements of both the static and transient universe through its Legacy Survey for Space and Time (LSST). With such capabilities, it is immensely challenging to optimize the LSST observing strategy across the survey’s wide range of science drivers. Many aspects of the LSST observing strategy relevant to the LSST Dark Energy Science Collaboration, such as survey footprint definition, single-visit exposure time, and the cadence of repeat visits in different filters, are yet to be finalized. Here, we present metrics used to assess the impact of observing strategy on the cosmological probes considered most sensitive to survey design; these are large-scale structure, weak lensing, type Ia supernovae, kilonovae, and strong lens systems (as well as photometric redshifts, which enable many of these probes). We evaluate these metrics for over 100 different simulated potential survey designs. Our results show that multiple observing strategy decisions can profoundly impact cosmological constraints with LSST; these include adjusting the survey footprint, ensuring repeat nightly visits are taken in different filters, and enforcing regular cadence. We provide public code for our metrics, which makes them readily available for evaluating further modifications to the survey design. We conclude with a set of recommendations and highlight observing strategy factors that require further research.
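As a hedged illustration of the kind of cadence metric the paper evaluates (a sketch in the same spirit, not the collaboration's actual public code), the function below computes the median gap in nights between successive visits to a field, given visit times in Modified Julian Date:

```python
# Illustrative cadence metric: median gap (in days) between successive
# distinct nights on which a field was observed. Visit times are MJDs;
# the integer part of an MJD identifies the night.

def median_internight_gap(mjds):
    """Median inter-night gap in days; NaN if fewer than two nights."""
    nights = sorted({int(m) for m in mjds})
    if len(nights) < 2:
        return float("nan")
    gaps = sorted(b - a for a, b in zip(nights, nights[1:]))
    mid = len(gaps) // 2
    if len(gaps) % 2:
        return float(gaps[mid])
    return (gaps[mid - 1] + gaps[mid]) / 2.0

# Four visits spread over three distinct nights: gaps of 3 and 7 nights.
visits = [60000.1, 60000.3, 60003.2, 60010.4]
print(median_internight_gap(visits))
```

A regular-cadence requirement of the kind the paper recommends would then translate into a cut on this metric (and its scatter) over the survey footprint.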
Studies of gravitational microlensing effects require the estimation of their detection efficiency as soon as one wants to quantify the massive compact objects along the line of sight of source targets. This is particularly important for setting limits on the contribution of massive compact objects to the Galactic halo. These estimates of detection efficiency must account not only for the blending effects of accidentally superimposed sources in crowded fields, but also for the possible mixing of light from stars belonging to multiple gravitationally bound stellar systems. Until now, only blending due to the accidental alignment of stars had been studied, in particular with the help of high-resolution space images. In this paper, we address the impact of unresolved binary sources that are physically gravitationally bound, rather than accidentally aligned, on microlensing detection efficiencies toward the Large Magellanic Cloud (LMC). We used the Gaia catalog of nearby stars to constrain the local binarity rate, which we extrapolated to the distance of the LMC. We then estimated an upper limit on the impact of this binarity on the detection efficiency of microlensing effects, as a function of lens mass. We find that at most 6.2\% of microlensing events on LMC sources due to halo lenses heavier than \(30 M_{\odot}\) could be affected by the sources belonging to unresolved binary systems. This is the maximum fraction of events for which the source is a binary system separated by about one angular Einstein radius or more, a configuration in which light-curve distortion could affect the efficiency of some detection algorithms. For events caused by lighter lenses on LMC sources, our study shows that blending effects from binary systems are likely to be more frequent and should be studied in more detail to improve the accuracy of efficiency calculations.
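For reference, the standard point-source point-lens relations behind these considerations (textbook microlensing formulae, not taken from the paper) are the angular Einstein radius and the magnification as a function of the source-lens separation \(u\) in units of \(\theta_E\):

```latex
\theta_E = \sqrt{\frac{4GM}{c^2}\,\frac{D_S - D_L}{D_L\,D_S}},
\qquad
A(u) = \frac{u^2 + 2}{u\sqrt{u^2 + 4}} .
```

A binary source whose components are separated by \(\gtrsim \theta_E\) samples noticeably different magnifications for each component, which produces the light-curve distortion the abstract refers to.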
Black hole-like objects with masses greater than \(10 M_{\odot}\), as discovered by gravitational-wave antennas, can produce long-timescale (several years) gravitational microlensing effects. Considered separately, previous microlensing surveys were insensitive to such events because of their limited durations of 6-7 years. We combined light curves from the EROS-2 and MACHO surveys toward the Large Magellanic Cloud (LMC) to create a joint database of 14.1 million stars, covering a total duration of 10.6 years, with fluxes measured through 4 wide passbands. We searched for multi-year microlensing events in this catalog of extended light curves, complemented by 24.1 million light curves observed by only one of the surveys. Our analysis, combined with a previous EROS analysis, shows that compact objects with masses between \(10^{-7}\) and \(200 M_{\odot}\) cannot constitute more than \(\sim 20\%\) of the total mass of a standard halo (at \(95\%\) CL). We also exclude the possibility that black holes (BHs) lighter than \(1000 M_{\odot}\) make up \(\sim 50\%\) of the halo.
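To see why lenses heavier than \(10 M_{\odot}\) produce multi-year events, one can sketch the Einstein-radius crossing time \(t_E = R_E / v_t\). The distances and transverse velocity below are illustrative assumptions (a halo lens halfway to an LMC source at \(\sim 50\) kpc, \(v_t \approx 200\) km/s), not values from the paper:

```python
import math

# Physical constants (SI units)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
PC = 3.086e16        # parsec, m

def einstein_radius(m_lens_msun, d_l_pc, d_s_pc):
    """Einstein radius R_E (metres) of a point lens of the given mass
    at distance d_l, for a source at distance d_s (d_l < d_s)."""
    m = m_lens_msun * M_SUN
    d_l, d_s = d_l_pc * PC, d_s_pc * PC
    return math.sqrt(4 * G * m * d_l * (d_s - d_l) / (C**2 * d_s))

def event_timescale_years(m_lens_msun, d_l_pc, d_s_pc, v_t_kms=200.0):
    """Einstein-radius crossing time t_E = R_E / v_t, in years."""
    r_e = einstein_radius(m_lens_msun, d_l_pc, d_s_pc)
    return r_e / (v_t_kms * 1e3) / (365.25 * 86400)

# Illustrative configuration: halo lens at 25 kpc, LMC source at 50 kpc.
for m in (1.0, 30.0, 200.0):
    t_e = event_timescale_years(m, 25_000, 50_000)
    print(f"M = {m:6.1f} M_sun  ->  t_E ~ {t_e:.2f} yr")
```

Since \(t_E \propto \sqrt{M}\), the heavy lenses probed here yield events lasting years, longer than either survey alone, which is the motivation for joining the EROS-2 and MACHO light curves.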