We present a new technique to determine distances to major star-forming regions across the Perseus Molecular Cloud, using a combination of stellar photometry, astrometric data, and ¹²CO spectral-line maps. Incorporating Gaia DR2 parallax measurements when available, we start by inferring the distance and reddening to stars from their Pan-STARRS1 and Two Micron All Sky Survey photometry, based on a technique presented by Green et al. and implemented in their 3D "Bayestar" dust map of three-quarters of the sky. We then refine their technique by using the velocity slices of a CO spectral cube as dust templates and modeling the cumulative distribution of dust along the line of sight toward these stars as a linear combination of the emission in the slices. Using a nested sampling algorithm, we fit these per-star distance–reddening measurements to find the distances to the CO velocity slices toward each star-forming region. This results in distance estimates explicitly tied to the velocity structure of the molecular gas. We determine distances to the B5, IC 348, B1, NGC 1333, L1448, and L1451 star-forming regions and find that the individual clouds are located between 275 and 300 pc, with typical combined uncertainties of 5%. We find that the velocity gradient across Perseus corresponds to a distance gradient of about 25 pc, with the eastern portion of the cloud farther away than the western portion. We determine an average distance to the complex of 294 ± 17 pc, about 60 pc farther than the distance derived to the western portion of the cloud using parallax measurements of water masers associated with young stellar objects. The method we present is not limited to the Perseus Complex, but may be applied anywhere on the sky with adequate CO data in the pursuit of more accurate 3D maps of molecular clouds in the solar neighborhood and beyond.
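For readers who want to experiment, the line-of-sight model described above can be sketched numerically: each CO velocity slice contributes reddening only to stars behind its assigned distance, so the cumulative reddening toward a star is a linear combination of slice templates with step-function distance dependence. The function name, coefficients, and emission values below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def cumulative_reddening(d_star, slice_distances, slice_coeffs, slice_emission):
    """Reddening accumulated out to a star at distance d_star (pc).

    slice_distances : distance assigned to each CO velocity slice (pc)
    slice_coeffs    : per-slice dust-to-CO conversion coefficients (illustrative)
    slice_emission  : CO integrated emission of each slice toward the star
    """
    # Only slices in front of the star contribute to its reddening.
    on = np.asarray(slice_distances) <= d_star
    return np.sum(np.asarray(slice_coeffs) * np.asarray(slice_emission) * on)

# Example with two velocity slices placed at 280 pc and 300 pc.
d_slices = [280.0, 300.0]
coeffs   = [0.05, 0.08]   # mag per (K km/s), illustrative
emission = [4.0, 2.5]     # K km/s toward this star, illustrative

print(cumulative_reddening(250.0, d_slices, coeffs, emission))  # 0.0 (foreground star)
print(cumulative_reddening(290.0, d_slices, coeffs, emission))  # 0.2 (behind first slice)
print(cumulative_reddening(350.0, d_slices, coeffs, emission))  # 0.4 (behind both slices)
```

In the actual method, the slice distances and coefficients are the free parameters constrained by the per-star distance–reddening posteriors under nested sampling; this sketch only shows the forward model.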
Abstract
Galaxy formation and evolution involve a variety of effectively stochastic processes that operate over different timescales. The extended regulator model provides an analytic framework for the resulting variability (or “burstiness”) in galaxy-wide star formation due to these processes. It does this by relating the variability in Fourier space to the effective timescales of stochastic gas inflow, equilibrium, and dynamical processes influencing the creation and destruction of giant molecular clouds, using the power spectral density (PSD) formalism. We use the connection between the PSD and the autocovariance function for general stochastic processes to reformulate this model as an autocovariance function, which we use to model variability in galaxy star formation histories (SFHs) using physically motivated Gaussian processes in log star formation rate (SFR) space. Using stellar population synthesis models, we then explore how changes in model stochasticity can affect spectral signatures across galaxy populations with properties similar to the Milky Way and present-day dwarfs, as well as at higher redshifts. We find that, even at fixed scatter, perturbations to the stochasticity model (changing timescales vs. overall variability) leave unique spectral signatures across both idealized and more realistic galaxy populations. Distributions of spectral features, including Hα- and UV-based SFR indicators, Hδ and Ca H and K absorption-line strengths, Dn(4000), and broadband colors, provide testable predictions for galaxy populations from present and upcoming surveys with the Hubble Space Telescope, James Webb Space Telescope, and Nancy Grace Roman Space Telescope. The Gaussian process SFH framework provides a fast, flexible implementation of physical covariance models for the next generation of spectral energy distribution modeling tools. Code to reproduce our results can be found at https://github.com/kartheikiyer/GP-SFH.
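The covariance-based SFH idea described above can be sketched in a few lines of NumPy: draw Δlog SFR(t) from a zero-mean Gaussian process whose autocovariance encodes a single regulation timescale. The exponential kernel used here is a simplifying assumption (the full model combines several timescales), and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

t = np.linspace(0.0, 10.0, 200)   # time grid, Gyr (illustrative)
sigma, tau = 0.3, 1.0             # scatter (dex) and regulation timescale (Gyr)

# Exponential autocovariance: k(dt) = sigma^2 * exp(-|dt| / tau),
# the damped-random-walk limit of a regulator-like PSD.
dt = np.abs(t[:, None] - t[None, :])
K = sigma**2 * np.exp(-dt / tau)

# One realization of Delta log SFR(t); small jitter aids numerical stability.
log_sfr = rng.multivariate_normal(np.zeros_like(t), K + 1e-10 * np.eye(t.size))

print(log_sfr.shape)              # (200,)
print(round(float(np.std(log_sfr)), 2))  # sample scatter, of order sigma
```

Changing tau at fixed sigma alters how long bursts persist without changing the overall scatter, which is exactly the kind of perturbation whose spectral signatures the abstract explores.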
Abstract
Flagship near-future surveys targeting 10⁸–10⁹ galaxies across cosmic time will soon reveal the processes of galaxy assembly in unprecedented resolution. This creates an immediate computational challenge: effective analysis of the full data set. With simulation-based inference (SBI), it is possible to attain complex posterior distributions with the accuracy of traditional methods but with a >10⁴ increase in speed. However, it comes with a major limitation: standard SBI requires the simulated data to have characteristics identical to those of the observed data, which is often violated in astronomical surveys due to inhomogeneous coverage and/or fluctuating sky and telescope conditions. In this work, we present a complete SBI-based methodology, SBI++, for treating out-of-distribution measurement errors and missing data. We show that out-of-distribution errors can be approximated by using standard SBI evaluations and that missing data can be marginalized over using SBI evaluations over nearby data realizations in the training set. In addition to the validation set, we apply SBI++ to galaxies identified in extragalactic images acquired by the James Webb Space Telescope, and show that SBI++ can infer photometric redshifts at least as accurately as traditional sampling methods—and, crucially, better than the original SBI algorithm using training data with a wide range of observational errors. SBI++ retains the fast inference speed of ∼1 s for objects within the training set's noise and data distributions, and additionally permits parameter inference outside of them at ∼1 minute per object. This expanded regime has broad implications for future applications to astronomical surveys. (Code and a Jupyter tutorial are made publicly available at https://github.com/wangbingjie/sbi_pp.)
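The missing-data strategy described above (marginalizing over SBI evaluations on nearby training-set realizations) can be illustrated with a toy stand-in for the posterior network. Everything here is an illustrative assumption: the Gaussian "training set," the 25-neighbor choice, and the toy posterior summary; this is not the SBI++ implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in training set: 1000 photometry vectors in 3 bands.
train = rng.normal(size=(1000, 3))

# An observed object with the third band missing.
obs = np.array([0.1, -0.2, np.nan])
missing = np.isnan(obs)

# Find the nearest training realizations using the *observed* bands only.
dist = np.linalg.norm(train[:, ~missing] - obs[~missing], axis=1)
neighbors = train[np.argsort(dist)[:25]]
imputations = neighbors[:, missing].ravel()   # plausible fills for the gap

def toy_posterior_summary(x):
    # Stand-in for an SBI posterior evaluation on a complete data vector.
    return float(np.mean(x))

# Marginalize: average the posterior summary over the imputed realizations.
summaries = []
for v in imputations:
    x = obs.copy()
    x[missing] = v
    summaries.append(toy_posterior_summary(x))
marginalized = float(np.mean(summaries))
print(imputations.shape, np.isfinite(marginalized))   # (25,) True
```

In the real method each imputed data vector would be passed through the trained posterior estimator, and the resulting posterior samples pooled.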
Abstract
We leverage the 1 pc spatial resolution of the Leike et al. three-dimensional (3D) dust map to characterize the 3D structure of nearby molecular clouds (d ≲ 400 pc). We start by “skeletonizing” the clouds in 3D volume density space to determine their “spines,” which we project on the sky to constrain cloud distances with ≈1% uncertainty. For each cloud, we determine an average radial volume density profile around its 3D spine and fit the profiles using Gaussian and Plummer functions. The radial volume density profiles are well described by a two-component Gaussian function, consistent with clouds having broad, lower-density outer envelopes and narrow, higher-density inner layers. The ratio of the outer to inner envelope widths is ≈3:1. We hypothesize that these two components may be tracing a transition between atomic and diffuse molecular gas or between the unstable and cold neutral medium. Plummer-like models can also provide a good fit, with molecular clouds exhibiting shallow power-law wings, the volume density, n, falling off like r⁻² at large radii. Using Bayesian model selection, we find that parameterizing the clouds’ profiles using a single Gaussian is disfavored. We compare our results with two-dimensional dust extinction maps, finding that the 3D dust recovers the total cloud mass from integrated approaches with fidelity, deviating only at higher levels of extinction (A_V ≳ 2–3 mag). The 3D cloud structure described here will enable comparisons with synthetic clouds generated in simulations, offering unprecedented insight into the origins and fates of molecular clouds in the interstellar medium.
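A minimal sketch of the profile-fitting step described above, on synthetic data: a radial volume density profile built from a narrow inner and broad outer Gaussian (widths in a 1:3 ratio) is fit with both a two-component Gaussian and a Plummer-like model using SciPy. All parameter values are illustrative, not results from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(r, a1, w1, a2, w2):
    # Narrow inner layer plus broad outer envelope.
    return a1 * np.exp(-r**2 / (2 * w1**2)) + a2 * np.exp(-r**2 / (2 * w2**2))

def plummer(r, n0, rf, p):
    # Flattened core with power-law wings: n falls off like r^-p at large radii.
    return n0 / (1 + (r / rf) ** 2) ** (p / 2)

r = np.linspace(0.0, 10.0, 200)     # radius, pc (illustrative)
rng = np.random.default_rng(2)
data = two_gauss(r, 100.0, 0.5, 10.0, 1.5) + rng.normal(0.0, 1.0, r.size)

p_gauss, _ = curve_fit(two_gauss, r, data, p0=[80, 0.4, 8, 1.2], maxfev=5000)
p_plum, _ = curve_fit(plummer, r, data, p0=[100.0, 0.5, 2.0], maxfev=5000)

print(np.round(p_gauss, 2))   # recovers amplitudes near 100, 10 and widths near 0.5, 1.5
```

Comparing the two fitted models by Bayesian evidence, as the abstract describes, would require a sampler on top of these likelihoods; curve_fit only provides the maximum-likelihood profiles.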
Abstract
Deep optical and near-infrared imaging of the entire Galactic plane is essential for understanding our Galaxy’s stars, gas, and dust. The second data release of the Dark Energy Camera (DECam) Plane Survey extends the five-band optical and near-infrared survey of the southern Galactic plane to cover 6.5% of the sky, ∣b∣ ≤ 10°, and 6° > ℓ > −124°, complementary to coverage by Pan-STARRS1. Typical single-exposure effective depths, including crowding effects and other complications, are 23.5, 22.6, 22.1, 21.6, and 20.8 mag in the g, r, i, z, and Y bands, respectively, with around 1″ seeing. The survey comprises 3.32 billion objects built from 34 billion detections in 21,400 exposures, totaling 260 hr of open-shutter time on the DECam at Cerro Tololo. The data reduction pipeline features several improvements, including the addition of synthetic source injection tests to validate photometric solutions across the entire survey footprint. A convenient functional form for the detection bias in the faint limit was derived and leveraged to characterize the performance of the photometric pipeline. A new postprocessing technique was applied to every detection to debias and improve uncertainty estimates of the flux in the presence of structured backgrounds, specifically targeting nebulosity. The images and source catalogs are publicly available at http://decaps.skymaps.info/.
Abstract
Galaxy stellar mass is known to be monotonically related to the size of the galaxy’s globular cluster (GC) population for Milky Way-sized and larger galaxies. However, the relation becomes ambiguous for dwarf galaxies, where there is some evidence for a downturn in GC population size at low galaxy masses. Smaller dwarfs are increasingly likely to have no GCs, and these zeros cannot be easily incorporated into linear models. We introduce the Hierarchical ERrors-in-variables BAyesian Lognormal hurdle (HERBAL) model to represent the relationship between dwarf galaxies and their GC populations, and apply it to the sample of Local Group galaxies, where coverage of the luminosity range is maximal. This bimodal model accurately represents the two populations of dwarf galaxies: those that have GCs and those that do not. Our model thoroughly accounts for all uncertainties, including measurement uncertainty, uncertainty in luminosity-to-stellar-mass conversions, and intrinsic scatter. The hierarchical nature of our Bayesian model also allows us to estimate galaxy masses and individual mass-to-light ratios from luminosity data within the model. We find that 50% of galaxies are expected to host GC populations at a stellar mass of log₁₀(M*) = 6.996, and that the expected mass of GC populations remains linear down to the smallest galaxies. Our hierarchical model recovers an accurate estimate of the Milky Way stellar mass. Under our assumed error model, we find a nonzero intrinsic scatter of 0.59 (+0.3/−0.21, 95% credible interval) that should be accounted for in future models.
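A lognormal hurdle likelihood of the kind described above can be sketched directly: a logistic component models whether a galaxy hosts any GCs at all, and a Gaussian-in-log-mass component models the GC population mass when it is nonzero. The parameter values below are illustrative choices, tuned only so the 50% hosting threshold lands near the quoted log₁₀(M*) ≈ 7; they are not the fitted HERBAL posteriors, and the sketch omits the hierarchical error structure.

```python
import numpy as np
from scipy.stats import norm

def log_likelihood(logm_star, logm_gc, beta0, beta1, alpha0, alpha1, sigma):
    """logm_gc = None encodes a galaxy with zero GCs (the 'hurdle' zero)."""
    # Logistic hurdle: probability of hosting any GCs at this stellar mass.
    p_host = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * logm_star)))
    if logm_gc is None:
        return np.log(1.0 - p_host)
    # Lognormal component: linear mean relation in log mass, scatter sigma.
    mu = alpha0 + alpha1 * logm_star
    return np.log(p_host) + norm.logpdf(logm_gc, mu, sigma)

# With these illustrative values, a galaxy at log10 M* = 7 sits at the
# 50% hosting threshold, close to the value quoted in the abstract.
theta = dict(beta0=-14.0, beta1=2.0, alpha0=-2.0, alpha1=1.0, sigma=0.6)
print(round(log_likelihood(7.0, 5.2, **theta), 3))   # galaxy with GCs
print(round(log_likelihood(5.0, None, **theta), 3))  # GC-free dwarf
```

The hurdle structure is what lets the zeros enter the likelihood coherently instead of being dropped or ad hoc imputed, which is the failure mode of plain linear models noted above.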
Abstract
Stellar ages are key for determining the formation history of the Milky Way, but are difficult to measure precisely. Furthermore, methods that use chemical abundances to infer ages may entangle the intrinsic evolution of stars with the chemodynamical evolution of the Galaxy. In this paper, we present a framework for making probabilistic predictions of stellar ages, and then quantify the contribution of both stellar evolution and Galactic chemical evolution to those predictions using SHapley Additive exPlanations (SHAP). We apply this interpretable prediction framework to both a simulated Milky Way sample containing stars in a variety of evolutionary stages and an APOGEE-mocked sample of red clump stars. We find that in the former case stellar evolution is the dominant driver of the age estimates, while in the latter case the more restricted evolutionary information causes the model to proxy ages through the chemical evolution model. We show that, as a result of the use of nonintrinsic Galactic chemical information, trends estimated with the predicted ages, such as the age–metallicity relation, can deviate from the truth.
Abstract
Artificial neural network emulators have been demonstrated to be a very computationally efficient method to rapidly generate galaxy spectral energy distributions, for parameter inference or otherwise. Using a highly flexible and fast mathematical structure, they can learn the nontrivial relationship between input galaxy parameters and output observables. However, they do so imperfectly, and small errors in flux prediction can yield large differences in recovered parameters. In this work, we investigate the relationship between an emulator’s execution time, uncertainties, correlated errors, and ability to recover accurate posteriors. We show that emulators can recover results consistent with traditional fits, with a precision of 25%–40% in posterior medians for stellar mass, stellar metallicity, star formation rate, and stellar age. We find that emulation uncertainties scale with an emulator’s width N as ∝ N⁻¹, while execution time scales as ∝ N², resulting in an inherent tradeoff between execution time and emulation uncertainties. We also find that emulators with uncertainties smaller than the observational uncertainties are able to recover accurate posteriors for most parameters without a significant increase in catastrophic outliers. Furthermore, we demonstrate that small architectures can produce flux residuals with significant correlations, which can create dangerous systematic errors in colors. Finally, we show that the distributions chosen for generating training sets can have a large effect on an emulator’s ability to accurately fit rare objects. Selecting the optimal architecture and training set for an emulator will minimize the computational requirements for fitting near-future large-scale galaxy surveys. We release our emulators on GitHub (http://github.com/elijahmathews/MathewsEtAl2023).
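The quoted scalings imply a simple cost-accuracy tradeoff worth making explicit: if emulation error falls as N⁻¹ while execution time grows as N², then halving the error quadruples the runtime. The constants below are placeholders, not measured values.

```python
# Back-of-the-envelope version of the width scalings quoted above.
# c_err and c_time are illustrative proportionality constants.

def emulation_error(n, c_err=1.0):
    # error ∝ 1/N
    return c_err / n

def execution_time(n, c_time=1.0):
    # time ∝ N^2
    return c_time * n**2

n_small, n_large = 128, 256   # doubling the emulator width
print(emulation_error(n_large) / emulation_error(n_small))   # 0.5 -> half the error
print(execution_time(n_large) / execution_time(n_small))     # 4.0 -> four times the cost
```

This is why the abstract frames architecture selection as matching emulation uncertainty to the observational uncertainty rather than minimizing it outright.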
Abstract
For the past 150 years, the prevailing view of the local interstellar medium has been based on a peculiarity known as the Gould Belt, an expanding ring of young stars, gas, and dust, tilted about 20 degrees to the Galactic plane. However, the physical relationship between local gas clouds has remained unknown because the uncertainty in distance measurements to such clouds is of the same order as, or larger than, their sizes. With the advent of large photometric surveys and the Gaia astrometric survey, this situation has changed. Here we reveal the three-dimensional structure of all local cloud complexes. We find a narrow and coherent 2.7-kiloparsec arrangement of dense gas in the solar neighbourhood that contains many of the clouds thought to be associated with the Gould Belt. This finding is inconsistent with the notion that these clouds are part of a ring, calling the Gould Belt model into question. The structure comprises the majority of nearby star-forming regions, has an aspect ratio of about 1:20, and contains about three million solar masses of gas. Remarkably, this structure appears to be undulating, and its three-dimensional shape is well described by a damped sinusoidal wave on the plane of the Milky Way with an average period of about 2 kiloparsecs and a maximum amplitude of about 160 parsecs.
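The damped sinusoidal description in the final sentence can be written down directly. Only the ~2 kpc period and ~160 pc maximum amplitude come from the text; the damping length, damping form, and phase below are illustrative assumptions.

```python
import numpy as np

def wave_height(x_pc, amplitude_pc=160.0, period_pc=2000.0, damping_pc=4000.0):
    """Height above the Galactic plane (pc) at position x_pc along the structure.

    Exponential damping and zero phase are illustrative choices, not fitted values.
    """
    return (amplitude_pc * np.exp(-x_pc / damping_pc)
            * np.sin(2.0 * np.pi * x_pc / period_pc))

# Sample the undulation along roughly the 2.7 kpc extent of the structure.
x = np.linspace(0.0, 2700.0, 7)
print(np.round(wave_height(x), 1))
```

By construction the displacement never exceeds the 160 pc maximum amplitude, and the damping makes successive crests progressively shallower, matching the qualitative description above.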
Abstract
We present and characterize the catalog of galaxy shape measurements that will be used for cosmological weak lensing measurements in the Wide layer of the first year of the Hyper Suprime-Cam (HSC) survey. The catalog covers an area of 136.9 deg² split into six fields, with a mean i-band seeing of 0.″58 and a 5σ point-source depth of i ∼ 26. Given conservative galaxy selection criteria for first-year science, the depth and excellent image quality result in unweighted and weighted source number densities of 24.6 and 21.8 arcmin⁻², respectively. We define the requirements for cosmological weak lensing science with this catalog, then focus on characterizing potential systematics in the catalog using a series of internal null tests for problems with point-spread function (PSF) modeling, shear estimation, and other aspects of the image processing. We find that the PSF models narrowly meet the requirements for weak lensing science with this catalog, with fractional PSF model size residuals of approximately 0.003 (requirement: 0.004) and a PSF model shape correlation function ρ₁ < 3 × 10⁻⁷ (requirement: 4 × 10⁻⁷) at 0.°5 scales. A variety of galaxy shape-related null tests are statistically consistent with zero, but star–galaxy shape correlations reveal additive systematics on >1° scales that are sufficiently large as to require mitigation in cosmic shear measurements. Finally, we discuss the dominant systematics and the planned algorithmic changes to reduce them in future data reductions.