We present results of simulations of stellar collapse and explosions in spherical symmetry for progenitor stars in the 8–$10\,M_\odot$ range with an O-Ne-Mg core. The simulations were continued until nearly one second after core bounce and were performed with the Prometheus/Vertex code with a variable Eddington factor solver for the neutrino transport, including a state-of-the-art treatment of neutrino-matter interactions. Particular effort was made to implement nuclear burning and electron capture rates with sufficient accuracy to ensure a smooth continuation, without transients, from the progenitor evolution to core collapse. Using two different nuclear equations of state (EoSs), a soft version of the Lattimer & Swesty EoS and the significantly stiffer Wolff & Hillebrandt EoS, we found no prompt explosions, but instead delayed explosions powered by neutrino heating and the neutrino-driven baryonic wind that sets in about 200 ms after bounce. The models eject little nickel (${<} 0.015~M_\odot$), explode with an energy of ${\ga}0.1\times 10^{51}\,$erg, and leave behind neutron stars (NSs) with a baryonic mass near $1.36\,M_\odot$. Unlike previous models of such explosions, the ejecta during the first second have a proton-to-baryon ratio of $Y_{\rm{e}} \ga 0.46$, which suggests a chemical composition that is not in conflict with galactic abundances. No low-entropy matter with $Y_{\rm{e}} \ll 0.5$ is ejected, which excludes such explosions as sites of a low-entropy r-process. The low explosion energy and the nucleosynthetic implications are compatible with the observed properties of the Crab supernova, and the small nickel mass supports the possibility that our models explain some subluminous type II-P supernovae.
Context. Devising fast and accurate methods of predicting the Lyα forest at the field level, avoiding the computational burden of running large-volume cosmological hydrodynamic simulations, is of fundamental importance to quickly generate the massive set of simulations needed by state-of-the-art galaxy and Lyα forest spectroscopic surveys.
Aims. We present an improved analytical model to predict the Lyα forest at the field level in redshift space from the dark matter field, expanding upon the widely used Fluctuating Gunn-Peterson Approximation (FGPA). Instead of assuming a unique universal relation over the whole considered cosmic volume, we introduce a dependence on the cosmic web environment (knots, filaments, sheets, and voids) in the model, thereby effectively accounting for nonlocal bias. Furthermore, we include a detailed treatment of velocity bias in the redshift space distortion modeling, allowing the velocity bias to be cosmic-web-dependent.
Methods.
We first mapped the dark matter field from real to redshift space through a particle-based relation including velocity bias, depending on the cosmic web classification of the dark matter field in real space. We then formalized an appropriate functional form for our model, building upon the traditional FGPA and including a cutoff and a boosting factor mimicking a threshold and inverse-threshold bias effect, respectively, with model parameters depending on the cosmic web classification in redshift space. Eventually, we fit the coefficients of the model via an efficient Markov chain Monte Carlo scheme.
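A minimal numerical sketch of such a cosmic-web-dependent FGPA is given below. The functional form τ = A_w (1 + δ)^{α_w} with a per-environment cutoff and boosting factor, and all parameter names and values, are illustrative assumptions for this sketch, not the exact parametrization fitted in the paper.

```python
import numpy as np

def fgpa_flux(delta, web_id, A, alpha, delta_cut, boost):
    """Toy cosmic-web-dependent FGPA: flux F = exp(-tau) with
    tau = A_w * (1 + delta)^alpha_w, where w = web_id indexes the
    cosmic-web class of each cell (e.g. 0=knot, 1=filament, 2=sheet,
    3=void). Below delta_cut the optical depth is suppressed to zero
    (threshold bias); above it, tau is amplified by a boost factor
    (inverse-threshold bias). All parameter values are illustrative."""
    delta = np.asarray(delta, dtype=float)
    web_id = np.asarray(web_id, dtype=int)
    tau = A[web_id] * (1.0 + delta) ** alpha[web_id]
    tau = np.where(delta < delta_cut[web_id], 0.0, tau * boost[web_id])
    return np.exp(-tau)
```

A single parameter set per class (A, alpha, delta_cut, boost) is then what the Markov chain Monte Carlo scheme described above would fit against a reference hydrodynamic simulation.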
Results. We find evidence for a significant difference between the same model parameters in different environments, suggesting that for the investigated setup the simple standard FGPA is not able to adequately predict the Lyα forest in the different cosmic web regimes. We reproduce the summary statistics of the reference cosmological hydrodynamic simulation that we use for comparison, yielding an accurate mean transmitted flux, probability distribution function, 3D power spectrum, and bispectrum. In particular, we achieve a maximum deviation and an average deviation in the Lyα forest 3D power spectrum of ∼3% and ∼0.1% up to k ∼ 0.4 h Mpc⁻¹, and of ∼5% and ∼1.8% up to k ∼ 1.4 h Mpc⁻¹.
Conclusions. Our new model outperforms previous analytical efforts to predict the Lyα forest at the field level in all the probed summary statistics, and has the potential to become instrumental in the generation of fast and accurate mocks for covariance matrix estimation in the context of current and forthcoming Lyα forest surveys.
The cosmic web from perturbation theory. Kitaura, F.-S.; Sinigaglia, F.; Balaguera-Antolínez, A.; et al. Astronomy and Astrophysics, 03/2024, Vol. 683. Journal article, peer reviewed, open access.
Context. Analysing the large-scale structure (LSS) in the Universe with galaxy surveys demands accurate structure formation models. Such models should ideally be fast and have a clear theoretical framework in order to rapidly scan a variety of cosmological parameter spaces without requiring large training data sets. Aims. This study aims to extend Lagrangian perturbation theory (LPT), including viscosity and vorticity, to reproduce the cosmic evolution from dark matter N-body calculations at the field level. Methods. We extend LPT to an Eulerian framework, which we dub eALPT. An ultraviolet regularisation through the spherical collapse model provided by Augmented LPT turns out to be crucial at low redshifts. This iterative method enables modelling of the stress tensor and introduces vorticity. The eALPT model has two free parameters apart from the choice of cosmology, redshift snapshots, cosmic volume, and the number of particles. Results. We find that, compared to N-body solvers, the cross-correlation of the dark matter distribution at k = 1 h Mpc⁻¹ and z = 0 increases from ∼55% with the Zel'dovich approximation (∼70% with ALPT) to ∼95% with the three-timestep eALPT, and the power spectra show per cent accuracy up to k ≃ 0.3 h Mpc⁻¹.
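For orientation, the Zel'dovich approximation that eALPT improves upon moves particles from Lagrangian position q to x = q + D(z) Ψ(q), with the displacement Ψ = -∇φ obtained from the linear density field via ∇²φ = δ. A minimal FFT-based sketch of the displacement on a periodic grid (grid size and input field are placeholders):

```python
import numpy as np

def zeldovich_displacement(delta_lin, boxsize):
    """Zel'dovich displacement field Psi = -grad(phi), with
    nabla^2 phi = delta solved in Fourier space on a periodic grid.
    delta_lin: 3D linear overdensity field; boxsize: box side length.
    Returns an array of shape (3, n, n, n)."""
    n = delta_lin.shape[0]
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                 # avoid 0/0 for the mean mode
    phi_k = -np.fft.fftn(delta_lin) / k2   # -k^2 phi_k = delta_k
    phi_k[0, 0, 0] = 0.0
    # Psi_k = -i k phi_k, back to real space component by component
    psi = [np.real(np.fft.ifftn(-1j * ka * phi_k)) for ka in (kx, ky, kz)]
    return np.stack(psi)
```

The eALPT iterations, UV regularisation, and vorticity modelling described above go well beyond this first-order building block.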
We present a Bayesian reconstruction algorithm to generate unbiased samples of the underlying dark matter field from halo catalogues. Our new contribution consists of implementing a non-Poisson likelihood including a deterministic non-linear and scale-dependent bias. In particular, we present the Hamiltonian equations of motion for the negative binomial (NB) probability distribution function. This permits us to efficiently sample the posterior distribution function of density fields given a sample of galaxies using the Hamiltonian Monte Carlo technique implemented in the argo code. We have tested our algorithm with the Bolshoi N-body simulation at redshift z = 0, inferring the underlying dark matter density field from subsamples of the halo catalogue with biases smaller and larger than one. Our method shows that we can draw closely unbiased samples (compatible within 1-...) from the posterior distribution up to scales of about ... in terms of power spectra and cell-to-cell correlations. We find that a Poisson likelihood including a scale-dependent non-linear deterministic bias can yield reconstructions with power spectra deviating by more than 10 per cent at ... Our reconstruction algorithm is especially suited for emission line galaxy data, for which a complex non-linear stochastic biasing treatment beyond Poissonity becomes indispensable. (ProQuest: ... denotes formulae/symbols omitted.)
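The negative binomial likelihood at the heart of this scheme can be written down compactly. The sketch below uses a common (λ, β) parametrization with expected counts λ and dispersion parameter β, recovering the Poisson limit as β → ∞; this is an assumed convention for illustration, not code from argo.

```python
import numpy as np
from scipy.special import gammaln

def nb_loglike(N, lam, beta):
    """Negative binomial log-likelihood for halo counts N given the
    expected counts lam (e.g. from a biased density field) and the
    dispersion parameter beta. P(N) = Gamma(N+beta) / (N! Gamma(beta))
    * (beta/(beta+lam))^beta * (lam/(beta+lam))^N, summed over cells."""
    N, lam = np.asarray(N, float), np.asarray(lam, float)
    return np.sum(gammaln(N + beta) - gammaln(N + 1.0) - gammaln(beta)
                  + beta * np.log(beta / (beta + lam))
                  + N * np.log(lam / (beta + lam)))
```

In a Hamiltonian Monte Carlo sampler, the gradient of this log-likelihood with respect to the density field would supply the force term in the equations of motion.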
We address the inverse problem of cosmic large-scale structure reconstruction from a Bayesian perspective. For a linear data model, a number of known and novel reconstruction schemes, which differ in terms of the underlying signal prior, data likelihood and numerical inverse extra-regularization schemes, are derived and classified. The Bayesian methodology presented in this paper tries to unify and extend the following methods: Wiener filtering, Tikhonov regularization, ridge regression, maximum entropy, and inverse regularization techniques. The inverse techniques considered here are the asymptotic regularization, the Jacobi, Steepest Descent, Newton–Raphson, Landweber–Fridman, and both linear and non-linear Krylov methods based on Fletcher–Reeves, Polak–Ribière, and Hestenes–Stiefel conjugate gradients. The structures of the up-to-date highest performing algorithms are presented, based on an operator scheme which permits one to exploit the power of fast Fourier transforms. Using such an implementation of the generalized Wiener filter in the novel argo software package, the different numerical schemes are benchmarked with one-, two- and three-dimensional problems including structured white and Poissonian noise, data windowing and blurring effects. A novel numerical Krylov scheme is shown to be superior in terms of performance and fidelity. These fast inverse methods ultimately will enable the application of sampling techniques to explore complex joint posterior distributions. We outline how the space of the dark matter density field, the peculiar velocity field and the power spectrum can jointly be investigated by a Gibbs-sampling process. Such a method can be applied for the redshift-distortion correction of the observed galaxies and for time-reversal reconstructions of the initial density field.
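For a stationary signal and noise with diagonal Fourier-space covariances, the Wiener filter mentioned above reduces to the per-mode posterior mean s_k = S_k / (S_k + N_k) d_k. A minimal 1D sketch under that assumption (the general operator formulation benchmarked in the paper handles windowing, blurring and structured noise beyond this):

```python
import numpy as np

def wiener_filter(data, signal_power, noise_power):
    """Wiener-filter a 1D periodic signal: for diagonal (stationary)
    signal covariance S_k and noise covariance N_k, the posterior mean
    in Fourier space is s_k = S_k / (S_k + N_k) * d_k. signal_power and
    noise_power may be scalars or per-mode arrays."""
    dk = np.fft.fft(data)
    weight = signal_power / (signal_power + noise_power)
    return np.real(np.fft.ifft(weight * dk))
```

The two limits are instructive: with negligible noise the filter returns the data unchanged, and with negligible signal power it suppresses everything.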
We present a method to produce mock galaxy catalogues with efficient perturbation theory schemes, which match the number density, power spectra and bispectra in real and in redshift space from N-body simulations. The essential contribution of this work is the way in which we constrain the bias parameters of the patchy code. In addition to aiming at reproducing the two-point statistics, we seek the set of bias parameters which constrain the univariate halo probability distribution function (PDF), encoding higher order correlation functions. We demonstrate that halo catalogues based on the same underlying dark matter field with a fixed halo number density, and accurately matching the power spectrum (within 2 per cent), can lead to very different bispectra depending on the adopted halo bias model. A model ignoring the shape of the halo PDF can lead to deviations up to factors of 2. The catalogues obtained by additionally constraining the shape of the halo PDF can significantly lower the discrepancy in the three-point statistics, yielding closely unbiased bispectra both in real and in redshift space, which are in general compatible with those corresponding to an N-body simulation within 10 per cent (deviating at most up to 20 per cent). Our calculations show that the constant linear bias of ∼2 found in the power spectrum for luminous red galaxy (LRG) like galaxies mainly comes from sampling haloes in high-density peaks, choosing a high-density threshold, rather than from a factor multiplying the dark matter density field. Our method contributes towards an efficient modelling of the halo/galaxy distribution required to estimate uncertainties in the clustering measurements from galaxy redshift surveys. We have also demonstrated that it represents a powerful tool to test various bias models.
ABSTRACT We cross-correlate foreground-cleaned Planck nominal-mission cosmic microwave background (CMB) maps with two templates constructed from the Two-Micron All-Sky Redshift Survey of galaxies. The first template traces the large-scale filamentary distribution characteristic of the Warm-Hot Intergalactic Medium (WHIM) out to Mpc. The second preferentially traces the virialized gas in unresolved halos around galaxies. We find a marginal signal from the correlation of Planck data and the WHIM template, with a signal-to-noise ratio from 0.84 to 1.39 at the different Planck frequencies, and with a frequency dependence compatible with the thermal Sunyaev-Zel'dovich effect. When we restrict our analysis to the 60% of the sky outside the plane of the Galaxy and known point sources and galaxy clusters, the cross-correlation at zero lag is . The correlation extends out to , which at the median depth of our template corresponds to a physical length of Mpc. On the same fraction of the sky, the cross-correlation of the CMB data with the second template is (95% C.L.), providing no statistically significant evidence of a contribution from bound gas to the previous result. This limit translates into a physical constraint on the properties of the shock-heated WHIM for a log-normal model describing the weakly nonlinear density field. We find that our upper limit is compatible with a fraction of 45% of all baryons residing in filaments at overdensities ∼1-100 and with temperatures in the range K, in agreement with the detection at redshift of Van Waerbeke et al.
ABSTRACT
This work investigates the connection between the cosmic web and the halo distribution through the gravitational potential at the field level. We combine three fields of research: cosmic web classification, perturbation theory expansions of the halo bias, and halo (galaxy) mock catalogue making methods. In particular, we use the invariants of the tidal field and the velocity shear tensor as generating functions to reproduce the halo number counts of a reference catalogue from full gravity calculations, populating the dark matter field on a mesh well into the non-linear regime ($3\, h^{-1}\, {\rm Mpc}$ scales). Our results show an unprecedented agreement with the reference power spectrum, within 1 per cent up to $k=0.72\, h\, {\rm Mpc}^{-1}$. By analysing the three-point statistics on large scales (configurations of up to $k=0.2\, h\, {\rm Mpc}^{-1}$), we find evidence for non-local bias at the 4.8σ confidence level, being compatible with the reference catalogue. In particular, we find that a detailed description of tidal anisotropic clustering on large scales is crucial to achieve this accuracy at the field level. These findings can be particularly important for mock galaxy production in the analysis of the next generation of galaxy surveys.
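The tidal-field invariants used as generating functions can be computed directly from a gridded density field. The sketch below (grid size and normalization are illustrative, and it covers only the tidal tensor, not the velocity shear tensor) obtains T_ij = ∂_i∂_j φ via FFTs and the three rotational invariants from its eigenvalues:

```python
import numpy as np

def tidal_invariants(delta, boxsize):
    """Rotational invariants of the tidal tensor T_ij = d_i d_j phi,
    with nabla^2 phi = delta on a periodic grid:
    I1 = l1+l2+l3, I2 = l1*l2 + l1*l3 + l2*l3, I3 = l1*l2*l3,
    where l1..l3 are the eigenvalues of T in each cell."""
    n = delta.shape[0]
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0
    phi_k = -np.fft.fftn(delta) / k2
    phi_k[0, 0, 0] = 0.0
    ks = (kx, ky, kz)
    T = np.empty(delta.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            # FT of d_i d_j phi is (i k_i)(i k_j) phi_k = -k_i k_j phi_k
            T[..., i, j] = np.real(np.fft.ifftn(-ks[i] * ks[j] * phi_k))
    lam = np.linalg.eigvalsh(T)         # eigenvalues per cell, ascending
    I1 = lam.sum(axis=-1)
    I2 = (lam[..., 0] * lam[..., 1] + lam[..., 0] * lam[..., 2]
          + lam[..., 1] * lam[..., 2])
    I3 = lam.prod(axis=-1)
    return I1, I2, I3
```

As a consistency check, I1 is the trace of T, i.e. ∇²φ, so it must reproduce the (mean-subtracted) input density field.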
We examine nucleosynthesis in the electron-capture supernovae of progenitor asymptotic giant branch stars with an O-Ne-Mg core (with an initial stellar mass of 8.8 M⊙). Thermodynamic trajectories for the first 810 ms after core bounce are taken from a recent state-of-the-art hydrodynamic simulation. The presented nucleosynthesis results are characterized by a number of distinct features that are not shared with those of other supernovae from the collapse of stars with an iron core (with initial stellar masses of more than 10 M⊙). First is the small amount of 56Ni (0.002-0.004 M⊙) in the ejecta, which can explain the observed properties of faint supernovae such as SNe 2008S and 1997D. In addition, the large Ni/Fe ratio is in reasonable agreement with the spectroscopic result for the Crab nebula (the relic of SN 1054). Second is the large production of 64Zn, 70Ge, light p-nuclei (74Se, 78Kr, 84Sr, and 92Mo), and in particular 90Zr, which originates from the low-Ye (0.46-0.49, where Ye is the number of electrons per nucleon) ejecta. We find, however, that only a 1%-2% increase of the minimum Ye moderates the overproduction of 90Zr. In contrast, the production of 64Zn is fairly robust against a small variation of Ye. This limits the occurrence of this type of event to at most about 30% of all core-collapse supernovae.
The first statistically significant detection of the cosmic gamma-ray horizon (CGRH) that is independent of any extragalactic background light (EBL) model is presented. The CGRH is a fundamental quantity in cosmology. It gives an estimate of the opacity of the universe to very high energy (VHE) gamma-ray photons due to photon-photon pair production with the EBL. The only estimates of the CGRH to date are predictions from EBL models and lower limits from gamma-ray observations of cosmological blazars and gamma-ray bursts. Here, we present homogeneous synchrotron/synchrotron self-Compton (SSC) models of the spectral energy distributions of 15 blazars, based on (almost) simultaneous observations from radio up to the highest-energy gamma rays taken with the Fermi satellite. These synchrotron/SSC models predict the unattenuated VHE fluxes, which are compared with the observations by imaging atmospheric Cherenkov telescopes. This comparison provides an estimate of the optical depth of the EBL, which allows us to derive the CGRH through a maximum likelihood analysis that is EBL-model independent. We find that the observed CGRH is compatible with the current knowledge of the EBL.
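The quantities involved relate through F_obs(E) = F_int(E) e^{-τ(E,z)}, and the CGRH at a given redshift is the energy at which the optical depth reaches τ = 1. A toy sketch of both relations (the τ values in the usage example are placeholders, not an EBL-model prediction):

```python
import numpy as np

def attenuate(f_int, tau):
    """Observed flux after EBL absorption: F_obs = F_int * exp(-tau)."""
    return f_int * np.exp(-tau)

def gamma_ray_horizon(energies_tev, tau):
    """Given optical depths tau(E) sampled at ascending energies (TeV),
    return the energy where tau crosses 1 (the CGRH at that redshift),
    by linear interpolation in tau. tau must be monotonically rising."""
    return np.interp(1.0, tau, energies_tev)
```

In the analysis described above, the intrinsic fluxes come from the synchrotron/SSC models and τ(E) is inferred by comparing them with the Cherenkov-telescope observations, rather than assumed as here.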