We produce several public void catalogs using a volume-limited subsample of the Sloan Digital Sky Survey Data Release 7 (SDSS DR7). Using new implementations of three different void-finding algorithms, VoidFinder and two ZOBOV-based algorithms (VIDE and REVOLVER), we identify 1163, 531, and 518 cosmic voids with radii \(>10\,h^{-1}\) Mpc, respectively, out to a redshift of z = 0.114 assuming a Planck 2018 cosmology, and 1184, 535, and 519 cosmic voids assuming a WMAP5 cosmology. We compute effective radii and centers for all voids and find none with an effective radius \(>54\,h^{-1}\) Mpc. The median void effective radius is 15–19 \(h^{-1}\) Mpc for all three algorithms. We extract and discuss several properties of the void populations, including radial density profiles, the volume fraction of the catalog contained within voids, and the fraction of galaxies contained within voids. Using 64 mock galaxy catalogs created from the Horizon Run 4 \(N\)-body simulation, we compare simulated and observed void properties and find good agreement between the SDSS DR7 and mock catalog results.
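For context on the quoted effective radii: the effective radius is conventionally the radius of a sphere enclosing the same volume as the (generally non-spherical) void, \(r_{\rm eff} = (3V/4\pi)^{1/3}\). Below is a minimal illustrative sketch (not code from the released catalogs), assuming the void volume has already been measured:

```python
import numpy as np

def effective_radius(volume):
    """Radius of a sphere with the same volume as the void:
    r_eff = (3 V / (4 pi))**(1/3). Units follow the input,
    e.g. volume in (Mpc/h)**3 gives r_eff in Mpc/h."""
    return (3.0 * volume / (4.0 * np.pi)) ** (1.0 / 3.0)

# Sanity check: a spherical "void" of radius 15 Mpc/h recovers r_eff = 15.
print(effective_radius(4.0 / 3.0 * np.pi * 15.0**3))
```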
We study how well void-finding algorithms identify cosmic void regions and whether we can quantitatively and qualitatively describe their biases by comparing the voids they find with dynamical information from the underlying matter distribution. Using the ORIGAMI algorithm to determine the number of dimensions along which dark matter particles have undergone shell crossing (the crossing number) in \(N\)-body simulations from the AbacusSummit simulation suite, we identify dark matter particles that have undergone no shell crossing as belonging to voids. We then find voids in the corresponding halo distribution using two different void-finding algorithms: VoidFinder and V\(^2\), a ZOBOV-based algorithm. The resulting void catalogs are compared to the distribution of dark matter particles to examine how their crossing numbers depend on void proximity. While both algorithms' voids have a similar distribution of crossing numbers near their centers, we find that beyond 0.25 times the effective void radius, voids found by VoidFinder exhibit a stronger preference for particles with low crossing numbers than those found by V\(^2\). We examine two possible methods of mitigating this difference in efficacy between the algorithms. While we are able to partially mitigate the ineffectiveness of V\(^2\) by using the distance from the void edge as a measure of centrality, we conclude that VoidFinder more reliably identifies dynamically distinct regions of low crossing number.
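The void-proximity comparison described here amounts to stacking dark matter particles around void centers in units of the effective radius and examining how the ORIGAMI crossing number varies with that scaled distance. Below is a minimal sketch under stated assumptions: the arrays part_pos (particle positions, N x 3), crossing_num (per-particle crossing numbers), and the void centers and effective radii are hypothetical inputs, distances are brute-force, and periodic boundaries are ignored.

```python
import numpy as np

def mean_crossing_profile(part_pos, crossing_num, void_centers, void_radii,
                          bins=np.linspace(0.0, 2.0, 9)):
    """Stack particles around void centers and return the mean ORIGAMI
    crossing number in bins of d / r_eff (void-centric distance scaled
    by the effective radius). Illustrative only: O(N_voids * N_particles)
    and no periodic-boundary handling."""
    total = np.zeros(len(bins) - 1)
    counts = np.zeros(len(bins) - 1)
    for center, r_eff in zip(void_centers, void_radii):
        d = np.linalg.norm(part_pos - center, axis=1) / r_eff
        idx = np.digitize(d, bins) - 1          # bin index per particle
        for b in range(len(bins) - 1):
            sel = idx == b
            total[b] += crossing_num[sel].sum()
            counts[b] += sel.sum()
    return total / np.maximum(counts, 1)
```

In terms of the result above, such a profile would stay lower out to larger d/r_eff for VoidFinder voids than for V\(^2\) voids, reflecting VoidFinder's stronger preference for low-crossing-number particles beyond 0.25 r_eff.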
Current neutrino detectors will observe hundreds to thousands of neutrinos from Galactic supernovae, and future detectors will increase this yield by an order of magnitude or more. With such a data set comes the potential for a huge increase in our understanding of the explosions of massive stars, nuclear physics under extreme conditions, and the properties of the neutrino. However, there is currently a large gap between supernova simulations and the corresponding signals in neutrino detectors, which will make any comparison between theory and observation very difficult. SNEWPY is an open-source software package that bridges this gap. The SNEWPY code can interface with supernova simulation data to generate from the model either a time series of neutrino spectral fluences at Earth or the total time-integrated spectral fluence. Data from several hundred simulations of core-collapse, thermonuclear, and pair-instability supernovae are included in the package. This output may then be used by an event generator such as sntools or an event rate calculator such as the SuperNova Observatories with General Long Baseline Experiment Simulator (SNOwGLoBES). Additional routines in the SNEWPY package automate the processing of the generated data through the SNOwGLoBES software and collate its output into the observable channels of each detector. In this paper we describe the contents of the package, the physics behind SNEWPY, and the organization of the code, and we provide examples of how to make use of its capabilities.
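As a rough illustration of the fluence-to-event-rate pipeline this abstract describes, the sketch below uses the high-level helpers in snewpy.snowglobes (generate_fluence, simulate, collate). Treat the model file path, the "Nakazato_2013" model type, the "AdiabaticMSW_NMO" flavor transformation, the "wc100kt30prct" detector name, and the exact function signatures as assumptions: all of these vary between SNEWPY and SNOwGLoBES releases, so check them against your installed versions.

```python
from snewpy import snowglobes

# Assumed placeholders: a progenitor model file shipped with SNEWPY and a
# water Cherenkov detector configuration known to SNOwGLoBES.
model_file = "SNEWPY_models/Nakazato_2013/nakazato-shen-z0.004-t_rev100ms-s20.0.fits"
distance = 10.0  # kpc, a nominal Galactic supernova distance

# 1. Integrate the model's neutrino emission (here under an assumed
#    adiabatic-MSW flavor transformation) into a time-integrated spectral
#    fluence at Earth, packaged as a tarball for SNOwGLoBES.
tarball = snowglobes.generate_fluence(model_file, "Nakazato_2013",
                                      "AdiabaticMSW_NMO", distance)

# 2. Convolve the fluence with detector cross sections and smearing via
#    SNOwGLoBES. Recent SNEWPY releases bundle the needed SNOwGLoBES data;
#    older releases expect a path to a SNOwGLoBES installation here.
snowglobes.simulate(None, tarball, detector_input="wc100kt30prct")

# 3. Collate the per-channel output into observable event-rate tables.
tables = snowglobes.collate(None, tarball)
```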
The FemtoDAQ is a low-cost two-channel data acquisition system that we have used to investigate the signal characteristics of silicon photomultipliers (SiPMs) coupled to fast scintillators. The FemtoDAQ system can also be used to instrument low-cost, moderate-performance passive detectors, and is suitable for use in harsh environments (e.g., high altitude). The FemtoDAQ is being used as a SiPM test bench for the High Altitude Water Cherenkov (HAWC) Observatory, a TeV gamma-ray detector located 4100 m above sea level. Planned upgrades to the HAWC array can benefit greatly from SiPMs, a robust, low-voltage, low-cost alternative to traditional vacuum photomultipliers. The FemtoDAQ is used to power the SiPM detector front end, bias the SiPM, and digitize the photosensor output in a single compact unit.
The High-Altitude Water Cherenkov Observatory, or HAWC, is an air shower array designed to observe cosmic rays and gamma rays between 100 GeV and 100 TeV. HAWC, located between the peaks Sierra Negra and Pico de Orizaba in central Mexico, will be completed in the spring of 2015; however, the observatory has been collecting data in a partial configuration since mid-2013. With only part of the final array in data acquisition, HAWC has already accumulated a data set of nearly 100 billion air showers. These events are used to calibrate the detector's angular reconstruction using the shadow of the Moon, and to measure the anisotropy in the arrival directions of cosmic rays above 1 TeV. Using data recorded between June 2013 and July 2014, we have observed an anisotropy at the \(10^{-4}\) level, consisting of three statistically significant “hotspots” in the cosmic ray flux. We will discuss these first results from HAWC and compare them to previous measurements of anisotropy in the northern and southern sky.
Developing sustainable software for the scientific community requires expertise in software engineering and domain science. This can be challenging due to the unique needs of scientific software, the insufficient resources for software engineering practices in the scientific community, and the complexity of developing for evolving scientific contexts. While open-source software can partially address these concerns, it can introduce complicating dependencies and delay development. These issues can be reduced if scientists and software developers collaborate. We present a case study wherein scientists from the SuperNova Early Warning System collaborated with software developers from the Scalable Cyberinfrastructure for Multi-Messenger Astrophysics project. The collaboration addressed the difficulties of open-source software development, but presented additional risks to each team. For the scientists, there was a concern of relying on external systems and lacking control in the development process. For the developers, there was a risk in supporting a user group while maintaining core development. These issues were mitigated by creating a second Agile Scrum framework in parallel with the developers' ongoing Agile Scrum process. This Agile collaboration promoted communication, ensured that the scientists had an active role in development, and allowed the developers to evaluate and implement the scientists' software requirements. The collaboration provided benefits for each group: the scientists accelerated their development by building on an existing platform, and the developers used the scientists' use case to improve their systems. This case study suggests that scientists and software developers can avoid common scientific computing pitfalls by collaborating, and that Agile Scrum methods can address emergent concerns.
The next Galactic core-collapse supernova (CCSN) presents a once-in-a-lifetime opportunity to make astrophysical measurements using neutrinos, gravitational waves, and electromagnetic radiation. CCSNe local to the Milky Way are extremely rare, so it is paramount that detectors are prepared to observe the signal when it arrives. The IceCube Neutrino Observatory, a gigaton water Cherenkov detector in the ice below the South Pole, is sensitive to the burst of neutrinos released by a Galactic CCSN at a level \(>10\sigma\). This burst of neutrinos precedes optical emission by hours to days, enabling neutrinos to serve as an early warning for follow-up observation. IceCube's detection capabilities make it a cornerstone of the global network of neutrino detectors monitoring for Galactic CCSNe, the SuperNova Early Warning System (SNEWS 2.0). In this contribution, we describe IceCube's sensitivity to Galactic CCSNe and strategies for operational readiness, including "fire drill" data challenges. We also discuss coordination with SNEWS 2.0.