We introduce an algorithm for the classical simulation of Gaussian boson sampling that is quadratically faster than previously known methods. The complexity of the algorithm is exponential in the number of photon pairs detected, not the number of photons, and is directly proportional to the time required to calculate a probability amplitude for a pure Gaussian state. The main innovation is to use auxiliary conditioning variables to reduce the problem of sampling to the computation of pure-state probability amplitudes, for which the most computationally expensive step is the calculation of a loop hafnian. We implement and benchmark an improved loop-hafnian algorithm and show that it can be used to compute pure-state probabilities, the dominant step in the sampling algorithm, of events involving up to 50 photons on a single workstation, i.e., without the need for a supercomputer.
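To make the dominant cost concrete, here is a minimal sketch of the loop hafnian computed by direct recursion over matchings: every vertex is either paired with another vertex (weight A[i, j]) or matched to itself as a loop (weight A[i, i]). This illustrates the quantity being computed, not the improved algorithm benchmarked in the paper; the example matrix is arbitrary.

```python
import numpy as np

def loop_hafnian(A):
    """Naive loop hafnian of a symmetric matrix A.

    Sums over all matchings of the vertices in which every vertex is
    either paired with another vertex (weight A[i, j]) or matched to
    itself as a loop (weight A[i, i]). Exponential-time; illustration only.
    """
    n = A.shape[0]
    if n == 0:
        return 1.0
    # Case 1: vertex 0 forms a loop.
    rest = np.arange(1, n)
    total = A[0, 0] * loop_hafnian(A[np.ix_(rest, rest)])
    # Case 2: vertex 0 pairs with some vertex j > 0.
    for j in range(1, n):
        keep = np.array([k for k in range(1, n) if k != j], dtype=int)
        total += A[0, j] * loop_hafnian(A[np.ix_(keep, keep)])
    return total

A = np.array([[1.0, 0.5, 0.2, 0.1],
              [0.5, 2.0, 0.3, 0.4],
              [0.2, 0.3, 1.5, 0.6],
              [0.1, 0.4, 0.6, 0.8]])
print(loop_hafnian(A))
```

This recursion visits every matching and so scales factorially; even the much faster exact algorithms the abstract refers to remain exponential in the number of photon pairs.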
Objective
Current magnetic resonance imaging (MRI) axon diameter measurements rely on the pulsed gradient spin-echo sequence, which is unable to provide diffusion times short enough to measure small axon diameters. This study combines the AxCaliber axon diameter fitting method with data generated from Monte Carlo simulations of oscillating gradient spin-echo sequences (OGSE) to infer micron-sized axon diameters, in order to determine the feasibility of using MRI to infer smaller axon diameters in brain tissue.
Materials and methods
Monte Carlo computer simulation data were synthesized from tissue geometries of cylinders of different diameters using a range of gradient frequencies in the cosine OGSE sequence. Data were fitted using the AxCaliber method, modified to accommodate the new pulse sequence. Intra- and extra-axonal water were studied separately and together.
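As a rough illustration of the kind of synthesis described above, the sketch below diffuses random walkers inside an impermeable cylinder cross-section and accumulates spin phase under a cosine-modulated gradient; the normalized signal is the magnitude of the ensemble phase average. All parameter values are placeholders, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not the study's values)
D = 2.0e-9          # free diffusivity, m^2/s
radius = 1.0e-6     # cylinder radius, m
gamma = 2.675e8     # proton gyromagnetic ratio, rad/s/T
G = 0.5             # gradient amplitude, T/m
freq = 500.0        # OGSE frequency, Hz
T = 20.0e-3         # gradient duration, s
n_steps = 4000
n_spins = 20000

dt = T / n_steps
sigma = np.sqrt(2 * D * dt)  # per-axis step size in the cross-section

# Start walkers uniformly inside the circular cross-section
r = radius * np.sqrt(rng.uniform(size=n_spins))
th = rng.uniform(0, 2 * np.pi, n_spins)
pos = np.column_stack([r * np.cos(th), r * np.sin(th)])

phase = np.zeros(n_spins)
t = 0.0
for _ in range(n_steps):
    trial = pos + sigma * rng.standard_normal((n_spins, 2))
    # Impermeable wall: reject steps that leave the cylinder
    # (a crude stand-in for elastic reflection, adequate for small steps)
    inside = np.hypot(trial[:, 0], trial[:, 1]) <= radius
    pos[inside] = trial[inside]
    # Cosine OGSE gradient applied along x
    g = G * np.cos(2 * np.pi * freq * t)
    phase += gamma * g * pos[:, 0] * dt
    t += dt

signal = np.abs(np.mean(np.exp(1j * phase)))
print(f"normalized signal: {signal:.4f}")
```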
Results
The simulations revealed the extra-axonal model to be problematic. Rather than change the model, we found that restricting the range of gradient frequencies such that the measured apparent diffusion coefficient (ADC) was constant over that range resulted in more accurate fitted diameters. Thus either a careful selection of frequency ranges or adaptations to the method are needed for AxCaliber to correctly model extra-axonal water. This restriction helped reduce the gradient strengths necessary for measurements that could be performed with parameters feasible for a Bruker BG6 gradient set. For these experiments, the simulations inferred diameters as small as 0.5 μm on square-packed and randomly packed cylinders. The accuracy of the inferred diameters was found to depend on the signal-to-noise ratio (SNR), with smaller diameters more affected by noise, although all diameter distributions were distinguishable from one another at all SNRs tested.
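A hedged sketch of the frequency-restriction criterion just described: given ADC estimates at each gradient frequency, keep the widest contiguous window over which the ADC is constant to within a chosen tolerance. The data and tolerance below are illustrative placeholders.

```python
import numpy as np

def constant_adc_window(freqs, adc, rel_tol=0.05):
    """Return the widest contiguous frequency range over which the ADC
    varies by less than rel_tol relative to its mean in that range."""
    n = len(freqs)
    best = (0, 0)
    for i in range(n):
        for j in range(i, n):
            window = adc[i:j + 1]
            if (window.max() - window.min()) <= rel_tol * window.mean():
                if j - i > best[1] - best[0]:
                    best = (i, j)
    return freqs[best[0]:best[1] + 1]

# Placeholder data: ADC plateaus at higher OGSE frequencies
freqs = np.array([50., 100., 200., 400., 600., 800., 1000.])  # Hz
adc = np.array([0.4, 0.55, 0.7, 0.78, 0.80, 0.81, 0.80])      # x 1e-9 m^2/s
print(constant_adc_window(freqs, adc))
```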
Conclusion
The results of this study indicate the feasibility of using MRI with OGSE on preclinical scanners to infer small axon diameters.
We introduce a new open-source software library, Jet, which uses task-based parallelism to obtain speed-ups in classical tensor-network simulations of quantum circuits. These speed-ups result from i) the increased parallelism introduced by mapping the tensor-network simulation to a task-based framework, ii) a novel method of reusing shared work between tensor-network contraction tasks, and iii) the concurrent contraction of tensor networks on all available hardware. We demonstrate the advantages of our method by benchmarking our code on several Sycamore-53 and Gaussian boson sampling (GBS) supremacy circuits against other simulators. We also provide and compare theoretical performance estimates for tensor-network simulations of Sycamore-53 and GBS supremacy circuits for the first time.
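As a generic illustration of point i), and not Jet's actual API, the sketch below maps two independent pairwise tensor contractions onto concurrent tasks with a thread pool and then combines their results.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Hypothetical four-tensor ring: contract (A-B) and (C-D) independently,
# then combine. The two inner contractions share no data and can run
# concurrently; a real task-based scheduler like Jet's is far more general.

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 8))   # indices (a, b)
B = rng.standard_normal((8, 8))   # indices (b, c)
C = rng.standard_normal((8, 8))   # indices (c, d)
D = rng.standard_normal((8, 8))   # indices (d, a)

def contract(eq, *tensors):
    return np.einsum(eq, *tensors)

with ThreadPoolExecutor() as pool:
    ab = pool.submit(contract, "ab,bc->ac", A, B)   # task 1
    cd = pool.submit(contract, "cd,da->ca", C, D)   # task 2 (independent)
    # The final contraction depends on both tasks; trace over a and c.
    result = contract("ac,ca->", ab.result(), cd.result())

print(result)
```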
• A semi-automatic and a more accurate manual method were developed to detect differences in hippocampal volume between mice.
• The semi-automated segmentation was unable to detect the same level of differences.
• Manual segmentation is a more reliable segmentation method for small structures.
Magnetic resonance imaging (MRI) of transgenic mouse models of Alzheimer's disease is valuable to understand better the structural changes that occur in the brain and could provide a means to test drug treatments. A hallmark pathological feature of Alzheimer's disease is atrophy of the hippocampus, which is an early biomarker of the disease. MRI can be used to detect and monitor this biomarker.
Repeated measurements using in vivo 3D T2-weighted imaging of mice were used to assess the methods. Each mouse was imaged twice in one week and twice the following week and no changes in volume were expected. The hippocampus was segmented both manually and semi-automatically. Registration was done to gain information on shape changes. The volumes from each mouse were compared intra-mouse, between mice and to hippocampus volume values in the literature.
A reliable method was developed that was able to detect differences in hippocampal volume between mice when performed by a single individual. The semi-automated segmentation was unable to detect the same level of differences. The semi-automated segmentation method gave larger hippocampus volumes, with 78–87% reliability between the manual and semi-automated segmentations. Although more accurate, manual segmentation is laborious and suffers from inter- and intra-rater variability.
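To make the comparisons concrete: volume from a binary segmentation mask is the voxel count times the voxel volume, and agreement between manual and semi-automated masks can be quantified with the Dice coefficient. The sketch below uses made-up masks and voxel dimensions purely for illustration.

```python
import numpy as np

def volume_mm3(mask, voxel_mm=(0.1, 0.1, 0.1)):
    """Volume of a binary segmentation mask in mm^3."""
    return mask.sum() * np.prod(voxel_mm)

def dice(mask_a, mask_b):
    """Dice overlap between two binary masks (1.0 = identical)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

# Made-up example masks standing in for manual vs semi-automated labels
rng = np.random.default_rng(2)
manual = rng.random((64, 64, 32)) > 0.7
semi = manual.copy()
flip = rng.random(manual.shape) > 0.95   # perturb ~5% of voxels
semi[flip] = ~semi[flip]

print(f"manual volume: {volume_mm3(manual):.2f} mm^3")
print(f"semi-auto volume: {volume_mm3(semi):.2f} mm^3")
print(f"Dice overlap: {dice(manual, semi):.3f}")
```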
These results suggest that manual segmentation remains the most reliable segmentation method for small structures. However, in longitudinal studies with at least one year between imaging sessions, the segmentation should be done all at once at the end of all the imaging sessions. If segmentation is instead done after each imaging session, with at least a year passing between segmentations, very small variations in volume can be missed. This method provides a means to quantify the volume of the hippocampus in a live mouse using manual segmentation, which is the first step toward studying hippocampal atrophy in a mouse model of Alzheimer's disease.
A quantum computer attains computational advantage when outperforming the best classical computers running the best-known algorithms on well-defined tasks. No photonic machine offering programmability over all its quantum gates has demonstrated quantum computational advantage: previous machines were largely restricted to static gate sequences. Earlier photonic demonstrations were also vulnerable to spoofing, in which classical heuristics produce samples, without direct simulation, lying closer to the ideal distribution than do samples from the quantum hardware. Here we report quantum computational advantage using Borealis, a photonic processor offering dynamic programmability on all gates implemented. We carry out Gaussian boson sampling (GBS) on 216 squeezed modes entangled with three-dimensional connectivity, using a time-multiplexed and photon-number-resolving architecture. On average, it would take more than 9,000 years for the best available algorithms and supercomputers to produce, using exact methods, a single sample from the programmed distribution, whereas Borealis requires only 36 μs. This runtime advantage is over 50 million times as extreme as that reported from earlier photonic machines. Ours constitutes a very large GBS experiment, registering events with up to 219 photons and a mean photon number of 125. This work is a critical milestone on the path to a practical quantum computer, validating key technological features of photonics as a platform for this goal.
We present 12 new simulations of unequal-mass neutron star mergers. The simulations are performed with the SpEC code, and utilize nuclear-theory-based equations of state and a two-moment gray neutrino transport scheme with an improved energy estimate based on evolving the number density. We model the neutron stars with the SFHo, LS220, and DD2 equations of state (EOS), and we study the neutrino and matter emission of all 12 models to search for robust trends between binary parameters and emission characteristics. We find that the total mass of the dynamical ejecta exceeds 0.01 M⊙ only for SFHo, with weak dependence on the mass ratio across all models. We find that the ejecta have a broad electron fraction (Ye) distribution (≈0.06–0.48), with a mean of 0.2. Ye increases with neutrino irradiation over time, but decreases with increasing binary asymmetry. We also find that the models have ejecta with a broad asymptotic velocity distribution (≈0.05–0.7c). The average velocity lies in the range 0.2c–0.3c and decreases with binary asymmetry. Furthermore, we find that the disk mass increases with binary asymmetry and stiffness of the EOS. The Ye of the disk increases with softness of the EOS. The strongest neutrino emission occurs for the models with soft EOS. For electron neutrinos and antineutrinos we find no significant dependence of the magnitude or angular distribution of the neutrino luminosity on mass ratio. The heavier neutrino species have a luminosity that depends on mass ratio but an angular distribution that does not change with it.
Photonics is a promising platform for demonstrating a quantum computational advantage (QCA) by outperforming the most powerful classical supercomputers on a well-defined computational task. Despite this promise, existing proposals and demonstrations face challenges. Experimentally, current implementations of Gaussian boson sampling (GBS) lack programmability or have prohibitive loss rates. Theoretically, there is a comparative lack of rigorous evidence for the classical hardness of GBS. In this work, we make progress in improving both the theoretical evidence and experimental prospects. We provide evidence for the hardness of GBS, comparable to the strongest theoretical proposals for QCA. We also propose a QCA architecture we call high-dimensional GBS, which is programmable and can be implemented with low loss using few optical components. We show that particular algorithms for simulating GBS are outperformed by high-dimensional GBS experiments at modest system sizes. This work thus opens the path to demonstrating QCA with programmable photonic processors.
The discovery of GW170817 with gravitational waves (GWs) and electromagnetic (EM) radiation is prompting new questions in strong-gravity astrophysics. Importantly, it remains unknown whether the progenitor of the merger comprised two neutron stars (NSs) or a NS and a black hole (BH). Using new numerical-relativity simulations and incorporating modeling uncertainties, we produce novel GW and EM observables for NS-BH mergers with similar masses. A joint analysis of GW and EM measurements reveals that if GW170817 is a NS-BH merger, ≲40% of the binary parameters consistent with the GW data are compatible with EM observations.
We introduce a new relativistic astrophysics code, SpECTRE, that combines a discontinuous Galerkin method with a task-based parallelism model. SpECTRE's goal is to achieve more accurate solutions for challenging relativistic astrophysics problems such as core-collapse supernovae and binary neutron star mergers. The robustness of the discontinuous Galerkin method allows for the use of high-resolution shock-capturing methods in regions where (relativistic) shocks are found, while exploiting high-order accuracy in smooth regions. A task-based parallelism model allows efficient use of the largest supercomputers for problems with a heterogeneous workload over disparate spatial and temporal scales. We argue that the locality and algorithmic structure of discontinuous Galerkin methods will exhibit good scalability within a task-based parallelism framework. We demonstrate the code on a wide variety of challenging benchmark problems in (non-)relativistic (magneto-)hydrodynamics. We demonstrate the code's scalability, including its strong scaling on the NCSA Blue Waters supercomputer up to the machine's full capacity of 22,380 nodes using 671,400 threads.
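SpECTRE itself is a large C++ code, but the discontinuous Galerkin idea can be illustrated compactly: piecewise-polynomial elements that communicate only through interface fluxes. The sketch below is a minimal piecewise-linear modal DG scheme for 1D linear advection with upwind fluxes and periodic boundaries; it is an illustration of the method, not SpECTRE's implementation.

```python
import numpy as np

# Linear advection u_t + a u_x = 0 on [0, 1], periodic, a > 0.
# Piecewise-linear modal DG: on each cell, u(xi) = c0 + c1*xi, xi in [-1, 1].
a, n_cells, t_end = 1.0, 64, 1.0
h = 1.0 / n_cells
x_centers = (np.arange(n_cells) + 0.5) * h

c0 = np.exp(-100 * (x_centers - 0.5) ** 2)   # Gaussian initial profile
c1 = np.zeros(n_cells)                       # slopes start at zero

def rhs(c0, c1):
    # Upwind flux at each right face uses the left cell's boundary value.
    flux = a * (c0 + c1)                 # F_{j+1/2} for j = 0..n-1
    flux_l = np.roll(flux, 1)            # F_{j-1/2}, periodic wrap
    dc0 = -(flux - flux_l) / h
    dc1 = (3.0 / h) * (2.0 * a * c0 - flux - flux_l)
    return dc0, dc1

dt = 0.1 * h / a                          # conservative CFL choice
t = 0.0
while t < t_end:
    step = min(dt, t_end - t)
    # Two-stage (midpoint) Runge-Kutta time stepper
    k0, k1 = rhs(c0, c1)
    m0, m1 = rhs(c0 + 0.5 * step * k0, c1 + 0.5 * step * k1)
    c0 += step * m0
    c1 += step * m1
    t += step

# After one period the profile should return to its initial position.
print("max cell-average error:",
      np.max(np.abs(c0 - np.exp(-100 * (x_centers - 0.5) ** 2))))
```

Because each element touches only its face neighbors, the per-element updates are natural units of work for a task-based scheduler, which is the locality argument the abstract makes.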
A considerable amount of attention has been given to discontinuous Galerkin methods for hyperbolic problems in numerical relativity, showing potential advantages of the methods in dealing with hydrodynamical shocks and other discontinuities. This paper investigates discontinuous Galerkin methods for the solution of elliptic problems in numerical relativity. We present a novel hp-adaptive numerical scheme for curvilinear and nonconforming meshes. It uses a multigrid preconditioner with a Chebyshev or Schwarz smoother to create a very robust discontinuous Galerkin code on generic domains. The code employs compactification to move the outer boundary near spatial infinity. We explore the properties of the code on some test problems, including one mimicking neutron stars with phase transitions. We also apply it to construct initial data for two or three black holes.
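As a sketch of the smoothing component only, following the standard Chebyshev iteration (as in Saad's textbook) rather than the paper's solver, the code below damps the upper part of a 1D Poisson operator's spectrum; in multigrid, the remaining smooth error is handed to coarser levels. The eigenvalue bounds are assumptions chosen for this toy problem.

```python
import numpy as np

# 1D Poisson: A = (1/h^2) * tridiag(-1, 2, -1) with Dirichlet boundaries.
n = 127
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

def chebyshev_smooth(A, b, x, lam_min, lam_max, n_iter=5):
    """Standard Chebyshev iteration targeting eigenvalues in
    [lam_min, lam_max]; as a multigrid smoother one typically takes
    lam_min ~ lam_max / 10 so mainly high-frequency error is damped."""
    theta = 0.5 * (lam_max + lam_min)
    delta = 0.5 * (lam_max - lam_min)
    sigma = theta / delta
    rho = 1.0 / sigma
    r = b - A @ x
    d = r / theta
    for _ in range(n_iter):
        x = x + d
        r = r - A @ d
        rho_new = 1.0 / (2.0 * sigma - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * r
        rho = rho_new
    return x

lam_max = 4.0 / h**2                 # upper bound on the Poisson spectrum
b = np.ones(n)
x = chebyshev_smooth(A, b, np.zeros(n), lam_max / 10.0, lam_max)
# High-frequency residual components are strongly damped after smoothing;
# the remaining smooth error is what coarser multigrid levels correct.
print("residual norm after smoothing:", np.linalg.norm(b - A @ x))
```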