The ITER Neutral Beam Test Facility (NBTF), called PRIMA (Padova Research on ITER Megavolt Accelerator), is hosted in Padova, Italy, and includes two experiments: MITICA, the full-scale prototype of the ITER heating neutral beam injector, and SPIDER, the full-size radio-frequency negative-ion source. The realization of the NBTF and the exploitation of SPIDER and MITICA have been recognized as necessary to make the future operation of the ITER heating neutral beam injectors efficient and reliable, which is fundamental to achieving thermonuclear-relevant plasma parameters in ITER. This paper reports on the design and R&D carried out to construct PRIMA, SPIDER and MITICA, and highlights the huge progress made in just a few years, from the signature of the agreement for the NBTF realization in 2011 up to now, when the buildings and relevant infrastructures have been completed, SPIDER is entering the integrated commissioning phase, and the procurement of several MITICA components is at a well-advanced stage.
Euclid preparation. Blanchard, A.; Camera, S.; Carbone, C.; …
Astronomy and Astrophysics (Berlin), 10/2020, Volume 642
Journal Article · Peer reviewed · Open access
Aims. The Euclid space telescope will measure the shapes and redshifts of galaxies to reconstruct the expansion history of the Universe and the growth of cosmic structures. The estimation of the expected performance of the experiment, in terms of predicted constraints on cosmological parameters, has so far relied on various individual methodologies and numerical implementations, which were developed for different observational probes and for the combination thereof. In this paper we present validated forecasts, which combine both theoretical and observational ingredients for different cosmological probes. This work is presented to provide the community with reliable numerical codes and methods for Euclid cosmological forecasts.
Methods. We describe in detail the methods adopted for Fisher matrix forecasts, which were applied to galaxy clustering, weak lensing, and the combination thereof. We estimate the required accuracy for Euclid forecasts and outline a methodology for their development. We then compare and improve different numerical implementations, reaching uncertainties on the errors of cosmological parameters that are less than the required precision in all cases. Furthermore, we provide details on the validated implementations, some of which are made publicly available, in different programming languages, together with a reference training set of input and output matrices for a set of specific models. These can be used by the reader to validate their own implementations if required.
Results. We present new cosmological forecasts for Euclid. We find that results depend on the specific cosmological model and on the remaining freedom in each setting, for example flat or non-flat spatial cosmologies, or different cuts at non-linear scales. The numerical implementations are now reliable for these settings. We present the results for both an optimistic and a pessimistic choice of these settings. We demonstrate that the impact of cross-correlations is particularly relevant for models beyond a cosmological constant and may allow us to increase the dark energy figure of merit by at least a factor of three.
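The Fisher-matrix machinery behind such forecasts can be sketched in a few lines: for independent probes the Fisher matrices add, and the dark energy figure of merit follows from the marginalized (w0, wa) covariance. The matrices below are illustrative placeholders, not Euclid values.

```python
import numpy as np

# Hypothetical 2x2 Fisher matrices over (w0, wa), already marginalized over
# the other cosmological parameters; the numbers are purely illustrative.
F_gc = np.array([[40.0, 12.0], [12.0, 8.0]])   # "galaxy clustering"
F_wl = np.array([[25.0,  9.0], [ 9.0, 6.0]])   # "weak lensing"

def figure_of_merit(F):
    """DETF-style figure of merit: 1 / sqrt(det Cov(w0, wa))."""
    cov = np.linalg.inv(F)
    return 1.0 / np.sqrt(np.linalg.det(cov))

# For independent probes the Fisher information simply adds, so the
# combined constraint is always at least as tight as either probe alone.
fom_combined = figure_of_merit(F_gc + F_wl)
```

Cross-correlations between probes would enter as off-diagonal blocks of a larger joint Fisher matrix rather than a simple sum, which is where the factor-of-three gain quoted above comes from.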
We map the radial and azimuthal distribution of Mg II gas within ~ 200 kpc (physical) of ~ 4000 galaxies at redshifts 0.5 < z < 0.9 using co-added spectra of more than 5000 background galaxies at z > 1. We investigate the variation of Mg II rest-frame equivalent width (EW) as a function of the radial impact parameter for different subsets of foreground galaxies selected in terms of their rest-frame colors and masses. Blue galaxies have a significantly higher average Mg II EW at close galactocentric radii compared to red galaxies. Among the blue galaxies, there is a correlation between Mg II EW and the stellar mass of the host galaxy. We also find that the distribution of Mg II absorption around group galaxies is more extended than that for non-group galaxies, and that groups as a whole have more extended radial profiles than individual galaxies. Interestingly, these effects can be satisfactorily modeled by a simple superposition of the absorption profiles of individual member galaxies, assuming that these are the same as those of non-group galaxies, suggesting that the group environment may not significantly enhance or diminish the Mg II absorption of individual galaxies. We show that there is a strong azimuthal dependence of the Mg II absorption within 50 kpc of inclined disk-dominated galaxies, indicating the presence of a strongly bipolar outflow aligned along the disk rotation axis. There is no significant dependence of Mg II absorption on the apparent inclination angle of disk-dominated galaxies.
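The core of such a stacking measurement is simple: shift continuum-normalized background spectra to the foreground galaxy rest frame, co-add them, and integrate the residual absorption to get a rest-frame EW. The sketch below uses synthetic spectra with an assumed Gaussian Mg II doublet, not the survey data.

```python
import numpy as np

# Synthetic stacking demo: 500 noisy, continuum-normalized spectra in the
# foreground rest frame, each carrying a weak Mg II 2796/2803 doublet.
rng = np.random.default_rng(0)
wave = np.linspace(2780.0, 2820.0, 400)         # rest-frame wavelength [A]

def fake_spectrum():
    flux = np.ones_like(wave) + rng.normal(0.0, 0.05, wave.size)
    for line in (2796.35, 2803.53):             # Mg II doublet centers [A]
        flux -= 0.3 * np.exp(-0.5 * ((wave - line) / 1.0) ** 2)
    return flux

# Co-adding suppresses the noise by ~1/sqrt(N) and reveals the absorption.
stack = np.mean([fake_spectrum() for _ in range(500)], axis=0)

# Rest-frame equivalent width: integral of (1 - F/F_continuum) over wavelength.
dlam = wave[1] - wave[0]
ew = np.sum(1.0 - stack) * dlam                 # [A]
```

With depth 0.3 and sigma 1 A per line, the expected EW is 2 x 0.3 x sqrt(2 pi) ~ 1.5 A, which the stacked estimate recovers.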
Automatic analysis toolboxes are popular in brain image analysis, in both clinical and preclinical practice. In this regard, we propose a new toolbox for mouse PET-CT brain image analysis, including a new Statistical Parametric Mapping-based template and a pipeline for registration of PET-CT images based on the CT images. The new template is compatible with the common coordinate framework (CCFv3) of the Allen Reference Atlas (ARA), while the CT-based registration step facilitates the analysis of mouse PET-CT brain images. From the ARA template, we identified 27 volumes of interest that are relevant for in vivo imaging studies and provide a binary atlas describing them. We acquired [18F]FDG PET-CT images of 20 C57BL/6 mice, 12 of which also underwent 3D T2-weighted high-resolution MR scans. All images were elastically registered to the ARA atlas and then averaged. The high-resolution MR images were used to validate the CT-based registration pipeline. The resulting method was applied to a mouse model of Parkinson's disease subjected to a test-retest study (n = 6) with the TSPO-specific radioligand [18F]VC701. Regions of microglia/macrophage activation were identified in comparison with the Ma and Mirrione template. The new toolbox identified 11 brain sub-areas (6 after false discovery rate adjustment, FDR) of significant [18F]VC701 uptake increase, versus the 4 (3 after FDR) macro-regions identified by the Ma and Mirrione template. Moreover, these 11 areas are functionally connected, as found by applying the Mouse Connectivity tool of the ARA. In conclusion, we developed a mouse brain atlas tool optimized for PET-CT imaging analysis that does not require MR. This tool conforms to the CCFv3 of the ARA and can be applied to the analysis of mouse brain disease models.
We present a new, updated version of the EuclidEmulator (called EuclidEmulator2), a fast and accurate predictor for the nonlinear correction of the matter power spectrum. Emulation accurate at the 2 per cent level is now supported in the eight-dimensional parameter space of w0waCDM+∑mν models between redshift z = 0 and z = 3 for spatial scales within the range $0.01 \, h\, {\rm Mpc}^{-1}\le k \le 10\, h\, {\rm Mpc}^{-1}$. In order to achieve this level of accuracy, we had to improve the quality of the underlying N-body simulations used as training data: (i) we use self-consistent linear evolution of non-dark-matter species such as massive neutrinos, photons, dark energy, and the metric field; (ii) we perform the simulations in the so-called N-body gauge, which allows one to interpret the results in the framework of general relativity; (iii) we run over 250 high-resolution simulations with 3000³ particles in boxes of (1 h⁻¹ Gpc)³ volume based on paired-and-fixed initial conditions; and (iv) we provide a resolution correction that can be applied to emulated results as a post-processing step in order to drastically reduce systematic biases on small scales due to residual resolution effects in the simulations. We find that the inclusion of the dynamical dark energy parameter wa significantly increases the complexity and expense of creating the emulator. The high fidelity of EuclidEmulator2 is tested in various comparisons against N-body simulations as well as alternative fast predictors such as HALOFIT, HMCode, and CosmicEmu. A blind test is successfully performed against the Euclid Flagship v2.0 simulation. Nonlinear correction factors emulated with EuclidEmulator2 are accurate at the level of 1 per cent or better for $0.01 \, h\, {\rm Mpc}^{-1}\le k \le 10\, h\, {\rm Mpc}^{-1}$ and z ≤ 3 compared to high-resolution dark-matter-only simulations. EuclidEmulator2 is publicly available at https://github.com/miknab/EuclidEmulator2.
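The "nonlinear correction" an emulator of this kind returns is a multiplicative boost B(k, z) applied to the linear matter power spectrum. The sketch below illustrates that usage pattern with a toy boost and toy spectrum; it is not the EuclidEmulator2 API, whose actual interface is documented in the repository linked above.

```python
import numpy as np

# Wavenumbers within the emulated range quoted above, in h/Mpc.
k = np.logspace(-2, 1, 50)

# Toy linear power spectrum (arbitrary normalization), for illustration only.
p_lin = 2.0e4 * k / (1.0 + (k / 0.02) ** 2) ** 2

def toy_boost(k):
    """Placeholder nonlinear correction: ~1 on linear (small-k) scales,
    growing above 1 on nonlinear (large-k) scales, as a real boost does."""
    return 1.0 + (k / 0.5) ** 1.5

# The emulator-style prediction: nonlinear = boost * linear.
p_nl = toy_boost(k) * p_lin
```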
This paper describes the observations and the first data release (DR1) of the ESO public spectroscopic survey “VANDELS, a deep VIMOS survey of the CANDELS CDFS and UDS fields”. The main targets of VANDELS are star-forming galaxies at redshift 2.4 < z < 5.5, an epoch when the Universe had not yet reached 20% of its current age, and massive passive galaxies in the range 1 < z < 2.5. By adopting a strategy of ultra-long exposure times, ranging from a minimum of 20 h to a maximum of 80 h per source, VANDELS is specifically designed to be the deepest-ever spectroscopic survey of the high-redshift Universe. Exploiting the red sensitivity of the refurbished VIMOS spectrograph, the survey is obtaining ultra-deep optical spectroscopy covering the wavelength range 4800–10 000 Å with a sufficiently high signal-to-noise ratio to investigate the astrophysics of high-redshift galaxy evolution via detailed absorption line studies of well-defined samples of high-redshift galaxies. VANDELS-DR1 is the release of all medium-resolution spectroscopic data obtained during the first season of observations, on a 0.2 square degree area centered around the CANDELS-CDFS (Chandra deep-field south) and CANDELS-UDS (ultra-deep survey) areas. It includes data for all galaxies for which the total (or half of the total) scheduled integration time was completed. The DR1 contains 879 individual objects, approximately half in each of the two fields, that have a measured redshift, with the highest reliable redshifts reaching zspec ~ 6. In DR1 we include fully wavelength-calibrated and flux-calibrated 1D spectra, the associated error spectrum and sky spectrum, and the associated wavelength-calibrated 2D spectra. We also provide a catalog with the essential galaxy parameters, including spectroscopic redshifts and redshift quality flags measured by the collaboration.
We present the survey layout and observations, the data reduction and redshift measurement procedure, and the general properties of the VANDELS-DR1 sample. In particular, we discuss the spectroscopic redshift distribution and the accuracy of the photometric redshifts for each individual target category, and we provide some examples of data products for the various target types and the different quality flags. All VANDELS-DR1 data are publicly available and can be retrieved from the ESO archive. Two further data releases are foreseen in the next two years, and a final data release is currently scheduled for June 2020, which will include an improved re-reduction of the entire spectroscopic data set.
We analyse the largest spectroscopic samples of galaxy clusters to date, and provide observational constraints on the distance–redshift relation from baryon acoustic oscillations. The cluster samples considered in this work have been extracted from the Sloan Digital Sky Survey at three median redshifts, z = 0.2, 0.3 and 0.5. The number of objects is 12 910, 42 215 and 11 816, respectively. We detect the peak of baryon acoustic oscillations for all three samples. The derived distance constraints are $r_\mathrm{s}/D_\mathrm{V}(z = 0.2) = 0.18 \pm 0.01$, $r_\mathrm{s}/D_\mathrm{V}(z = 0.3) = 0.124 \pm 0.004$ and $r_\mathrm{s}/D_\mathrm{V}(z = 0.5) = 0.080 \pm 0.002$. Combining these measurements with the sound horizon scale measured from the cosmic microwave background, we obtain robust constraints on cosmological parameters. Our results are in agreement with the standard Λ cold dark matter (ΛCDM) model. Specifically, we constrain the Hubble constant in a ΛCDM model, $H_0 = 64_{-8}^{+17} \, \mathrm{km} \, \mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}$, the curvature energy density in the oΛCDM context, $\Omega _K = -0.01_{-0.33}^{+0.34}$, and finally the dark energy equation-of-state parameter in the wCDM case, $w = -1.06_{-0.52}^{+0.49}$. This is the first time the distance–redshift relation has been constrained using only the peak of baryon acoustic oscillations of galaxy clusters.
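The volume-averaged distance D_V entering these ratios combines the comoving distance with the Hubble rate, D_V(z) = [(1+z)² D_A(z)² cz/H(z)]^(1/3). The sketch below evaluates r_s/D_V in a flat ΛCDM model with illustrative parameter values (H0 = 70, Ωm = 0.3, r_s = 147 Mpc), which are assumptions for the example rather than the paper's fitted values.

```python
import numpy as np
from scipy.integrate import quad

c = 299792.458            # speed of light [km/s]
H0, Om = 70.0, 0.3        # illustrative flat-LCDM parameters

def H(z):
    """Hubble rate [km/s/Mpc] in flat LCDM."""
    return H0 * np.sqrt(Om * (1 + z) ** 3 + (1 - Om))

def D_C(z):
    """Comoving distance [Mpc]."""
    return c * quad(lambda zp: 1.0 / H(zp), 0.0, z)[0]

def D_V(z):
    """Volume-averaged distance [Mpc]; in the flat case (1+z) D_A = D_C."""
    return (D_C(z) ** 2 * c * z / H(z)) ** (1.0 / 3.0)

r_s = 147.0               # illustrative sound horizon [Mpc]
ratio = r_s / D_V(0.3)    # to be compared with the measured 0.124 +/- 0.004
```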
We explore the role of environment in the evolution of galaxies over 0.1 < z < 0.7 using the final zCOSMOS-bright data set. Using the red fraction of galaxies as a proxy for the quenched population, we find that the fraction of red galaxies increases with the environmental overdensity δ and with the stellar mass M*, consistent with previous works. As at lower redshift, the red fraction appears to be separable in mass and environment, suggesting the action of two processes: mass quenching, ε_m(M*), and environmental quenching, ε_ρ(δ). The parameters describing these appear to be essentially the same at z ∼ 0.7 as locally. We explore the relation between red fraction, mass, and environment also for central and satellite galaxies separately, paying close attention to the effects of impurities in the central–satellite classification and using carefully constructed samples well matched in stellar mass. There is little evidence for a dependence of the red fraction of centrals on overdensity. Satellites are consistently redder at all overdensities, and the satellite quenching efficiency, ε_sat(δ, M*), increases with overdensity at 0.1 < z < 0.4. This is less marked at higher redshift, but both are nevertheless consistent with the equivalent local measurements. At a given stellar mass, the fraction of galaxies that are satellites, f_sat(δ, M*), also increases with overdensity. The obtained ε_ρ(δ)/f_sat(δ, M*) agrees well with ε_sat(δ, M*), demonstrating that the environmental quenching in the overall population is consistent with being entirely produced by a satellite quenching process at least up to z = 0.7. However, despite the unprecedented size of our high-redshift samples, the associated statistical uncertainties are still significant, and our statements should be understood as approximations to physical reality rather than physically exact formulae.
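The consistency check described above can be made concrete: if only satellites are environmentally quenched, the population-wide environmental quenching efficiency divided by the satellite fraction should reproduce the satellite quenching efficiency. The numbers below are illustrative placeholders, not zCOSMOS measurements.

```python
# Toy model: centrals have a baseline red fraction; satellites are
# additionally quenched with efficiency eps_sat applied to the blue centrals.
def red_fraction(f_red_cen, f_sat, eps_sat):
    """Red fraction of the full population under satellite-only
    environmental quenching."""
    f_red_sat = f_red_cen + (1.0 - f_red_cen) * eps_sat
    return (1.0 - f_sat) * f_red_cen + f_sat * f_red_sat

# Illustrative values for one (overdensity, stellar mass) bin.
f_red_cen, f_sat, eps_sat = 0.3, 0.4, 0.5
f_red_tot = red_fraction(f_red_cen, f_sat, eps_sat)

# Environmental quenching efficiency of the whole population: the excess red
# fraction relative to what could still be quenched.
eps_rho = (f_red_tot - f_red_cen) / (1.0 - f_red_cen)

# In this toy model eps_rho / f_sat recovers eps_sat exactly, which is the
# relation the abstract tests observationally.
assert abs(eps_rho / f_sat - eps_sat) < 1e-12
```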