Context.
Stage IV weak lensing experiments will offer more than an order of magnitude leap in precision. We must therefore ensure that our analyses remain accurate in this new era. Accordingly, previously ignored systematic effects must be addressed.
Aims.
In this work, we evaluate the impact of the reduced shear approximation and magnification bias on information obtained from the angular power spectrum. To first order, the statistics of reduced shear, a combination of shear and convergence, are taken to be equal to those of shear. However, this approximation can induce a bias in the cosmological parameters that can no longer be neglected. A separate bias arises from the statistics of shear being altered by the preferential selection of galaxies and the dilution of their surface density in high-magnification regions.
Methods.
The corrections for these systematic effects take similar forms, allowing them to be treated together. We calculated the impact of neglecting these effects on the cosmological parameters that would be determined from Euclid, using cosmic shear tomography. To do so, we employed the Fisher matrix formalism and included the impact of the super-sample covariance. We also demonstrate how the reduced shear correction can be calculated using a lognormal field forward-modelling approach.
Results.
These effects cause significant biases in Ωm, σ8, ns, ΩDE, w0, and wa of −0.53σ, 0.43σ, −0.34σ, 1.36σ, −0.68σ, and 1.21σ, respectively. We then show that these lensing biases interact with another systematic effect: the intrinsic alignment of galaxies. Accordingly, we have developed the formalism for an intrinsic-alignment-enhanced lensing bias correction. Applying this to Euclid, we find that the additional terms introduced by this correction are sub-dominant.
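Biases of this kind are conventionally obtained from the first-order Fisher-matrix bias formula. A minimal sketch in Python, assuming a Gaussian likelihood and generic arrays for the data-vector derivatives and the systematic-induced shift (the function and argument names below are illustrative, not from the paper):

```python
import numpy as np

def fisher_parameter_bias(dC_dtheta, inv_cov, delta_C):
    """First-order estimate of the parameter biases induced by a neglected
    systematic shift delta_C in the observable (e.g. the angular power
    spectrum): b = F^{-1} B, with
        F_ab = (dC/dtheta_a) . Cov^{-1} . (dC/dtheta_b)
        B_a  = (dC/dtheta_a) . Cov^{-1} . delta_C
    dC_dtheta : (n_params, n_data) derivatives of the data vector,
    inv_cov   : (n_data, n_data) inverse data covariance,
    delta_C   : (n_data,) shift caused by the ignored effect."""
    fisher = dC_dtheta @ inv_cov @ dC_dtheta.T
    B = dC_dtheta @ inv_cov @ delta_C
    return np.linalg.solve(fisher, B)
```

Dividing each bias b_a by the marginalized error sqrt((F⁻¹)_aa) expresses it in units of σ, the convention used for the figures quoted above.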
Weak lensing, the deflection of light by matter along the line of sight, has proven to be an efficient method for constraining models of structure formation and revealing the nature of dark energy. So far, most weak-lensing studies have focused on the shear field, which can be measured directly from the ellipticities of background galaxies. However, in the context of forthcoming full-sky weak-lensing surveys such as Euclid, convergence maps (mass maps) offer an important advantage over shear fields in terms of cosmological exploitation. While they carry the same information, the lensing signal is more compressed in the convergence maps than in the shear field. This simplifies otherwise computationally expensive analyses, for instance non-Gaussianity studies. However, the inversion of the non-local shear field requires accurate control of systematic effects caused by holes in the data field, field borders, shape noise, and the fact that the shear is not a direct observable (reduced shear). We present the two mass-inversion methods that are included in the official Euclid data-processing pipeline: the standard Kaiser & Squires method (KS), and a new mass-inversion method (KS+) that aims to reduce the information loss during the mass inversion. This new method is based on the KS method and includes corrections for mass-mapping systematic effects. The results of the KS+ method are compared to the original implementation of the KS method in its simplest form, using the Euclid Flagship mock galaxy catalogue. In particular, we estimate the quality of the reconstruction by comparing the two-point correlation functions and third- and fourth-order moments obtained from shear and convergence maps, and we analyse each systematic effect independently and simultaneously. We show that the KS+ method substantially reduces the errors on the two-point correlation function and moments compared to the KS method.
In particular, we show that the errors introduced by the mass inversion on the two-point correlation of the convergence maps are reduced by a factor of about 5, while the errors on the third- and fourth-order moments are reduced by factors of about 2 and 10, respectively.
Euclid preparation. Martinet, N.; Schrabback, T.; Hoekstra, H.; et al.
Astronomy and Astrophysics (Berlin), 07/2019, Volume 627.
Journal Article. Peer-reviewed. Open access.
In modern weak-lensing surveys, the common approach to correcting for residual systematic biases in the shear is to calibrate shape-measurement algorithms using simulations. These simulations must fully capture the complexity of the observations to avoid introducing any additional bias. In this paper we study the importance of faint galaxies below the observational detection limit of a survey. We simulate simplified Euclid VIS images including and excluding this faint population, and measure the shift in the multiplicative shear bias between the two sets of simulations. We measure the shear with three different algorithms: a moment-based approach, model fitting, and machine learning. We find that for all methods, a spatially uniform random distribution of faint galaxies introduces a multiplicative shear bias of the order of a few times 10⁻³. This value increases to the order of 10⁻² when the clustering of the faint galaxies, as measured in the Hubble Space Telescope Ultra-Deep Field, is included. The magnification of the faint background galaxies due to the brighter galaxies along the line of sight is found to have a negligible impact on the multiplicative bias. We conclude that the undetected galaxies must be included in the calibration simulations with proper clustering properties down to magnitude 28 in order to reach a residual uncertainty on the multiplicative shear bias calibration of a few times 10⁻⁴, in line with the 2 × 10⁻³ total accuracy budget required by the scientific objectives of the Euclid survey. We propose two complementary methods for including faint-galaxy clustering in the calibration simulations.
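The multiplicative and additive biases discussed above follow the standard shear-calibration convention g_obs = (1 + m) g_true + c. A minimal sketch of how m and c can be estimated from matched simulations (a generic least-squares fit, not the paper's measurement pipeline):

```python
import numpy as np

def fit_shear_bias(g_true, g_obs):
    """Least-squares fit of the standard calibration relation
        g_obs = (1 + m) * g_true + c.
    Returns the multiplicative bias m and the additive bias c."""
    design = np.vstack([g_true, np.ones_like(g_true)]).T
    slope, intercept = np.linalg.lstsq(design, g_obs, rcond=None)[0]
    return slope - 1.0, intercept
```

Comparing the fitted m between image sets with and without the faint population yields the bias shifts of order 10⁻³ to 10⁻² reported in the abstract.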
Context.
The ESA Euclid space telescope could observe up to 150 000 asteroids as a side product of its primary cosmological mission. Asteroids appear as trailed sources, that is, streaks, in the images. Owing to the survey area of 15 000 square degrees and the number of sources, automated methods have to be used to find them. Euclid is equipped with a visible camera, VIS (VISual imager), and a near-infrared camera, NISP (Near-Infrared Spectrometer and Photometer), with three filters.
Aims.
We aim to develop a pipeline to detect fast-moving objects in Euclid images, with both high completeness and high purity.
Methods.
We tested the StreakDet software for finding asteroids in simulated Euclid images. We optimized the parameters of StreakDet to maximize completeness, and developed a post-processing algorithm to improve the purity of the sample of detected sources by removing false-positive detections.
Results.
StreakDet finds 96.9% of the synthetic asteroid streaks with apparent magnitudes brighter than 23rd magnitude and streak lengths longer than 15 pixels (10 arcsec h⁻¹), but this comes at the cost of a high number of false positives. The number of false positives can be radically reduced with multi-streak analysis, which utilizes all four dithers obtained by Euclid.
Conclusions.
StreakDet is a good tool for identifying asteroids in Euclid images, but there is still room for improvement, in particular for finding short (less than 13 pixels, corresponding to 8 arcsec h⁻¹) and/or faint streaks (fainter than apparent magnitude 23).
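The two figures of merit used throughout this pipeline, completeness and purity, follow directly from a matched detection catalogue. A trivial but explicit sketch of the definitions:

```python
def completeness_purity(n_injected, n_true_detections, n_total_detections):
    """Completeness: fraction of injected (synthetic) streaks recovered.
    Purity: fraction of reported detections matching a real streak."""
    return (n_true_detections / n_injected,
            n_true_detections / n_total_detections)
```

The trade-off in the abstract is visible here: tuning StreakDet for completeness raises n_total_detections, which lowers purity until the multi-streak post-processing removes the false positives.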
A pulsed KrF excimer laser with an irradiance of about 10⁸ W/cm² was utilized to synthesize Si nanocrystals on SiO2/Si substrates. The results were compared with those obtained by applying a low bias voltage to a Si(1 0 0) target in order to control the kinetic energy of the plasma ions. Glancing-incidence X-ray diffraction spectra indicate the presence of silicon crystalline phases, i.e. (1 1 1) and (2 2 0), on the SiO2/Si substrates. The average Si nanocrystal size was estimated to be about 45 nm using the Debye-Scherrer formula. Scanning electron microscopy and atomic force microscopy images showed the presence of nanoparticles of different sizes and shapes. Their distribution exhibits a maximum concentration at 49 nm and a fraction of 14% at 15 nm.
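The Debye-Scherrer estimate used above relates crystallite size to the XRD peak width, D = Kλ/(β cos θ). A sketch in Python; the Cu Kα wavelength and shape factor K below are common defaults, not values stated in the abstract:

```python
import numpy as np

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.154, K=0.9):
    """Debye-Scherrer estimate of mean crystallite size from an XRD peak.
    fwhm_deg: peak full width at half maximum, in degrees of 2-theta.
    two_theta_deg: peak position in degrees of 2-theta.
    Defaults assume a Cu K-alpha source (0.154 nm) and shape factor 0.9,
    which the abstract does not specify. Returns the size in nm."""
    beta = np.deg2rad(fwhm_deg)           # peak width in radians
    theta = np.deg2rad(two_theta_deg / 2)  # Bragg angle
    return K * wavelength_nm / (beta * np.cos(theta))
```

For example, a Si (1 1 1) peak near 2θ ≈ 28.4° with a 0.2° FWHM gives a size of roughly 41 nm, of the same order as the ~45 nm reported.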
Euclid preparation. Blanchard, A.; Camera, S.; Carbone, C.; et al.
Astronomy and Astrophysics (Berlin), 10/2020, Volume 642.
Journal Article. Peer-reviewed. Open access.
Aims.
The Euclid space telescope will measure the shapes and redshifts of galaxies to reconstruct the expansion history of the Universe and the growth of cosmic structures. The estimation of the expected performance of the experiment, in terms of predicted constraints on cosmological parameters, has so far relied on various individual methodologies and numerical implementations, which were developed for different observational probes and for the combination thereof. In this paper we present validated forecasts, which combine both theoretical and observational ingredients for different cosmological probes. This work is presented to provide the community with reliable numerical codes and methods for Euclid cosmological forecasts.
Methods.
We describe in detail the methods adopted for Fisher matrix forecasts, which were applied to galaxy clustering, weak lensing, and the combination thereof. We estimated the required accuracy for Euclid forecasts and outlined a methodology for their development. We then compare and improve different numerical implementations, reaching uncertainties on the errors of cosmological parameters that are less than the required precision in all cases. Furthermore, we provide details on the validated implementations, some of which are made publicly available in different programming languages, together with a reference training set of input and output matrices for a set of specific models. These can be used by the reader to validate their own implementations if required.
Results.
We present new cosmological forecasts for Euclid. We find that the results depend on the specific cosmological model and the remaining freedom in each setting, for example flat or non-flat spatial cosmologies, or different cuts at non-linear scales. The numerical implementations are now reliable for these settings. We present the results for an optimistic and a pessimistic choice of these settings. We demonstrate that the impact of cross-correlations is particularly relevant for models beyond a cosmological constant and may allow us to increase the dark energy figure of merit by at least a factor of three.
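The forecast quantities compared in such validation exercises, marginalized 1σ errors and the dark-energy figure of merit, follow directly from the Fisher matrix. A minimal sketch, assuming a DETF-style FoM convention (indices and names are illustrative):

```python
import numpy as np

def marginalized_errors(fisher):
    """1-sigma marginalized uncertainties: sigma_i = sqrt((F^-1)_ii)."""
    return np.sqrt(np.diag(np.linalg.inv(fisher)))

def dark_energy_fom(fisher, i_w0, i_wa):
    """Figure of merit as 1/sqrt(det C), with C the marginalized 2x2
    covariance of (w0, wa); larger values mean tighter constraints."""
    cov = np.linalg.inv(fisher)
    sub = cov[np.ix_([i_w0, i_wa], [i_w0, i_wa])]
    return 1.0 / np.sqrt(np.linalg.det(sub))
```

A "factor of three in the figure of merit", as quoted for the cross-correlations, corresponds to a factor of three shrinkage in the area of the (w0, wa) confidence ellipse.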
Si nanocrystals formation by a new ion implantation device. Lorusso, A.; Nassisi, V.; Velardi, L.; et al.
Nuclear Instruments & Methods in Physics Research, Section B: Beam Interactions with Materials and Atoms, 20/May, Volume 266, Issue 10.
Journal Article. Peer-reviewed.
Metallic and non-metallic ion beams can be used to modify the properties of wafer surfaces if accelerated to moderate energies. We developed a new “implantation machine” able to generate ions and to accelerate them up to 80 kV. The ion generation is achieved by a laser-plasma source which creates an expanding plasma. The device consists of a KrF excimer laser and a stainless-steel vacuum chamber in which the ions are generated. The laser energy was 45 mJ/pulse with a power density of 2.25 × 10⁸ W/cm². The target was kept at a positive voltage to accelerate the produced ions. The ion dose was estimated with a fast polarised Faraday cup. This machine was used to attempt the synthesis of silicon nanocrystals in a SiO2 matrix. Preliminary results on Si nanocrystal formation and the glancing-angle X-ray diffraction analyses are reported.
ABSTRACT
We present a new, updated version of the EuclidEmulator (called EuclidEmulator2), a fast and accurate predictor for the nonlinear correction of the matter power spectrum. Emulation accurate at the 2 per cent level is now supported in the eight-dimensional parameter space of w0waCDM+∑mν models between redshift z = 0 and z = 3 for spatial scales within the range $0.01 \, h\, {\rm Mpc}^{-1}\le k \le 10\, h\, {\rm Mpc}^{-1}$. In order to achieve this level of accuracy, we had to improve the quality of the underlying N-body simulations used as training data: (i) we use self-consistent linear evolution of non-dark-matter species such as massive neutrinos, photons, dark energy, and the metric field; (ii) we perform the simulations in the so-called N-body gauge, which allows one to interpret the results in the framework of general relativity; (iii) we run over 250 high-resolution simulations with 3000³ particles in boxes of 1 (h⁻¹ Gpc)³ volume based on paired-and-fixed initial conditions; and (iv) we provide a resolution correction that can be applied to emulated results as a post-processing step in order to drastically reduce systematic biases on small scales due to residual resolution effects in the simulations. We find that the inclusion of the dynamical dark energy parameter wa significantly increases the complexity and expense of creating the emulator. The high fidelity of EuclidEmulator2 is tested in various comparisons against N-body simulations as well as alternative fast predictors such as HALOFIT, HMCode, and CosmicEmu. A blind test is successfully performed against the Euclid Flagship v2.0 simulation. Nonlinear correction factors emulated with EuclidEmulator2 are accurate at the level of 1 per cent or better for $0.01 \, h\, {\rm Mpc}^{-1}\le k \le 10\, h\, {\rm Mpc}^{-1}$ and z ≤ 3 compared to high-resolution dark-matter-only simulations. EuclidEmulator2 is publicly available at https://github.com/miknab/EuclidEmulator2.
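The "paired-and-fixed" initial conditions named in point (iii) suppress cosmic variance by fixing mode amplitudes to the input power spectrum and averaging over a pair of phase-reversed realizations. A minimal sketch of the mode generation in that style (this is an illustration of the general technique, not EuclidEmulator2's actual initial-condition code):

```python
import numpy as np

def paired_fixed_modes(power, rng):
    """Generate one 'paired-and-fixed' pair of Fourier-mode realizations:
    amplitudes fixed to sqrt(P(k)) (no Rayleigh scatter), phases drawn
    uniformly, and the second realization phase-shifted by pi.
    Averaging statistics over the pair cancels the leading
    cosmic-variance contributions."""
    phase = rng.uniform(0.0, 2.0 * np.pi, size=np.shape(power))
    delta_a = np.sqrt(power) * np.exp(1j * phase)
    delta_b = np.sqrt(power) * np.exp(1j * (phase + np.pi))
    return delta_a, delta_b
```

By construction the two realizations are exact negatives of each other and both have |δ(k)|² = P(k), which is why a single simulation pair can stand in for many random-amplitude runs.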
Abstract
Background
Pharmaceutical companies have a considerable environmental impact in terms of waste disposal, CO2 emissions, plastic usage, air pollution, and water and energy consumption (including those related to the transportation and refrigeration of drugs). According to data from 2019, pharmaceutical firms' CO2 emissions are 13% greater than those of the car industry. To follow a greener path, pharmaceutical companies should primarily invest in sustainable production and supply, standard criteria, rewards programs, and staff training. This review aims to examine how pharmaceutical businesses incorporate environmental sustainability concerns into their operations, regulations, and communications.
Methods
We selected the top 50 pharmaceutical companies by revenue using the drugdiscoverytrends.com database. We collected the 2021 ESG report for each company, examining the following aspects: the inclusion of a ‘sustainability’ section or a downloadable sustainability report; the disclosure of the company production and carbon footprint; the existence of concrete measures to limit CO2 emissions, save water and energy, and reduce the effect of transportation. Data were analyzed through descriptive statistics to assess the sustainability of companies.
Results
Preliminary results indicate that organizations attach different levels of importance to the topic of sustainability. Most companies focused on CO2 emissions; few addressed other areas such as water needs or impact on biodiversity. Logistics accounts for the largest share of the carbon footprint. Future forecasts and the path to NetZero emissions will also be emphasized, possibly by comparison with the upcoming ESG 2022 reports.
Conclusions
The first results of our review show that the 50 biggest pharmaceutical companies display varying degrees of interest in environmentally friendly actions. Those with the highest revenues seem to be more attentive to green issues, but the route to NetZero is still far from being achieved.
Key messages
• The environmental impact of pharmaceutical companies is greater than that of other industries. They must balance their business goals with environmental sustainability to reach NetZero as soon as possible.
• The future goal of public health will be to monitor the environmental impact of drug production.