The quality of single-crystal diamond obtained by microwave CVD processes has been drastically improved in the last five years thanks to surface pre-treatment of the substrates (A. Tallaire, J. Achard, ...F. Silva, R.S. Sussmann, A. Gicquel, E. Rzepka, Physica Status Solidi (A) 201, 2419–2424 (2004); G. Bogdan, M. Nesládek, J. D'Haen, J. Maes, V.V. Moshchalkov, K. Haenen, M. D'Olieslaeger, Physica Status Solidi (A) 202, 2066–2072 (2005); M. Yamamoto, T. Teraji, T. Ito, Journal of Crystal Growth 285, 130–136 (2005)). Additionally, recent results have unambiguously shown the occurrence of (110) faces on crystal edges and (113) faces on crystal corners (F. Silva, J. Achard, X. Bonnin, A. Michau, A. Tallaire, O. Brinza, A. Gicquel, Physica Status Solidi (A) 203, 3049–3055 (2006)). We have developed a 3D geometrical growth model to account for the final crystal morphology. The basic parameters of this growth model are the relative displacement speeds of the (111), (110), and (113) faces normalized to that of the (100) faces, denoted α, β, and γ, respectively. This model predicts both the final equilibrium shape of the crystal (i.e. after infinite growth time) and the crystal morphology as a function of α, β, γ, and deposition time.
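The role of the normalized speeds can be illustrated with a minimal kinetic Wulff construction, a standard geometric argument that may well differ from the authors' actual 3D model: the growing crystal is the inner envelope of all the moving face planes, and a face family appears on the steady-state morphology only if its own plane reaches that envelope. The α, β, γ values below are arbitrary illustrative inputs.

```python
import itertools
import numpy as np

def family_normals(hkl):
    """All symmetry-equivalent unit normals of a cubic {hkl} family."""
    normals = set()
    for perm in itertools.permutations(hkl):
        for signs in itertools.product((1, -1), repeat=3):
            normals.add(tuple(s * p for s, p in zip(signs, perm)))
    return [np.array(n, float) / np.linalg.norm(n) for n in normals]

def surviving_faces(alpha, beta, gamma):
    """Kinetic Wulff construction: a face family appears on the
    steady-state crystal iff its own plane reaches the inner growth
    envelope R(n) = min_m v_m / (n . m)."""
    families = {'100': (family_normals((1, 0, 0)), 1.0),
                '111': (family_normals((1, 1, 1)), alpha),
                '110': (family_normals((1, 1, 0)), beta),
                '113': (family_normals((1, 1, 3)), gamma)}
    all_faces = [(m, v) for normals, v in families.values() for m in normals]
    present = set()
    for name, (normals, v) in families.items():
        n = normals[0]  # one representative suffices by cubic symmetry
        r = min(vm / float(n @ m) for m, vm in all_faces if n @ m > 1e-9)
        if abs(r - v) < 1e-9:
            present.add(name)
    return present

print(surviving_faces(1.0, 1.0, 1.0))  # all four families coexist
print(surviving_faces(2.0, 2.0, 2.0))  # fast minor faces grow out, leaving {100}
```

A face that grows too fast relative to its neighbours is cut off by the slower planes and disappears from the final habit, which is why the morphology is controlled entirely by the ratios α, β, and γ.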
An optimized operating point, deduced from the model, has been validated experimentally by measuring the growth rate in the (100), (111), (110), and (113) orientations. Furthermore, the evolution of α, β, and γ as a function of methane concentration in the gas discharge has been established. From these results, crystal growth strategies can be proposed, for example to enlarge the deposition area. In particular, we will show, using the growth model, that for our growth conditions the only possibility to significantly increase the deposition area is to use a (113)-oriented substrate. A comparison between the grown crystal and the model results will be discussed, and characterizations of the grown film (photoluminescence spectroscopy, EPR, SEM) will be presented.
In the preparation of high-power diamond photoswitches, thick (more than 100 μm) lightly nitrogen-doped single crystals were grown at LIMHP, for which differential interference contrast microscopy, ...Raman spectroscopy, photoluminescence, and cathodoluminescence have confirmed good morphology and a very low but well-controlled impurity doping level. In order to evaluate the effect of nitrogen incorporation on the electronic properties of these films, photoconductivity measurements were carried out. In an initial study, I–V and transient photocurrent measurements were conducted on several films with 0 to 20 ppm of nitrogen intentionally added to the gas phase during growth, resulting in nitrogen concentrations lower than 100 ppb in the film. The results of these measurements show typical semiconductor behaviour in terms of gain versus settling time, a relatively high external quantum efficiency (EQE), and the corresponding derived μτ (mobility × lifetime) product. In particular, samples with no nitrogen showed EQEs of several hundred while their settling time was quite long (tens of seconds). Samples with a small nitrogen addition, however, were observed to have settling times decreasing below a few seconds, while EQEs close to 10 showed that a compromise can be found between efficiency and response time.
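The connection between gain and the μτ product can be sketched with the textbook photoconductor relation G = τ/t_transit = μτV/L², so that μτ = G·L²/V. The numbers below are hypothetical stand-ins chosen for illustration, not measured values from this study.

```python
# Textbook photoconductive-gain relation: gain G = tau / t_transit
#   with transit time t_transit = L^2 / (mu * V),
#   hence mu*tau = G * L^2 / V.
def mu_tau_product(gain, bias_V, gap_m):
    """Return the mu*tau product (m^2/V) from the photoconductive gain
    measured at bias_V volts across electrodes separated by gap_m metres."""
    return gain * gap_m**2 / bias_V

# e.g. a hypothetical gain of ~10 at 100 V across a 500 um electrode gap
mt = mu_tau_product(10, 100.0, 500e-6)
print(f"mu*tau ~ {mt:.2e} m^2/V")  # mu*tau ~ 2.50e-08 m^2/V
```

The same relation explains the observed trade-off: nitrogen shortens the carrier lifetime τ, which lowers the gain (and hence the EQE) but speeds up the photocurrent decay.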
In this paper, the fast growth of three thick diamond single crystals using the chemical vapour deposition (CVD) method operating in a pulsed mode is reported. After 48 h, a total of half a carat of ...uncoloured synthetic diamond was obtained. These crystals, exhibiting thicknesses of 430, 570, and 900 μm, were then thoroughly analysed by a wide range of characterization techniques, such as Raman spectroscopy, UV and IR absorption, photoluminescence (PL), and cathodoluminescence (CL). All three samples turned out to be of relatively high quality, but small differences in purity and quality could be detected. These appeared to be directly related to the slight inconsistency of the substrate temperature during growth, which ranged from 800 to 900 °C due to non-uniformity in the radial distribution of the gas temperature. A higher contamination by residual nitrogen impurities was evidenced for the two samples grown at the lowest temperatures, as confirmed by the PL and UV absorption spectra as well as by a lower free-excitonic emission in CL. Finally, a growth temperature of 900 °C was shown to be more favourable to good quality and a fast growth rate.
Context.
Stage IV weak lensing experiments will offer more than an order of magnitude leap in precision. We must therefore ensure that our analyses remain accurate in this new era. Accordingly, ...previously ignored systematic effects must be addressed.
Aims.
In this work, we evaluate the impact of the reduced shear approximation and magnification bias on information obtained from the angular power spectrum. To first order, the statistics of reduced shear, a combination of shear and convergence, are taken to be equal to those of shear. However, this approximation can induce a bias in the cosmological parameters that can no longer be neglected. A separate bias arises from the statistics of shear being altered by the preferential selection of galaxies and the dilution of their surface densities in high-magnification regions.
Methods.
The corrections for these systematic effects take similar forms, allowing them to be treated together. We calculated the impact of neglecting these effects on the cosmological parameters that would be determined from Euclid, using cosmic shear tomography. To do so, we employed the Fisher matrix formalism and included the impact of the super-sample covariance. We also demonstrate how the reduced shear correction can be calculated using a lognormal field forward-modelling approach.
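Parameter biases of this kind follow from a standard Fisher-matrix formula: the neglected correction to the data vector is projected onto the parameter derivatives and weighted by the inverse Fisher matrix. A schematic numpy version is sketched below; the arrays are toy stand-ins, not the Euclid data vector or covariance.

```python
import numpy as np

def fisher_bias(dC_dtheta, cov, delta_C):
    """Bias on best-fit parameters from neglecting a correction delta_C
    to the data vector (here, the angular power spectrum):
        F_ij = dC_i . Cov^-1 . dC_j        (Fisher matrix)
        b    = F^-1 (dC . Cov^-1 . delta_C)
    dC_dtheta: (n_par, n_data) derivatives; cov: (n_data, n_data)."""
    icov = np.linalg.inv(cov)
    F = dC_dtheta @ icov @ dC_dtheta.T   # Fisher matrix
    B = dC_dtheta @ icov @ delta_C       # projection of the neglected term
    return np.linalg.solve(F, B)
```

Dividing each component of the returned bias by the corresponding 1σ Fisher uncertainty gives the bias in units of σ, the form in which such results are usually quoted.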
Results.
These effects cause significant biases in Ωm, σ8, ns, ΩDE, w0, and wa of −0.53σ, 0.43σ, −0.34σ, 1.36σ, −0.68σ, and 1.21σ, respectively. We then show that these lensing biases interact with another systematic effect: the intrinsic alignment of galaxies. Accordingly, we have developed the formalism for an intrinsic alignment-enhanced lensing bias correction. Applying this to Euclid, we find that the additional terms introduced by this correction are sub-dominant.
Weak lensing, the deflection of light by matter along the line of sight, has proven to be an efficient method for constraining models of structure formation and revealing the nature of dark energy. So far, most weak-lensing studies have focused on the shear field, which can be measured directly from the ellipticities of background galaxies. However, within the context of forthcoming full-sky weak-lensing surveys such as Euclid, convergence maps (mass maps) offer an important advantage over shear fields in terms of cosmological exploitation. While they carry the same information, the lensing signal is more compressed in convergence maps than in the shear field. This simplifies otherwise computationally expensive analyses, for instance non-Gaussianity studies. However, the inversion of the non-local shear field requires accurate control of the systematic effects caused by holes in the data field, field borders, shape noise, and the fact that the shear is not a direct observable (reduced shear). We present the two mass-inversion methods included in the official Euclid data-processing pipeline: the standard Kaiser & Squires method (KS), and a new mass-inversion method (KS+) that aims to reduce the information loss during the mass inversion. This new method is based on the KS method and includes corrections for mass-mapping systematic effects. The results of the KS+ method are compared to the original implementation of the KS method in its simplest form, using the Euclid Flagship mock galaxy catalogue. In particular, we estimate the quality of the reconstruction by comparing the two-point correlation functions and the third- and fourth-order moments obtained from shear and convergence maps, and we analyse each systematic effect independently and simultaneously. We show that the KS+ method substantially reduces the errors on the two-point correlation function and moments compared to the KS method.
In particular, we show that the errors introduced by the mass inversion on the two-point correlation of the convergence maps are reduced by a factor of about 5, while the errors on the third- and fourth-order moments are reduced by factors of about 2 and 10, respectively.
Multi-object spectroscopic galaxy surveys typically make use of photometric and colour criteria to select their targets. That is not the case of Euclid, which will use the NISP slitless spectrograph to record spectra for every source over its field of view. Slitless spectroscopy has the advantage of avoiding the a priori definition of a specific galaxy sample, but at the price of making the selection function harder to quantify. The Euclid Wide Survey was designed to build robust statistical samples of emission-line galaxies with fluxes brighter than 2 × 10⁻¹⁶ erg s⁻¹ cm⁻², using the Hα + [N II] complex to measure redshifts within the range 0.9 < z < 1.8. Given the expected signal-to-noise ratio of NISP spectra, at such faint fluxes a significant contamination by incorrectly measured redshifts is expected, due either to the misidentification of other emission lines or to noise fluctuations mistaken as such, with the consequence of reducing the purity of the final samples. This can be significantly ameliorated by exploiting the extensive photometric information to identify emission-line galaxies over the redshift range of interest. Beyond classical multi-band selections in colour space, machine-learning techniques provide novel tools to perform this task. Here, we compare and quantify the performance of six such classification algorithms in achieving this goal. We consider the case when only the photometric and morphological measurements are used, and when these are supplemented by the extensive set of ancillary ground-based photometric data, which are part of the overall scientific strategy to perform lensing tomography. The classifiers are trained and tested on two mock galaxy samples, the EL-COSMOS and Euclid Flagship2 catalogues. The best performance is obtained from either a dense neural network or a support vector classifier, with comparable results in terms of the adopted metrics. When training on on-board photometry alone, these are able to remove 87% of the sources that are fainter than the nominal flux limit or lie outside the 0.9 < z < 1.8 redshift range, a figure that increases to 97% when ground-based photometry is included. These results show how, by using the photometric information available to Euclid, it will be possible to efficiently identify and discard spurious interlopers, allowing us to build robust spectroscopic samples for cosmological investigations.
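The classification task and its metrics can be made concrete with a toy sketch. Below, a tiny logistic-regression classifier (a deliberately simple stand-in for the dense neural network or support vector classifier used in the text) separates synthetic "colour" features of genuine targets from interlopers; the data, features, and thresholds are all invented for illustration.

```python
import numpy as np

def train_logistic(X, y, lr=0.5, steps=2000):
    """Plain batch gradient descent on the logistic loss."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -30, 30)))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ w > 0).astype(int)

rng = np.random.default_rng(1)
# toy "colours": interlopers (y=0) and genuine 0.9 < z < 1.8 targets (y=1),
# drawn as two offset Gaussian clouds in a 2D feature space
X = np.vstack([rng.normal(0.0, 0.5, (500, 2)), rng.normal(1.5, 0.5, (500, 2))])
y = np.repeat([0, 1], 500)
w = train_logistic(X, y.astype(float))
pred = predict(w, X)
purity = (pred & y).sum() / pred.sum()        # selected objects that are true targets
completeness = (pred & y).sum() / y.sum()     # true targets that were selected
print(f"purity {purity:.2f}, completeness {completeness:.2f}")
```

Purity and completeness defined this way are the natural figures of merit for the interloper-rejection problem: the 87% and 97% removal fractions quoted above are statements about how the selection improves purity while keeping the sample of genuine Hα emitters complete.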
Context.
The standard cosmological model is based on the fundamental assumptions of a spatially homogeneous and isotropic universe on large scales. An observational detection of a violation of these ...assumptions at any redshift would immediately indicate the presence of new physics.
Aims.
We quantify the ability of the Euclid mission, together with contemporary surveys, to improve the current sensitivity of null tests of the canonical cosmological constant Λ and cold dark matter (ΛCDM) model in the redshift range 0 < z < 1.8.
Methods.
We considered both currently available data and simulated Euclid and external data products based on a ΛCDM fiducial model, an evolving dark energy model assuming the Chevallier-Polarski-Linder parameterization, or an inhomogeneous Lemaître-Tolman-Bondi model with a cosmological constant Λ, and carried out two separate but complementary analyses: a machine-learning reconstruction of the null tests based on genetic algorithms, and a theory-agnostic parametric approach based on a Taylor expansion and binning of the data, in order to avoid assumptions about any particular model.
Results.
We find that, in combination with external probes, Euclid can improve current constraints on null tests of ΛCDM by approximately a factor of three when using the machine-learning approach, and by a further factor of two in the case of the parametric approach. However, we also find that in certain cases the parametric approach may be biased against, or may miss, some features of models far from ΛCDM.
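One well-known example of the kind of consistency test meant here (not necessarily one of the specific tests of this analysis) is the Om(z) diagnostic, which is constant and equal to Ωm for flat ΛCDM, so that any measured redshift dependence signals a departure from the model. The fiducial values below are illustrative.

```python
import numpy as np

def Om(z, H, H0):
    """Om(z) = (H^2/H0^2 - 1) / ((1+z)^3 - 1): constant and equal to
    Omega_m in flat LambdaCDM, so any z-dependence flags new physics."""
    z, H = np.asarray(z), np.asarray(H)
    return (H**2 / H0**2 - 1.0) / ((1.0 + z)**3 - 1.0)

# Sanity check on a flat LambdaCDM fiducial H(z): Om(z) returns Omega_m
H0, Om0 = 67.0, 0.32
z = np.linspace(0.1, 1.8, 10)
H = H0 * np.sqrt(Om0 * (1 + z)**3 + 1.0 - Om0)
print(Om(z, H, H0))  # constant 0.32 at every redshift
```

A null test of this type needs only measurements of H(z), which is what makes reconstructions over the 0 < z < 1.8 range, whether by genetic algorithms or by binned parametric expansions, directly usable.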
Conclusions.
Our analysis highlights the importance of synergies between Euclid and other surveys. These synergies are crucial for providing tighter constraints over an extended redshift range for a plethora of different consistency tests of some of the main assumptions of the current cosmological paradigm.