The production of patient information leaflets (PILs) for diverse patient cohorts is challenging. This study developed procedural PILs for varicocele embolisation and fluoroscopy-guided joint injection (FLGJI).
Evidence-based PILs were developed, providing radiological procedural information – preparation, explanation of the interventional procedures, and aftercare. PIL readability was tested with validated readability measures: the Flesch-Kincaid and Flesch Reading Ease methods. Radiology approval of the PILs' content was confirmed. PILs were distributed with appointment information. Patient interviews were conducted just prior to examination and by telephone seven days post procedure.
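The two readability measures named above are standard formulas over average sentence length and average syllables per word. A minimal sketch follows; the syllable counter is a naive vowel-group heuristic, whereas validated tools use dictionaries and more careful rules, so scores will differ slightly from those tools.

```python
import re

def count_syllables(word):
    # Naive heuristic: count contiguous vowel groups; validated readability
    # software uses pronunciation dictionaries and exception rules instead.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_scores(text):
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # average words per sentence
    spw = syllables / len(words)        # average syllables per word
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59
    return reading_ease, grade_level
```

Higher Reading Ease scores indicate plainer English (60-70 is commonly taken as "plain English"), while the grade level approximates the US school grade needed to follow the text.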
Participants were purposively sampled over 6 months: varicocele embolisation (n = 17) and FLGJI (n = 47). Overall, 78.1% of all participants preferred Maltese leaflets. Varicocele embolisation patients were generally younger, and a greater percentage were educated to tertiary level, compared with FLGJI patients. Education and age were found to be recurrent significant variables in the patient demographics and responses for both patient cohorts. Age versus education for the FLGJI cohort proved to be significant for several responses. Readability statistics gave the FLGJI leaflet a plain-English rating, whereas the varicocele embolisation leaflet was more difficult. Patient feedback identified 'what is a varicocele?' as important to varicocele embolisation patients, whereas FLGJI patients chose 'advice about aftercare' and 'advice about pain management', highlighting differences in patients' priorities between procedures.
PILs provided tangible, accurate information pre- and post-examination. Patient involvement in establishing appropriate information informed the development of the PILs, which were adopted clinically. The development of tailored PILs to meet the diversity of other interventional radiology procedures is recommended.
• The importance of testing PIL readability for different patient cohorts is identified.
• Diversity in patient demographics for different radiology procedures impacts on PIL development.
• Age, education levels and native language are patient demographics requiring consideration.
• Patient, radiographer and clinician involvement in PIL development is essential.
Euclid preparation Blanchard, A.; Camera, S.; Carbone, C. ...
Astronomy and astrophysics (Berlin),
10/2020, Volume: 642
Journal Article
Peer reviewed
Open access
Aims.
The Euclid space telescope will measure the shapes and redshifts of galaxies to reconstruct the expansion history of the Universe and the growth of cosmic structures. The estimation of the expected performance of the experiment, in terms of predicted constraints on cosmological parameters, has so far relied on various individual methodologies and numerical implementations, which were developed for different observational probes and for the combination thereof. In this paper we present validated forecasts, which combine both theoretical and observational ingredients for different cosmological probes. This work is presented to provide the community with reliable numerical codes and methods for Euclid cosmological forecasts.
Methods.
We describe in detail the methods adopted for Fisher matrix forecasts, which were applied to galaxy clustering, weak lensing, and the combination thereof. We estimate the required accuracy for Euclid forecasts and outline a methodology for their development. We then compare and improve different numerical implementations, reaching uncertainties on the errors of cosmological parameters that are less than the required precision in all cases. Furthermore, we provide details on the validated implementations, some of which are made publicly available, in different programming languages, together with a reference training set of input and output matrices for a set of specific models. These can be used by the reader to validate their own implementations if required.
Results.
We present new cosmological forecasts for Euclid. We find that results depend on the specific cosmological model and remaining freedom in each setting, for example flat or non-flat spatial cosmologies, or different cuts at non-linear scales. The numerical implementations are now reliable for these settings. We present the results for an optimistic and a pessimistic choice for these types of settings. We demonstrate that the impact of cross-correlations is particularly relevant for models beyond a cosmological constant and may allow us to increase the dark energy figure of merit by at least a factor of three.
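The Fisher-matrix machinery described above reduces, at its core, to inverting a parameter-space information matrix: the marginalised 1σ uncertainties are the square roots of the diagonal of the inverse, and independent probes combine by summing their Fisher matrices before inversion. A toy sketch with illustrative numbers (not actual Euclid values):

```python
import numpy as np

# Toy Fisher matrices for two independent probes over two parameters
# (say Omega_m and sigma_8); the entries are purely illustrative.
F_gc = np.array([[4.0e4, 1.5e4],
                 [1.5e4, 2.0e4]])    # galaxy clustering
F_wl = np.array([[3.0e4, -1.0e4],
                 [-1.0e4, 5.0e4]])   # weak lensing

def marginalised_errors(F):
    # 1-sigma marginalised uncertainties: sqrt of the diagonal of F^{-1}
    return np.sqrt(np.diag(np.linalg.inv(F)))

# For independent probes, information adds: F_combined = F_gc + F_wl,
# which always tightens (or matches) the single-probe constraints.
errs_combined = marginalised_errors(F_gc + F_wl)
```

Cross-correlations between probes, as in the abstract, require building a joint data covariance rather than simply summing the matrices, which is why their impact on the figure of merit must be computed explicitly.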
ABSTRACT
We present a new, updated version of the EuclidEmulator (called EuclidEmulator2), a fast and accurate predictor for the nonlinear correction of the matter power spectrum. Emulation accurate at the 2 per cent level is now supported in the eight-dimensional parameter space of w0waCDM+∑mν models between redshift z = 0 and z = 3 for spatial scales within the range $0.01 \, h\, {\rm Mpc}^{-1}\le k \le 10\, h\, {\rm Mpc}^{-1}$. In order to achieve this level of accuracy, we have had to improve the quality of the underlying N-body simulations used as training data: (i) we use self-consistent linear evolution of non-dark-matter species such as massive neutrinos, photons, dark energy, and the metric field; (ii) we perform the simulations in the so-called N-body gauge, which allows one to interpret the results in the framework of general relativity; (iii) we run over 250 high-resolution simulations with 3000³ particles in boxes of 1 (h⁻¹ Gpc)³ volume based on paired-and-fixed initial conditions; and (iv) we provide a resolution correction that can be applied to emulated results as a post-processing step in order to drastically reduce systematic biases on small scales due to residual resolution effects in the simulations. We find that the inclusion of the dynamical dark energy parameter wa significantly increases the complexity and expense of creating the emulator. The high fidelity of EuclidEmulator2 is tested in various comparisons against N-body simulations as well as alternative fast predictors such as HALOFIT, HMCode, and CosmicEmu. A blind test is successfully performed against the Euclid Flagship v2.0 simulation. Nonlinear correction factors emulated with EuclidEmulator2 are accurate at the level of $1{{\ \rm per\ cent}}$ or better for $0.01 \, h\, {\rm Mpc}^{-1}\le k \le 10\, h\, {\rm Mpc}^{-1}$ and z ≤ 3 compared to high-resolution dark-matter-only simulations. EuclidEmulator2 is publicly available at https://github.com/miknab/EuclidEmulator2.
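The emulator's output is a multiplicative nonlinear correction ("boost") B(k, z) = P_nl/P_lin applied to a linear matter power spectrum. The sketch below uses a mock boost function purely for illustration; the real EuclidEmulator2 interface, trained boost shapes, and the toy linear spectrum here are not the actual library or physical values.

```python
import numpy as np

# Mock stand-in for an emulated boost B(k, z) = P_nl / P_lin.
# The real EuclidEmulator2 returns a tabulated boost over its validity
# range 0.01 <= k [h/Mpc] <= 10 and 0 <= z <= 3; this toy shape merely
# rises toward small scales (large k) and weakens at higher redshift.
def mock_boost(k, z):
    return 1.0 + k ** 1.5 / (1.0 + z)

def nonlinear_power(k, p_lin, z):
    """Apply a multiplicative nonlinear correction to a linear spectrum."""
    return mock_boost(k, z) * p_lin

k = np.logspace(-2, 1, 50)                    # h/Mpc, emulator validity range
p_lin = 1.0e4 * k / (1.0 + (k / 0.02) ** 3)   # toy linear spectrum, not CAMB/CLASS
p_nl = nonlinear_power(k, p_lin, z=0.0)
```

Separating the slowly varying boost from the linear spectrum is what makes emulation tractable: the linear part is cheap to compute exactly, so only the correction factor needs to be learned from simulations.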
Planck 2015 results Ade, P A R; Aumont, J; Baccigalupi, C ...
Astronomy and astrophysics (Berlin),
10/2016, Volume: 594
Journal Article
Peer reviewed
Open access
We present the current accounting of systematic effect uncertainties for the Low Frequency Instrument (LFI) that are relevant to the 2015 release of the Planck cosmological results, showing the robustness and consistency of our data set, especially for polarization analysis. We use two complementary approaches: (i) simulations based on measured data and physical models of the known systematic effects; and (ii) analysis of difference maps containing the same sky signal ("null maps"). The LFI temperature data are limited by instrumental noise. At large angular scales the systematic effects are below the cosmic microwave background (CMB) temperature power spectrum by several orders of magnitude. In polarization the systematic uncertainties are dominated by calibration uncertainties and compete with the CMB E-modes in the multipole range 10-20. Based on our model of all known systematic effects, we show that these effects introduce a slight bias of around 0.2σ on the reionization optical depth derived from the 70 GHz EE spectrum using the 30 and 353 GHz channels as foreground templates. At 30 GHz the systematic effects are smaller than the Galactic foreground at all scales in temperature and polarization, which allows us to consider this channel as a reliable template of synchrotron emission. We assess the residual uncertainties due to LFI effects on CMB maps and power spectra after component separation and show that these effects are smaller than the CMB amplitude at all scales. We also assess the impact on non-Gaussianity studies and find it to be negligible. Some residuals still appear in null maps from particular sky survey pairs, particularly at 30 GHz, suggesting possible straylight contamination due to imperfect knowledge of the beam far sidelobes.
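The null-map idea used above can be shown with a toy example: half-differencing two maps of the same sky cancels the common signal, leaving only noise and any systematics that differ between the two observations, while the half-sum retains the signal. All numbers below are illustrative, not LFI values.

```python
import numpy as np

rng = np.random.default_rng(0)
npix = 10_000
sky = rng.normal(0.0, 100.0, npix)        # common sky signal (toy amplitude)

# Two maps of the same sky with independent noise realizations,
# standing in for two sky-survey passes.
noise1 = rng.normal(0.0, 5.0, npix)
noise2 = rng.normal(0.0, 5.0, npix)
map1, map2 = sky + noise1, sky + noise2

# Half-difference ("null") map: sky cancels, noise + differential
# systematics remain. Half-sum: signal survives, noise averages down.
null_map = 0.5 * (map1 - map2)
sum_map = 0.5 * (map1 + map2)
```

Any structure in the null map above the expected noise level, as with the 30 GHz survey-pair residuals in the abstract, flags a systematic that differs between the two observations.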
Context.
In metric theories of gravity with photon number conservation, the luminosity and angular diameter distances are related via the Etherington relation, also known as the distance duality relation (DDR). A violation of this relation would rule out the standard cosmological paradigm and point to the presence of new physics.
Aims.
We quantify the ability of Euclid, in combination with contemporary surveys, to improve the current constraints on deviations from the DDR in the redshift range 0 < z < 1.6.
Methods.
We start with an analysis of the latest available data, improving previously reported constraints by a factor of 2.5. We then present a detailed analysis of simulated Euclid and external data products, using both standard parametric methods (relying on phenomenological descriptions of possible DDR violations) and a machine learning reconstruction using genetic algorithms.
Results.
We find that for parametric methods Euclid can (in combination with external probes) improve current constraints by approximately a factor of six, while for non-parametric methods Euclid can improve current constraints by a factor of three.
Conclusions.
Our results highlight the importance of surveys like Euclid in accurately testing the pillars of the current cosmological paradigm and constraining physics beyond the standard cosmological model.
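For reference, the Etherington relation and two phenomenological parametrizations commonly used in the literature for DDR violations are given below; the specific forms adopted in the analysis above may differ.

```latex
% Etherington distance-duality relation between luminosity
% and angular diameter distance
d_L(z) = (1+z)^2 \, d_A(z)

% A violation is commonly quantified through the ratio
\eta(z) \equiv \frac{d_L(z)}{(1+z)^2\, d_A(z)},

% with phenomenological parametrizations such as
\eta(z) = 1 + \epsilon_0\, z
\qquad \text{or} \qquad
\eta(z) = (1+z)^{\epsilon_0},

% where \epsilon_0 = 0 recovers the standard relation.
```

Constraining ε₀ (or reconstructing η(z) non-parametrically, as with the genetic algorithms above) thus directly tests photon number conservation and metric gravity.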
Upcoming surveys will map the growth of large-scale structure with unprecedented precision, improving our understanding of the dark sector of the Universe. Unfortunately, much of the cosmological information is encoded on small scales, where the clustering of dark matter and the effects of astrophysical feedback processes are not fully understood. This can bias the estimates of cosmological parameters, which we study here for a joint analysis of mock Euclid cosmic shear and Planck cosmic microwave background data. We use different implementations for the modelling of the signal on small scales and find that they result in significantly different predictions. Moreover, the different non-linear corrections lead to biased parameter estimates, especially when the analysis is extended into the highly non-linear regime, with the Hubble constant, H₀, and the clustering amplitude, σ₈, affected the most. Improvements in the modelling of non-linear scales will therefore be needed if we are to resolve the current tension with more and better data. For a given prescription for the non-linear power spectrum, using different corrections for baryon physics does not significantly impact the precision of Euclid, but neglecting these corrections does lead to large biases in the cosmological parameters. In order to extract precise and unbiased constraints on cosmological parameters from Euclid cosmic shear data, it is therefore essential to improve the accuracy of the recipes that account for non-linear structure formation, as well as the modelling of the impact of astrophysical processes that redistribute the baryons.
Euclid preparation Desprez, G.; Coupon, J.; Almosallam, I. ...
Astronomy and astrophysics (Berlin),
12/2020, Volume: 644
Journal Article
Peer reviewed
Open access
Forthcoming large photometric surveys for cosmology require precise and accurate photometric redshift (photo-z) measurements for the success of their main science objectives. However, to date, no method has been able to produce photo-zs at the required accuracy using only the broad-band photometry that those surveys will provide. An assessment of the strengths and weaknesses of current methods is a crucial step in the eventual development of an approach to meet this challenge. We report on the performance of 13 photometric redshift codes' single-value redshift estimates and redshift probability distributions (PDZs) on a common set of data, focusing particularly on the 0.2−2.6 redshift range that the Euclid mission will probe. We designed a challenge using emulated Euclid data drawn from three photometric surveys of the COSMOS field. The data were divided into two samples: a calibration sample, for which photometry and redshifts were provided to the participants, and a validation sample, containing only the photometry, to ensure a blinded test of the methods. Participants were invited to provide a single-value redshift estimate and a PDZ for each source in the validation sample, along with a rejection flag indicating the sources they consider unfit for use in cosmological analyses. The performance of each method was assessed through a set of informative metrics, using cross-matched spectroscopic and highly accurate photometric redshifts as the ground truth. We show that the rejection criteria set by participants are efficient in removing strong outliers, that is to say, sources for which the photo-z deviates by more than 0.15(1 + z) from the spectroscopic redshift (spec-z). We also show that, while all methods are able to provide reliable single-value estimates, several machine-learning methods do not manage to produce useful PDZs. We find that no machine-learning method provides good results in the regions of galaxy color-space that are sparsely populated by spectroscopic redshifts, for example z > 1. However, they generally perform better than template-fitting methods at low redshift (z < 0.7), indicating that template-fitting methods do not use all of the information contained in the photometry. We introduce metrics that quantify both photo-z precision and completeness of the samples (post-rejection), since both contribute to the final figure of merit of the science goals of the survey (e.g., cosmic shear from Euclid). Template-fitting methods provide the best results in these metrics, but we show that a combination of template-fitting results and machine-learning results with rejection criteria can outperform any individual method. On this basis, we argue that further work in identifying how to best select between machine-learning and template-fitting approaches for each individual galaxy should be pursued as a priority.
Euclid preparation Barnett, R.; Warren, S. J.; Mortlock, D. J. ...
Astronomy and astrophysics (Berlin),
11/2019, Volume: 631
Journal Article
Peer reviewed
Open access
We provide predictions of the yield of 7 < z < 9 quasars from the Euclid wide survey, updating the calculation presented in the Euclid Red Book in several ways. We account for revisions to the Euclid near-infrared filter wavelengths; we adopt steeper rates of decline of the quasar luminosity function (QLF; Φ) with redshift, Φ ∝ 10^{k(z − 6)}, k = −0.72, and a further steeper rate of decline, k = −0.92; we use better models of the contaminating populations (MLT dwarfs and compact early-type galaxies); and we make use of an improved Bayesian selection method, compared to the colour cuts used for the Red Book calculation, allowing the identification of fainter quasars, down to J_AB ∼ 23. Quasars at z > 8 may be selected from Euclid OYJH photometry alone, but selection over the redshift interval 7 < z < 8 is greatly improved by the addition of z-band data from, e.g., Pan-STARRS and LSST. We calculate predicted quasar yields for the assumed values of the rate of decline of the QLF beyond z = 6. If the decline of the QLF accelerates beyond z = 6, with k = −0.92, Euclid should nevertheless find over 100 quasars with 7.0 < z < 7.5, and ∼25 quasars beyond the current record of z = 7.5, including ∼8 beyond z = 8.0. The first Euclid quasars at z > 7.5 should be found in the DR1 data release, expected in 2024. It will be possible to determine the bright-end slope of the QLF, 7 < z < 8, M_1450 < −25, using 8 m class telescopes to confirm candidates, but follow-up with JWST or E-ELT will be required to measure the faint-end slope. Contamination of the candidate lists is predicted to be modest even at J_AB ∼ 23. The precision with which k can be determined over 7 < z < 8 depends on the value of k, but assuming k = −0.72 it can be measured to a 1σ uncertainty of 0.07.
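As a rough illustration of the quoted decline rates, the relative space density Φ(z)/Φ(6) = 10^{k(z−6)} can be evaluated directly. This is a toy scaling only; the paper's full yield calculation also integrates the QLF over luminosity and survey area and folds in selection completeness.

```python
# Relative decline of the quasar luminosity function normalisation,
# Phi(z) / Phi(z = 6) = 10**(k * (z - 6)), for the two decline rates
# considered in the abstract (k = -0.72 and the steeper k = -0.92).
def qlf_decline(z, k):
    return 10.0 ** (k * (z - 6.0))

# Space density at z = 8 relative to z = 6 for each assumed rate
ratios = {k: qlf_decline(8.0, k) for k in (-0.72, -0.92)}
```

The steeper decline rate yields roughly 2.5 times fewer quasars at z = 8, which is why the predicted counts are so sensitive to the assumed value of k.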
In physically realistic, scalar-field-based dynamical dark energy models (including, e.g., quintessence), one naturally expects the scalar field to couple to the rest of the model's degrees of freedom. In particular, a coupling to the electromagnetic sector leads to a time (redshift) dependence of the fine-structure constant and a violation of the weak equivalence principle. Here we extend the previous Euclid forecast constraints on dark energy models to this enlarged (but physically more realistic) parameter space, and forecast how well Euclid, together with high-resolution spectroscopic data and local experiments, can constrain these models. Our analysis combines simulated Euclid data products with astrophysical measurements of the fine-structure constant, α, and local experimental constraints, and it includes both parametric and non-parametric methods. For the astrophysical measurements of α, we consider both the currently available data and a simulated dataset representative of Extremely Large Telescope measurements that are expected to be available in the 2030s. Our parametric analysis shows that in the latter case the inclusion of astrophysical and local data improves the Euclid dark energy figure of merit by between 8% and 26%, depending on the fiducial model, with the improvements being larger in the null case, where the fiducial coupling to the electromagnetic sector vanishes. These improvements would be smaller with the current astrophysical data. Moreover, we illustrate how a genetic-algorithm-based reconstruction provides a null test for the presence of the coupling. Our results highlight the importance of complementing surveys like Euclid with external data products in order to accurately test the wider parameter spaces of physically motivated paradigms.
Euclid preparation Pocino, A.; Tutusaus, I.; Fosalba, P. ...
Astronomy and astrophysics (Berlin),
11/2021, Volume: 655
Journal Article
Peer reviewed
Open access
Photometric redshifts (photo-zs) are one of the main ingredients in the analysis of cosmological probes. Their accuracy particularly affects the results of the analyses of galaxy clustering with photometrically selected galaxies (GCph) and weak lensing. In the next decade, space missions such as Euclid will collect precise and accurate photometric measurements for millions of galaxies. These data should be complemented with upcoming ground-based observations to derive precise and accurate photo-zs. In this article we explore how the tomographic redshift binning and the depth of ground-based observations will affect the cosmological constraints expected from the Euclid mission. We focus on GCph and extend the study to include galaxy-galaxy lensing (GGL). We add a layer of complexity to the analysis by simulating several realistic photo-z distributions based on the Euclid Consortium Flagship simulation and using a machine-learning photo-z algorithm. We then use the Fisher matrix formalism together with these galaxy samples to study the cosmological constraining power as a function of redshift binning, survey depth, and photo-z accuracy. We find that bins with an equal width in redshift provide a higher figure of merit (FoM) than equipopulated bins and that increasing the number of redshift bins from ten to 13 improves the FoM by 35% and 15% for GCph and its combination with GGL, respectively. For GCph, an increase in the survey depth provides a higher FoM. However, when we include faint galaxies beyond the limit of the spectroscopic training data, the resulting FoM decreases because of the spurious photo-zs. When combining GCph and GGL, the number density of the sample, which is set by the survey depth, is the main factor driving the variations in the FoM. Adding galaxies at faint magnitudes and high redshift increases the FoM even when they are beyond the spectroscopic limit, since the increase in number density compensates for the photo-z degradation in this case. We conclude that there is more information that can be extracted beyond the nominal ten tomographic redshift bins of Euclid and that we should be cautious when adding faint galaxies into our sample, since they can degrade the cosmological constraints.