Context. Knowledge of the spectrograph's instrumental profile (IP) provides important information needed for wavelength calibration and for use in scientific analyses. Aims. This work develops new methods for IP reconstruction in high-resolution spectrographs equipped with astronomical laser frequency comb (astrocomb) calibration systems and assesses the impact that assumptions on the IP shape have on achieving accurate spectroscopic measurements. Methods. Astrocombs produce ≈10 000 bright, unresolved emission lines with known wavelengths, making them excellent probes of the IP. New methods based on Gaussian process regression were developed to extract detailed information on the IP shape from these data. Applying them to HARPS, an extremely stable spectrograph installed on the ESO 3.6 m telescope, we reconstructed its IP at 512 locations on the detector, covering 60% of the total detector area. Results. We found that the HARPS IP is asymmetric and that it varies smoothly across the detector. Empirical IP models provide a wavelength accuracy better than 10 m s⁻¹ (5 m s⁻¹) with a 92% (64%) probability. In comparison, reaching the same accuracy has a probability of only 29% (8%) when a Gaussian IP shape is assumed. Furthermore, the Gaussian assumption is associated with intra-order and inter-order distortions in the HARPS wavelength scale as large as 60 m s⁻¹. The spatial distribution of these distortions suggests they may be related to spectrograph optics and therefore may generally appear in cross-dispersed echelle spectrographs when Gaussian IPs are used. Empirical IP models are provided as supplementary material in machine-readable format. We also provide a method to correct the distortions in astrocomb calibrations made under the Gaussian IP assumption. Conclusions. Methods presented here can be applied to other instruments equipped with astrocombs, such as ESPRESSO, but also ANDES and G-CLEF in the future.
The empirical IPs are crucial for obtaining objective and unbiased measurements of fundamental constants from high-resolution spectra, as well as measurements of the redshift drift, isotopic abundances, and other science cases.
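The Gaussian process regression at the heart of the IP reconstruction can be illustrated with a minimal sketch. The kernel choice, length scale, and line-profile samples below are hypothetical stand-ins for real comb-line data, not the paper's pipeline:

```python
import math

def rbf(a, b, length=0.5):
    # Squared-exponential kernel; length scale is an illustrative choice
    return math.exp(-0.5 * ((a - b) / length) ** 2)

def solve(A, y):
    # Solve A x = y by Gaussian elimination with partial pivoting
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_mean(xs, ys, x_star, noise=1e-6):
    # GP posterior mean at x_star given samples (xs, ys): k*^T (K + noise I)^-1 y
    K = [[rbf(xi, xj) + (noise if i == j else 0.0)
          for j, xj in enumerate(xs)] for i, xi in enumerate(xs)]
    alpha = solve(K, ys)
    return sum(rbf(x_star, xi) * a for xi, a in zip(xs, alpha))

# Hypothetical, slightly asymmetric line profile standing in for comb-line samples
profile = lambda x: math.exp(-0.5 * (x / 0.6) ** 2) * (1.0 + 0.2 * x)
xs = [-2.0 + 0.5 * i for i in range(9)]
ys = [profile(x) for x in xs]
```

The posterior mean can then be evaluated between pixel samples, e.g. `gp_mean(xs, ys, 0.25)`, which is how a nonparametric model recovers an asymmetric IP shape without assuming Gaussianity.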
Full text
Available for:
FMFMET, NUK, UL, UM, UPUK
With this paper we participate in the call for ideas issued by the European Space Agency to define the Science Program and plan for space missions from 2035 to 2050. In particular, we present five science cases where major advancements can be achieved thanks to space-based spectroscopic observations at ultraviolet (UV) wavelengths. We discuss the possibility to (1) unveil the large-scale structures and cosmic web in emission at redshift ≲ 1.7; (2) study the exchange of baryons between galaxies and their surroundings to understand the contribution of the circumgalactic gas to the evolution and angular-momentum build-up of galaxies; (3) constrain the efficiency of ram-pressure stripping in removing gas from galaxies and its role in quenching star formation; (4) characterize the progenitor population of core-collapse supernovae to reveal the explosion mechanisms of stars; (5) target accreting white dwarfs in globular clusters to determine their evolution and fate. These science themes can be addressed thanks to UV (wavelength range λ ∼ 90−350 nm) observations carried out with a panoramic integral field spectrograph (field of view ∼ 1 × 1 arcmin²) with medium spectral (R = 4000) and spatial (∼ 1″−3″) resolution. Such a UV-optimized instrument will be unique in the coming years, when most of the new large facilities, such as the Extremely Large Telescope and the James Webb Space Telescope, are optimized for infrared wavelengths.
Context. The known mega metal-poor (MMP) and hyper metal-poor (HMP) stars, with Fe/H < −6.0 and < −5.0, respectively, likely belong to the CEMP-no class, namely, carbon-enhanced stars with little or no second-peak neutron-capture elements. They are likely second-generation stars, and the few elements measurable in their atmospheres are used to infer the properties of a single or very few progenitors. Aims. The high carbon abundance in CEMP-no stars offers a unique opportunity to measure the carbon isotopic ratio, which directly indicates the presence of mixing between the He- and H-burning layers either within the star or in the progenitor(s). By means of high-resolution spectra acquired with the ESPRESSO spectrograph at the VLT, we aim to derive values for the ¹²C/¹³C ratio at the lowest metallicities. Methods. We used a spectral synthesis technique based on the SYNTHE code and on ATLAS models within a Markov chain Monte Carlo methodology to derive ¹²C/¹³C in the stellar atmospheres of four of the most metal-poor stars known: the MMP giant SMSS J0313−6708 (Fe/H < −7.1), the HMP dwarf HE 1327−2326 (Fe/H = −5.8), the HMP giant SDSS J1313−0019 (Fe/H = −5.0), and the ultra metal-poor subgiant HE 0233−0343 (Fe/H = −4.7). We also revised a previous value for the MMP giant SMSS J1605−1443 (Fe/H = −6.2). Results. In four stars we derive an isotopic value, while for HE 1327−2326 we provide a lower limit. All measurements are in the range 39 < ¹²C/¹³C < 100, showing that the He- and H-burning layers underwent partial mixing either in the stars or, more likely, in their progenitors. This provides evidence of a primary production of ¹³C at the dawn of chemical evolution. CEMP-no dwarf stars with slightly higher metallicities show lower isotopic values, <30, even approaching the CNO cycle equilibrium value.
Thus, extant data suggest the presence of a discontinuity in the ¹²C/¹³C ratio at around Fe/H ≈ −4, which could mark a real difference between the progenitor pollution captured by stars with different metallicities. We also note that some MMP and HMP stars with high ¹²C/¹³C show low ⁷Li values, indicating that mixing in the CEMP-no progenitors is not responsible for the observed Li depletion.
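A toy version of the Markov chain Monte Carlo step can be sketched as follows. The line depths, noise level, and flat prior are invented for illustration and stand in for the full SYNTHE/ATLAS spectral synthesis:

```python
import math, random

random.seed(42)

# Toy model: the 13C feature depth scales as (12C depth) / R, with R = 12C/13C
D12 = 0.40                       # hypothetical 12C feature depth
def model(R):
    return D12 / R               # predicted 13C feature depth

# Simulated "observed" 13C depths for a true ratio R = 50
truth, sigma = 50.0, 5e-4
obs = [model(truth) + random.gauss(0.0, sigma) for _ in range(20)]

def log_like(R):
    return -0.5 * sum(((d - model(R)) / sigma) ** 2 for d in obs)

# Metropolis-Hastings random walk in R with a flat prior on [10, 200]
R, samples = 60.0, []
ll = log_like(R)
for _ in range(20000):
    Rp = R + random.gauss(0.0, 2.0)
    if 10.0 <= Rp <= 200.0:
        llp = log_like(Rp)
        if llp - ll > math.log(random.random()):   # accept/reject step
            R, ll = Rp, llp
    samples.append(R)

# Posterior mean after discarding burn-in
R_est = sum(samples[5000:]) / len(samples[5000:])
```

The posterior sample also yields credible intervals on R directly, which is the practical advantage of the MCMC approach over a single best-fit value.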
ABSTRACT
We have developed a new fully automated Artificial Intelligence (AI)-based method for deriving optimal models of complex absorption systems. The AI structure is built around VPFIT, a well-developed and extensively tested nonlinear least-squares code. The new method forms a sophisticated parallelized system, eliminating human decision-making and hence bias. Here, we describe the workings of such a system and apply it to synthetic spectra, in doing so establishing recommended methodologies for future analyses of Very Large Telescope (VLT) and Extremely Large Telescope (ELT) data. One important result is that modelling line broadening for high-redshift absorption components should include both thermal and turbulent components. Failing to do so means it is easy to derive the wrong model and hence incorrect parameter estimates. One topical application of our method concerns searches for spatial or temporal variations in fundamental constants. This subject is one of the key science drivers for the European Southern Observatory's ESPRESSO spectrograph on the VLT and for the HIRES spectrograph on the ELT. The quality of new data demands completely objective and reproducible methods. The Monte Carlo aspects of the new method described here reveal that model non-uniqueness can be significant, indicating that it is unrealistic to expect to derive an unambiguous estimate of the fine structure constant α from one or a very small number of measurements. No matter how optimal the modelling method, it is a fundamental requirement to use a large sample of measurements to meaningfully constrain temporal or spatial α variation.
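The broadening recommendation reflects the standard decomposition of the Doppler parameter into thermal and turbulent parts, b² = b_turb² + 2kT/m. A small sketch (the function name is ours):

```python
import math

K_B = 1.380649e-23        # Boltzmann constant, J/K
AMU = 1.66053906660e-27   # atomic mass unit, kg

def b_total_kms(T, mass_amu, b_turb_kms):
    """Total Doppler parameter in km/s: quadrature sum of the
    thermal term sqrt(2kT/m) and the turbulent term."""
    b_th = math.sqrt(2.0 * K_B * T / (mass_amu * AMU)) / 1e3
    return math.sqrt(b_th ** 2 + b_turb_kms ** 2)
```

Because the thermal term scales as 1/√m while the turbulent term is mass-independent, fitting the same velocity component in species of different mass (e.g. H at ~12.8 km s⁻¹ versus Mg at ~2.6 km s⁻¹ for T = 10⁴ K) is what allows the two contributions to be separated; assuming a single combined b discards that constraint.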
Context.
Ground-based high-resolution spectrographs are key instruments for several astrophysical domains, such as exoplanet studies. Unfortunately, the observed spectra are contaminated by the Earth's atmosphere and its large molecular absorption bands. While different techniques (forward radiative transfer models, principal component analysis (PCA), or other empirical methods) exist to correct for telluric lines in exoplanet atmospheric studies, in radial velocity (RV) studies telluric lines with an absorption depth of >2% are generally masked. This poses a problem for faint targets and M dwarfs, as most of their RV content lies where telluric contamination is important.
Aims.
We propose a simple telluric model to be embedded in the Echelle SPectrograph for Rocky Exoplanets and Stable Spectroscopic Observations (ESPRESSO) data reduction software (DRS). The goal is to provide telluric-free spectra and enable RV measurements through the cross-correlation function technique (and others), including spectral ranges where telluric lines fall.
Methods.
The model is a line-by-line radiative transfer code that assumes a single atmospheric layer. We use the sky conditions and the physical properties of the lines from the HITRAN database to create the telluric spectrum. This high-resolution model is then convolved with the instrumental resolution and sampled to the instrumental wavelength grid. A subset of selected telluric lines is used to robustly fit the spectrum through a Levenberg-Marquardt minimization algorithm.
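A stripped-down version of such a single-layer model can be sketched as follows. The line list, wavelength grid, and opacity scale are invented for illustration (real line parameters come from HITRAN), and a grid search stands in for the Levenberg-Marquardt minimization used in the paper:

```python
import math

def gaussian_tau(wave, center, strength, width):
    # Per-line opacity profile (Gaussian, for simplicity)
    return strength * math.exp(-0.5 * ((wave - center) / width) ** 2)

def transmission(wave, scale, lines):
    """Single-layer model: T = exp(-scale * sum of line opacities)."""
    tau = sum(gaussian_tau(wave, c, s, w) for c, s, w in lines)
    return math.exp(-scale * tau)

# Hypothetical line list (center [nm], strength, width [nm])
LINES = [(700.10, 0.8, 0.004), (700.25, 0.5, 0.004), (700.40, 1.2, 0.004)]
GRID = [700.0 + 0.001 * i for i in range(500)]

# Simulated observed spectrum with a true opacity scale of 0.65
observed = [transmission(w, 0.65, LINES) for w in GRID]

def chi2(scale):
    return sum((o - transmission(w, scale, LINES)) ** 2
               for w, o in zip(GRID, observed))

# Grid-search stand-in for the Levenberg-Marquardt fitting step
best = min((0.30 + 0.01 * k for k in range(71)), key=chi2)
```

Dividing the observed spectrum by the best-fit transmission then yields the telluric-corrected spectrum; in the real pipeline the high-resolution model is additionally convolved with the instrumental profile before the fit.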
Results.
We computed the model for the H₂O lines in the spectral range of ESPRESSO. When applied to stellar spectra from A0- to M5-type stars, the residuals of the strongest water lines are below the 2% peak-to-valley (P2V) amplitude for all spectral types, with the exception of M dwarfs, for which they are within the pseudo-continuum. We then determined the RVs from the telluric-corrected ESPRESSO spectra of Tau Ceti and Proxima. We created telluric-free masks and compared the obtained RVs with the DRS RVs. In the case of Tau Ceti, we identified that micro-telluric lines, if not corrected, introduce systematics with an amplitude of up to 58 cm s⁻¹ and a period of one year. For Proxima, the impact of micro-telluric lines is negligible due to the low flux below 5900 Å. For late-type stars, the gain in spectral content at redder wavelengths is equivalent to a gain of 25% in photon noise, or a factor of 1.78 in exposure time. This leads to better constraints on the semi-amplitude and eccentricity of Proxima d, which was recently proposed as a planet candidate. Finally, we applied our telluric model to the O₂ γ-band and obtained residuals below the 2% P2V amplitude.
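The two quoted figures are consistent with each other: photon-limited RV precision scales as 1/√t, so matching a noise level reduced by 25% requires scaling the exposure time by 1/(1 − 0.25)²:

```python
# Photon-limited RV precision scales as 1/sqrt(t): a 25% photon-noise
# gain is equivalent to multiplying the exposure time by 1/(1 - 0.25)^2
gain = 0.25
time_factor = 1.0 / (1.0 - gain) ** 2   # ~1.78, as quoted in the abstract
```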
Conclusions.
We propose a simple telluric model for high-resolution spectrographs to correct individual spectra and to achieve precise RVs. The removal of micro-telluric lines, coupled with the gain in spectral range, leads to more precise RVs. Moreover, we showcase that our model can be applied to other molecules, and thus to other wavelength regions observed by other spectrographs, such as NIRPS.
The standard Λ Cold Dark Matter (ΛCDM) cosmological model provides a good description of a wide range of astrophysical and cosmological data. However, there are a few big open questions that make the standard model look like an approximation to a more realistic scenario yet to be found. In this paper, we list a few important goals that need to be addressed in the next decade, taking into account the current discordances between the different cosmological probes, such as the disagreement in the value of the Hubble constant H0, the σ8–S8 tension, and other less statistically significant anomalies. While these discordances can still be in part the result of systematic errors, their persistence after several years of accurate analysis strongly hints at cracks in the standard cosmological scenario and the necessity for new physics or generalisations beyond the standard model. We first focus on the 5.0σ tension between the Planck CMB estimate of the Hubble constant H0 and the SH0ES collaboration measurements. After showing the H0 evaluations made by different teams using different methods and geometric calibrations, we list a few interesting new-physics models that could alleviate this tension and discuss how the next decade's experiments will be crucial. Moreover, we focus on the tension between the Planck CMB data and weak lensing measurements and redshift surveys concerning the value of the matter energy density Ωm and the amplitude or rate of the growth of structure (σ8, fσ8). We list a few interesting models proposed for alleviating this tension, and we discuss the importance of trying to fit a full array of data with a single model and not just one parameter at a time.
Additionally, we present a wide range of other less discussed anomalies at a statistical significance level lower than the H0–S8 tensions which may also constitute hints towards new physics, and we discuss possible generic theoretical approaches that can collectively explain the non-standard nature of these signals. Finally, we give an overview of upgraded experiments and next-generation space missions and facilities on Earth that will be of crucial importance to address all these open questions.
8. The XXL Survey. Baran, N; Smolcic, V; Milakovic, D ...
Astronomy and Astrophysics (Berlin), 08/2016, Volume 592
Journal Article, Peer reviewed, Open access
We present observations with the Karl G. Jansky Very Large Array (VLA) at 3 GHz (10 cm) toward a sub-field of the XXL-North 25 deg² field, targeting the first supercluster discovered in the XXL Survey. The structure has been found at a spectroscopic redshift of 0.43 and extends over 0.35° × 0.1° on the sky. The aim of this paper is twofold. First, we present the 3 GHz VLA radio continuum observations, the final radio mosaic, and the radio source catalogue; second, we perform a detailed analysis of the supercluster in the optical and radio regimes using photometric redshifts from the CFHTLS survey and our new VLA-XXL data. Our final 3 GHz radio mosaic has a resolution of 3.2″ × 1.9″ and encompasses an area of 41′ × 41′ with an rms noise level lower than ~20 μJy beam⁻¹. The noise in the central 15′ × 15′ region is approximately 11 μJy beam⁻¹. From the mosaic we extract a catalogue of 155 radio sources with signal-to-noise ratio (S/N) ≥ 6, eight of which are large, multicomponent sources, and 123 (79%) of which can be associated with optical sources in the CFHTLS W1 catalogue. Applying Voronoi tessellation analysis (VTA) in the area around the X-ray-identified supercluster, using photometric redshifts from the CFHTLS survey, we identify a total of seventeen overdensities at z_phot = 0.35–0.50, seven of which are associated with clusters detected in the XMM-Newton XXL data. We find a mean photometric redshift of 0.43 for our overdensities, consistent with the spectroscopic redshifts of the brightest cluster galaxies of seven X-ray-detected clusters. The full VTA-identified structure extends over ~0.6° × 0.2° on the sky, which corresponds to a physical size of ~12 × 4 Mpc² at z = 0.43. No large radio galaxies are present within the overdensities, and we associate eight (S/N > 7) radio sources with potential group/cluster member galaxies.
The spatial distribution of the red and blue VTA-identified potential group member galaxies, selected by their observed g − r colours, suggests that the clusters are not virialised yet but are dynamically young, as expected for hierarchical structure growth in a ΛCDM universe. Further spectroscopic data are required to analyse the dynamical state of the groups.
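The idea behind the VTA step can be sketched in one dimension, where each point's Voronoi cell runs between the midpoints to its neighbours and the inverse cell size gives a local density estimate. The field, clump position, and 2× median threshold below are invented for illustration; the published analysis works on the 2-D sky distribution:

```python
def voronoi_density_1d(points):
    """Inverse Voronoi cell length for sorted 1-D points (interior points only)."""
    pts = sorted(points)
    dens = {}
    for i in range(1, len(pts) - 1):
        # 1-D Voronoi cell: from the midpoint to the left neighbour
        # to the midpoint to the right neighbour
        cell = (pts[i + 1] - pts[i - 1]) / 2.0
        dens[pts[i]] = 1.0 / cell
    return dens

def overdense(points, factor=2.0):
    """Flag points whose local density exceeds factor x the median density."""
    dens = voronoi_density_1d(points)
    med = sorted(dens.values())[len(dens) // 2]
    return [p for p, d in dens.items() if d > factor * med]

# Hypothetical field: uniform background plus a tight clump near x = 5
field = [0.5 * i for i in range(21)] + [5.01, 5.02, 5.03]
```

In two dimensions the cell areas come from a full Voronoi tessellation of galaxy positions (in practice via a computational-geometry library), but the detection logic, flagging cells much smaller than the typical cell, is the same.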
In anaesthesiology, economic aspects have been insufficiently studied.
The aim of this paper was to assess the rational choice of anaesthesiological services based on an analysis of their scope, distribution, trend, and cost.
The costs of anaesthesiological services were calculated based on "unit" prices from the Republic Health Insurance Fund. Data were analysed by methods of descriptive statistics, and statistical significance was tested by Student's t-test and the chi-squared test.
The number of general anaesthesias was higher and the average duration of general anaesthesia was shorter in 2006 compared to the previous year, without statistical significance (t-test, p = 0.436). Local anaesthesia was used significantly more often in emergency surgery than in planned operations (chi-squared test, p = 0.001). The analysis of total anaesthesiological procedures revealed that the number of procedures significantly increased in ENT and MFH surgery and in ophthalmology, while some reduction was observed in general surgery, orthopaedics and trauma surgery, and cardiovascular surgery (chi-squared test, p = 0.000). The number of analgesias was higher than that of other procedures (chi-squared test, p = 0.000). The cost structure was 24% in neurosurgery, 16% in digestive (general) surgery, 14% in gynaecology and obstetrics, 13% in cardiovascular surgery, and 9% in the emergency room. Anaesthesiological service costs were highest in neurosurgery, due to the length of anaesthesia, and in digestive surgery, due to the total number of general anaesthesias performed.
It is important to implement pharmacoeconomic studies in all departments and to separate anaesthesia services for emergency and planned operations. The disproportion between the number of anaesthesias, surgical interventions, and patients in surgical departments gives reason to design a relational database.