The COSMOS field has been the subject of a wide range of observations, with a number of studies focusing on reconstructing the 3D dark matter density field. Typically, these studies have focused on one given method or tracer. In this paper, we reconstruct the distribution of mass in the COSMOS field out to a redshift z = 1 by combining Hubble Space Telescope weak lensing measurements with zCOSMOS spectroscopic measurements of galaxy clustering. The distribution of galaxies traces the distribution of mass with high resolution (particularly in redshift, which is not possible with lensing), while the lensing data empirically calibrate the mass normalization (bypassing the need for theoretical models). Two steps are needed to convert a galaxy survey into a density field. The first step is to create a smooth field from the galaxy positions, which form a point field. We investigate four possible methods for this: (i) Gaussian smoothing, (ii) convolution with a truncated isothermal sphere, (iii) fifth-nearest-neighbour smoothing and (iv) a multiscale entropy method. The second step is to rescale this density field using a bias prescription. We calculate the optimal bias scaling for each method by comparing predictions from the smoothed density field with the measured weak lensing data, on a galaxy-by-galaxy basis. In general, we find scale-independent bias for all the smoothing schemes, to a precision of 10 per cent. For the nearest-neighbour smoothing case, we find the bias to be 2.51 ± 0.25. We also find evidence for a strongly evolving bias, increasing by a factor of ∼3.5 over the redshift range 0 < z < 0.8. We believe this strong evolution can be explained by the fact that we use a flux-limited sample to build the density field.
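The first of the two steps above, turning a point field of galaxy positions into a smooth density field, can be sketched in a few lines. The following is a minimal illustration of the Gaussian smoothing option using synthetic positions rather than the actual zCOSMOS catalogue; the grid size and smoothing scale are arbitrary choices for demonstration:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(42)

# Synthetic "galaxy" positions in a unit box (placeholder for a real catalogue).
positions = rng.uniform(0.0, 1.0, size=(5000, 3))

# Step 1a: bin the point field onto a regular grid (number counts per cell).
ngrid = 64
counts, _ = np.histogramdd(positions, bins=(ngrid, ngrid, ngrid),
                           range=[(0, 1)] * 3)

# Step 1b: convolve with a Gaussian kernel to obtain a smooth density field.
# sigma is in grid cells; the physical smoothing scale is sigma * boxsize / ngrid.
density = gaussian_filter(counts, sigma=2.0, mode="wrap")

# Express as overdensity delta = n/<n> - 1, the quantity that the second
# step would rescale with a bias prescription.
delta = density / density.mean() - 1.0
print(delta.shape)
```

The remaining step, the bias rescaling, amounts to multiplying (or otherwise mapping) this overdensity field by the fitted bias factor before comparison with the lensing signal.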
Context. Active galactic nuclei (AGN) are thought to play an important role in galaxy evolution. It has been suggested that AGN feedback could be partly responsible for quenching star formation in the hosts, leading to a transition from the blue cloud to the red sequence. The transition seems to occur faster for the most massive galaxies, where traces of AGN activity have already been found at z < 0.1. The correlation between AGN activity, aging of the stellar populations, and stellar mass still needs to be fully understood, especially at high redshifts. Aims. Our aim is to investigate the link between AGN activity, star formation, and the stellar mass of the host galaxy at 0 < z < 1, looking for spectroscopic traces of AGN and aging of the host. This work extends the existing studies at z < 0.1 and contributes to shedding light on galaxy evolution at intermediate redshifts. Methods. We used the zCOSMOS 20k data to create a sample of galaxies at z < 1. We divided the sample into several mass-redshift bins to obtain stacked galaxy spectra with an improved signal-to-noise ratio (S/N). We exploited emission-line diagnostic diagrams to separate AGN from star-forming galaxies. Results. We found an indication that total stellar mass plays a leading role in determining galaxy classification. Stacked spectra show AGN signatures above the log M∗/M⊙ > 10.2 threshold. Moreover, the stellar populations of AGN hosts are found to be older than those of star-forming and composite galaxies. This could be due to the tendency of AGN to reside in massive hosts. Conclusions. The dependence of the AGN classification on stellar mass agrees with what has been found in previous research. Together with the evidence of older stellar populations inhabiting the AGN-like galaxies, it is consistent with the downsizing scenario. In particular, our evidence points to an evolutionary scenario in which AGN feedback is capable of quenching star formation in the most massive galaxies.
Therefore, AGN feedback is the best candidate for initiating the passive evolutionary phase of galaxies.
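The emission-line diagnostic diagrams mentioned above are commonly BPT-style diagrams. Purely as an illustration, the following sketch applies the standard low-redshift [N II]-based demarcation curves of Kauffmann et al. (2003) and Kewley et al. (2001); these are the usual textbook criteria, not necessarily the exact ones adopted in this work:

```python
import math

def classify_bpt(log_nii_ha: float, log_oiii_hb: float) -> str:
    """Classify a galaxy on the [N II]-based BPT diagram.

    x = log([N II]/Halpha), y = log([O III]/Hbeta).
    Below the Kauffmann curve: star-forming; between the Kauffmann and
    Kewley curves: composite; above the Kewley curve: AGN.
    """
    x, y = log_nii_ha, log_oiii_hb
    # Kauffmann et al. (2003) empirical star-forming upper envelope.
    if x < 0.05 and y < 0.61 / (x - 0.05) + 1.3:
        return "star-forming"
    # Kewley et al. (2001) theoretical maximum-starburst line.
    if x < 0.47 and y < 0.61 / (x - 0.47) + 1.19:
        return "composite"
    return "AGN"

print(classify_bpt(-0.6, -0.4))  # point in the star-forming locus
print(classify_bpt(0.2, 1.0))    # point well into the AGN region
```

In practice a stacked spectrum contributes one point per mass-redshift bin, and the measured line-ratio uncertainties set how reliably bins near the demarcation curves can be classified.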
We extend a recently developed galaxy morphology classification method, Quantitative Multiwavelength Morphology (QMM), to connect galaxy morphologies to their underlying physical properties. The traditional classification of galaxies approaches the problem separately, through either morphological classification or, more recently, analysis of physical properties. A combined approach has significant potential both to produce a consistent and accurate classification scheme and to shed light on the origin and evolution of galaxy morphology. Here, we present an analysis of a volume-limited sample of 31 703 galaxies from the fourth data release of the Sloan Digital Sky Survey. We use an image analysis method called Pixel-z to extract the underlying physical properties of the galaxies, which are then quantified using the concentration, asymmetry and clumpiness parameters. The galaxies also have their multiwavelength morphologies quantified using QMM, and these results are then related to the distributed physical properties through a regression analysis. We show that this method can be used to relate the spatial distribution of physical properties to the morphological properties of galaxies.
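The concentration, asymmetry and clumpiness parameters referenced above follow standard recipes. As a minimal sketch of the asymmetry parameter on a toy image (omitting the background correction and centring optimisation that a real measurement requires):

```python
import numpy as np

def asymmetry(image: np.ndarray) -> float:
    """Rotational asymmetry A = sum|I - I_180| / sum|I|.

    I_180 is the image rotated by 180 degrees about its centre;
    a perfectly point-symmetric source gives A = 0.
    """
    rotated = np.rot90(image, 2)  # 180-degree rotation
    return np.abs(image - rotated).sum() / np.abs(image).sum()

# Toy example: a centred, symmetric 2D Gaussian "galaxy" has zero asymmetry...
y, x = np.mgrid[-16:16, -16:16] + 0.5  # pixel centres, symmetric about 0
symmetric = np.exp(-(x**2 + y**2) / 20.0)
print(asymmetry(symmetric))

# ...while an off-centre clump raises it.
clumpy = symmetric.copy()
clumpy[5:8, 5:8] += 1.0
print(asymmetry(clumpy))
```

Concentration and clumpiness are computed analogously from curve-of-growth radii and from the residual after smoothing, respectively; here each parameter is applied not to a flux image but to Pixel-z maps of physical properties.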
We present a group-galaxy cross-correlation analysis using a group catalog produced from the 16,500 spectra of the optical zCOSMOS galaxy survey. Our aim is to perform a consistency test in the redshift range 0.2 ≤ z ≤ 0.8 between the clustering strength of the groups and mass estimates that are based on the richness of the groups. We measure the linear bias of the groups by means of a group-galaxy cross-correlation analysis and convert it into mass using the bias-mass relation for a given cosmology, checking the systematic errors using realistic group and galaxy mock catalogs. The measured bias for the zCOSMOS groups increases with group richness, as expected from the theory of cosmic structure formation, and yields masses that are reasonably consistent with the masses estimated from the richness directly, considering the scatter that is obtained from the 24 mock catalogs. An exception is the richest groups at high redshift (estimated to be more massive than 10^13.5 M_⊙), for which the measured bias is significantly larger than for any of the 24 mock catalogs (corresponding to a 3σ effect); we attribute this to the extremely large structure that is present in the COSMOS field at z ∼ 0.7. Our results are in general agreement with previous studies that reported unusually strong clustering in the COSMOS field.
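The logic of such a bias measurement can be sketched with an idealised toy model: on linear scales the group-galaxy cross-spectrum scales as b_group · b_gal times the matter power spectrum, so the ratio of cross- to auto-power recovers the group bias given the galaxy bias. The fields, bias values, and noise level below are invented for illustration and bear no relation to the actual pair-count estimator applied to the data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256

# A toy 1D "matter" overdensity field with a red power spectrum.
k = np.fft.rfftfreq(n, d=1.0)
amp = np.zeros_like(k)
amp[1:] = k[1:] ** -1.0
delta_m = np.fft.irfft(amp * (rng.standard_normal(k.size)
                              + 1j * rng.standard_normal(k.size)), n)

# Linearly biased tracers: galaxies (b = 1.2) and groups (b = 2.0), plus noise.
b_gal, b_grp = 1.2, 2.0
delta_g = b_gal * delta_m + 0.01 * rng.standard_normal(n)
delta_x = b_grp * delta_m + 0.01 * rng.standard_normal(n)

# Linear theory: P_gx / P_gg = b_grp / b_gal, so the group bias follows
# from the cross- and auto-power given the galaxy bias.
fg, fx = np.fft.rfft(delta_g), np.fft.rfft(delta_x)
p_gg = (fg * np.conj(fg)).real.sum()
p_gx = (fg * np.conj(fx)).real.sum()
b_est = b_gal * p_gx / p_gg
print(b_est)  # recovers ~2.0 up to noise
```

The remaining step in the analysis, converting b_est into a mass, inverts a theoretical bias-mass relation for the assumed cosmology.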
We explore the simple inter-relationships between mass, star formation rate, and environment in the SDSS, zCOSMOS, and other deep surveys. We take a purely empirical approach in identifying those features of galaxy evolution that are demanded by the data and then explore the analytic consequences of these. We show that the differential effects of mass and environment are completely separable to z ∼ 1, leading to the idea of two distinct processes of 'mass quenching' and 'environment quenching'. The effect of environment quenching, at fixed over-density, evidently does not change with epoch to z ∼ 1 in zCOSMOS, suggesting that environment quenching occurs as large-scale structure develops in the universe, probably through the cessation of star formation in 30%-70% of satellite galaxies. In contrast, mass quenching appears to be a more dynamic process, governed by a quenching rate. We show that the observed constancy of the Schechter M* and α_s for star-forming galaxies demands that the quenching of galaxies around and above M* must follow a rate that is statistically proportional to their star formation rates (or closely mimic such a dependence). We then postulate that this simple mass-quenching law in fact holds over a much broader range of stellar mass (2 dex) and cosmic time. We show that the combination of these two quenching processes, plus some additional quenching due to merging, naturally produces (1) a quasi-static single Schechter mass function for star-forming galaxies with an exponential cutoff at a value M* that is set uniquely by the constant of proportionality between the star formation and mass quenching rates and (2) a double Schechter function for passive galaxies with two components. The dominant component (at high masses) is produced by mass quenching and has exactly the same M* as the star-forming galaxies but a faint-end slope that differs by Δα_s ∼ 1.
The other component is produced by environment effects and has the same M* and α_s as the star-forming galaxies but an amplitude that is strongly dependent on environment. Subsequent merging of quenched galaxies will modify these predictions somewhat in the denser environments, mildly increasing M* and making α_s slightly more negative. All of these detailed quantitative inter-relationships between the Schechter parameters of the star-forming and passive galaxies, across a broad range of environments, are indeed seen to high accuracy in the SDSS, lending strong support to our simple empirically based model. We find that the amount of post-quenching 'dry merging' that could have occurred is quite constrained. Our model gives a prediction for the mass function of the population of transitory objects that are in the process of being quenched. Our simple empirical laws for the cessation of star formation in galaxies also naturally produce the 'anti-hierarchical' run of mean age with mass for passive galaxies, as well as the qualitative variation of formation timescale indicated by the relative α-element abundances.
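For reference, the single and double Schechter forms discussed above can be written explicitly (standard notation; as stated above, the two passive components share the same M* as the star-forming population):

```latex
% Single Schechter mass function (star-forming galaxies):
\phi(M)\,\mathrm{d}M
  = \phi^{*}\left(\frac{M}{M^{*}}\right)^{\alpha_{s}}
    \exp\!\left(-\frac{M}{M^{*}}\right)\frac{\mathrm{d}M}{M^{*}}

% Double Schechter function (passive galaxies, two components, common M*):
\phi(M)\,\mathrm{d}M
  = \left[\phi^{*}_{1}\left(\frac{M}{M^{*}}\right)^{\alpha_{1}}
        + \phi^{*}_{2}\left(\frac{M}{M^{*}}\right)^{\alpha_{2}}\right]
    \exp\!\left(-\frac{M}{M^{*}}\right)\frac{\mathrm{d}M}{M^{*}}
```

In the picture above, the first passive component (mass quenching) has α_1 ≈ α_s + 1, while the second (environment quenching) has α_2 = α_s with an environment-dependent amplitude φ*_2.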
We present a new, updated version of the EuclidEmulator (called EuclidEmulator2), a fast and accurate predictor for the nonlinear correction of the matter power spectrum. Emulation accurate at the 2 per cent level is now supported in the eight-dimensional parameter space of w0waCDM+∑mν models between redshift z = 0 and z = 3 for spatial scales within the range $0.01 \, h\, {\rm Mpc}^{-1}\le k \le 10\, h\, {\rm Mpc}^{-1}$. In order to achieve this level of accuracy, we have had to improve the quality of the underlying N-body simulations used as training data: (i) we use self-consistent linear evolution of non-dark-matter species such as massive neutrinos, photons, dark energy, and the metric field, (ii) we perform the simulations in the so-called N-body gauge, which allows one to interpret the results in the framework of general relativity, (iii) we run over 250 high-resolution simulations with 3000³ particles in boxes of (1 h⁻¹ Gpc)³ volume based on paired-and-fixed initial conditions, and (iv) we provide a resolution correction that can be applied to emulated results as a post-processing step in order to drastically reduce systematic biases on small scales due to residual resolution effects in the simulations. We find that the inclusion of the dynamical dark energy parameter wa significantly increases the complexity and expense of creating the emulator. The high fidelity of EuclidEmulator2 is tested in various comparisons against N-body simulations as well as alternative fast predictors such as HALOFIT, HMCode, and CosmicEmu. A blind test is successfully performed against the Euclid Flagship v2.0 simulation. Nonlinear correction factors emulated with EuclidEmulator2 are accurate at the level of $1{{\ \rm per\ cent}}$ or better for $0.01 \, h\, {\rm Mpc}^{-1}\le k \le 10\, h\, {\rm Mpc}^{-1}$ and z ≤ 3 compared to high-resolution dark-matter-only simulations. EuclidEmulator2 is publicly available at https://github.com/miknab/EuclidEmulator2.
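EuclidEmulator2's actual construction is documented in the paper and repository linked above. Purely to illustrate the general idea of power-spectrum emulation, the sketch below uses one common recipe (a principal-component decomposition of training curves with interpolation of the coefficients); the one-parameter toy family, grids, and function names are inventions for this example, not the real training data or API:

```python
import numpy as np

# Toy training set: "nonlinear boost" curves B(k; theta) on a fixed k-grid,
# drawn from an invented one-parameter family (NOT real simulation output).
k = np.logspace(-2, 1, 100)            # 0.01 <= k <= 10 [h/Mpc]
thetas = np.linspace(0.8, 1.2, 9)      # toy cosmological parameter grid
train = np.array([1.0 + t * k / (1.0 + k) for t in thetas])

# Build the emulator: subtract the mean curve, extract principal
# components via SVD, and store each training model's PC coefficients.
mean = train.mean(axis=0)
u, s, vt = np.linalg.svd(train - mean, full_matrices=False)
n_pc = 2
coeffs = (u * s)[:, :n_pc]             # PC coefficients per training model

def emulate(theta: float) -> np.ndarray:
    """Predict B(k) at a new parameter by interpolating PC coefficients."""
    c = np.array([np.interp(theta, thetas, coeffs[:, j]) for j in range(n_pc)])
    return mean + c @ vt[:n_pc]

# Compare against the true toy curve at a parameter value off the grid.
truth = 1.0 + 1.07 * k / (1.0 + k)
err = np.max(np.abs(emulate(1.07) / truth - 1.0))
print(f"max fractional error: {err:.2e}")
```

The real problem is of course far harder: eight parameters, expensive N-body training runs, and careful treatment of resolution effects, which is exactly what the improvements (i)-(iv) above address.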
Context. In metric theories of gravity with photon number conservation, the luminosity and angular diameter distances are related via the Etherington relation, also known as the distance duality relation (DDR). A violation of this relation would rule out the standard cosmological paradigm and point to the presence of new physics.
Aims. We quantify the ability of Euclid, in combination with contemporary surveys, to improve the current constraints on deviations from the DDR in the redshift range 0 < z < 1.6.
Methods. We start with an analysis of the latest available data, improving previously reported constraints by a factor of 2.5. We then present a detailed analysis of simulated Euclid and external data products, using both standard parametric methods (relying on phenomenological descriptions of possible DDR violations) and a machine learning reconstruction using genetic algorithms.
Results. We find that for parametric methods Euclid can (in combination with external probes) improve current constraints by approximately a factor of six, while for non-parametric methods Euclid can improve current constraints by a factor of three.
Conclusions. Our results highlight the importance of surveys like Euclid in accurately testing the pillars of the current cosmological paradigm and constraining physics beyond the standard cosmological model.
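The relation under test can be stated explicitly. In standard notation (with a commonly used parametrisation of deviations, not necessarily the exact phenomenological form adopted in the analysis):

```latex
% Etherington distance-duality relation:
d_{L}(z) = (1+z)^{2}\, d_{A}(z)

% Violations are commonly parametrised through
\eta(z) \equiv \frac{d_{L}(z)}{(1+z)^{2}\, d_{A}(z)},
% with \eta(z) = 1 when photon number is conserved and gravity is metric;
% a simple phenomenological choice is \eta(z) = 1 + \epsilon_{0} z.
```

Constraining η(z) thus requires independent measurements of d_L (e.g. from standard candles) and d_A (e.g. from baryon acoustic oscillations) over the same redshift range, which is where combining Euclid with external probes is decisive.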