Two linear optimal control laws and a non-linear control strategy are critically evaluated. They are implemented in a ten-story frame structure. For the linear control laws, both an active bracing system and a hybrid mass damper are considered as control devices, while the non-linear control law can be implemented with either an active or a semi-active bracing system. The active and semi-active systems are compared to a passive bracing system with linear viscous dampers and to a hybrid system consisting of a passive bracing and a hybrid mass damper. Dimensionless indices based on the reduction of the maximum story drift and on the maximum control force required are introduced to compare the efficiencies of the different control strategies. While the linear optimal control laws exhibit excellent performance, the non-linear control law, in addition to its simplicity and robustness, appears to be more efficient when the allowable control force is within a certain limit. Furthermore, one attractive feature of the latter is that it can be implemented with semi-active devices to minimize the power requirement.
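Dimensionless indices of the kind described above can be sketched numerically. The snippet below is a minimal illustration only: the signals, the building weight, and the index names J1/J2 are all invented for the example, not taken from the paper.

```python
import numpy as np

def efficiency_indices(drift_unctrl, drift_ctrl, force, weight):
    """Two hypothetical dimensionless indices:
    J1: maximum controlled story drift over maximum uncontrolled drift
        (smaller means better drift reduction);
    J2: maximum control force normalized by the building weight."""
    J1 = np.max(np.abs(drift_ctrl)) / np.max(np.abs(drift_unctrl))
    J2 = np.max(np.abs(force)) / weight
    return J1, J2

# Toy drift time histories (m) and control force history (kN), illustration only.
t = np.linspace(0.0, 10.0, 1001)
drift_unctrl = 0.04 * np.sin(2 * np.pi * t)
drift_ctrl = 0.025 * np.sin(2 * np.pi * t)
force = 150.0 * np.sin(2 * np.pi * t)

J1, J2 = efficiency_indices(drift_unctrl, drift_ctrl, force, weight=3000.0)
```

Comparing strategies then amounts to plotting J1 against J2: a strategy dominates when it achieves a lower drift ratio for the same normalized force demand.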
In this paper, we address the problem of detecting process changes by monitoring spatially distributed data in semiconductor manufacturing processes. A specific question treated in this paper is how to extract information from the spatial data for change detection when we do not know in advance which information contained in the data is useful. We adopt the idea, investigated by Zemel and Hinton, of using neural networks to extract information that is not known a priori. The results show that this approach works much better than the simple mean for process change detection.
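The core idea — learn a compact representation of in-control spatial patterns and flag wafers that the learned model reconstructs poorly — can be sketched in a few lines. Here a linear autoencoder (PCA) stands in for the neural network used in the paper, and all wafer data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic in-control wafer maps: a smooth spatial pattern plus noise,
# sampled at 16 measurement sites per wafer.
def wafer(shift=0.0):
    base = np.sin(np.linspace(0, np.pi, 16))
    return base + shift * np.linspace(0, 1, 16) + 0.05 * rng.standard_normal(16)

train = np.array([wafer() for _ in range(200)])

# Learn a low-dimensional spatial summary from in-control wafers.
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
W = Vt[:2]                                  # top 2 principal spatial modes

def recon_error(x):
    """Distance between a wafer map and its projection onto the learned modes."""
    z = W @ (x - mean)
    return float(np.linalg.norm((x - mean) - W.T @ z))

# Reconstruction error stays small in control and jumps when the spatial
# pattern changes shape (here: an added ramp across the wafer).
e_ok = recon_error(wafer())
e_shift = recon_error(wafer(shift=0.5))
```

A monitoring chart on the reconstruction error can then detect spatial pattern changes that a chart on the simple wafer mean would miss.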
It is shown that learning control can be used to produce control signals for perfect tracking for a class of nonlinear systems defined by the Cartwright-Littlewood equation. The idea is to utilize the qualitative information of the nonlinear systems so that convergence in the functional space can be achieved. The design is not based on precise information about the quantitative parameters of the plant. The result is illustrated by the example of the Van der Pol equation.
The problem of control design for systems performing repetitive tracking tasks is considered. The control structure combines open-loop control and closed-loop control. The specific organization is designed on the philosophy that open-loop control should be used to take full advantage of a priori knowledge, while feedback control should be used for regulation against modeling uncertainties and disturbances. With this control structure, learning takes place in the open-loop controller. In each iteration the share of the open-loop control in the total control input increases, while that of the closed-loop control decreases. Convergence theorems are established as design guidelines for learning algorithms. Numerical examples are given.
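The arrangement described above — a feedback loop for regulation within each trial, plus a P-type learning update that folds the trial's error into the open-loop input for the next trial — can be sketched on a toy plant. Everything below (plant, gains, reference) is an illustrative construction, not the paper's example; the feedback gain is deliberately chosen so the within-trial loop is deadbeat.

```python
import numpy as np

# One-step-delay plant x[t+1] = a*x[t] + b*u[t]; the tracked output is x itself.
a, b = 0.9, 0.5
T = 60
y_d = np.sin(np.linspace(0.0, 2.0 * np.pi, T + 1))   # y_d[0] = 0 matches x[0] = 0

def run_trial(u_ff, kp=1.8):
    """One repetition: total input = learned open-loop part + feedback part."""
    x = 0.0
    e = np.zeros(T + 1)
    for t in range(T):
        e[t] = y_d[t] - x                             # feedback regulates the residual error
        x = a * x + b * (u_ff[t] + kp * e[t])
    e[T] = y_d[T] - x
    return e

# P-type learning update: the plant has one step of delay, so the error at
# t+1 from trial k corrects the open-loop input at t for trial k+1.
u_ff = np.zeros(T)
max_err = []
for k in range(15):
    e = run_trial(u_ff)
    max_err.append(float(np.max(np.abs(e))))
    u_ff = u_ff + 1.0 * e[1:]
```

As the trials proceed, the tracking error shrinks and the open-loop input absorbs an ever larger share of the total control effort, exactly the shift of burden from feedback to feedforward described in the abstract.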
Upcoming surveys will map the growth of large-scale structure with unprecedented precision, improving our understanding of the dark sector of the Universe. Unfortunately, much of the cosmological information is encoded on small scales, where the clustering of dark matter and the effects of astrophysical feedback processes are not fully understood. This can bias the estimates of cosmological parameters, which we study here for a joint analysis of mock Euclid cosmic shear and Planck cosmic microwave background data. We use different implementations for the modelling of the signal on small scales and find that they result in significantly different predictions. Moreover, the different non-linear corrections lead to biased parameter estimates, especially when the analysis is extended into the highly non-linear regime, with the Hubble constant, H0, and the clustering amplitude, σ8, affected the most. Improvements in the modelling of non-linear scales will therefore be needed if we are to resolve the current tension with more and better data. For a given prescription for the non-linear power spectrum, using different corrections for baryon physics does not significantly impact the precision of Euclid, but neglecting these corrections does lead to large biases in the cosmological parameters. In order to extract precise and unbiased constraints on cosmological parameters from Euclid cosmic shear data, it is therefore essential to improve the accuracy of the recipes that account for non-linear structure formation, as well as the modelling of the impact of astrophysical processes that redistribute the baryons.
Euclid preparation. Blanchard, A.; Camera, S.; Carbone, C.; ...
Astronomy and Astrophysics (Berlin), 10/2020, Vol. 642.
Journal article, peer-reviewed, open access.
Aims.
The Euclid space telescope will measure the shapes and redshifts of galaxies to reconstruct the expansion history of the Universe and the growth of cosmic structures. The estimation of the expected performance of the experiment, in terms of predicted constraints on cosmological parameters, has so far relied on various individual methodologies and numerical implementations, which were developed for different observational probes and for the combination thereof. In this paper we present validated forecasts, which combine both theoretical and observational ingredients for different cosmological probes. This work is presented to provide the community with reliable numerical codes and methods for Euclid cosmological forecasts.
Methods.
We describe in detail the methods adopted for Fisher matrix forecasts, which were applied to galaxy clustering, weak lensing, and the combination thereof. We estimate the required accuracy for Euclid forecasts and outline a methodology for their development. We then compare and improve different numerical implementations, reaching uncertainties on the errors of cosmological parameters that are below the required precision in all cases. Furthermore, we provide details on the validated implementations, some of which are made publicly available in different programming languages, together with a reference training set of input and output matrices for a set of specific models. These can be used by the reader to validate their own implementations if required.
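The Fisher machinery behind such forecasts reduces, at its core, to inverting a Fisher matrix to obtain a parameter covariance. The toy example below uses made-up numbers for a hypothetical two-parameter case (w0, wa), not Euclid values; the figure of merit follows one common convention, FoM = 1/sqrt(det cov(w0, wa)).

```python
import numpy as np

# Toy 2x2 Fisher matrix for parameters theta = (w0, wa); values are invented.
F = np.array([[40.0, -12.0],
              [-12.0,   5.0]])

cov = np.linalg.inv(F)                      # forecast parameter covariance
marginalized = np.sqrt(np.diag(cov))        # 1-sigma errors, other parameter marginalized over
conditional = 1.0 / np.sqrt(np.diag(F))     # 1-sigma errors, other parameter held fixed

# One common convention for the dark energy figure of merit:
fom = 1.0 / np.sqrt(np.linalg.det(cov))
```

Note that the marginalized errors are never smaller than the conditional ones; the gap between them measures the degeneracy between the two parameters, which is why adding cross-correlations that break degeneracies can raise the figure of merit substantially.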
Results.
We present new cosmological forecasts for Euclid. We find that results depend on the specific cosmological model and on the remaining freedom in each setting, for example flat or non-flat spatial cosmologies, or different cuts at non-linear scales. The numerical implementations are now reliable for these settings. We present the results for an optimistic and a pessimistic choice for these types of settings. We demonstrate that the impact of cross-correlations is particularly relevant for models beyond a cosmological constant and may allow us to increase the dark energy figure of merit by at least a factor of three.
Context.
Stage IV weak lensing experiments will offer more than an order-of-magnitude leap in precision. We must therefore ensure that our analyses remain accurate in this new era. Accordingly, previously ignored systematic effects must be addressed.
Aims.
In this work, we evaluate the impact of the reduced shear approximation and magnification bias on information obtained from the angular power spectrum. To first order, the statistics of reduced shear, a combination of shear and convergence, are taken to be equal to those of shear. However, this approximation can induce a bias in the cosmological parameters that can no longer be neglected. A separate bias arises from the statistics of shear being altered by the preferential selection of galaxies and the dilution of their surface densities in high-magnification regions.
Methods.
The corrections for these systematic effects take similar forms, allowing them to be treated together. We calculated the impact of neglecting these effects on the cosmological parameters that would be determined from Euclid, using cosmic shear tomography. To do so, we employed the Fisher matrix formalism and included the impact of the super-sample covariance. We also demonstrate how the reduced shear correction can be calculated using a lognormal field forward-modelling approach.
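The reduced shear in question is g = γ/(1 − κ), which the standard approximation replaces with its first-order expansion g ≈ γ(1 + κ). A short numerical check (with arbitrary illustrative values of convergence κ and shear γ) shows the residual is of order γκ², tiny per source but coherent across a survey, which is why it biases the power spectrum:

```python
import numpy as np

# Illustrative weak-lensing values: convergence kappa << 1, shear gamma.
kappa = np.array([0.01, 0.03, 0.05])
gamma = np.array([0.02, 0.02, 0.02])

g_exact = gamma / (1.0 - kappa)          # reduced shear, the observable combination
g_first_order = gamma * (1.0 + kappa)    # first-order approximation used in standard analyses

# Residual of the approximation: gamma * kappa**2 / (1 - kappa), always positive here.
residual = g_exact - g_first_order
```

Because the residual grows with κ, the correction matters most in high-convergence regions, the same regions where the magnification-driven selection effects described above are strongest, which is why the two corrections can be treated together.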
Results.
These effects cause significant biases in Ωm, σ8, ns, ΩDE, w0, and wa of −0.53σ, 0.43σ, −0.34σ, 1.36σ, −0.68σ, and 1.21σ, respectively. We then show that these lensing biases interact with another systematic effect: the intrinsic alignment of galaxies. Accordingly, we have developed the formalism for an intrinsic alignment-enhanced lensing bias correction. Applying this to Euclid, we find that the additional terms introduced by this correction are sub-dominant.
Context.
Future weak lensing surveys, such as the Euclid mission, will attempt to measure the shapes of billions of galaxies in order to derive cosmological information. These surveys will attain very low levels of statistical error, so systematic errors must be extremely well controlled. In particular, the point spread function (PSF) must be estimated using stars in the field, and recovered with high accuracy.
Aims.
The aims of this paper are twofold. Firstly, we took steps toward a non-parametric method to address the issue of recovering the PSF field, namely that of finding the correct PSF at the position of any galaxy in the field, applicable to Euclid. Our approach relies solely on the data, as opposed to parametric methods that make use of our knowledge of the instrument. Secondly, we studied the impact of imperfect PSF models on the shape measurement of galaxies themselves, and whether common assumptions about this impact hold true in a Euclid scenario.
Methods.
We extended the recently proposed resolved components analysis approach, which performs super-resolution on a field of under-sampled observations of a spatially varying, image-valued function. We added a spatial interpolation component to the method, making it a true two-dimensional PSF model. We compared our approach to PSFEx, then quantified the impact of PSF recovery errors on galaxy shape measurements through image simulations.
Results.
Our approach yields an improvement over PSFEx in terms of the PSF model and of the observed galaxy shape errors, though it is at present far from reaching the required Euclid accuracy. We also find that the usual formalism used for the propagation of PSF model errors to weak lensing quantities no longer holds in the case of a Euclid-like PSF. In particular, different shape measurement approaches can react differently to the same PSF modelling errors.
ABSTRACT
We present a new, updated version of the EuclidEmulator (called EuclidEmulator2), a fast and accurate predictor for the nonlinear correction of the matter power spectrum. Emulation accurate at the 2 per cent level is now supported in the eight-dimensional parameter space of w0waCDM+∑mν models between redshift z = 0 and z = 3 for spatial scales within the range $0.01 \, h\, {\rm Mpc}^{-1}\le k \le 10\, h\, {\rm Mpc}^{-1}$. In order to achieve this level of accuracy, we have had to improve the quality of the underlying N-body simulations used as training data: (i) we use self-consistent linear evolution of non-dark-matter species such as massive neutrinos, photons, dark energy, and the metric field; (ii) we perform the simulations in the so-called N-body gauge, which allows one to interpret the results in the framework of general relativity; (iii) we run over 250 high-resolution simulations with 3000^3 particles in boxes of (1 h^{-1} Gpc)^3 volume based on paired-and-fixed initial conditions; and (iv) we provide a resolution correction that can be applied to emulated results as a post-processing step in order to drastically reduce systematic biases on small scales due to residual resolution effects in the simulations. We find that the inclusion of the dynamical dark energy parameter wa significantly increases the complexity and expense of creating the emulator. The high fidelity of EuclidEmulator2 is tested in various comparisons against N-body simulations as well as alternative fast predictors such as HALOFIT, HMCode, and CosmicEmu. A blind test is successfully performed against the Euclid Flagship v2.0 simulation. Nonlinear correction factors emulated with EuclidEmulator2 are accurate at the level of $1{{\ \rm per\ cent}}$ or better for $0.01 \, h\, {\rm Mpc}^{-1}\le k \le 10\, h\, {\rm Mpc}^{-1}$ and z ≤ 3 compared to high-resolution dark-matter-only simulations. EuclidEmulator2 is publicly available at https://github.com/miknab/EuclidEmulator2.
The objective of this article is to calculate the price of weather derivatives, with payouts depending on temperature, for different African countries. A new approach for computing degree-day contracts is presented, which lends additional weight to the numerical relevance and practical implementation of the findings. Using historical data for each country, a continuous-time, mean-reverting stochastic process representing the evolution of the temperature is estimated. Focusing on the Monte Carlo simulation method, the price of each contract and the potential implications for several aspects of the threatened African economy are presented.
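The pricing recipe outlined above — a mean-reverting continuous-time temperature model, Monte Carlo simulation, and a degree-day payoff — can be sketched as follows. Every number here (mean-reversion speed, volatility, seasonal mean, the 18 °C base, the tick size, and the discount factor) is an illustrative placeholder, not a value fitted in the article.

```python
import numpy as np

rng = np.random.default_rng(42)

# Ornstein-Uhlenbeck-style daily temperature model (Euler step, dt = 1 day):
#   dT = kappa * (theta(t) - T) dt + sigma dW,   theta(t) = seasonal mean.
kappa, sigma = 0.25, 2.0
days = 90                                         # contract period, e.g. one winter quarter
t = np.arange(days)
theta = 12.0 + 8.0 * np.sin(2 * np.pi * (t - 30) / 365.0)

def simulate_paths(n_paths):
    """Simulate n_paths daily temperature paths over the contract period."""
    T = np.empty((n_paths, days))
    T[:, 0] = theta[0]
    for d in range(1, days):
        drift = kappa * (theta[d] - T[:, d - 1])
        T[:, d] = T[:, d - 1] + drift + sigma * rng.standard_normal(n_paths)
    return T

# Heating degree days (HDD) against an 18 C base, paid at `tick` currency
# units per degree day; price = discounted expected payoff.
base, tick, discount = 18.0, 20.0, 0.99
paths = simulate_paths(20_000)
hdd = np.maximum(base - paths, 0.0).sum(axis=1)
price = discount * tick * hdd.mean()
```

With fitted rather than illustrative parameters, the same loop prices cooling-degree-day contracts by swapping the payoff to max(T − base, 0), which is the kind of per-country calibration the article carries out.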