Context. In the last decade, astronomers have identified a new class of supernova, superluminous supernovae (SLSNe), named for their high peak luminosities and long light curves. The hydrogen-free events (SLSNe-I) can be seen to z ~ 4 and therefore offer the possibility of probing the distant Universe.
Aims. We aim to investigate the possibility of detecting SLSNe-I using ESA's Euclid satellite, scheduled for launch in 2020. In particular, we study the Euclid Deep Survey (EDS), which will provide a unique combination of area, depth, and cadence over the mission.
Methods. We estimated the redshift distribution of Euclid SLSNe-I using the latest information on their rates and spectral energy distributions, as well as known Euclid instrument and survey parameters, including the cadence and depth of the EDS. To estimate the uncertainties, we calculated the distribution with two different set-ups, optimistic and pessimistic, adopting different star-formation densities and rates. We also applied a standardization method to the peak magnitudes to create a simulated Hubble diagram and explore possible cosmological constraints.
Results. We show that Euclid should detect approximately 140 high-quality SLSNe-I to z ~ 3.5 over the first five years of the mission (with an additional 70 if we relax our photometric classification criteria). This sample could revolutionize the study of SLSNe-I at z > 1 and open up their use as probes of star-formation rates, galaxy populations, and the interstellar and intergalactic medium. In addition, such a sample could improve constraints on a time-dependent dark energy equation of state, w(a), when combined with local SLSNe-I and the expected SN Ia sample from the Dark Energy Survey.
Conclusions. We show that Euclid will observe hundreds of SLSNe-I for free. These luminous transients will be in the Euclid data stream, and we should prepare now to identify them, as they offer a new probe of the high-redshift Universe for both astrophysics and cosmology.
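As a rough sanity check on the detectability claims above, one can combine a flat-ΛCDM distance modulus with an assumed SLSN-I peak magnitude and survey depth. The peak magnitude (-21.7 AB), depth (26 AB), and cosmological parameters below are illustrative assumptions, not values taken from the paper, and K-corrections and light-curve sampling are ignored:

```python
import math

def lum_distance_mpc(z, h0=70.0, om=0.3, n=2000):
    """Luminosity distance in Mpc for flat LambdaCDM (trapezoid integration)."""
    c = 299792.458  # speed of light, km/s
    f = lambda zz: 1.0 / math.sqrt(om * (1 + zz) ** 3 + (1 - om))
    dz = z / n
    integral = 0.5 * (f(0.0) + f(z)) * dz + sum(f(i * dz) for i in range(1, n)) * dz
    return (1 + z) * (c / h0) * integral

def peak_apparent_mag(m_abs, z):
    """Apparent peak magnitude from the distance modulus (no K-correction)."""
    dl_pc = lum_distance_mpc(z) * 1e6  # Mpc -> pc
    return m_abs + 5 * math.log10(dl_pc / 10.0)

# Assumed numbers: SLSN-I peak M ~ -21.7 AB, deep-survey depth ~26 AB.
for z in (1.0, 2.0, 3.5):
    m = peak_apparent_mag(-21.7, z)
    print(f"z={z}: m_peak = {m:.1f} ({'detectable' if m < 26.0 else 'too faint'})")
```

Even this crude estimate shows the peak staying brighter than a ~26 AB depth out to z ~ 3.5, consistent with the redshift reach quoted in the abstract.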
Context. Future weak lensing surveys, such as the Euclid mission, will attempt to measure the shapes of billions of galaxies in order to derive cosmological information. These surveys will attain very low levels of statistical error, and systematic errors must be extremely well controlled. In particular, the point spread function (PSF) must be estimated using stars in the field, and recovered with high accuracy.
Aims. The aims of this paper are twofold. Firstly, we took steps toward a nonparametric method to address the issue of recovering the PSF field, namely that of finding the correct PSF at the position of any galaxy in the field, applicable to Euclid. Our approach relies solely on the data, as opposed to parametric methods that make use of our knowledge of the instrument. Secondly, we studied the impact of imperfect PSF models on the shape measurement of galaxies themselves, and whether common assumptions about this impact hold true in a Euclid scenario.
Methods. We extended the recently proposed resolved components analysis approach, which performs super-resolution on a field of under-sampled observations of a spatially varying, image-valued function. We added a spatial interpolation component to the method, making it a true two-dimensional PSF model. We compared our approach to PSFEx, then quantified the impact of PSF recovery errors on galaxy shape measurements through image simulations.
Results. Our approach yields an improvement over PSFEx in terms of the PSF model and the observed galaxy shape errors, though it is at present far from reaching the required Euclid accuracy. We also find that the usual formalism used for the propagation of PSF model errors to weak lensing quantities no longer holds in the case of a Euclid-like PSF. In particular, different shape measurement approaches can react differently to the same PSF modeling errors.
The papers in this special section focus on multifaceted driver–vehicle systems. The current panorama of driver–vehicle systems is rapidly evolving as the path toward autonomous vehicles is being paved. Many design decisions are yet to be made, and evolving legislative policies will play a significant role in the scientific research that needs to be addressed. For example, some countries are considering a "step change" from fully human-driven to fully automated vehicles, while others are planning a period of coexistence of the two driving modes. Either approach will require deep study of the interactions that will develop between drivers and vehicles. Relatedly, for the first time in history, automated systems will primarily control vehicles, and drivers may become passive "spectators" of vehicle control. However, it is also reasonable to expect that humans will be asked to intervene in ambiguous and/or dangerous situations that automation alone cannot manage. Such interventions may also serve as a basis for "robotic drivers" to learn human-based reasoning and ethics in vehicle control. Some researchers have already begun to ask how long "former" drivers will be able to offer prompt and wise reactions to off-nominal events, as their capabilities decline with automation dependence and loss of practice. It is easy to foresee that the next decade will be crucial for scientific research in this field. Scientists will be asked to provide prompt and reliable responses to whatever final human–automation interaction scenario develops for driving activities. To address this need, human–machine systems competencies will be of the utmost importance.
In the context outlined above, and in preparation for autonomous vehicles driving around the world, the issues that need to be tackled include the design of more accurate and reliable driving-simulation models for coexistence tests, in which autonomous vehicles are simulated together with human-driven ones. Such technology is needed to reproduce the interaction schemes that will develop on roadways and to replicate the specific cues to which drivers may be exposed, for the assessment of the resulting behaviors. These tools are expected to provide more efficient methods for estimating different aspects of driver behavior.
The prediction of lane changes has been proven useful for collision-avoidance support in road vehicles. This paper proposes an interactive multiple model (IMM)-based method for predicting lane changes on highways. The sensor unit consists of a set of low-cost Global Positioning System/inertial measurement unit (GPS/IMU) sensors and an odometry sensor for collecting velocity measurements. Extended Kalman filters (EKFs) running in parallel and integrated by an IMM-based algorithm provide positioning and maneuver predictions to the user. The maneuver states Change Lane (CL) and Keep Lane (KL) are defined by two models that describe different dynamics. Different model sets were studied to meet the needs of the IMM-based algorithm. Real trials in highway scenarios show the capability of the system to predict lane changes on straight and curved road stretches with very short latency times.
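The core idea of an IMM predictor can be illustrated with a deliberately stripped-down sketch: two filters with different lateral dynamics run in parallel, and a Markov mixing step turns their measurement likelihoods into mode probabilities. This is not the paper's implementation: the state is reduced to a scalar lateral offset, the full per-model state-mixing step of an IMM is omitted, and all noise values are invented for illustration:

```python
import math

def gauss(x, mean, var):
    """Gaussian density, used as a measurement likelihood."""
    return math.exp(-0.5 * (x - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

class ScalarKF:
    """1-D Kalman filter on lateral offset; `drift` is the per-step lateral motion."""
    def __init__(self, drift, q, r):
        self.drift, self.q, self.r = drift, q, r
        self.x, self.p = 0.0, 1.0
    def step(self, z):
        x_pred, p_pred = self.x + self.drift, self.p + self.q   # predict
        s = p_pred + self.r                                     # innovation variance
        k = p_pred / s                                          # Kalman gain
        self.x = x_pred + k * (z - x_pred)                      # update
        self.p = (1 - k) * p_pred
        return gauss(z, x_pred, s)  # likelihood of the measurement

class IMM2:
    """Two-model mode estimator (Keep Lane vs Change Lane), probability-only mixing."""
    def __init__(self, kf_kl, kf_cl, p_stay=0.95):
        self.models = [kf_kl, kf_cl]
        self.mu = [0.5, 0.5]                        # mode probabilities
        self.trans = [[p_stay, 1 - p_stay],
                      [1 - p_stay, p_stay]]         # Markov transition matrix
    def step(self, z):
        c = [sum(self.trans[i][j] * self.mu[i] for i in range(2)) for j in range(2)]
        lik = [m.step(z) for m in self.models]
        w = [c[j] * lik[j] for j in range(2)]
        tot = sum(w)
        self.mu = [wj / tot for wj in w]
        return self.mu

# Hypothetical data: lateral offset drifting 0.2 m per step (a lane change).
imm = IMM2(ScalarKF(drift=0.0, q=0.01, r=0.04),    # KL: no lateral motion
           ScalarKF(drift=0.2, q=0.01, r=0.04))    # CL: steady lateral drift
for t in range(10):
    mu_kl, mu_cl = imm.step(0.2 * t)
print("P(Change Lane) after sustained drift:", round(mu_cl, 2))
```

As the simulated offset drifts laterally, the CL model's likelihood dominates and its mode probability rises, which is exactly the signal used to announce a lane-change prediction.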
Nowadays, road vehicle navigation systems commonly employ maps to represent vehicle positions in a local reference frame. The most usual process is to estimate the vehicle position by fusing Global Navigation Satellite System (GNSS) data with other aiding sensors, and then to project these values onto the map by applying map-matching techniques. However, it is also possible to benefit from map information during the data-fusion process itself. This paper presents an algorithm for lane-level road vehicle navigation that integrates GNSS, dead reckoning (odometry and gyro), and map data in the fusion process. Additionally, the proposed method brings benefits for map matching at the lane level because, on the one hand, it allows the tracking of multiple hypotheses and, on the other hand, it provides probability values of lane occupancy for each candidate segment. To do this, a new paradigm that describes lanes as piecewise sets of clothoids was applied in the elaboration of an enhanced map (Emap). Experimental results in real, complex scenarios with multiple lanes show the suitability of the proposed algorithm for the problem under consideration, with better results than some state-of-the-art methods from the literature.
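The clothoid lane description mentioned above can be sketched simply: a clothoid is a curve whose curvature varies linearly with arc length, so a lane segment is fully specified by a start pose, an initial curvature, a curvature rate, and a length. The sampling routine below is a generic numerical sketch (midpoint integration of the heading), not the paper's Emap format, and all numbers are illustrative:

```python
import math

def clothoid_points(x0, y0, theta0, kappa0, kappa_rate, length, n=100):
    """Sample points along a clothoid: curvature varies linearly with arc
    length, kappa(s) = kappa0 + kappa_rate * s, so the heading is
    theta(s) = theta0 + kappa0*s + 0.5*kappa_rate*s^2."""
    pts = [(x0, y0)]
    ds = length / n
    x, y = x0, y0
    for i in range(n):
        s_mid = (i + 0.5) * ds
        theta_mid = theta0 + kappa0 * s_mid + 0.5 * kappa_rate * s_mid ** 2
        x += ds * math.cos(theta_mid)   # midpoint rule on the heading
        y += ds * math.sin(theta_mid)
        pts.append((x, y))
    return pts

# A straight segment easing into a curve: kappa goes 0 -> 0.01 1/m over 100 m.
lane = clothoid_points(0.0, 0.0, 0.0, 0.0, 1e-4, 100.0)
print(lane[-1])  # end point of the 100 m segment
```

Because curvature is a linear function of arc length, a handful of parameters per segment reproduces the smooth curvature transitions of real road geometry, which is what makes the representation attractive for lane-level maps.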
The Adaptive Optics Lucky Imager (AOLI) is a state-of-the-art instrument that combines adaptive optics (AO) and lucky imaging (LI) with the objective of obtaining diffraction-limited images at visible wavelengths on mid- and large-sized ground-based telescopes. The key innovation of AOLI is the development and use of the new Two Pupil Plane Positions Wavefront Sensor (TP3-WFS). The TP3-WFS, working in the visible band, represents an advance over classical wavefront sensors such as the Shack–Hartmann WFS because it can theoretically use fainter natural reference stars, which would ultimately provide better sky coverage to AO instruments using this newer sensor. This paper describes the software, algorithms, and procedures that enabled AOLI to become the first astronomical instrument to perform real-time AO corrections at a telescope with this new type of WFS, including the first control-related results at the William Herschel Telescope.
Lane-level positioning and map matching are among the biggest challenges for navigation systems. Additionally, in safety applications or in those with critical performance requirements (such as satellite-based electronic fee collection), integrity becomes a key word for the navigation community. In this scenario, a navigation system that can operate at the lane level while providing integrity parameters capable of monitoring the quality of the solution can bring important benefits to these applications. This paper presents a novel solution to the problem of combined positioning and map matching with integrity provision at the lane level. The system under consideration hybridizes measurements from a global navigation satellite system (GNSS) receiver, an odometer, and a gyroscope, along with the road information stored in enhanced digital maps, by means of a multiple-hypothesis particle-filter-based algorithm. A set of experiments in real environments in France and Germany shows very good results in terms of positioning, map matching, and integrity consistency, proving the feasibility of our proposal.
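The multiple-hypothesis idea can be illustrated with a toy particle filter: each particle carries a discrete lane hypothesis plus a lateral position, and the per-lane occupancy probability falls out as the summed normalized weight. The map (three lanes 3.5 m apart), the noise levels, and the one-dimensional measurement model below are invented for illustration and are far simpler than the GNSS/odometer/gyro fusion of the actual system:

```python
import math, random

random.seed(42)
LANE_CENTERS = {0: 0.0, 1: 3.5, 2: 7.0}   # hypothetical 3-lane road (metres)

def likelihood(y_meas, y_part, sigma=1.5):
    """Unnormalized Gaussian weight for a lateral GNSS measurement."""
    return math.exp(-0.5 * ((y_meas - y_part) / sigma) ** 2)

def step(particles, y_meas, motion_sigma=0.2):
    # propagate laterally with noise, keeping each particle's lane hypothesis
    moved = [(lane, y + random.gauss(0.0, motion_sigma)) for lane, y in particles]
    # weight each particle by the measurement
    weights = [likelihood(y_meas, y) for _, y in moved]
    total = sum(weights)
    # per-lane occupancy probabilities: the multiple-hypothesis output
    probs = {lane: 0.0 for lane in LANE_CENTERS}
    for (lane, _), w in zip(moved, weights):
        probs[lane] += w / total
    # resample by weighted choice to concentrate on plausible hypotheses
    particles[:] = random.choices(moved, weights=weights, k=len(particles))
    return probs

# initialise particles spread over all lane hypotheses
parts = [(lane, random.gauss(c, 0.5)) for lane, c in LANE_CENTERS.items()
         for _ in range(100)]
for y_gnss in [3.2, 3.6, 3.4, 3.5]:        # noisy measurements near lane 1
    probs = step(parts, y_gnss)
print({k: round(v, 2) for k, v in probs.items()})
```

After a few measurements near the middle lane, almost all probability mass concentrates on that lane hypothesis, while the residual mass on the other lanes is exactly the kind of quantity an integrity monitor can threshold.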
Road traffic collisions are a major problem in developed societies. This paper presents a solution to support collision avoidance based on the timely detection of vehicle maneuvers. Since the longitudinal interaction among vehicles, with the commonly known car-following behavior, is one of the most important causes of crashes, we focus on longitudinal maneuvers, identifying the maneuvering states of cruise, acceleration or deceleration, and stop. The classification is carried out by means of fuzzy rules extracted from navigational data. Therefore, no extra sensors are needed in our proposal beyond two commonly installed for navigation purposes: the vehicle's odometry and an accelerometer. The system was tested with low-cost sensors, showing good results compared to the literature in the field.
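The rule-based classification can be illustrated with a minimal fuzzy sketch: triangular membership functions over speed and longitudinal acceleration, one rule per maneuver state, min as the AND operator, and a max defuzzification into a crisp label. The membership breakpoints below are illustrative guesses, not the rules learned in the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function, zero outside (a, c), peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify(speed, accel):
    """Fuzzy maneuver classification from odometry speed (m/s) and
    longitudinal acceleration (m/s^2). Thresholds are illustrative."""
    # memberships for the linguistic terms
    speed_zero = tri(speed, -1.0, 0.0, 1.0)
    speed_moving = 1.0 - speed_zero
    acc_neg = tri(accel, -3.0, -1.5, -0.2)
    acc_zero = tri(accel, -0.5, 0.0, 0.5)
    acc_pos = tri(accel, 0.2, 1.5, 3.0)
    # one rule per maneuver state, min as the fuzzy AND
    scores = {
        "stop": speed_zero,
        "cruise": min(speed_moving, acc_zero),
        "accelerating": min(speed_moving, acc_pos),
        "decelerating": min(speed_moving, acc_neg),
    }
    return max(scores, key=scores.get)

print(classify(25.0, 0.1))   # steady highway speed -> cruise
print(classify(25.0, -2.0))  # braking -> decelerating
print(classify(0.2, 0.0))    # standstill -> stop
```

The overlapping memberships give graded transitions between states, which is what makes the fuzzy approach robust to the noise of low-cost sensors.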
Euclid preparation. Adam, R.; Vannier, M.; Maurogordato, S.; ...
Astronomy and Astrophysics (Berlin), 07/2019, Vol. 627. Journal article, peer-reviewed, open access.
Galaxy cluster counts in bins of mass and redshift have been shown to be a competitive probe for testing cosmological models. This method requires efficient blind detection of clusters from surveys with a well-known selection function and robust mass estimates, which is particularly challenging at high redshift. The Euclid wide survey will cover 15 000 deg² of the sky, avoiding contamination by light from our Galaxy and our Solar System, in the optical and near-infrared bands, down to magnitude 24 in the H-band. The resulting data will make it possible to detect a large number of galaxy clusters spanning a wide range of masses up to redshift ∼2, and possibly higher. This paper presents the final results of the Euclid Cluster Finder Challenge (CFC), the fourth in a series of similar challenges. The objective of these challenges was to select the cluster detection algorithms that best meet the requirements of the Euclid mission. The final CFC included six independent detection algorithms based on different techniques, such as photometric redshift tomography, optimal filtering, a hierarchical approach, wavelets, and friends-of-friends algorithms. These algorithms were blindly applied to a mock galaxy catalog with representative Euclid-like properties. The relative performance of the algorithms was assessed by matching the resulting detections to known clusters in the simulations down to masses of M_200 ∼ 10^13.25 M_⊙. Several matching procedures were tested, making it possible to estimate the associated systematic effects on completeness to < 3%. All the tested algorithms are very competitive in terms of performance, with three of them reaching > 80% completeness for a mean purity of 80% down to masses of 10^14 M_⊙ and up to redshift z = 2. Based on these results, two algorithms were selected for implementation in the Euclid pipeline: the Adaptive Matched Identifier of Clustered Objects (AMICO) code, based on matched filtering, and the PZWav code, based on an adaptive wavelet approach.
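The completeness and purity figures quoted above come from matching detections against mock clusters; the mechanics can be sketched with a toy greedy positional match (the actual challenge tested several, more careful, matching procedures):

```python
def completeness_purity(detections, truth, tol=0.5):
    """Greedily match detections to mock clusters within a positional
    tolerance; completeness = fraction of true clusters matched,
    purity = fraction of detections matched. Positions are (x, y) tuples."""
    matched_truth, matched_det = set(), set()
    for i, (dx, dy) in enumerate(detections):
        for j, (tx, ty) in enumerate(truth):
            if j in matched_truth:
                continue  # each true cluster may be matched at most once
            if (dx - tx) ** 2 + (dy - ty) ** 2 <= tol ** 2:
                matched_truth.add(j)
                matched_det.add(i)
                break
    completeness = len(matched_truth) / len(truth)
    purity = len(matched_det) / len(detections)
    return completeness, purity

# Toy mock: 4 true clusters; 5 detections, one cluster missed, two spurious.
truth = [(0, 0), (1, 1), (2, 2), (3, 3)]
dets = [(0.1, 0.0), (1.0, 1.1), (2.1, 1.9), (9, 9), (5, 5)]
c, p = completeness_purity(dets, truth)
print(c, p)  # -> 0.75 0.6
```

Changing the tolerance (or matching in mass and redshift as well as position) shifts both numbers, which is why the challenge had to quantify the systematic effect of the matching procedure itself.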
Weak lensing, the deflection of light by matter along the line of sight, has proven to be an efficient method for constraining models of structure formation and revealing the nature of dark energy. So far, most weak-lensing studies have focused on the shear field, which can be measured directly from the ellipticities of background galaxies. However, within the context of forthcoming full-sky weak-lensing surveys such as Euclid, convergence maps (mass maps) offer an important advantage over shear fields in terms of cosmological exploitation. While they carry the same information, the lensing signal is more compressed in the convergence maps than in the shear field. This simplifies otherwise computationally expensive analyses, for instance non-Gaussianity studies. However, the inversion of the non-local shear field requires accurate control of systematic effects caused by holes in the data field, field borders, shape noise, and the fact that the shear is not a direct observable (reduced shear). We present the two mass-inversion methods included in the official Euclid data-processing pipeline: the standard Kaiser & Squires (KS) method, and a new mass-inversion method (KS+) that aims to reduce the information loss during the mass inversion. This new method is based on the KS method and includes corrections for mass-mapping systematic effects. The results of the KS+ method are compared to the original implementation of the KS method in its simplest form, using the Euclid Flagship mock galaxy catalogue. In particular, we estimate the quality of the reconstruction by comparing the two-point correlation functions and third- and fourth-order moments obtained from shear and convergence maps, and we analyse each systematic effect independently and simultaneously. We show that the KS+ method substantially reduces the errors on the two-point correlation function and moments compared to the KS method. In particular, the errors introduced by the mass inversion on the two-point correlation of the convergence maps are reduced by a factor of about 5, while the errors on the third- and fourth-order moments are reduced by factors of about 2 and 10, respectively.