In disordered media, quantum interference effects are expected to induce complete suppression of electron conduction. The phenomenon, known as Anderson localization, has a counterpart with classical waves that has been observed in acoustics, electromagnetism and optics, but a direct observation for particles remains elusive. Here, we report the observation of the three-dimensional localization of ultracold atoms in a disordered potential created by a speckle laser field. A phenomenological analysis of our data distinguishes a localized component of the resulting density profile from a diffusive component. The observed localization cannot be interpreted as the classical trapping of particles with energy below the classical percolation threshold in the disorder, nor can it be understood as quantum trapping in local potential minima. Instead, our data are compatible with the self-consistent theory of Anderson localization tailored to our system, involving a heuristic energy shift that offers scope for future interpretation.
Robust and accurate visual-inertial estimation is crucial to many of today's challenges in robotics. Being able to localize against a prior map and obtain accurate and drift-free pose estimates can push the applicability of such systems even further. Most of the currently available solutions, however, either focus on a single-session use case, lack localization capabilities, or do not provide an end-to-end pipeline. We believe that only a complete system, combining state-of-the-art algorithms, scalable multisession mapping tools, and a flexible user interface, can become an efficient research platform. We, therefore, present maplab, an open, research-oriented visual-inertial mapping framework for processing and manipulating multisession maps, written in C++. On the one hand, maplab can be seen as a ready-to-use visual-inertial mapping and localization system. On the other hand, maplab provides the research community with a collection of multisession mapping tools that include map merging, visual-inertial batch optimization, and loop closure. Furthermore, it includes an online frontend that can create visual-inertial maps and also track a global drift-free pose within a localization map. In this letter, we present the system architecture, five use cases, and evaluations of the system on public datasets. The source code of maplab is freely available for the benefit of the robotics research community.
This study proposes a 3D global localization method that implements mobile LiDAR mapping and point cloud registration to recognize the locations of objects in an underground mine. An initial global point cloud map was built for an entire underground mine area using mobile LiDAR; a local LiDAR scan (local point cloud) was generated at the point where underground positioning was required. We calculated fast point feature histogram (FPFH) descriptors for the global and local point clouds to extract point features. The match areas between the global and the local point clouds were searched and aligned using random sample consensus (RANSAC) and iterative closest point (ICP) registration. The object's location on the global coordinate system was measured using the LiDAR sensor trajectory. Field experiments were performed at the Gwan-in underground mine using three mobile LiDAR systems. The local point cloud dataset formed for the six areas of the underground mine precisely matched the global point cloud, with a low average error of approximately 0.13 m, regardless of the type of mobile LiDAR system used. In addition, the LiDAR sensor trajectory was aligned on the global coordinate system to confirm the change in the dynamic object's position over time.
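The final alignment step described above (ICP after a coarse FPFH/RANSAC match) can be sketched as follows. This is a minimal point-to-point ICP in NumPy, not the study's implementation: the coarse registration stage is omitted, correspondences come from a k-d tree nearest-neighbour search, and each iteration solves the rigid alignment in closed form via SVD (Kabsch).

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(local_pts, global_pts, iters=30):
    """Minimal point-to-point ICP: align local_pts onto global_pts.
    Returns rotation R (3x3) and translation t (3,) such that
    R @ p + t maps a local point p into the global frame."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(global_pts)
    src = local_pts.copy()
    for _ in range(iters):
        # nearest-neighbour correspondences in the global map
        _, idx = tree.query(src)
        dst = global_pts[idx]
        # closed-form rigid alignment of matched pairs (Kabsch/SVD)
        mu_s, mu_d = src.mean(0), dst.mean(0)
        H = (src - mu_s).T @ (dst - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mu_d - R_step @ mu_s
        src = src @ R_step.T + t_step
        # accumulate the incremental transform into the total one
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```

In practice the coarse FPFH/RANSAC stage matters: ICP alone only converges when the initial misalignment is small, which is why the study runs it after the feature-based global match.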
Wi-Fi fingerprint-based localization has attracted significant research interest recently. Previous works in this area mainly focus on locating an individual user, whereas the additional assistance from peer-to-peer interactions has not been fully exploited. In this paper, we propose a cooperative localization method which not only utilizes the initial results by the fingerprint-based algorithm but also takes into account the physical constraint of pairwise distances to refine the localization estimates for multiple users simultaneously. The experimental results demonstrate that our algorithm is robust against the ranging error and the outdated fingerprint database. With the proposed peer selection scheme, it considerably improves localization accuracy. We further extend our framework to single-user motion tracking and localization based only on access-point-connectivity data.
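The core idea — jointly refining several users' fingerprint estimates under pairwise-distance constraints — can be illustrated with a toy least-squares formulation. This is a hypothetical simplification, not the paper's actual objective or solver: it penalizes deviation from the fingerprint estimates plus (weighted by `lam`) the mismatch between inter-user distances and the measured ranges.

```python
import numpy as np
from scipy.optimize import minimize

def refine_positions(initial, pair_dists, lam=5.0):
    """Jointly refine 2D position estimates for N users.
    initial:    (N, 2) fingerprint-based position estimates
    pair_dists: dict {(i, j): measured distance between users i and j}
    lam:        weight of the pairwise-range term (illustrative value)"""
    N = initial.shape[0]

    def cost(flat):
        X = flat.reshape(N, 2)
        # stay close to the per-user fingerprint estimates ...
        c = np.sum((X - initial) ** 2)
        # ... while honouring the measured inter-user distances
        for (i, j), d in pair_dists.items():
            c += lam * (np.linalg.norm(X[i] - X[j]) - d) ** 2
        return c

    res = minimize(cost, initial.ravel(), method="L-BFGS-B")
    return res.x.reshape(N, 2)
```

The anchor term is what keeps the problem well-posed: pairwise distances alone are invariant to translation and rotation, so without it the refined configuration could drift as a whole.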
Supervised learning-based methods for source localization, being data driven, can be adapted to different acoustic conditions via training and have been shown to be robust to adverse acoustic environments. In this paper, a convolutional neural network (CNN) based supervised learning method for estimating the direction of arrival (DOA) of multiple speakers is proposed. Multi-speaker DOA estimation is formulated as a multi-class multi-label classification problem, where the assignment of each DOA label to the input feature is treated as a separate binary classification problem. The phase component of the short-time Fourier transform (STFT) coefficients of the received microphone signals is directly fed into the CNN, and the features for DOA estimation are learnt during training. Utilizing the assumption of disjoint speaker activity in the STFT domain, a novel method is proposed to train the CNN with synthesized noise signals. Through experimental evaluation with both simulated and measured acoustic impulse responses, the ability of the proposed DOA estimation approach to adapt to unseen acoustic conditions and its robustness to unseen noise types is demonstrated. Through additional empirical investigation, it is also shown that with an array of M microphones our proposed framework yields the best localization performance with M-1 convolutional layers. The ability of the proposed method to accurately localize speakers in a dynamic acoustic scenario with varying numbers of sources is also shown.
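The input representation — feeding the raw STFT phase of all microphone channels to the CNN — can be sketched as below. This is an illustrative feature-extraction step only (the network itself is omitted), and the STFT parameters are assumptions rather than the paper's settings; it produces, for each STFT frame, an M x F phase map that would form one CNN input.

```python
import numpy as np
from scipy.signal import stft

def phase_feature(mics, fs=16000, nfft=256):
    """Build per-frame phase-map features from an M-channel recording.
    mics: (M, T) array of time-aligned microphone signals.
    Returns an (n_frames, M, F) array, F = nfft // 2 + 1."""
    # STFT along the time axis of every channel: Z has shape (M, F, N)
    _, _, Z = stft(mics, fs=fs, nperseg=nfft)
    phase = np.angle(Z)  # keep only the phase, discard the magnitude
    # rearrange so each STFT frame yields one (M, F) phase map
    return np.transpose(phase, (2, 0, 1))
```

Discarding the magnitude is the point of the design: inter-channel phase differences encode the time delays of arrival across the array, which is exactly the geometric cue a DOA estimator needs, while magnitudes mostly carry speaker- and noise-dependent information.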
We present BioSLAM, a lifelong SLAM (simultaneous localization and mapping) framework for learning various new appearances incrementally and maintaining accurate place recognition for previously visited areas. Unlike humans, artificial neural networks suffer from catastrophic forgetting and may forget previously visited areas when trained with new arrivals. For humans, researchers have discovered a memory-replay mechanism in the brain that keeps neurons active for previous events. Inspired by this discovery, BioSLAM designs a gated generative replay to control the robot's learning behavior based on feedback rewards. Specifically, BioSLAM provides a novel dual-memory mechanism that maintains: 1) a dynamic memory to efficiently learn new observations; and 2) a static memory to balance new and old knowledge. When the agent encounters different appearances under new domains, the complete processing pipeline can incrementally update the place recognition ability, remaining robust to the increasing complexity of long-term place recognition. We demonstrate BioSLAM in three incremental SLAM scenarios: 1) a 120 km city-scale trajectory with LiDAR-based inputs; 2) a repeatedly visited 4.5 km campus-scale trajectory with LiDAR-vision inputs; and 3) the official Oxford dataset with 10 km of visual inputs under different environmental conditions. We show that BioSLAM can incrementally update the agent's place recognition ability and outperform the state-of-the-art incremental approach, generative replay, by 24% in terms of place recognition accuracy. To the best of our knowledge, BioSLAM is the first memory-enhanced lifelong SLAM system to support incremental place recognition in long-term navigation tasks.
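The dual-memory idea — a small buffer of recent observations alongside a bounded store that preserves a balanced sample of old knowledge — can be illustrated with a toy data structure. This is a hypothetical sketch of the concept, not BioSLAM's actual implementation (which uses gated *generative* replay rather than a raw sample buffer); here the static memory is kept representative of all history via reservoir sampling.

```python
import random
import collections

class DualMemory:
    """Toy dual-memory buffer: a dynamic FIFO for recent observations
    and a static reservoir-sampled store of the whole history."""

    def __init__(self, dynamic_size=32, static_size=128, seed=0):
        self.dynamic = collections.deque(maxlen=dynamic_size)
        self.static = []
        self.static_size = static_size
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, obs):
        self.dynamic.append(obs)          # always learn from new data
        self.seen += 1
        if len(self.static) < self.static_size:
            self.static.append(obs)       # fill the static store first
        else:
            # reservoir sampling: keep a uniform sample of all history
            j = self.rng.randrange(self.seen)
            if j < self.static_size:
                self.static[j] = obs

    def replay_batch(self, k=8):
        # mix recent and historical samples to balance plasticity
        # (learning the new) against stability (retaining the old)
        pool = list(self.dynamic) + self.static
        return self.rng.sample(pool, min(k, len(pool)))
```

Replaying such mixed batches during training is the standard way to counter the catastrophic forgetting the abstract describes; BioSLAM's contribution is gating *what* gets replayed based on feedback rewards.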
Several works have been carried out in the realm of RGB-D SLAM development, yet they have neither been thoroughly assessed nor adapted for outdoor vehicular contexts. This paper proposes an extension of HOOFR SLAM to an enhanced IR-D modality applied to an autonomous vehicle in an outdoor environment. We address the most prevalent camera issues in outdoor contexts: environments with an image-dominant overcast sky and the presence of dynamic objects. We used a depth-based filtering method to identify outlier points based on their depth value. The method is robust against outliers and also computationally inexpensive. For faster processing, we suggest optimizing the pose estimation block by replacing the RANSAC method used for essential matrix estimation with PROSAC. We assessed the algorithm using a self-collected IR-D dataset gathered by the SATIE laboratory's instrumented vehicle, on both a PC and an embedded architecture. We compared the measurement results to those of the most advanced algorithms by assessing translational error and average processing time. The results revealed a significant reduction in localization errors and a significant gain in processing speed compared to the state-of-the-art stereo (HOOFR SLAM) and RGB-D algorithms (ORB-SLAM2, RTAB-Map).
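The depth-based outlier filter described above can be sketched in a few lines. This is an illustrative stand-in, not the paper's method: it simply discards feature points whose depth falls outside a plausible sensing range, with hypothetical thresholds (far points on an overcast sky have unreliable or invalid depth, and implausibly near points are typically sensor noise).

```python
import numpy as np

def depth_filter(points, depths, d_min=0.5, d_max=40.0):
    """Keep only feature points with a plausible depth value.
    points: (N, 2) image coordinates of detected features
    depths: (N,) depth of each feature in metres
    d_min, d_max: illustrative depth bounds, not the paper's values"""
    mask = (depths >= d_min) & (depths <= d_max)
    return points[mask], depths[mask]
```

A pure threshold test like this is cheap (one vectorized pass over the features), which is consistent with the abstract's claim that the filtering adds negligible computational cost.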