The existing radio frequency identification (RFID) localization methods typically regard the initial phase of the signal as constant. This is feasible when the antenna and the tag are stationary. However, in synthetic aperture radar (SAR) RFID localization, the relative motion of the reader antenna and the tag changes the antenna phase center and the tag orientation, resulting in a continuous change of the initial phase. This change causes a nonideal phase offset (NPO) that adversely affects positioning, yet it has been neglected in most previous SAR RFID studies. In this article, we investigate the antenna phase uncertainty and the tag orientation-dependent phase offset and analyze their influence on localization accuracy. To the best of our knowledge, this is the first time that the initial phase has been treated as a variable in SAR RFID localization. In addition, a novel motion model-based localization (MoLoc) method is proposed, which exploits the relative motion model and visualizes the influence of NPO more intuitively. Compared with conventional grid-matching methods, MoLoc achieves higher accuracy and much lower computation time. Both the influence of NPO and the performance of MoLoc are verified by experiments conducted in a real scenario; the experiments also evaluate the influence of NPO on the grid-matching method. The experimental results show that NPO has a significant impact on the accuracy of SAR-based localization methods.
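The core issue above can be sketched numerically. A minimal, illustrative model (not the paper's actual MoLoc implementation): the reader measures a round-trip phase of 4πd/λ plus an initial phase, and a conventional grid-matching cost fits a single constant offset across the aperture. The carrier frequency, track geometry, and cost function below are assumptions for illustration; under a drifting initial phase (NPO), the fitted constant offset no longer explains the residuals and the cost at the true tag position inflates.

```python
import math

C = 3e8          # speed of light (m/s)
FREQ = 920e6     # assumed UHF RFID carrier frequency
LAM = C / FREQ   # wavelength (m)

def expected_phase(ant_xy, tag_xy):
    """Round-trip phase for one antenna position, excluding the initial phase."""
    d = math.dist(ant_xy, tag_xy)
    return (4 * math.pi * d / LAM) % (2 * math.pi)

def grid_match_cost(measured, ant_track, tag_xy):
    """Cost under the CONSTANT-initial-phase assumption: fit one phase offset
    over the whole aperture, then score the wrapped residuals. If the true
    initial phase drifts along the track (NPO), this cost rises even at the
    correct tag position, degrading grid-matching accuracy."""
    diffs = [m - expected_phase(a, tag_xy) for m, a in zip(measured, ant_track)]
    # circular mean of the offsets, so the fit stays 2*pi-periodic
    s = sum(math.sin(d) for d in diffs)
    c = sum(math.cos(d) for d in diffs)
    offset = math.atan2(s, c)
    resid = [math.atan2(math.sin(d - offset), math.cos(d - offset)) for d in diffs]
    return sum(r * r for r in resid)
```

With a constant initial phase the cost at the true position is essentially zero; injecting a slow per-sample phase drift into the same measurements makes it clearly nonzero, which is the effect the abstract attributes to NPO.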
This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple-frequency localization method lies in future Wi-Fi standards (e.g., 802.11ah) and the growing number of wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the use of a single frequency. It continuously monitors signal propagation through space and adapts the model to changes indoors. Using multiple signal sources lowers the required number of access points for any specific signal type by exploiting signals already present indoors. Due to the unavailability of 802.11ah hardware, we evaluated the proposed method with similar signals: 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% compared to a 2.4 GHz Wi-Fi-only approach, and additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while offering several advantages for real-world use.
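The multi-frequency idea can be illustrated with a minimal sketch (not MFAM's actual implementation): given model-predicted signal strengths per grid cell for each signal type, combine the per-frequency log-likelihoods and pick the most likely cell, so every available radio contributes evidence. The Gaussian dB error model, the 4 dB shadowing standard deviation, and the dictionary layout are assumptions for illustration.

```python
import math

def rss_loglik(measured_dbm, predicted_dbm, sigma_db=4.0):
    """Gaussian received-signal-strength likelihood in dB; sigma_db is an
    assumed shadowing standard deviation."""
    z = (measured_dbm - predicted_dbm) / sigma_db
    return -0.5 * z * z

def fuse_position(grid, predictions, measurements):
    """Pick the grid cell maximizing the summed log-likelihood across ALL
    signal types (e.g., 2.4 GHz Wi-Fi plus an 868 MHz signal), rather than
    relying on a single radio alone."""
    best, best_ll = None, -math.inf
    for cell in grid:
        ll = sum(rss_loglik(measurements[sig], predictions[sig][cell])
                 for sig in measurements)
        if ll > best_ll:
            best, best_ll = cell, ll
    return best
```

A second frequency with independent shadowing sharpens the combined likelihood, which is one way to read the 18% accuracy gain the abstract reports over the Wi-Fi-only baseline.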
The ability to localize in the coordinate system of a 3D model presents an opportunity for safe trajectory planning. While SLAM-based approaches provide estimates of incremental poses with respect to the first camera frame, they do not provide global localization. With the availability of mobile GPUs such as the Nvidia TX1, our method provides a novel, elegant, and high-performance visual approach to model-based robot localization. We propose a method to learn an environment representation with deep residual networks for localization in a known 3D model representing a real-world area of 25,000 sq. meters. We use the power of modern GPUs and game engines to render training images that mimic a downward-looking, high-flying drone over a photorealistic 3D model. These images drive the training loop of a 50-layer deep neural network that learns to regress camera positions. We further apply data augmentation to accelerate training and to make the trained model robust for cross-domain generalization, which we verify experimentally. We test the trained model on synthetically generated data as well as real data captured from a downward-looking drone. Predicting a camera pose takes about 25 milliseconds of GPU processing. Unlike previous methods, the proposed method does not render at test time and predicts independently from the learned environment representation.
This paper presents a new robot-vision system architecture for real-time moving-object localization. The 6-DOF (3-translation and 3-rotation) motion of the objects is detected and tracked accurately in clutter using a model-based approach, without prior information about the objects' initial positions. An object identification task and an object tracking task are combined under this architecture, and the computational time lag between the two tasks is absorbed by a large amount of frame memory. The tasks are implemented as independent software modules using stereo-vision-based methods that can deal with objects of various shapes with edges, from planar to smooth-curved objects, in cluttered environments. The architecture also enables failure-recoverable object tracking, because the tracking processes are automatically recovered even if the moving objects are lost during tracking. Experimental results obtained with prototype systems demonstrate the effectiveness of the proposed architecture.
This work implements a hydrodynamic model-based localization and navigation system for low-cost autonomous underwater vehicles (AUVs) that are limited to a micro-electro-mechanical system (MEMS) inertial measurement unit (IMU). The hydrodynamic model of this work is uniquely developed to directly determine the linear velocities of the vehicle using the measured vehicle angular rates and propeller speed as inputs. The proposed system was tested in the field using a fleet of low-cost Bluefin SandShark AUVs. Implementation of the model-based localization system and fusion of its solution into the vehicle navigation loop were carried out on the AUV fleet's backseat computers running the Mission Oriented Operating Suite - Interval Programming (MOOS-IvP). With the model-based navigation system, the maximum localization error (in comparison to a long baseline (LBL) ground-truth position) was limited to 15 m and 30 m for two missions of 650 s and 1070 s, respectively. Extrapolation of the position drift shows that the model-based localization system can limit the position uncertainty to less than 100 m by the end of an hour-long mission, whereas the drift of the default IMU-based localization solution was over 1 km per hour. This is a considerable improvement using only a MEMS IMU, which generally costs less than $100. Furthermore, this work is a step towards generalizing and automating the process of hydrodynamic modeling, model parameter estimation, and data fusion (i.e., fusing the localization solution with those from other available aiding sensors and feeding the result to the navigation loop), so that a model-based localization system can be implemented on any AUV with backseat computing capability.
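The idea of replacing drift-prone IMU double integration with model-derived velocities can be sketched as a simple dead-reckoning loop. This is a minimal illustration, not the paper's fitted hydrodynamic model: the linear propeller-speed-to-surge mapping and the coefficient `K_PROP` are hypothetical stand-ins for the model the paper identifies from vehicle data.

```python
import math

# Hypothetical coefficient: in the actual system, linear velocities come from
# a fitted hydrodynamic model driven by propeller speed and angular rates.
K_PROP = 0.0011   # m/s per rpm (illustrative value)

def surge_speed(prop_rpm):
    """Steady-state surge speed predicted from propeller speed alone."""
    return K_PROP * prop_rpm

def dead_reckon(x0, y0, heading0, samples, dt):
    """Integrate model-predicted surge speed and gyro yaw rate into a
    position track, instead of double-integrating noisy MEMS accelerations."""
    x, y, psi = x0, y0, heading0
    track = [(x, y)]
    for prop_rpm, yaw_rate in samples:
        u = surge_speed(prop_rpm)          # velocity from the model
        psi += yaw_rate * dt               # heading from the gyro
        x += u * math.cos(psi) * dt
        y += u * math.sin(psi) * dt
        track.append((x, y))
    return track
```

Because the model supplies velocity directly, position error grows roughly linearly with time rather than quadratically, which is consistent with the bounded drift the abstract reports.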
This paper presents a strategic approach for localizing and recognizing vehicles in traffic scenes captured by a monocular camera or video. Previous studies on vehicle localization and recognition include model-based recognition, 3D triangle-based modeling, models based on wheel alignment, the Ferryman 29D PCA coefficient model, and others. The drawbacks of these proposals include affine transformation issues, redundant data, computational noise, inability to arrive at accurate shape parameters, poor occlusion detection, and excessive modeling. This paper addresses these issues and proposes a deformable, efficient local-gradient-based method for localizing the vehicle and an evolutionary fitness evaluation method with an estimation of distribution algorithm (EDA) for recognizing the exact vehicle model from the traffic scenes. Each image is projected into a 15-D parameter space (12D + 3D) in the image plane. Since the vehicle moves over the ground plane, the pose of the vehicle is determined by the position coefficients X, Y and the orientation Θ (3D); the remaining 12 parameters describe shape and are set up as prior information based on mined rules for vehicle localization, with a continuous EDA approach for vehicle recovery. The system also handles occlusion of related structures through stochastic analysis.
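The continuous EDA step can be sketched generically: sample candidate parameter vectors from a Gaussian, keep the fittest, refit the Gaussian, and repeat. This is a minimal textbook sketch, not the paper's implementation; the population sizes, the standard-deviation floor, and the toy quadratic fitness used below are assumptions, and a real system would plug in the model-to-image matching score over the full 15-D pose-plus-shape vector.

```python
import math
import random

def eda_recover_pose(fitness, dim=3, pop=60, elite=15, iters=60, seed=0):
    """Continuous estimation-of-distribution algorithm: sample a Gaussian,
    keep the fittest candidates, refit the mean/std per dimension, repeat.
    'fitness' (lower is better) stands in for the vehicle-model matching
    score; dim=3 covers the pose (X, Y, Theta), dim=15 would add shape."""
    rng = random.Random(seed)
    mean = [0.0] * dim
    std = [2.0] * dim
    for _ in range(iters):
        cands = [[rng.gauss(mean[d], std[d]) for d in range(dim)]
                 for _ in range(pop)]
        cands.sort(key=fitness)                     # best candidates first
        top = cands[:elite]
        for d in range(dim):
            mean[d] = sum(c[d] for c in top) / elite
            var = sum((c[d] - mean[d]) ** 2 for c in top) / elite
            std[d] = max(math.sqrt(var), 0.05)      # floor keeps exploring
    return mean
```

The standard-deviation floor is a common guard against premature convergence; without it the sampling distribution can collapse before reaching the best-matching pose.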