•Interval rough number is introduced to deal with the vagueness in decision-making.
•A novel multi-criteria model based on interval rough numbers is proposed.
•Multi-criteria techniques were compared based on interval rough and fuzzy approaches.
This paper presents a new approach to the treatment of uncertainty based on interval-valued fuzzy-rough numbers (IVFRN). It is shown that by integrating the rough approach with the traditional fuzzy approach, the subjectivity that arises when defining the borders of fuzzy sets is eliminated. IVFRNs make decision-making possible using only the internal knowledge contained in the operative data available to the decision makers. In this way, objective uncertainties are exploited and there is no need to rely on assumption-based models: instead of external parameters, the structure of the given data itself is used. On this basis, an original multi-criteria model based on the IVFRN approach was developed, in which the traditional steps of the BWM (Best–Worst Method) and MABAC (Multi-Attributive Border Approximation area Comparison) methods are modified. The model was tested and validated in a study of the optimal selection of firefighting helicopters. Testing demonstrated that the IVFRN-based model enabled more objective expert evaluation of the criteria than traditional fuzzy and rough approaches. A sensitivity analysis of the IVFRN BWM-MABAC model was carried out over 57 scenarios, the results of which showed a high degree of stability. The results of the IVFRN model were further validated by comparison with the fuzzy and rough extensions of the MABAC, COPRAS and VIKOR models.
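The rough-boundary idea underlying rough and fuzzy-rough numbers can be illustrated on crisp expert judgments: each rating is replaced by an interval formed from the mean of the ratings below it and the mean of the ratings above it, so the interval width reflects disagreement in the data itself rather than externally assumed membership borders. The sketch below shows only this generic rough-number step, not the paper's full interval-valued fuzzy-rough construction; the function name is illustrative.

```python
# Sketch: rough-boundary computation from crisp expert judgments.
# For each judgment x, the lower limit is the mean of all ratings <= x
# and the upper limit the mean of all ratings >= x; the interval
# [lower, upper] replaces the crisp value x.

def rough_bounds(ratings):
    """Return a list of (lower, upper) rough intervals, one per rating."""
    bounds = []
    for x in ratings:
        lower = [r for r in ratings if r <= x]
        upper = [r for r in ratings if r >= x]
        bounds.append((sum(lower) / len(lower), sum(upper) / len(upper)))
    return bounds

# Three experts rate the same criterion on a 1-9 scale:
print(rough_bounds([3, 4, 5]))
# -> [(3.0, 4.0), (3.5, 4.5), (4.0, 5.0)]
```

Note how identical ratings would collapse each interval to a point, i.e. agreement among experts automatically shrinks the uncertainty without any tuning parameter.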
Autonomous navigation of unmanned aerial vehicles (UAVs) in GPS-denied environments is a challenging problem, especially for small-scale UAVs characterized by a small payload and limited battery autonomy. A possible solution to this problem is vision-based simultaneous localization and mapping (SLAM), since cameras, due to their small dimensions, low weight, availability, and large information bandwidth, circumvent the constraints of such UAVs. In this paper, we propose a stereo vision SLAM system yielding very accurate localization and a dense map of the environment, developed with the aim of competing in the European Robotics Challenges (EuRoC) targeting airborne inspection of industrial facilities with small-scale UAVs. The proposed approach builds on a novel stereo odometry algorithm relying on feature tracking (SOFT), which currently ranks first among all stereo methods on the KITTI dataset. Relying on SOFT for pose estimation, we build a feature-based pose graph SLAM solution, which we dub SOFT-SLAM. SOFT-SLAM has completely separate odometry and mapping threads supporting large loop closing and global consistency. It also achieves a constant execution rate of 20 Hz with deterministic results using only two threads of the onboard computer used in the challenge. The UAV running our SLAM algorithm obtained the highest localization score in the EuRoC Challenge 3, Stage IIa–Benchmarking, Task 2. Furthermore, we present an exhaustive evaluation of SOFT-SLAM on two popular public datasets and compare it to other state-of-the-art approaches, namely ORB-SLAM2 and LSD-SLAM. The results show that SOFT-SLAM obtains better localization accuracy on the majority of dataset sequences, while also having a lower runtime.
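To give a feel for the pose-graph mechanics mentioned above, the toy sketch below composes 2D odometry increments and evaluates a loop-closure residual against the start pose; in a real backend this residual would enter the graph optimizer as a constraint error. SOFT-SLAM itself operates on SE(3) poses estimated from stereo feature tracks, which is far beyond this illustration.

```python
import numpy as np

# Sketch: minimal 2D pose chaining in the spirit of a pose-graph
# SLAM backend (illustrative only, not the SOFT-SLAM implementation).

def compose(p, d):
    """Compose pose p = (x, y, theta) with a relative motion d."""
    x, y, th = p
    dx, dy, dth = d
    return (x + dx * np.cos(th) - dy * np.sin(th),
            y + dx * np.sin(th) + dy * np.cos(th),
            th + dth)

# Odometry chain: drive a square (1 m forward, then a 90-degree turn).
pose = (0.0, 0.0, 0.0)
for _ in range(4):
    pose = compose(pose, (1.0, 0.0, np.pi / 2))

# A loop closure compares the accumulated pose with the start pose;
# with perfect odometry the translational residual is (numerically) zero.
residual = np.hypot(pose[0], pose[1])
```

With noisy increments the residual becomes nonzero, and distributing that error over the chain is exactly the job of the graph optimization.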
Today, mobile robots have a wide range of real-world applications in which they can replace or assist humans in tasks such as search and rescue, surveillance, patrolling, inspection, and environmental monitoring. These tasks usually require a robot to navigate through a dynamic environment with smooth, efficient, and safe motion. In this paper, we propose an online smooth-motion-planning method that generates a smooth, collision-free patrolling trajectory based on clothoid curves. Moreover, the proposed method combines global and local planning, which makes it suitable for large, changing environments and enables efficient path replanning from an arbitrary robot orientation. We also propose a golden-ratio-based method for planning a smoothed path that aligns the robot's orientation with a new path avoiding unknown obstacles. Simulation results show that the proposed algorithm reduces the patrolling execution time, path length, and deviation of the tracked trajectory from the patrolling route compared to the original patrolling method without smoothing. Furthermore, the proposed algorithm is suitable for real-time operation due to its computational simplicity, and its performance was validated in an experiment employing a differential-drive mobile robot.
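A clothoid (Euler spiral) is the curve family whose curvature grows linearly with arc length, which is why it yields smooth, trackable turns. The sketch below merely samples a unit clothoid by numerically integrating its defining Fresnel-type integrals, x(s) = ∫cos(t²/2)dt and y(s) = ∫sin(t²/2)dt; fitting such segments to an actual patrolling route, as the method above does, is not shown.

```python
import numpy as np

# Sketch: sampling a unit clothoid (Euler spiral) by left-Riemann
# integration of its Fresnel-type integrals. Curvature grows linearly
# with arc length s, so the heading at arc length s equals s^2 / 2.

def clothoid_points(s_max, n=1000):
    """Sample n points of a unit clothoid over arc length [0, s_max]."""
    ds = s_max / n
    s = np.arange(1, n + 1) * ds
    x = np.cumsum(np.cos(s ** 2 / 2)) * ds
    y = np.cumsum(np.sin(s ** 2 / 2)) * ds
    return x, y

x, y = clothoid_points(2.0)
```

Because curvature changes continuously from zero, joining a straight segment to a clothoid introduces no curvature jump, unlike joining a straight segment directly to a circular arc.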
As robots progress towards being ubiquitous and an indispensable part of our everyday environments, such as homes, offices, healthcare, education, and manufacturing shop floors, efficient and safe collaboration and cohabitation become imperative. Such environments could therefore benefit greatly from accurate human action prediction. In addition to being accurate, human action prediction should be computationally efficient, in order to ensure a timely reaction, and capable of dealing with changing environments, since unstructured interaction and collaboration with humans usually do not assume static conditions. In this paper, we propose a model for human action prediction based on motion cues and gaze using shared-weight Long Short-Term Memory networks (LSTMs) and feature dimensionality reduction. LSTMs have proven to be a powerful tool for processing time series data, especially when dealing with long-term dependencies; however, to maximize their performance, LSTM networks should be fed with informative, high-quality inputs. We therefore also conducted an extensive input feature analysis based on (i) signal correlation and the signals' strength as stand-alone predictors, and (ii) a multilayer perceptron inspired by the autoencoder architecture. We validated the proposed model on the publicly available MoGaze dataset (https://humans-to-robots-motion.github.io/mogaze/) for human action prediction, as well as on a smaller dataset recorded in our laboratory. Our model outperformed alternatives, such as recurrent neural networks, a fully connected LSTM network, and the strongest stand-alone signals (baselines), and can run in real time on a standard laptop CPU. Since eye gaze might not always be available in a real-world scenario, we have implemented and tested a multilayer perceptron for gaze estimation from more easily obtainable motion cues, such as head orientation and hand position.
The estimated gaze signal can be utilized during inference of our LSTM-based model, thus making our action prediction pipeline suitable for real-time practical applications.
•Human action prediction using LSTM networks.
•Dimensionality reduction using correlation and an autoencoder-inspired MLP.
•Gaze estimation might improve human action prediction.
•Experiments verified the approach using motion capture input.
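The correlation-based part of the input feature analysis can be sketched as ranking candidate signals by the absolute Pearson correlation with the prediction target. The feature names and synthetic data below are illustrative placeholders; the autoencoder-inspired MLP branch of the analysis is not shown.

```python
import numpy as np

# Sketch: rank candidate input signals by |Pearson correlation| with
# the target, keeping the most informative ones as LSTM inputs.

rng = np.random.default_rng(0)
target = rng.normal(size=200)
features = {
    "hand_x": target * 0.9 + rng.normal(scale=0.1, size=200),   # informative
    "gaze_yaw": target * 0.5 + rng.normal(scale=0.5, size=200),  # weaker
    "noise": rng.normal(size=200),                               # uninformative
}

ranked = sorted(features,
                key=lambda k: abs(np.corrcoef(features[k], target)[0, 1]),
                reverse=True)
print(ranked)
```

In the synthetic example, the feature constructed to track the target dominates the ranking and the pure-noise channel falls last, which is the behavior the selection step relies on.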
Fast planar surface 3D SLAM using LIDAR
Lenac, Kruno; Kitanov, Andrej; Cupec, Robert
Robotics and Autonomous Systems, June 2017, Volume 92
Journal Article, Peer-reviewed
In this paper we propose a fast 3D pose-based SLAM system that estimates a vehicle's trajectory by registering sets of planar surface segments extracted from 360° field of view (FOV) point clouds provided by a 3D LIDAR. The full FOV and the planar representation of the map give the proposed SLAM system the capability to map large-scale environments while maintaining fast execution times. For efficient point cloud processing, we apply image-based techniques that project each point cloud onto three two-dimensional images. The SLAM backend is based on the Exactly Sparse Delayed State Filter, a non-iterative way of updating the pose graph that exploits the sparsity of the SLAM information matrix. Finally, our SLAM system enables reconstruction of the global map by merging the local planar surface segments in a highly efficient way. The proposed point cloud segmentation and registration method was tested and compared with several state-of-the-art methods on two publicly available datasets. The complete SLAM system was also tested in one indoor and one outdoor experiment. The indoor experiment was conducted with a Husky A200 research mobile robot mapping our university building, and the outdoor experiment was performed on the publicly available dataset provided by the Ford Motor Company, in which a car equipped with a 3D LIDAR was driven through downtown Dearborn, Michigan.
•Efficient processing of 3D point clouds achieved by projecting them onto image planes.
•Fast segmentation of the projected point clouds into planar segments.
•Exploiting the sparsity of the SLAM information matrix without approximation error.
•A technique that merges planar segments lying on the same plane into a single segment.
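The image-based point cloud processing mentioned above can be sketched as a spherical projection: each 3D point maps to a pixel indexed by azimuth and elevation, with range as the pixel value, so that fast 2D image operations replace expensive 3D neighborhood searches. The resolution values below are illustrative, not those of the paper.

```python
import numpy as np

# Sketch: project a 3D LIDAR point cloud onto a 2D spherical range
# image (rows = elevation bins, columns = azimuth bins, value = range).

def to_range_image(points, h=16, w=360):
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x ** 2 + y ** 2 + z ** 2)
    az = np.arctan2(y, x)                  # azimuth in [-pi, pi)
    el = np.arcsin(z / r)                  # elevation in [-pi/2, pi/2]
    col = ((az + np.pi) / (2 * np.pi) * w).astype(int) % w
    row = ((el + np.pi / 2) / np.pi * h).astype(int).clip(0, h - 1)
    img = np.zeros((h, w))
    img[row, col] = r
    return img

pts = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
img = to_range_image(pts)
```

On such an image, planar segmentation reduces to grouping neighboring pixels whose local normals agree, which is what makes the full pipeline fast.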
Visual localization is a challenging problem, especially over the long run, since places can exhibit significant variation due to dynamic environmental and seasonal changes. To tackle this problem, we propose a visual place recognition method based on directed acyclic graph matching and feature maps extracted from deep convolutional neural networks (DCNNs). Furthermore, in order to find the best subset of DCNN feature maps with minimal redundancy, we propose to form probability distributions on image representation features and leverage the Jensen–Shannon divergence to rank features. We evaluate the proposed approach on two challenging public datasets, namely the Bonn and Freiburg datasets, and compare it to state-of-the-art methods. For image representations, we evaluated the following DCNN architectures: AlexNet, OverFeat, ResNet18 and ResNet50. Due to the proposed graph structure, we are able to account for any kind of correlation in image sequences, and therefore dub our approach NOSeqSLAM. Algorithms with and without feature selection were evaluated based on precision–recall curves, area under the curve, best recall at 100% precision, and running time, with NOSeqSLAM outperforming the counterpart approaches. Furthermore, by formulating mutual information-based feature selection specifically for visual place recognition and selecting the feature percentile with the best score, all the algorithms, not just NOSeqSLAM, exhibited enhanced performance with the reduced feature set.
•In visual place recognition, nonlinear sequences perform better than linear ones.
•On-the-fly relaxation performs faster than standard shortest-path algorithms.
•DCNN feature maps represent a place better than hand-crafted features.
•Mutual information-based feature selection improves place recognition results.
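The ranking criterion mentioned above, the Jensen–Shannon divergence, is the symmetrized, bounded relative of the Kullback–Leibler divergence: JSD(P, Q) = ½KL(P‖M) + ½KL(Q‖M) with M = ½(P + Q), lying in [0, 1] when logs are base 2. The sketch below computes it for discrete distributions; how feature activations are turned into the distributions P and Q is dataset-specific and omitted.

```python
import numpy as np

# Sketch: Jensen-Shannon divergence between two discrete distributions
# (base-2 logs, so the value is bounded by 1). A small eps guards the
# logarithm against zero bins.

def jsd(p, q, eps=1e-12):
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

print(round(jsd([0.5, 0.5], [0.5, 0.5]), 6))  # identical -> 0.0
print(round(jsd([1.0, 0.0], [0.0, 1.0]), 6))  # disjoint  -> 1.0
```

Features whose distributions diverge strongly across distinct places are the discriminative ones, which is why a high divergence score makes a feature map a good candidate for the reduced set.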
Autonomous navigation of mobile robots is often based on information from a variety of heterogeneous sensors; hence, extrinsic sensor calibration is a fundamental step in the fusion of such information. In this paper, we address the problem of extrinsic calibration of a radar–LiDAR–camera sensor system. This problem is challenging primarily due to the sparse informativeness of radar measurements: radars cannot extract rich structural information about the environment, while their lack of elevation resolution, nevertheless accompanied by a substantial elevation field of view, introduces uncertainty in the origin of the measurements. We propose a novel calibration method which involves a special target design and a two-step optimization procedure to solve these challenges. The first step of the optimization minimizes a reprojection error based on an introduced point–circle geometric constraint. Since the first step cannot provide reliable estimates of all six extrinsic parameters, we introduce a second step to refine the subset of parameters with high uncertainty, exploiting a pattern discovered in the radar cross section estimates that is correlated with the missing elevation angle. Additionally, we carry out an identifiability analysis based on the Fisher information matrix to establish minimal requirements on the dataset and to verify the method through simulations. We test the calibration method on a variety of sensor configurations and address the problem of radar vertical misalignment. Finally, we show through extensive experimental analysis that the proposed method reliably estimates all six parameters of the extrinsic calibration.
•Extrinsic radar–camera–LiDAR calibration estimated accurately in all 6DoF.
•Radar's missing elevation angle compensated with radar cross section measurements.
•Method is suitable for radar vertical misalignment detection.
•Identifiability analysis confirms chosen transform parametrization.
•Identifiability analysis provides minimal requirements on the dataset.
Complete coverage path planning is the process of finding a path that ensures a mobile robot completely covers the entire environment while following the planned path. In this paper, we propose a complete coverage path planning algorithm that generates smooth complete coverage paths based on clothoids, allowing a nonholonomic mobile robot to move in optimal time while following the path. The algorithm greatly reduces the coverage time, path length, and overlap area, and increases the coverage rate compared to state-of-the-art complete coverage algorithms, which is verified by simulation. Furthermore, the proposed algorithm is suitable for real-time operation due to its computational simplicity and allows path replanning in case the robot encounters unknown obstacles. The efficiency of the proposed algorithm is validated by experimental results on a Pioneer 3DX mobile robot.
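A common starting point for complete coverage is a boustrophedon ("lawnmower") sweep, sketched below for a rectangular area; a clothoid-based method such as the one above would then smooth each turn of this raw path so a nonholonomic robot can track it without stopping. The smoothing itself is omitted, and the function is illustrative.

```python
# Sketch: boustrophedon (lawnmower) waypoints over a rectangular area.
# Sweep lines are spaced by the coverage tool width; direction
# alternates so consecutive sweeps connect at the area border.

def lawnmower(width, height, spacing):
    """Return back-and-forth waypoints covering a width x height area."""
    waypoints, y, left_to_right = [], 0.0, True
    while y <= height:
        xs = (0.0, width) if left_to_right else (width, 0.0)
        waypoints += [(xs[0], y), (xs[1], y)]
        left_to_right = not left_to_right
        y += spacing
    return waypoints

print(lawnmower(4.0, 2.0, 1.0))
# -> [(0.0, 0.0), (4.0, 0.0), (4.0, 1.0), (0.0, 1.0), (0.0, 2.0), (4.0, 2.0)]
```

The sharp 180° turns at the ends of each sweep are exactly where a piecewise-straight path costs the most time for a nonholonomic robot, and where replacing corners with clothoid segments pays off.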