On January 3, 2019, the Chang'e-4 (CE-4) probe successfully landed in the Von Kármán crater inside the South Pole-Aitken (SPA) basin. With the support of the relay communication satellite "Queqiao", launched in 2018 and located at the Earth-Moon L2 libration point, the lander and the Yutu-2 rover carried out in-situ exploration and patrol surveys, respectively, and made a series of important scientific discoveries. Owing to the complexity and unpredictability of the lunar surface, teleoperation has become the most important control method for operating the rover, and computer vision is a key technology supporting that teleoperation. During the powered descent stage and lunar surface exploration, vision-based teleoperation can effectively overcome many technical challenges, such as fast positioning of the landing point, high-resolution seamless mapping of the landing site, localization of the rover in the complex lunar-surface environment, terrain reconstruction, and path planning. All of these processes helped achieve the first soft landing, roving, and in-situ exploration on the lunar farside. This paper presents a high-precision positioning technology for the landing point, together with its positioning results, based on multi-source data including orbital images and CE-4 descent images. The method and its results were successfully applied in an actual engineering mission for the first time in China, providing important support for topographical analysis of the landing site and mission planning for subsequent teleoperations. After landing, a 0.03 m resolution digital orthophoto map (DOM) was generated from the descent images and used as one of the base maps for overall rover path planning. Before each movement, the Yutu-2 rover used its hazard avoidance cameras (Hazcam), navigation cameras (Navcam), and panoramic cameras (Pancam) to capture stereo images of the lunar surface at different angles.
Local digital elevation models (DEMs) with a 0.02 m resolution were routinely produced at each waypoint using the Navcam and Hazcam images. These DEMs were then used to design an obstacle recognition method and to establish models for calculating slope, aspect, roughness, and visibility. Finally, in combination with the mobility characteristics of the Yutu-2 rover, a comprehensive cost map for path search was generated. By the end of the first 12 lunar days, the Yutu-2 rover had been working on the lunar farside for more than 300 days, greatly exceeding its projected service life. The rover overcame the complex terrain of the lunar farside and travelled a total distance of more than 300 m, achieving the "double three hundred" breakthrough. In China's future manned lunar landings and exploration of Mars, computer vision will play an integral role in supporting science target selection and scientific investigations, and will become an extremely important core technology for various engineering tasks.
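As a rough illustration of how a traversal-cost map might be assembled from such a local DEM, the sketch below blends slope and roughness terms per cell. The weights, 3x3 roughness window, and 20° slope cutoff are illustrative assumptions, not the mission's actual parameters.

```python
import numpy as np

def terrain_cost_map(dem, cell_size=0.02, max_slope_deg=20.0,
                     slope_w=0.6, rough_w=0.4):
    """Build a simple traversal-cost map from a local DEM.

    Slope is estimated from finite differences; roughness is the local
    standard deviation of elevation in a 3x3 window. Cells steeper than
    `max_slope_deg` are marked impassable (np.inf).
    """
    # Elevation gradients (m per m) via central differences.
    dzdy, dzdx = np.gradient(dem, cell_size)
    slope = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

    # Roughness: std-dev of elevation in a 3x3 neighbourhood.
    pad = np.pad(dem, 1, mode='edge')
    windows = np.lib.stride_tricks.sliding_window_view(pad, (3, 3))
    roughness = windows.std(axis=(-2, -1))

    # Normalise each term to [0, 1] and blend into one cost surface.
    cost = (slope_w * (slope / max_slope_deg)
            + rough_w * (roughness / (roughness.max() + 1e-9)))
    cost[slope > max_slope_deg] = np.inf  # impassable cells
    return cost

# Example: flat ground is cheap, a 5 cm step is impassable at 2 cm cells.
dem = np.zeros((5, 5))
dem[:, 3:] = 0.05
cost = terrain_cost_map(dem)
```

A path planner would then search this grid, treating `np.inf` cells as obstacles.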
Accurate terrain estimation is critical for autonomous off-road navigation. Reconstruction of a three-dimensional (3D) surface allows rough and hilly ground to be represented, yielding faster driving and better planning and control. However, data from a 3D sensor samples the terrain unevenly, quickly becoming sparse at longer ranges and containing large voids because of occlusions and inclines. The proposed approach uses online kernel-based learning to estimate a continuous surface over the area of interest while providing upper and lower bounds on that surface. Unlike other approaches, visibility information is exploited to constrain the terrain surface and increase precision, and an efficient gradient-based optimization allows for real-time implementation. To model sensor noise over varying ranges, a non-stationary covariance function is adopted. Experimental results are presented for several datasets, including ground-truthed terrain and a large 3D stereo dataset.
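A minimal 1-D sketch of the idea, under assumed forms: kernel regression with a squared-exponential kernel, where the observation-noise variance grows with range as a crude stand-in for the paper's non-stationary covariance, so far-range returns are trusted less and the predictive bounds widen. The kernel, noise model, and constants here are all illustrative.

```python
import numpy as np

def gp_terrain_estimate(x_obs, z_obs, x_query, length=2.0, sigma_f=1.0):
    """Kernel-based terrain estimate with range-dependent noise.

    Returns the mean surface plus 2-sigma lower/upper bounds.
    """
    def k(a, b):
        d = a[:, None] - b[None, :]
        return sigma_f**2 * np.exp(-0.5 * (d / length)**2)

    # Heteroscedastic noise: std-dev grows linearly with range.
    noise_var = (0.01 + 0.02 * np.abs(x_obs))**2
    K = k(x_obs, x_obs) + np.diag(noise_var)
    Ks = k(x_query, x_obs)
    mean = Ks @ np.linalg.solve(K, z_obs)
    var = sigma_f**2 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    std = np.sqrt(np.maximum(var, 0.0))
    return mean, mean - 2 * std, mean + 2 * std

x_obs = np.array([0.0, 1.0, 2.0, 4.0, 8.0])   # ranges from the sensor
z_obs = np.array([0.0, 0.1, 0.3, 0.2, 0.5])   # measured heights
mean, lower, upper = gp_terrain_estimate(x_obs, z_obs, np.linspace(0, 8, 5))
```

The bounds widen both in data gaps and at long range, which is the behaviour the non-stationary covariance is meant to capture.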
On 27 August 1549, the popular East Anglian insurgency known as 'Kett's Rebellion' was defeated in a bloody confrontation with loyalist forces at the valley of Dussindale, just outside Norwich, England. Despite the battle's significance, and its vital implications for the study of mid-16th-century warfare, its exact site has yet to be determined conclusively, hampering attempts to analyse the conflict further and to record its location accurately for archaeological and heritage purposes. This article will demonstrate how geographical information systems can be utilised alongside historic maps and written sources to identify the 1549 battlefield within the modern landscape. To do this, it will employ methodologies of map regression, similar to those used at Towton (1461), Bosworth (1485), and Edgehill (1642), as a means of testing and advancing the findings of Anne Carter, who in 1984 suggested the most credible theory regarding the engagement's location. With the help of these tools, the article will not only ascertain where the battle took place, but will also reconstruct its historic terrain, fulfilling an essential requirement for considering its tactical aspects. By doing so, it will demonstrate the ways in which digital technologies can be applied to broaden and support traditional research.
Thanks to the on-orbit work of lunar orbiters, most of the lunar surface has been observed under different viewing conditions, and a large number of images are available for photogrammetric three-dimensional (3D) mapping, which is an important issue for lunar exploration. Theoretically, multi-view images contain more information than a single stereo pair and can yield better 3D mapping results. In this paper, the semi-global matching method is applied in object space, and the steps of cost calculation, cost aggregation, and elevation calculation are performed to obtain 3D coordinates directly. Compared with the traditional image-based semi-global matching method, the object-based method is more easily extended to multi-view images, which is beneficial for exploiting multi-view image information. In addition, it does not require steps such as stereo rectification and forward intersection; that is, the overall pipeline is more elegant. Using the LRO NAC images covering the Apollo 11 landing area as experimental data, the results show that object-based semi-global matching is competent for multi-view image matching, and that the multi-view result achieves higher accuracy and more detail than a single stereo pair. Furthermore, the experimental results for the Zhinyu crater data show that this method can also alleviate the uncertainty of the lunar orbiter's positioning to some extent.
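To make the cost-aggregation step concrete, here is a toy sketch of a one-directional SGM-style recurrence over an elevation-hypothesis volume. In object space, `cost_volume[x, h]` would come from projecting ground cell `x` at hypothesised elevation `h` into every overlapping image and comparing patches; here it is random, and the penalties `p1`/`p2` are illustrative.

```python
import numpy as np

def aggregate_costs(cost_volume, p1=0.1, p2=1.0):
    """Left-to-right SGM-style aggregation over elevation hypotheses.

    The recurrence penalises one-step elevation changes by p1 and any
    larger jump by p2, favouring piecewise-smooth terrain. A full SGM
    would aggregate along several scan directions and sum the results.
    """
    n_cells, n_hyp = cost_volume.shape
    agg = np.zeros_like(cost_volume, dtype=float)
    agg[0] = cost_volume[0]
    for x in range(1, n_cells):
        prev = agg[x - 1]
        best_prev = prev.min()
        same = prev                                        # same elevation
        up = np.concatenate(([np.inf], prev[:-1])) + p1    # one step up
        down = np.concatenate((prev[1:], [np.inf])) + p1   # one step down
        jump = np.full(n_hyp, best_prev + p2)              # larger jump
        agg[x] = (cost_volume[x]
                  + np.minimum.reduce([same, up, down, jump]) - best_prev)
    return agg

costs = np.random.default_rng(0).random((20, 8))
agg = aggregate_costs(costs)
elevation_index = agg.argmin(axis=1)  # per-cell winning hypothesis
```

Because the hypotheses are elevations over ground cells rather than disparities over pixels, adding more views only changes how `cost_volume` is computed, not the aggregation itself, which is the extensibility the abstract points to.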
Mobile robot operators must make rapid decisions based on information about the robot's surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot's array of sensors, but some upper parts of objects lie beyond the sensors' measurement range and are missing from the terrain reconstruction, leaving an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is less than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances.
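The height-histogram idea can be sketched minimally as follows: take the dominant histogram bin as the ground level (a reasonable first guess when roughly flat ground dominates the scene) and label points within a band of it as ground. The bin size and band width are illustrative; the paper's Gibbs-Markov random field would then refine this initial labelling spatially.

```python
import numpy as np

def ground_mask_from_histogram(points_z, bin_size=0.1, band=0.3):
    """Label points as ground using a height histogram.

    The most populated height bin is assumed to be the ground level;
    points within `band` metres of it are labelled ground.
    """
    lo, hi = points_z.min(), points_z.max()
    counts, edges = np.histogram(points_z,
                                 bins=np.arange(lo, hi + bin_size, bin_size))
    peak = counts.argmax()
    ground_z = 0.5 * (edges[peak] + edges[peak + 1])
    return np.abs(points_z - ground_z) <= band

# Heights in metres: a cluster near z = 0 (ground) plus tree/building hits.
z = np.array([0.0, 0.05, -0.02, 0.01, 1.8, 2.2, 0.03, 5.0])
mask = ground_mask_from_histogram(z)
# mask is True only for the points near z = 0
```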
This article describes a reconstruction of the palaeo-volcanic edifice on Deception Island (South Shetland Islands, Antarctica) prior to the formation of its present caldera. Deception Island is an active Quaternary volcano located in the Bransfield Strait, between the South Shetland Islands and the Antarctic Peninsula. The morphology of the island has been shaped mainly by volcanic activity, but geodynamics and volcanic deformation have also contributed. A volcanic reconstruction method, the Geodynamic Regression Model (GRM), which includes a terrain deformation factor, is proposed. In the case of Deception Island, the directions of this deformation are NW–SE and NE–SW, matching both the observed deformation of the Bransfield Strait and the volcanic deformation monitored on the island over the last 20 years using Global Navigation Satellite System (GNSS) techniques. Based on these data, possible volcanic deformation values of 5–15 mm/yr in these directions have been derived. A possible coastline derived from a current bathymetry is transformed, according to values for the chosen date, to obtain the palaeo-coastline of Deception Island of 100,000 years ago. Topographic, geomorphologic, volcanological and geological data in a GIS system have been considered for computation of the outside caldera slope, palaeo-coastline, palaeo-summit height and palaeo digital elevation model (DEM). The result is a 3D palaeo-geomorphological surface model of a volcano reaching 640 m in height, with an increase of 4 km³ in volume compared to the current edifice and covering 4 km² more surface area; the method also reveals the previous existence of parasite volcanoes. Two photorealistic images of the island are also obtained by superposing textures extracted from a current QuickBird satellite image. This technique for reconstructing the terrain of an existing volcano could be useful for analysing the past and future geomorphology of this island and similar locations.
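The back-transformation of the coastline amounts to moving each point opposite to the present-day deformation vector by rate times elapsed time. The sketch below illustrates only that arithmetic, using a rate and azimuth inside the ranges quoted in the abstract (5–15 mm/yr along NW–SE / NE–SW); it is not the GRM itself.

```python
import numpy as np

def palaeo_position(points_xy, rate_mm_per_yr, azimuth_deg, years):
    """Shift coastline points back in time along a deformation direction.

    Azimuth is measured clockwise from north; the returned points are
    displaced opposite to the present-day deformation vector.
    """
    displacement_m = rate_mm_per_yr * 1e-3 * years
    az = np.radians(azimuth_deg)
    direction = np.array([np.sin(az), np.cos(az)])  # (east, north)
    return points_xy - displacement_m * direction

coast = np.array([[0.0, 0.0], [100.0, 50.0]])  # hypothetical points (m)
# 10 mm/yr towards the SE (azimuth 135 deg) over 100,000 years -> 1 km shift
old_coast = palaeo_position(coast, 10.0, 135.0, 100_000)
```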
Highlights: reconstruction of the island prior to the formation of its caldera is proposed; input data are topography, bathymetry, pre-caldera deposits and deformation rates; mass balance due to ice, erosion and sea level is taken into account; the palaeo-summit height calculated for the palaeo-stratovolcano is 640 m; the method reveals the pre-existence of parasite volcanoes.
The paper presents an algorithm for online coverage path planning of unknown environments using curvature-constrained AUVs. Unlike point vehicles, which can make quick maneuvers in any direction towards any goal, curvature-constrained AUVs need significant time to accelerate, decelerate, or turn towards the goal. Therefore, finding a feasible collision-free path to the waypoint in the presence of obstacles is a nontrivial task for curvature-constrained AUVs. In order to overcome this challenge, we develop a new algorithm that dynamically selects the shortest Dubins path from its current state to a neighboring region in a locally optimal manner while providing efficient global coverage. The proposed new algorithm is an extension of our recently developed algorithm called ε*, which utilizes an Exploratory Turing Machine (ETM) as a supervisor to guide the vehicle with adaptive navigation decisions. The performance of the proposed algorithm is validated on a high-fidelity underwater simulator called UWSim, where the collected terrain data is used offline for 3-D reconstruction of the seabed. The simulations show that the proposed algorithm generates feasible and safe coverage paths for curvature-constrained AUVs for accurate reconstruction of the underwater terrain.
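A toy sketch of the local waypoint-selection step: rather than computing exact Dubins lengths (which the ε* extension does, together with collision checks), this stand-in scores each candidate by straight-line distance plus an arc-length penalty for the heading change needed to face it, with `rho` as the minimum turning radius. Everything here, including the cost proxy, is an illustrative assumption.

```python
import numpy as np

def select_next_waypoint(pose, candidates, rho=5.0):
    """Pick the locally best waypoint for a curvature-constrained vehicle.

    Cost proxy: Euclidean distance plus rho * |heading change|, a crude
    underestimate standing in for the true Dubins path length.
    """
    x, y, theta = pose
    best, best_cost = None, np.inf
    for cx, cy in candidates:
        dist = np.hypot(cx - x, cy - y)
        bearing = np.arctan2(cy - y, cx - x)
        # Signed heading error wrapped into [-pi, pi), then magnitude.
        turn = abs((bearing - theta + np.pi) % (2 * np.pi) - np.pi)
        cost = dist + rho * turn
        if cost < best_cost:
            best, best_cost = (cx, cy), cost
    return best, best_cost

pose = (0.0, 0.0, 0.0)                    # at the origin, facing +x
candidates = [(10.0, 0.0), (0.0, 10.0)]   # ahead vs. 90 degrees to the left
goal, cost = select_next_waypoint(pose, candidates)
# the waypoint straight ahead wins: equal distance, no turn required
```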
Mobile robot operators need to make quick decisions based on information about the robot's surrounding environment. This study proposes a graphics processing unit (GPU)-based terrain modeling system for large-scale LiDAR (Light Detection And Ranging) dataset visualization using a voxel map and a textured mesh. A 3D flag map is proposed for incrementally registering large-scale point clouds into a terrain model in real time. The sensed 3D point clouds are quantized into regular 3D grids allocated in GPU memory to remove redundant spatial and temporal points. Subsequently, the sensed vertices are segmented into ground and non-ground classes. The ground indices are rendered using a textured mesh to represent the ground surface, and the non-ground indices are rendered as a colored voxel map using a particle rendering method. The proposed approach was tested using a mobile robot equipped with a LiDAR sensor, video camera, GPS receiver, and gyroscope. The system was evaluated through a test in an outdoor environment containing trees and buildings, demonstrating the real-time visualization performance of the proposed method in a large-scale environment.
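The de-duplication behind the "3D flag map" can be sketched on the CPU as follows: snap each incoming point to a regular voxel grid and keep only the first point per occupied voxel, so repeated scans of the same surface do not grow the model. The 0.2 m voxel size is an illustrative assumption; on the GPU this runs in parallel over the grid.

```python
import numpy as np

def quantize_points(points, voxel=0.2):
    """Register a point cloud into a sparse voxel grid, dropping duplicates.

    Each point is mapped to an integer voxel key; only the first point
    per key is retained.
    """
    keys = np.floor(points / voxel).astype(np.int64)
    _, first = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(first)]

pts = np.array([[0.01, 0.02, 0.00],
                [0.05, 0.01, 0.03],   # same 0.2 m voxel as the first point
                [1.00, 0.00, 0.00]])
kept = quantize_points(pts)
# two voxels remain occupied: one near the origin, one at x = 1
```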
Terrain classification allows a mobile robot to create an annotated map of its local environment from the three-dimensional (3D) and two-dimensional (2D) datasets collected by its array of sensors, including a GPS receiver, gyroscope, video camera, and range sensor. However, parts of objects that are outside the measurement range of the range sensor will not be detected. To overcome this problem, this paper describes an edge estimation method for complete scene recovery and complete terrain reconstruction. Here, the Gibbs-Markov random field is used to segment the ground from 2D videos and 3D point clouds. Further, a masking method is proposed to classify buildings and trees in a terrain mesh.