Ray tracing has long been considered the next-generation technology for graphics rendering. Recently, there has been strong momentum to adopt ray-tracing-based rendering techniques on consumer-level platforms, because increasing display resolution can no longer further enhance the user experience. On the other hand, the computing workload of ray tracing remains overwhelming: a 10-fold performance gap must be narrowed for real-time applications, even on the latest graphics processing units (GPUs). As a result, hardware acceleration techniques are critical to delivering a satisfactory level of performance while meeting an acceptable power budget. A large body of research on ray-tracing hardware has been published over the past decade. This article aims to provide a timely survey of hardware techniques for accelerating the ray-tracing algorithm. First, a quantitative profile of the ray-tracing workload is presented. We then review hardware techniques for the main functional blocks of a ray-tracing pipeline. On this basis, ray-tracing microarchitectures for both ASICs and programmable processors are surveyed following a systematic taxonomy.
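The functional blocks such a pipeline accelerates center on traversal of a bounding-volume hierarchy, whose hot inner loop is the ray-box "slab" test. As a minimal sketch (the function name and tuple-based interface are illustrative, not from the surveyed hardware):

```python
def ray_aabb_hit(origin, inv_dir, box_min, box_max):
    """Slab test: clip the ray's parametric interval against each
    axis-aligned slab; the ray hits the box iff the interval stays
    non-empty. `inv_dir` holds the precomputed reciprocals of the
    ray direction, as traversal hardware typically assumes."""
    t_near, t_far = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t0 = (lo - o) * inv
        t1 = (hi - o) * inv
        if t0 > t1:
            t0, t1 = t1, t0
        t_near = max(t_near, t0)  # latest slab entry
        t_far = min(t_far, t1)    # earliest slab exit
    return t_near <= t_far
```

Fixed-function traversal units essentially pipeline this test, one box per clock, which is why it dominates the profiled workload.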
The widespread use of LiDAR technology in a multitude of domains has produced a growing availability of massive high-resolution point datasets that demand new approaches for efficient organization and storage, filtering under different spatio-temporal criteria, selective/progressive visualization, processing and analysis, and collaborative editing. Ideally, LiDAR data coming from multiple sources and organized in different datasets should be accessible in a simple, uniform, and ubiquitous way, in line with the FAIR principles promoted by the Open Geospatial Consortium: Findable, Accessible, Interoperable, and Reusable. With this goal in mind, we present SPSLiDAR, a conceptual model with a simple interface for repositories of LiDAR data that can be adapted to the needs of different applications. SPSLiDAR includes features such as the arrangement of related datasets into workspaces on a world scale, support for overlapping datasets with different resolutions or acquired at different times, and a hierarchical organization of point data that enables levels of detail and selective download. We also describe in detail an implementation of this model, aimed at visualizing and downloading large datasets, that uses the MongoDB database. Finally, we report experimental results of this implementation on real data, including its space requirements, upload latency, access latency, and throughput.
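The hierarchical, level-of-detail organization described here can be sketched as a quadtree-style tile pyramid in which each tile keeps a bounded point sample, so clients fetch coarse levels first and refine selectively. This is a hypothetical scheme for illustration only, not the SPSLiDAR storage format; coordinates are assumed normalized to the unit square:

```python
import random

def build_lod_tiles(points, max_level, cap):
    """Bucket 2-D points into a pyramid of (level, x, y) tiles.
    Each tile retains at most `cap` sample points, giving a crude
    level-of-detail: level 0 is one coarse tile, deeper levels
    partition space more finely for selective download."""
    tiles = {}
    for level in range(max_level + 1):
        n = 2 ** level                      # tiles per axis at this level
        buckets = {}
        for x, y in points:
            key = (level, min(int(x * n), n - 1), min(int(y * n), n - 1))
            buckets.setdefault(key, []).append((x, y))
        for key, pts in buckets.items():
            tiles[key] = pts if len(pts) <= cap else random.sample(pts, cap)
    return tiles
```

A viewer would request `(0, 0, 0)` first, then only the level-1 tiles that intersect its view frustum.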
Spatial information technology has been widely used for vehicles in general and for fleet management in particular. Many studies have focused on improving vehicle positioning accuracy, but few have addressed efficiency improvements for managing large truck fleets over today's complex road networks. This paper therefore proposes a multilayer-based map matching algorithm with different spatial data structures to rapidly process large amounts of coordinate data. Using a dimension reduction technique, geodesic coordinates are transformed into plane coordinates. The study provides multiple layer-grouping combinations to handle complex road networks. We integrate these techniques and employ a puncture method to process the geometric computation with spatial data-mining approaches. We construct a spatial division index and combine it with the puncture method, which improves system efficiency and enhances data retrieval for large truck fleet dispatching. The paper also evaluates a multilayer-based map matching algorithm with raster data structures. Comparing the results reveals that the look-up table method offers the best outcome: the proposed multilayer-based map matching algorithm using the look-up table method achieves competitive performance and efficiency for large truck fleet dispatching.
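The look-up-table idea, hashing road segments into grid cells so a GPS fix only tests nearby candidates, can be sketched as follows (function names and the segment tuple layout are illustrative assumptions, not the paper's API):

```python
from collections import defaultdict

def build_grid(segments, cell):
    """Index each road segment into every grid cell its bounding box
    touches. `segments` holds (seg_id, (x1, y1), (x2, y2)) tuples in
    plane coordinates; `cell` is the grid spacing."""
    grid = defaultdict(list)
    for seg_id, (x1, y1), (x2, y2) in segments:
        for cx in range(int(min(x1, x2) // cell), int(max(x1, x2) // cell) + 1):
            for cy in range(int(min(y1, y2) // cell), int(max(y1, y2) // cell) + 1):
                grid[(cx, cy)].append(seg_id)
    return grid

def candidates(grid, x, y, cell):
    """Return candidate segment ids for a GPS fix: only the fix's
    cell and its 8 neighbours need checking, a constant-time look-up
    instead of a scan over the whole network."""
    cx, cy = int(x // cell), int(y // cell)
    out = set()
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            out.update(grid.get((cx + dx, cy + dy), ()))
    return out
```

Point-to-segment distance is then computed only against this small candidate set, which is where the reported efficiency gain comes from.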
In capacitance extraction with the floating random walk (FRW) algorithm, a space management approach is required to quickly find the nearest conductor. Octree and grid-based spatial structures have been used to decompose the whole domain into cells and to store information on local conductors. In this letter, two techniques, imposing a distance limit per cell and searching only within a cell's neighbor region, are proposed to accelerate the construction of these spatial structures. A fast inquiry technique is proposed to speed up the nearest-conductor query. We also propose a grid-Octree hybrid structure that has advantages over existing structures. Experiments on large very-large-scale integration (VLSI) structures with up to 484,441 conductors validate the efficiency of the proposed techniques. The improved FRW algorithm is thousands of times faster than RWCap when extracting a single net, and several to tens of times faster when extracting 100 nets.
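The neighbor-region search idea can be sketched in two dimensions, reducing conductors to representative points for brevity; real FRW codes store conductor geometry per cell and would widen the search ring when the neighborhood is empty (all names here are illustrative):

```python
import math
from collections import defaultdict

def build_cells(conductors, cell):
    """Bucket conductor reference points by grid cell. This is a
    simplification: an actual FRW space manager stores candidate
    conductor objects (panels/boxes) per cell, not points."""
    cells = defaultdict(list)
    for px, py in conductors:
        cells[(int(px // cell), int(py // cell))].append((px, py))
    return cells

def nearest_in_neighbourhood(cells, x, y, cell):
    """Distance to the nearest conductor, searching only the walk
    point's cell and its 8 neighbours, the cheap common case that
    the letter's neighbor-region technique exploits."""
    cx, cy = int(x // cell), int(y // cell)
    best = math.inf
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for px, py in cells.get((cx + dx, cy + dy), ()):
                best = min(best, math.hypot(px - x, py - y))
    return best
```

The returned distance bounds the radius of the next maximal transition cube in the walk.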
This study explores how to combine variographic spatial characterization with multivariate data analysis, by showing how Principal Component Analysis (PCA) can be applied to an unconventional type of data matrix, Xvariogram. The approach is demonstrated on a specific data set from an agricultural field in western Jutland, Denmark, but it is generic. To characterize the heterogeneity of a typical sandy soil, a variographic experiment along a 1-D profile is performed on 38 minerogenic variables (geochemical elements). While the variogram is defined for one variable only, we show how PCA can characterize a multitude of variograms simultaneously, facilitating subject-matter interpretation of the fingerprints of the process(es) responsible for the spatial heterogeneity encountered. PCA scores and loadings contain information pertaining to this specific matrix type consisting of variograms, Xvariogram. Together with a companion paper, a complete approach for characterizing scale-varying spatial heterogeneity is presented, with a view to developing sampling procedures for managing the intrinsic variability of natural soil and similar systems (e.g. environmental characterization and monitoring, pollution in time and space, applied geochemistry, medical geology). Sampling in all of these contexts is shown to be much more than a 'materials handling' issue, necessarily involving the Theory of Sampling (TOS). The PCA(Xvariogram) approach can be applied to tune sampling procedures and 1-D and 2-D sampling plans in soil, environmental, pollution, and medical geology studies, with carry-over potential to many other application fields with similar heterogeneity management needs.
• Present a specific matrix type consisting of variograms as input, Xvariogram.
• Explain a combination of variographic analysis and multivariate data analysis.
• Show how PCA is able to characterize a multitude of variograms simultaneously.
• Present a complete approach for characterizing scale-varying spatial heterogeneity.
• The approach is applicable for any field with heterogeneity management needs.
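The building block behind the Xvariogram matrix is the empirical 1-D semivariogram, gamma(h) = mean of (z[i+h] - z[i])^2 / 2 over all pairs at lag h; stacking one such lag profile per variable yields the matrix fed to PCA. A minimal sketch (function name is illustrative):

```python
def empirical_variogram(values, max_lag):
    """Empirical semivariogram along a regularly sampled transect:
    for each lag h = 1..max_lag, average half the squared increment
    over all point pairs h steps apart. One column of the Xvariogram
    matrix is this profile for one geochemical variable."""
    gammas = []
    for h in range(1, max_lag + 1):
        diffs = [(values[i + h] - values[i]) ** 2
                 for i in range(len(values) - h)]
        gammas.append(sum(diffs) / (2 * len(diffs)))
    return gammas
```

Note how a perfectly alternating series has maximal variance at lag 1 and none at lag 2, exactly the kind of scale-dependent fingerprint the PCA is meant to pick out.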
The comparison map profile (CMP) method compares two spatially explicit data sets (original images) at each point and through several spatial scales simultaneously. The CMP combines the moving window concept with similarity indices for quantitative or qualitative data to visualize and quantify outputs: changes in the mean similarity value and its variability through scales are reported on a profile, similarities between regions are estimated on monoscale maps, and their persistence through scales is assessed on a mean multiscale map. The CMP method is first illustrated using two images with a slight difference in their checkered pattern. Second, two sets of comparisons related to African vegetation are conducted using the CMP method. The first set deals with quantitative data of leaf area index (LAI): remote-sensed LAI images extracted from the AVHRR-NDVI product are compared to simulated LAI output from a dynamic global vegetation model (DGVM), using the distance and the cross-correlation coefficient for quantitative comparison of values and structural patterns, respectively. The second set of images deals with qualitative data: the remote-sensed product of land cover type by IGBP-MODIS is compared to the DGVM LAI output classified into land cover types, using the Kappa statistic as similarity index. Results show that taking spatial patterns into account with the CMP method decreases the mean correlation by 50% and increases the distance by 50% compared to the global pixel-to-pixel indices. Similarly, the land cover comparison retains only 35% of the global Kappa value. Equatorial gradients of vegetation from forests to grassland are the most persistently similar regions between both types of data sets. Potential limits and strengths of the CMP method are discussed.
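The profile part of the method, one similarity value per window size, can be sketched with mean absolute difference as the (simplified) distance index; the CMP itself uses richer indices, and this function name is illustrative:

```python
def window_distance_profile(a, b, window_sizes):
    """For two equal-size 2-D grids, slide a w-by-w window over every
    position, average the absolute pixel differences inside it, then
    average over all positions, yielding one distance per scale w.
    Plotting the result against w gives a CMP-style profile."""
    rows, cols = len(a), len(a[0])
    profile = []
    for w in window_sizes:
        window_means = []
        for r in range(rows - w + 1):
            for c in range(cols - w + 1):
                d = [abs(a[r + i][c + j] - b[r + i][c + j])
                     for i in range(w) for j in range(w)]
                window_means.append(sum(d) / len(d))
        profile.append(sum(window_means) / len(window_means))
    return profile
```

Pixel-to-pixel comparison is the degenerate case w = 1; the profile shows how agreement changes as spatial context grows.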
In this paper, we present a new approach to encrypting binary images. By placing different scan patterns at the same level of the scan tree structure and employing a two-dimensional run-encoding (2DRE) technique, our encryption method encrypts images with higher security and a good compression ratio compared to previous results. A detailed security analysis from the combinatorial viewpoint is also given. Experiments are carried out to illustrate the good performance of the proposed method.
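The compression half of the scheme rests on run encoding of the bit stream a scan pattern produces. As a one-dimensional sketch of that step (2DRE itself exploits runs in two dimensions, and the function name is illustrative):

```python
def run_encode(bits):
    """Run-length encode a bit sequence emitted by some scan pattern:
    collapse each maximal run of equal bits into a (value, length)
    pair. A scan pattern that yields long runs compresses well, which
    is why the choice of pattern affects the compression ratio."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([b, 1])       # start a new run
    return [(v, n) for v, n in runs]
```

The encryption side then comes from keeping the chosen scan pattern (the key) secret, so the runs cannot be mapped back to pixel positions.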
The problem of tracking multiple objects has been investigated in various research and industrial fields. Among existing methods, random finite set (RFS) solutions such as the generalized labeled multi-Bernoulli (GLMB) filter have provided efficient solutions with solid theoretical justification. Furthermore, implementations show that the GLMB approach is efficient under challenging scenarios. In this paper, we study an RFS-based method for multi-object tracking (MOT) built on a simple data structure for label partitioning. Specifically, grid-index-based techniques for splitting a label space and a label-partitioned GLMB tracker are investigated. We evaluate the performance of label partitioning and the GLMB filter through visualization, execution time, and MOT metrics.
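The grid-index label split can be sketched as grouping track labels by the cell of their position estimate, so each group can be updated as an independent, smaller GLMB problem (a simplified illustration; the paper's partitioning criteria and the names below are not taken from it):

```python
def partition_labels(estimates, cell):
    """Split a label space by spatial proximity: labels whose state
    estimates fall in the same grid cell form one group, on the
    assumption that well-separated groups interact negligibly and
    can be filtered independently."""
    groups = {}
    for label, (x, y) in estimates.items():
        key = (int(x // cell), int(y // cell))
        groups.setdefault(key, set()).add(label)
    return list(groups.values())
```

Because GLMB update cost grows quickly with the number of jointly considered labels, partitioning into small groups is what makes the tracker tractable at scale.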
A framework for interactive modeling of three-dimensional caves is presented. It is based on a new spatial data structure that extends existing terrain rendering methods. The cave is represented as a set of slabs, each encoding a portion of the model along an axis. We describe methods for a user to modify this model locally, as well as procedural methods for global alteration. Our goal is to make cave modeling as easy as existing terrain editing programs, which restrict the model to a single two-dimensional manifold. In this paper, we discuss existing cave visualization programs and their limitations, and show how terrain editing and rendering methods can be used in the process of modeling caves.
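The slab idea generalizes a heightmap: instead of one terrain height per column, each column stores a sorted list of open (floor, ceiling) intervals, so overhangs and stacked passages become representable. A minimal sketch of local editing under that representation (the interval encoding and `carve` name are assumptions, not the paper's exact data structure):

```python
def carve(slabs, col, floor, ceiling):
    """Carve an empty interval [floor, ceiling] into one column of the
    cave. `slabs` maps a column id to sorted, disjoint (floor, ceiling)
    intervals; the new interval is merged with any it overlaps, which
    is how a local dig can join two passages into one."""
    intervals = sorted(slabs.get(col, []) + [(floor, ceiling)])
    merged = [intervals[0]]
    for lo, hi in intervals[1:]:
        if lo <= merged[-1][1]:                        # overlaps previous
            merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
        else:
            merged.append((lo, hi))
    slabs[col] = merged
    return merged
```

A plain heightmap is the special case where every column holds exactly one interval reaching the sky, which is why terrain editing and rendering methods carry over.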