•Removal of artifacts on silhouettes in splat-based models.
•Efficient silhouette detection of splat-based models in object space.
•Silhouette detection can be used for non-photorealistic renderings.
•Good trade-off between quality and efficiency of splat-based rendering.
Surface splatting has proven to be a good approach for rendering point-based models. The technique keeps the simplicity of point-based models while achieving high-quality rendering through local filtering between the samples. However, silhouettes and sharp features appear as high-frequency variations in image space, so filtering alone cannot prevent artifacts from appearing near these areas, especially under poor sampling conditions such as low sample density. In this paper, we present a new curved-splat approach, called here the quadric splat, which improves the rendering of a model near silhouettes and sharp features. These quadric splats may be placed on a surface by any sampling process that applies our proposed error metric. We also propose a simple method to detect splats near silhouettes in object space, which allows efficient silhouette detection on GPUs. Our silhouette detection was applied in a surface splatting pipeline so that quadric splats are rendered only where their effect is most noticeable, but the technique is independent and can be applied even to non-photorealistic rendering of splat-based models.
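The abstract leaves the object-space test itself unstated; the sketch below shows the standard criterion such a detector can build on, flagging a splat as a silhouette candidate when its normal is nearly perpendicular to the viewing direction. The function name silhouette_splats and the threshold eps are illustrative, not the paper's notation.

    import numpy as np

    def silhouette_splats(centers, normals, eye, eps=0.15):
        # Normalized view direction from the eye to each splat center.
        v = centers - eye
        v = v / np.linalg.norm(v, axis=1, keepdims=True)
        # A splat straddles the silhouette when its normal is nearly
        # perpendicular to the view direction, i.e. |n . v| is small.
        n_dot_v = np.abs(np.einsum('ij,ij->i', normals, v))
        return n_dot_v < eps

    # Usage: a boolean mask over the model's splats for the current eye point.
    # flagged = silhouette_splats(centers, normals, eye=np.zeros(3))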
•Out-of-core algorithms allow interactive visualization of massive 3D point clouds
•Web-based rendering concepts eliminate the need to store whole data sets locally
•Thin- and thick-client rendering provides scaling for varying computing capabilities
•A modular pipeline concept allows atomic processing steps to be combined efficiently
•Specialized rendering techniques enable task- and data-specific filtering
3D point cloud technology facilitates the automated and highly detailed acquisition of real-world environments such as assets, sites, and countries. We present a web-based system for the interactive exploration and inspection of arbitrarily large 3D point clouds. Our approach can render 3D point clouds with billions of points using spatial data structures and level-of-detail representations. Point-based rendering techniques and post-processing effects are provided to enable task-specific and data-specific filtering, e.g., based on semantics. A set of interaction techniques allows users to work with the data collaboratively (e.g., measuring distances and annotating). Additional value comes from the system’s ability to display context-providing geodata alongside the 3D point clouds and to integrate processing and analysis operations. We have evaluated the presented techniques in case studies with data sets from aerial, mobile, and terrestrial acquisition of up to 120 billion points to show their practicality and feasibility.
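As a rough illustration of how billions of points can be rendered through a level-of-detail hierarchy, the sketch below selects nodes for one frame under a fixed point budget, refining where the projected screen-space error is largest. The node attributes (num_points, children) and the additive, Potree-style refinement are assumptions, not details given in the abstract.

    import heapq
    from itertools import count

    def select_nodes(root, screen_error, budget):
        # Visit nodes in order of decreasing projected screen-space error
        # and keep them until the point budget for this frame is spent.
        tie = count()  # tie-breaker so the heap never compares node objects
        heap = [(-screen_error(root), next(tie), root)]
        selected, used = [], 0
        while heap:
            _, _, node = heapq.heappop(heap)
            if used + node.num_points > budget:
                break  # budget exhausted; coarser levels stay as they are
            selected.append(node)
            used += node.num_points
            for child in node.children:
                heapq.heappush(heap, (-screen_error(child), next(tie), child))
        return selected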
Illumination effects in translucent materials are a combination of several physical phenomena: refraction at the surface, and absorption and scattering inside the material. Because refraction can focus light deep inside the material, where it is then scattered, practical illumination simulation inside translucent materials is difficult. In this paper, we present a Point-Based Global Illumination method for light transport in homogeneous translucent materials with refractive boundaries. We start by placing light samples inside the translucent material and organizing them into a spatial hierarchy. At rendering time, we gather light from these samples for each camera ray, computing the sample contributions separately for single, double, and multiple scattering and summing them. We present two implementations of our algorithm: an offline version for high-quality rendering and an interactive GPU implementation. The offline version provides significant speed-ups and a reduced memory footprint compared to state-of-the-art algorithms, with no visible impact on quality. The GPU version yields interactive frame rates: 30 fps when moving the viewpoint and 25 fps when editing the light position or the material parameters.
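The gathering step can be pictured as the classic point-based global illumination cut through the sample hierarchy: a cluster is treated as a single emitter once it subtends a small solid angle from the shading point. The code below is a minimal sketch under that assumption; the node fields and the inverse-square falloff stand in for the paper's single, double, and multiple scattering terms.

    import numpy as np

    def gather(node, x, max_solid_angle=0.05):
        # Approximate solid angle the cluster subtends from shading point x.
        d2 = float(np.sum((node.center - x) ** 2))
        subtended = node.radius ** 2 / max(d2, 1e-12)
        # Use the cluster as one emitter when far enough away,
        # otherwise descend into its children.
        if not node.children or subtended < max_solid_angle:
            return node.power / (4.0 * np.pi * max(d2, 1e-12))
        return sum(gather(c, x, max_solid_angle) for c in node.children)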
Current state-of-the-art point rendering techniques such as splat rendering generally require very high-resolution point clouds to create high-quality photorealistic renderings. Such point clouds can be very time-consuming to acquire and often also require expensive high-end scanners. This paper proposes a novel deep-learning-based approach that can generate high-resolution photorealistic point renderings from low-resolution point clouds. More specifically, we propose to use co-registered high-quality photographs as the ground-truth data to train the deep neural network for point-based rendering. The proposed method generates high-quality point-rendering images very efficiently and can be used for interactive navigation of large-scale 3D scenes as well as for image-based localization. Extensive quantitative evaluations on both synthetic and real datasets show that the proposed method outperforms state-of-the-art methods.
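The training setup described, low-resolution point rasterizations regressed toward co-registered photographs, can be sketched as a standard supervised loop. The tiny convolutional network, the 4-channel input (RGB plus depth), and the L1 loss below are placeholders; the abstract does not specify the actual architecture or loss.

    import torch
    import torch.nn as nn

    # Placeholder refinement net: 4-channel point rasterization -> RGB image.
    net = nn.Sequential(
        nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 3, 3, padding=1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-4)

    # Stand-ins for one co-registered (rasterized points, photograph) pair.
    point_raster = torch.rand(1, 4, 64, 64)
    photo = torch.rand(1, 3, 64, 64)

    for step in range(100):
        pred = net(point_raster)            # densify the sparse rendering
        loss = (pred - photo).abs().mean()  # L1 against the ground truth
        opt.zero_grad()
        loss.backward()
        opt.step()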
We present an efficient technique for out-of-core multi-resolution construction and high-quality interactive visualization of massive point clouds. Our approach introduces a novel hierarchical level-of-detail (LOD) organization based on multi-way kd-trees, which simplifies memory management and allows control over the LOD-tree height. The LOD tree, constructed bottom-up using a fast high-quality point simplification method, is fully balanced and consists of uniformly sized nodes. To this end, we introduce and analyze three efficient point simplification approaches that yield a desired number of high-quality output points. For constant rendering performance, we propose an efficient rendering-on-a-budget method with asynchronous data loading, which delivers fully continuous high-quality rendering through LOD geo-morphing and deferred blending. Our algorithm is incorporated in a full end-to-end rendering system, which supports both local rendering and cluster-parallel distributed rendering. The method is evaluated on complex models made of hundreds of millions of point samples.
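To make the multi-way kd-tree idea concrete, the sketch below splits each node's points into a fixed number of equally sized slabs along the longest bounding-box axis, so the tree comes out fully balanced with uniformly sized nodes. Note that the paper constructs its hierarchy bottom-up with a high-quality simplification; the top-down recursion and the random subsampling here are simplifying stand-ins.

    import numpy as np

    def build(points, fanout=4, node_size=1024):
        # Leaves keep their points verbatim.
        if len(points) <= node_size:
            return {'points': points, 'children': []}
        # Split into `fanout` equally sized slabs along the longest axis,
        # which keeps the tree fully balanced.
        axis = int(np.argmax(points.max(axis=0) - points.min(axis=0)))
        order = np.argsort(points[:, axis])
        slabs = np.array_split(points[order], fanout)
        children = [build(s, fanout, node_size) for s in slabs]
        # Crude stand-in for the paper's point simplification: each inner
        # node keeps a uniformly sized random subset as its LOD.
        keep = np.random.choice(len(points), node_size, replace=False)
        return {'points': points[keep], 'children': children}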
Many isosurface rendering methods identify the active grid cells in scalar volume data while extracting another representation of the data for rendering, but the use of the grid cells themselves as rendering primitives has not been extensively explored in the literature. In this paper, we propose a cluster-based data structure that stores the data of active grid cells for fast cell rasterisation via billboard splatting. In contrast to previous cell rasterisation approaches, the eight corner scalar values are stored with each active grid cell, so the full volume data is not required during rendering. The grid cells can be extracted quickly and use about 37 percent of the memory of a typical efficient mesh-based representation, while supporting large grid sizes. We present further improvements such as a visibility buffer for cluster culling and EWA-based interpolation of attributes such as normals. We also show that our data structure can be used for hybrid ray tracing or path tracing to compute global illumination.
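A possible layout for such a cell-based structure is sketched below: one compact record per active cell holding its grid coordinates and the eight corner scalar values, extracted in one vectorized pass. The field names and record layout are illustrative; the paper's clustering of cells for culling is not reproduced here.

    import numpy as np

    # One record per active cell: grid coordinates plus the eight corner
    # scalars, so rasterisation never has to touch the full volume.
    cell_dtype = np.dtype([('ijk', np.uint32, 3), ('corners', np.float32, 8)])

    def extract_active_cells(volume, iso):
        v = volume
        # Gather the 8 corner values of every cell: shape (X-1, Y-1, Z-1, 8).
        corners = np.stack([
            v[:-1, :-1, :-1], v[1:, :-1, :-1], v[:-1, 1:, :-1], v[1:, 1:, :-1],
            v[:-1, :-1, 1:],  v[1:, :-1, 1:],  v[:-1, 1:, 1:],  v[1:, 1:, 1:],
        ], axis=-1)
        # A cell is active when its value range spans the isovalue.
        active = (corners.min(-1) <= iso) & (corners.max(-1) >= iso)
        cells = np.empty(np.count_nonzero(active), dtype=cell_dtype)
        cells['ijk'] = np.argwhere(active)
        cells['corners'] = corners[active]
        return cells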
A study of the video-plus-depth representation for multi-view video sequences is presented. Such a 3D representation enables functionalities like 3D television and free-viewpoint video. Compression is based on algorithms for multi-view video coding, which exploit statistical dependencies from both temporal and inter-view reference pictures for the prediction of both color and depth data. The coding efficiency of prediction structures with and without inter-view reference pictures is analyzed for multi-view video-plus-depth data, reporting gains in luma PSNR of up to 0.5 dB for depth and 0.3 dB for color. The main benefit of a multi-view video-plus-depth representation is that intermediate views can be rendered easily. Therefore, the impact of compression on the image quality of rendered arbitrary intermediate views is investigated in a second part, comparing multi-view video-plus-depth data compressed at different bit rates with the uncompressed original.
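Intermediate-view rendering from video plus depth reduces to depth-image-based warping: back-project a reference pixel with its depth into 3D, then project it into the intermediate camera. The sketch below shows this for a single pixel using standard pinhole-camera notation (K, R, t), which is an assumption rather than the paper's formulation.

    import numpy as np

    def warp_pixel(u, v, depth, K_ref, K_tgt, R, t):
        # Back-project the reference pixel with its depth into 3D...
        p = depth * (np.linalg.inv(K_ref) @ np.array([u, v, 1.0]))
        # ...then project into the intermediate (target) camera.
        q = K_tgt @ (R @ p + t)
        return q[:2] / q[2]  # pixel position in the intermediate view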
With the enormous advances in acquisition technology over the last years, fast processing and high-quality visualization of large point clouds have gained increasing attention. Commonly, a mesh surface is reconstructed from the point cloud, and a high-resolution texture is generated over the mesh from the images taken at the site to represent the surface materials. However, this global reconstruction and texturing approach becomes impractical with increasing data sizes. Recently, owing to its potential for scalability and extensibility, a method has been proposed that textures a set of depth maps in a preprocessing step and stitches them at runtime to represent large scenes. However, the rendering performance of this method depends strongly on the number of depth maps and their resolution. Moreover, for the proposed scene representation, every single depth map has to be textured by the images, which in practice heavily increases processing costs. In this paper, we present a novel method that breaks these dependencies by introducing an efficient ray tracing of multiple depth maps. In a preprocessing phase, we first generate high-resolution textured depth maps by rendering the input points from the image cameras and then perform a graph-cut-based optimization to assign a small subset of these points to the images. At runtime, we use the resulting point-to-image assignments (1) to identify, for each view ray, which depth map contains the closest ray-surface intersection and (2) to efficiently compute this intersection point. The resulting algorithm accelerates both the texturing and the rendering of the depth maps by an order of magnitude.
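The core operation, intersecting a view ray with the surface encoded in a depth map, can be illustrated with plain fixed-step marching: project each sample along the ray into the map and stop once the ray passes behind the stored depth. The paper's contribution is to avoid searching across maps by knowing the right one in advance; the step size and names below are illustrative.

    import numpy as np

    def intersect_depth_map(origin, direction, depth, K, step=0.01, t_max=10.0):
        # March along the ray, expressed in the depth map's camera frame.
        t = step
        while t < t_max:
            p = origin + t * direction
            if p[2] > 0.0:  # only points in front of the camera project
                q = K @ p
                u, v = int(q[0] / q[2]), int(q[1] / q[2])
                if 0 <= v < depth.shape[0] and 0 <= u < depth.shape[1]:
                    if p[2] >= depth[v, u]:  # ray passed behind the surface
                        return p  # first hit, up to the step size
            t += step
        return None  # ray leaves the map without hitting the surface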