This paper proposes to blindly evaluate the quality of images synthesized via a depth image-based rendering (DIBR) procedure. As a significant branch of virtual reality (VR), superior DIBR techniques provide free viewpoints in many real applications, including remote surveillance and education; however, limited effort has been made to measure the performance of DIBR techniques, or equivalently the quality of DIBR-synthesized views, especially when references are unavailable. To this end, we develop a novel blind image quality assessment (IQA) method via multiscale natural scene statistical analysis (MNSS). The design of our proposed MNSS metric is based on two new natural scene statistics (NSS) models specific to DIBR-synthesized IQA. First, the DIBR-introduced geometric distortions damage the local self-similarity characteristic of natural images, and the degree of this damage varies in a particular way across scales. Systematically combining measurements of these variations gauges the naturalness of the input image and thus indirectly reflects the quality changes of images generated using different DIBR methods. Second, we found that the degradations in the main structures of natural images remain almost the same across scales, whereas this statistical regularity is destroyed in DIBR-synthesized views. Estimating the deviation between the multiscale main-structure degradations of a DIBR-synthesized image and a statistical model constructed from a large number of natural images quantifies how a DIBR method damages the main structures and thus infers the image quality. Experiments show that the two NSS-based features extracted above predict the quality of DIBR-synthesized images well.
Further, since the two features capture distinct points of view, we integrate them via straightforward multiplication to derive the proposed blind MNSS metric, which achieves better performance than either component alone and than state-of-the-art quality methods.
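The multiplicative fusion described above can be sketched as follows. This is an illustrative stand-in, not the paper's actual NSS models: `self_similarity_feature` is a toy multiscale measure (correlation with a box-downsampled-then-upsampled copy), and the structure-deviation feature is a fixed placeholder value.

```python
import numpy as np

rng = np.random.default_rng(0)

def self_similarity_feature(img, scales=(2, 4)):
    """Toy multiscale self-similarity: correlation between the image and a
    box-downsampled-then-upsampled copy at each scale (illustrative only)."""
    vals = []
    for s in scales:
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        x = img[:h, :w]
        coarse = x.reshape(h // s, s, w // s, s).mean(axis=(1, 3))
        up = np.repeat(np.repeat(coarse, s, axis=0), s, axis=1)
        vals.append(np.corrcoef(x.ravel(), up.ravel())[0, 1])
    return float(np.mean(vals))

def mnss_score(f_selfsim, f_structure):
    # The abstract states the two features are fused by simple multiplication.
    return f_selfsim * f_structure

natural = rng.random((32, 32))
f1 = self_similarity_feature(natural)   # correlation-based, lies in [-1, 1]
f2 = 0.8                                # stand-in structure-deviation feature
score = mnss_score(f1, f2)
```

The point of the sketch is only the final line: two independently derived quality features combined by a plain product, with no learned weighting.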
An Overview of Digital Video Watermarking Asikuzzaman, Md; Pickering, Mark R.
IEEE Transactions on Circuits and Systems for Video Technology, 09/2018, Volume 28, Issue 9
Journal Article
Peer reviewed
The illegal distribution of a digital movie is a common and significant threat to the film industry. With the advent of high-speed broadband Internet access, a pirated copy of a digital video can now be easily distributed to a global audience. A possible means of limiting this type of digital theft is digital video watermarking, whereby additional information, called a watermark, is embedded in the host video. This watermark can be extracted at the decoder and used to determine whether the video content is watermarked. This paper presents a review of digital video watermarking techniques, discussing their applications, challenges, and important properties, and categorizes them based on the domain in which they embed the watermark. It then provides an overview of a few emerging innovative solutions using watermarks. Protecting a 3D video by watermarking is an emerging area of research. The relevant 3D video watermarking techniques in the literature are classified based on the image-based representations of a 3D video into stereoscopic, depth-image-based rendering, and multi-view video watermarking. We discuss each technique and then survey the related literature. Finally, we provide a summary of this paper and propose some future research directions.
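The embed-then-extract concept the survey describes can be sketched with the simplest possible scheme: hiding one watermark bit per pixel in the least-significant bit. This is purely illustrative; the techniques the paper reviews are far more robust (transform-domain, temporal, depth-based) than this toy.

```python
import numpy as np

def embed(frame, bits):
    # Overwrite each pixel's least-significant bit with a watermark bit,
    # changing the pixel value by at most 1.
    return (frame & 0xFE) | bits

def extract(frame):
    # The decoder reads the LSB back to recover the watermark.
    return frame & 1

host = np.array([[120, 37], [254, 9]], dtype=np.uint8)
bits = np.array([[1, 0], [1, 1]], dtype=np.uint8)
marked = embed(host, bits)
recovered = extract(marked)
```

Fragile as it is (any recompression destroys the LSBs), the example shows the core contract: the embedder perturbs the host imperceptibly, and the extractor recovers the payload without the original.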
Scene representation networks (SRNs) have been recently proposed for compression and visualization of scientific data. However, state-of-the-art SRNs do not adapt the allocation of available network parameters to the complex features found in scientific data, leading to a loss in reconstruction quality. We address this shortcoming with an adaptively placed multi-grid SRN (APMGSRN) and propose a domain decomposition training and inference technique for accelerated parallel training on multi-GPU systems. We also release an open-source neural volume rendering application that allows plug-and-play rendering with any PyTorch-based SRN. Our proposed APMGSRN architecture uses multiple spatially adaptive feature grids that learn where to be placed within the domain to dynamically allocate more neural network resources where error is high in the volume, improving the state-of-the-art reconstruction accuracy of SRNs for scientific data without requiring the expensive octree refining, pruning, and traversal of previous adaptive models. In our domain decomposition approach for representing large-scale data, we train a set of APMGSRNs in parallel on separate bricks of the volume to reduce training time while avoiding the overhead of an out-of-core solution for volumes too large to fit in GPU memory. After training, the lightweight SRNs are used for real-time neural volume rendering in our open-source renderer, where arbitrary view angles and transfer functions can be explored. A copy of this paper, all code, all models used in our experiments, and all supplemental materials and videos are available at https://github.com/skywolf829/APMGSRN.
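The domain-decomposition idea can be sketched as follows: split the volume into bricks and fit a fully independent model per brick, so each fits in memory and all can train in parallel. The per-brick "model" here is a trivial constant predictor standing in for an APMGSRN; brick size and shapes are made-up.

```python
import numpy as np

def split_bricks(vol, b):
    # Partition a 3D volume into non-overlapping b*b*b bricks,
    # keyed by brick index.
    nz, ny, nx = (s // b for s in vol.shape)
    return {(i, j, k): vol[i*b:(i+1)*b, j*b:(j+1)*b, k*b:(k+1)*b]
            for i in range(nz) for j in range(ny) for k in range(nx)}

vol = np.arange(4 * 4 * 4, dtype=np.float32).reshape(4, 4, 4)
bricks = split_bricks(vol, 2)

# "Train" each brick model independently -- a constant (mean) predictor here;
# the paper trains one small neural network per brick instead.
models = {key: brick.mean() for key, brick in bricks.items()}

# Inference: each brick is reconstructed from its own model only.
recon = np.empty_like(vol)
for (i, j, k), m in models.items():
    recon[i*2:(i+1)*2, j*2:(j+1)*2, k*2:(k+1)*2] = m
mse = float(((vol - recon) ** 2).mean())
```

Because bricks share no parameters, training parallelizes trivially across GPUs and no out-of-core paging of a single global model is needed.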
Smoothed-particle hydrodynamics (SPH) is a mesh-free method used to simulate volumetric media in fluids, astrophysics, and solid mechanics. Visualizing these simulations is problematic because these datasets often contain millions, if not billions, of particles carrying physical attributes and moving over time. Radial basis functions (RBFs) are used to model particles, and overlapping particles are interpolated to reconstruct a high-quality volumetric field; however, this interpolation process is expensive and makes interactive visualization difficult. Existing RBF interpolation schemes do not account for color-mapped attributes and are instead constrained to visualizing just the density field. To address these challenges, we exploit ray tracing cores in modern GPU architectures to accelerate scalar field reconstruction. We use a novel RBF interpolation scheme to integrate per-particle colors and densities, and leverage GPU-parallel tree construction and refitting to quickly update the tree as the simulation animates over time or when the user manipulates particle radii. We also propose a Hilbert reordering scheme to cluster particles together at the leaves of the tree to reduce tree memory consumption. Finally, we reduce the noise of volumetric shadows by adopting a spatiotemporal blue noise sampling scheme. Our method can provide a more detailed and interactive view of these large, volumetric, time-series particle datasets than traditional methods, leading to new insights into these physics simulations.
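The reconstruction step being accelerated can be sketched as follows: each particle contributes a Gaussian kernel at a query point, and per-particle colors are blended by density-weighted interpolation. This is a hedged sketch; the paper prunes the all-particles loop with RT-core traversal, and the exact kernel and weighting are assumptions here.

```python
import numpy as np

def gaussian(r, h):
    # Radial basis function: Gaussian kernel with per-particle radius h.
    return np.exp(-(r / h) ** 2)

def reconstruct(query, pos, radii, colors):
    r = np.linalg.norm(pos - query, axis=1)
    w = gaussian(r, radii)                 # per-particle kernel weights
    density = w.sum()                      # scalar density field value
    # Density-weighted blend of per-particle color attributes.
    color = (w[:, None] * colors).sum(axis=0) / max(density, 1e-12)
    return density, color

pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
radii = np.array([0.5, 0.5])
colors = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])   # red and blue particles
density, color = reconstruct(np.array([0.0, 0.0, 0.0]), pos, radii, colors)
```

Querying at the red particle's center yields a color dominated by red, since the distant blue particle's Gaussian weight is tiny; the expensive part in practice is finding which particles overlap each query point, which is what the RT cores accelerate.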
We present NeFF, a 3D neural scene representation estimated from captured images. Neural radiance fields (NeRF) have demonstrated excellent performance for image-based photo-realistic free-viewpoint rendering. However, one limitation of current NeRF-based methods is the shape-radiance ambiguity: without any regularization, there may be an incorrect shape that explains the training set very well but generalizes poorly to novel views. This degeneration becomes particularly evident when fewer input views are provided. We propose an explicit regularization to avoid the ambiguity by introducing neural feature fields, which map spatial locations to view-independent features. We synthesize feature maps by projecting the feature fields into images using volume rendering, as NeRF does, and obtain an auxiliary loss that encourages correct, view-independent geometry. Experimental results demonstrate that our method is more robust when dealing with sparse input views.
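The projection step can be sketched with the standard volume-rendering compositing that NeRF uses for color, applied to per-sample feature vectors instead: weights are w_i = T_i * (1 - exp(-sigma_i * delta_i)), with T_i the accumulated transmittance. The densities and features below are made-up toy values.

```python
import numpy as np

def composite(features, sigma, delta):
    # Standard volume-rendering quadrature along one ray:
    # alpha_i from density and step size, T_i from accumulated transparency.
    alpha = 1.0 - np.exp(-sigma * delta)
    T = np.concatenate([[1.0], np.cumprod(1.0 - alpha)[:-1]])
    w = T * alpha
    # Composite per-sample feature vectors exactly as NeRF composites color.
    return (w[:, None] * features).sum(axis=0), w

features = np.array([[1.0, 0.0],    # per-sample view-independent features
                     [0.0, 1.0],
                     [0.5, 0.5]])
sigma = np.array([0.0, 5.0, 5.0])   # first sample is empty space
delta = np.full(3, 0.5)             # step size between samples
feat, w = composite(features, sigma, delta)
```

Because the features are view-independent by construction, supervising the composited feature maps penalizes geometry (the weights w) that only explains the training views, which is the regularization effect the abstract describes.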
Voxel-based segmentation volumes often store a large number of labels and voxels, and the resulting amount of data can make storage, transfer, and interactive visualization difficult. We present a lossless compression technique which addresses these challenges. It processes individual small bricks of a segmentation volume and compactly encodes the labelled regions and their boundaries by an iterative refinement scheme. The result for each brick is a list of labels and a sequence of operations to reconstruct the brick, which is further compressed using rANS entropy coding. As the relative frequencies of operations are very similar across bricks, the entropy coding can use global frequency tables for an entire data set, which enables efficient and effective parallel (de)compression. Our technique achieves high throughput (up to gigabytes per second for both compression and decompression) and strong compression ratios of about 1% to 3% of the original data set size while being applicable to GPU-based rendering. We evaluate our method on data sets from different fields and demonstrate GPU-based volume visualization with on-the-fly decompression, level-of-detail rendering (with optional on-demand streaming of detail coefficients to the GPU), and a caching strategy for decompressed bricks for further performance improvement.
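The global-frequency-table argument can be illustrated by comparing ideal (Shannon) code lengths: when per-brick symbol statistics are near-identical, coding every brick against one shared table costs barely more than optimal per-brick tables, while removing per-brick table overhead. The symbol streams below are made-up; the paper uses rANS, which approaches these bounds.

```python
import math
from collections import Counter

def code_length(stream, freqs):
    # Ideal entropy-coded size in bits for a stream under a frequency table.
    total = sum(freqs.values())
    return sum(-math.log2(freqs[s] / total) for s in stream)

# Three bricks whose operation streams have very similar symbol statistics.
bricks = [list("AAABBC"), list("AABBBC"), list("AAABCC")]

global_freq = Counter(s for b in bricks for s in b)
global_bits = sum(code_length(b, global_freq) for b in bricks)
local_bits = sum(code_length(b, Counter(b)) for b in bricks)

overhead = global_bits / local_bits   # close to 1 when statistics match
```

Per-brick tables are always at least as good in pure code length (they are the maximum-likelihood model for each brick), but the ratio stays near 1 here, and the shared table is what lets many bricks be (de)compressed in parallel without exchanging per-brick metadata.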
A depth image-based rendering (DIBR) approach with advanced inpainting methods is presented. The DIBR algorithm can be used in 3-D video applications to synthesize a number of different perspectives of the same scene, e.g., from a multiview-video-plus-depth (MVD) representation. This MVD format consists of video and depth sequences for a limited number of original camera views of the same natural scene. Here, DIBR methods allow the computation of additional new views. An inherent problem of the view synthesis concept is that image information which is occluded in the original views may become visible, especially in extrapolated views beyond the viewing range of the original cameras. The presented algorithm synthesizes these occluded textures. The synthesizer achieves visually satisfying results by taking spatial and temporal consistency measures into account. Detailed experiments show significant objective and subjective gains of the proposed method in comparison to state-of-the-art methods.
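The disocclusion problem motivating the inpainting can be sketched with a toy one-row forward warp: each pixel shifts horizontally by a depth-derived disparity, and target pixels that no source pixel maps to become holes that a synthesis method must fill. Disparity values and the hole marker are illustrative assumptions.

```python
import numpy as np

def warp_row(colors, disparity):
    # Forward-warp one scanline into the virtual view; -1 marks a
    # disoccluded pixel that needs to be synthesized (inpainted).
    out = np.full_like(colors, -1)
    for x in range(len(colors)):
        nx = x + disparity[x]
        if 0 <= nx < len(colors):
            out[nx] = colors[x]
    return out

colors = np.array([10, 20, 30, 40, 50])
disparity = np.array([0, 0, 2, 2, 0])   # foreground pixels shift further
virtual = warp_row(colors, disparity)
holes = int((virtual == -1).sum())
```

The pixels behind the shifted foreground were never seen by the original camera, which is why the paper fills them with spatially and temporally consistent synthesized texture rather than simple interpolation.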
In this work, we introduce a novel algorithm for transient rendering in participating media. Our method is consistent and robust, and is able to generate animations of time-resolved light transport featuring complex caustic light paths in media. We base our method on the observation that spatial continuity provides increased coverage of the temporal domain, and generalize photon beams to the transient state. We extend steady-state photon-beam radiance estimates to include the temporal domain. Then, we develop a progressive variant of our approach that provably converges to the correct solution using finite memory by averaging independent realizations of the estimates with progressively reduced kernel bandwidths. We derive the optimal convergence rates accounting for space and time kernels, and demonstrate our method against previous consistent transient rendering methods for participating media.
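The progressive mechanism can be illustrated on a 1D toy problem: average independent kernel estimates whose bandwidths shrink as r_i = r_1 * i^(-alpha). Shrinking the kernel drives bias to zero while averaging keeps variance bounded, which is the finite-memory convergence argument; the target density, schedule constants, and box kernel below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
true_density = 1.0          # density of Uniform(0, 1) at x = 0.5
alpha, r1, n = 1 / 3, 0.2, 2000

estimates = []
for i in range(1, 51):
    r = r1 * i ** (-alpha)                  # progressively reduced bandwidth
    x = rng.random(n)                       # fresh independent realization
    # Box-kernel density estimate at 0.5 with radius r.
    est = np.count_nonzero(np.abs(x - 0.5) < r) / (2 * r * n)
    estimates.append(est)

avg = float(np.mean(estimates))             # progressive (averaged) estimate
```

Each iteration discards its samples after contributing one number to the running average, so memory stays constant no matter how many iterations are run, mirroring the paper's progressive radiance estimator.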
Automatic Mesh and Shader Level of Detail Liang, Yuzhi; Song, Qi; Wang, Rui; et al.
IEEE Transactions on Visualization and Computer Graphics, 2023-10-01, Volume 29, Issue 10
Journal Article
Peer reviewed
The level of detail (LOD) technique has been widely exploited as a key rendering optimization in many graphics applications. Numerous approaches have been proposed to automatically generate different kinds of LODs, such as geometric LODs or shader LODs. However, none of them consider simplifying the geometry and the shader at the same time. In this paper, we explore the observation that simplifications of geometric and shading detail can be combined to provide a greater variety of tradeoffs between performance and quality. We present a new discrete multiresolution representation of objects, consisting of mesh and shader LODs, in which each level can contain simplified representations of both the mesh and the shader. To create such LODs, we propose two automatic algorithms that pursue the best simplifications of meshes and shaders at adaptively selected distances. The results show that our mesh and shader LODs achieve better performance-quality tradeoffs than prior LOD representations, such as those that only consider simplified meshes or shaders.
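The joint tradeoff can be sketched as a small search: at each viewing distance, pick the cheapest (mesh LOD, shader LOD) pair whose combined error stays under a budget. The cost and error numbers below are made-up illustrative values; the paper derives its choices from measured performance and image quality rather than a fixed table.

```python
# (cost_ms, error) per LOD level, level 0 = full detail. Illustrative values.
mesh_lods = [(10.0, 0.00), (6.0, 0.02), (3.0, 0.08)]
shader_lods = [(8.0, 0.00), (5.0, 0.03), (2.0, 0.10)]

def select(budget):
    # Cheapest mesh/shader pair whose combined error fits the budget.
    best = None
    for mc, me in mesh_lods:
        for sc, se in shader_lods:
            if me + se <= budget:
                cand = (mc + sc, mc, sc)
                if best is None or cand < best:
                    best = cand
    return best

near = select(0.02)   # strict error budget close to the camera
far = select(0.15)    # looser budget far away allows a cheaper pair
```

The point of the joint search is visible in the result: at the loose budget, the best pair mixes a simplified mesh with a mid-level shader, an option that mesh-only or shader-only LOD schemes cannot express.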
Haptic rendering approaches have been developed for decades, and many factors that affect stability when rendering rigid-body interactions have been investigated. To gain an overall understanding of the challenges in haptic rendering, we approach this topic through a systematic review. This review examines different haptic rendering approaches and how instability factors in rendering are handled. A total of 25 papers are reviewed to answer the following questions: (1) what are the most common haptic rendering approaches for rigid-body interaction? and (2) what are the most important factors behind the instability of haptic rendering, and how can they be addressed? Through investigating these questions, we gain the insight that transparency can be further explored and that the technical terms used to describe haptic rendering can be more standardized to push the topic forward.