The study of vascular structures using medical 3D models is an active field of research. Illustrative visualizations have been applied to this domain in multiple ways. Researchers made the geometric properties of vasculature more comprehensible and augmented the surface with representations of multivariate clinical data. Techniques that go beyond the application of colour‐maps or simple shading approaches require a surface parameterization, that is, texture coordinates, in order to overcome locality. When extracting 3D models, the computation of texture coordinates on the mesh is not always part of the data processing pipeline. We combine existing techniques into a simple parameterization approach that is suitable for tree‐like structures. The parameterization is done w.r.t. a pre‐defined source vertex, for which we present an automatic algorithm that detects the tree root. The parameterization is partly done in screen space and recomputed per frame; however, this screen‐space computation offers features that are not available in object‐space approaches. We show how the resulting texture coordinates can be used for varying hatching, contour parameterization, display of decals, additional depth cues and feature extraction. A further post‐processing step based on the parameterization allows for a segmentation of the structure and visualization of its tree topology.
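As a rough illustration of parameterizing a tree-like mesh w.r.t. a source vertex, a 1D texture coordinate can be derived from the shortest-path (geodesic) distance to the root, computed with Dijkstra's algorithm over the edge graph. This is only an object-space sketch under simplifying assumptions; the paper's actual pipeline also involves a per-frame screen-space recomputation, and `geodesic_tex_coords` is a hypothetical helper, not the authors' code.

```python
import heapq

def geodesic_tex_coords(vertices, edges, root):
    """Assign each vertex a 1D texture coordinate equal to its
    shortest-path distance to `root`, normalized to [0, 1]."""
    adj = {i: [] for i in range(len(vertices))}
    for a, b in edges:
        w = sum((pa - pb) ** 2 for pa, pb in zip(vertices[a], vertices[b])) ** 0.5
        adj[a].append((b, w))
        adj[b].append((a, w))
    dist = {root: 0.0}
    pq = [(0.0, root)]
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist.get(v, float("inf")):
            continue  # stale queue entry
        for u, w in adj[v]:
            if d + w < dist.get(u, float("inf")):
                dist[u] = d + w
                heapq.heappush(pq, (d + w, u))
    far = max(dist.values()) or 1.0  # avoid division by zero for a lone vertex
    return [dist.get(i, 0.0) / far for i in range(len(vertices))]
```

Feeding the resulting per-vertex scalar into a 1D texture lookup is what enables effects such as hatching density or decals that vary along the vessel tree.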
View-dependent effects such as reflections pose a substantial challenge for image-based and neural rendering algorithms. Curved reflectors are particularly hard, as they lead to highly non-linear reflection flows as the camera moves. We introduce a new point-based representation to compute Neural Point Catacaustics, allowing novel-view synthesis of scenes with curved reflectors from a set of casually captured input photos. At the core of our method is a neural warp field that models catacaustic trajectories of reflections, so complex specular effects can be rendered using efficient point splatting in conjunction with a neural renderer. One of our key contributions is the explicit representation of reflections with a reflection point cloud, which is displaced by the neural warp field, and a primary point cloud, which is optimized to represent the rest of the scene. After a short manual annotation step, our approach allows interactive high-quality rendering of novel views with accurate reflection flow. Additionally, the explicit representation of reflection flow supports several forms of scene manipulation in captured scenes, such as reflection editing, cloning of specular objects, reflection tracking across views, and comfortable stereo viewing. We provide the source code and other supplemental material at https://repo-sam.inria.fr/fungraph/neural_catacaustics/
Textured meshes are widely used in computer graphics to represent 3D scenes, with UV mapping playing a crucial role in establishing a bijective mapping between the 3D mesh surface and a 2D texture. This mapping not only allows for the enhancement of rendering quality but also enables the compression of mesh textures using standard 2D image or video codecs. However, when reconstructing meshes from real-world multiview images, the resulting UV texture maps often suffer from fragmentation due to geometric inaccuracies and excessive tessellation of the reconstructed surfaces, leading to decreased compression performance. In this paper, we propose a novel and effective preprocessing approach for UV texture map compression based on rate-rendering distortion (R-RD) optimization. Unlike existing methods that rely on padding or smoothing, our method iteratively updates the texture map using the gradient of a joint cost of bitrate and rendering distortion. This cost is estimated through a differentiable image encoder and differentiable texture sampling. Experimental results with losslessly compressed mesh geometry demonstrate that our preprocessing method outperforms existing texture padding methods, achieving BD-rate reductions of at least 10.23%, 15.24%, and 12.10% when combined with JPEG, HEVC, and VVC, respectively. We also validate the effectiveness of our method with lossy compressed meshes using Google Draco, showing improved compression efficiency compared to the lossless geometry scenario. Subjective evaluations further confirm that our method enhances both colour and structural continuity in the texture map by automatically eliminating high-frequency components unfavorable to compression. The paper provides comprehensive experiments and analyses, including rate estimation with different choices of differentiable image encoders, texture map distortion vs. rendering distortion, and complexity comparison with existing methods.
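The core idea of gradient-descending a joint rate-plus-distortion cost can be illustrated on a 1D toy "texture". In this sketch a total-variation term stands in for the bitrate estimate of a differentiable encoder, and plain MSE against the original texture stands in for rendering distortion; `preprocess_texture` and all constants are illustrative assumptions, not the paper's actual R-RD formulation.

```python
import numpy as np

def preprocess_texture(tex, lam=0.5, lr=0.05, steps=200):
    """Toy rate-distortion preprocessing: gradient descent on
    D(x) + lam * R(x), where D is MSE against the original texture
    (a stand-in for rendering distortion) and R is a total-variation
    rate proxy (a stand-in for a differentiable encoder's bitrate)."""
    ref = np.asarray(tex, dtype=float)
    x = ref.copy()
    for _ in range(steps):
        g_d = 2.0 * (x - ref)        # gradient of ||x - ref||^2
        s = np.sign(np.diff(x))      # subgradient of sum |x[i+1] - x[i]|
        g_r = np.zeros_like(x)
        g_r[:-1] -= s
        g_r[1:] += s
        x -= lr * (g_d + lam * g_r)
    return x
```

The fixed point trades a small, bounded deviation from the original texture for reduced high-frequency content, mirroring the paper's observation that smoothing components unfavorable to compression improves coding efficiency.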
Image‐Based Tree Variations
Argudo, Oscar; Andújar, Carlos; Chica, Antoni
Computer Graphics Forum, February 2020, Volume 39, Issue 1
Journal Article, Peer reviewed, Open access
The automatic generation of realistic vegetation closely reproducing the appearance of specific plant species is still a challenging topic in computer graphics. In this paper, we present a new approach to generate new tree models from a small collection of frontal RGBA images of trees. The new models are represented either as single billboards (suitable for still image generation in areas such as architecture rendering) or as billboard clouds (providing parallax effects in interactive applications). Key ingredients of our method include the synthesis of new contours through convex combinations of exemplar contours, the automatic segmentation into crown/trunk classes and the transfer of RGBA colour from the exemplar images to the synthetic target. We also describe a fully automatic approach to convert a single tree image into a billboard cloud by extracting superpixels and distributing them inside a silhouette‐defined 3D volume. Our algorithm allows for the automatic generation of an arbitrary number of tree variations from minimal input, and thus provides a fast solution to add vegetation variety in outdoor scenes.
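The contour-synthesis step can be sketched as a convex combination of exemplar contours, assuming each exemplar has been resampled to the same number of consistently ordered points. `blend_contours` is a hypothetical helper for illustration, not the authors' implementation.

```python
import numpy as np

def blend_contours(exemplars, weights):
    """Synthesize a new contour as a convex combination of exemplar
    contours. Assumes all exemplars share the same point count and a
    consistent ordering; weights are non-negative and sum to 1."""
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0.0) and abs(w.sum() - 1.0) < 1e-9
    stack = np.stack([np.asarray(c, dtype=float) for c in exemplars])  # (k, n, 2)
    return np.einsum("k,knd->nd", w, stack)
```

Varying the weight vector is what produces an arbitrary number of tree silhouettes from a small exemplar set.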
We propose a method for generating video-realistic animations of real humans under user control. In contrast to conventional human character rendering, we do not require the availability of a production-quality photo-realistic three-dimensional (3D) model of the human but instead rely on a video sequence in conjunction with a (medium-quality) controllable 3D template model of the person. With that, our approach significantly reduces production cost compared to conventional rendering approaches based on production-quality 3D models and can also be used to realistically edit existing videos. Technically, this is achieved by training a neural network that translates simple synthetic images of a human character into realistic imagery. For training our networks, we first track the 3D motion of the person in the video using the template model and subsequently generate a synthetically rendered version of the video. These images are then used to train a conditional generative adversarial network that translates synthetic images of the 3D model into realistic imagery of the human. We evaluate our method for the reenactment of another person that is tracked to obtain the motion data, and show video results generated from artist-designed skeleton motion. Our results outperform the state of the art in learning-based human image synthesis.
As many different 3D volumes could produce the same 2D x‐ray image, inverting this process is challenging. We show that recent deep learning‐based convolutional neural networks can solve this task. As the main challenge in learning is the sheer amount of data created when extending the 2D image into a 3D volume, we suggest to first learn a coarse, fixed‐resolution volume, which is then fused in a second step with the input x‐ray into a high‐resolution volume. To train and validate our approach, we introduce a new dataset that comprises close to half a million computer‐simulated 2D x‐ray images of 3D volumes scanned from 175 mammalian species. Future applications of our approach include stereoscopic rendering of legacy x‐ray images and re‐rendering of x‐rays with changes of illumination, view pose or geometry. Our evaluation includes comparison to previous tomography work, previous learning methods using our data, a user study and application to a set of real x‐rays.
Foveated rendering synthesizes images with progressively less detail outside the eye fixation region, potentially unlocking significant speedups for wide field-of-view displays, such as head-mounted displays, where target framerate and resolution are increasing faster than the performance of traditional real-time renderers.

To study and improve potential gains, we designed a foveated rendering user study to evaluate the perceptual abilities of human peripheral vision when viewing today's displays. We determined that filtering peripheral regions reduces contrast, inducing a sense of tunnel vision. When applying a postprocess contrast enhancement, subjects tolerated up to 2× larger blur radii before detecting differences from a non-foveated ground truth. After verifying these insights on both desktop and head-mounted displays augmented with high-speed gaze tracking, we designed a perceptual target image to strive for when engineering a production foveated renderer.

Given our perceptual target, we designed a practical foveated rendering system that reduces the number of shading operations by up to 70% and allows coarsened shading up to 30° closer to the fovea than Guenter et al. [2012] without introducing perceivable aliasing or blur. We filter both pre- and post-shading to address aliasing from undersampling in the periphery, introduce a novel multiresolution- and saccade-aware temporal antialiasing algorithm, and use contrast enhancement to help recover peripheral details that are resolvable by our eye but degraded by filtering.
We validate our system by performing another user study. Frequency analysis shows our system closely matches our perceptual target. Measurements of temporal stability show we obtain quality similar to temporally filtered non-foveated renderings.
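The eccentricity-dependent filtering described above can be caricatured as a simple blur-radius schedule, with the reported 2× tolerance under contrast enhancement modeled as a plain scale factor. All constants below are illustrative assumptions, not values from the paper.

```python
def blur_radius(ecc_deg, fovea_deg=5.0, slope=0.05, contrast_enhanced=False):
    """Toy blur schedule: no filtering inside the fovea, linearly growing
    blur radius (arbitrary units) with eccentricity outside it. The
    user-study finding that contrast enhancement lets viewers tolerate up
    to 2x the blur radius is modeled as a scale factor."""
    r = max(0.0, (ecc_deg - fovea_deg) * slope)
    return 2.0 * r if contrast_enhanced else r
```

A production system would derive such a schedule from measured contrast-sensitivity falloff rather than a fixed linear slope.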
Reconstructing objects with realistic materials from multi-view images is challenging, since the problem is highly ill-posed. Although neural reconstruction approaches have exhibited impressive reconstruction ability, they are designed for objects with specific materials (e.g., diffuse or specular materials). To this end, we propose a novel framework for robust geometry and material reconstruction, where the geometry is expressed with an implicit signed distance field (SDF) encoded by a tensorial representation, namely TensoSDF. At the core of our method is the roughness-aware incorporation of the radiance and reflectance fields, which enables a robust reconstruction of objects with arbitrary reflective materials. Furthermore, the tensorial representation enhances geometry details in the reconstructed surface and reduces the training time. Finally, we estimate the materials using an explicit mesh for efficient intersection computation and an implicit SDF for accurate representation. Consequently, our method achieves more robust geometry reconstruction, outperforms previous works in terms of relighting quality, and reduces training time by 50% and inference time by 70%. Codes and datasets are available at https://github.com/Riga2/TensoSDF.
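One way to picture a roughness-aware combination of radiance and reflectance fields is a per-point blend gated by roughness: rough surfaces lean on the radiance field, smooth specular surfaces on the reflectance field. The sigmoid gate, the constant `k`, and the function itself are assumptions for illustration only, not TensoSDF's actual weighting scheme.

```python
import math

def blended_color(roughness, radiance, reflectance, k=10.0):
    """Illustrative roughness-gated blend of two per-point colour
    predictions: w -> 1 for rough (diffuse-like) points, w -> 0 for
    smooth (specular) points."""
    w = 1.0 / (1.0 + math.exp(-k * (roughness - 0.5)))  # sigmoid gate
    return tuple(w * a + (1.0 - w) * b for a, b in zip(radiance, reflectance))
```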
Gradient-based optimization is now ubiquitous across graphics, but unfortunately cannot be applied to problems with undefined or zero gradients. To circumvent this issue, the loss function can be manually replaced by a "surrogate" that has similar minima but is differentiable. Our proposed framework, ZeroGrads, automates this process by learning a neural approximation of the objective function, which in turn can be used to differentiate through arbitrary black-box graphics pipelines. We train the surrogate on an actively smoothed version of the objective and encourage locality, focusing the surrogate's capacity on what matters at the current training episode. The fitting is performed online, alongside the parameter optimization, and self-supervised, without pre-computed data or pre-trained models. As sampling the objective is expensive (it requires a full rendering or simulator run), we devise an efficient sampling scheme that allows for tractable run-times and competitive performance at little overhead. We demonstrate optimizing diverse non-convex, non-differentiable black-box problems in graphics, such as visibility in rendering, discrete parameter spaces in procedural modelling or optimal control in physics-driven animation. In contrast to other derivative-free algorithms, our approach scales well to higher dimensions, which we demonstrate on problems with up to 35k interlinked variables.
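The surrogate idea can be sketched with a local least-squares fit standing in for the paper's neural surrogate: sample the black-box objective around the current parameters (which also smooths it), fit a smooth local model, and descend that model's gradient. `surrogate_step` and its constants are illustrative assumptions, not the ZeroGrads implementation.

```python
import numpy as np

def surrogate_step(f, x, sigma=0.1, n=64, lr=0.5, seed=0):
    """One step of surrogate-based optimization of a (possibly
    non-differentiable) black-box scalar objective f: sample around x,
    fit a local quadratic surrogate by least squares, and descend the
    surrogate's gradient at x."""
    rng = np.random.default_rng(seed)
    d = len(x)
    dx = sigma * rng.standard_normal((n, d))     # local Gaussian samples
    ys = np.array([f(x + p) for p in dx])        # expensive black-box calls
    A = np.hstack([np.ones((n, 1)), dx, dx ** 2])  # quadratic design matrix
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    grad = coef[1:1 + d]                         # surrogate gradient at x
    return x - lr * grad
```

Because only function values are needed, the same loop applies unchanged when `f` wraps a full renderer or simulator run.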
The efficiency of inverse optimization in physically based differentiable rendering heavily depends on the variance of Monte Carlo estimation. Despite recent advancements emphasizing the necessity of tailored differential sampling strategies, general approaches remain unexplored. In this paper, we investigate the interplay between local sampling decisions and the estimation of light path derivatives. Considering that modern differentiable rendering algorithms share the same path for estimating differential radiance and ordinary radiance, we demonstrate that conventional guiding approaches, conditioned solely on the last vertex, cannot attain the optimal sampling density. Instead, a mixture of different sampling distributions is required, where the weights are conditioned on all the previously sampled vertices in the path. To embody our theory, we implement conditional mixture path guiding that explicitly computes optimal weights on the fly. Furthermore, we show how to perform positivization to eliminate sign variance and extend the method to scenes with millions of parameters. To the best of our knowledge, this is the first generic framework for applying path guiding to differentiable rendering. Extensive experiments demonstrate that our method achieves nearly one order of magnitude improvement over state-of-the-art methods in terms of variance reduction in gradient estimation and errors of inverse optimization. The implementation of our proposed method is available at https://github.com/mollnn/conditional-mixture.
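Generic mixture importance sampling, the building block the paper conditions on path prefixes, can be sketched as follows; the conditional, on-the-fly weight computation and positivization of the actual method are not modeled here, and `mixture_estimate` is an illustrative helper.

```python
import numpy as np

def mixture_estimate(f, pdfs, samplers, weights, n=20000, seed=0):
    """Monte Carlo estimate of the integral of f using a weighted mixture
    of sampling distributions: pick a component by its weight, draw a
    sample from it, and divide by the full mixture pdf so the estimator
    stays unbiased."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n):
        i = rng.choice(len(weights), p=weights)
        x = samplers[i](rng)
        pdf = sum(w * p(x) for w, p in zip(weights, pdfs))
        total += f(x) / pdf
    return total / n
```

In the paper's setting the weights would additionally depend on the previously sampled path vertices rather than being fixed constants.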