Monte Carlo techniques for light transport simulation rely on importance sampling when constructing light transport paths. Previous work has shown that suitable sampling distributions can be recovered from particles distributed in the scene prior to rendering. We propose to represent the distributions by a parametric mixture model trained in an on-line (i.e. progressive) manner from a potentially infinite stream of particles. This enables recovering good sampling distributions in scenes with complex lighting, where the necessary number of particles may exceed available memory. Using these distributions for sampling scattering directions and light emission significantly improves the performance of state-of-the-art light transport simulation algorithms when dealing with complex lighting.
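As a concrete illustration of the training procedure, the following is a minimal sketch of stepwise (online) EM for a one-dimensional Gaussian mixture fed by a particle stream. It is a toy stand-in for the paper's directional mixtures; the class name, learning-rate schedule, and clamping constants are illustrative choices, not the paper's. The property it shares with the method is that each particle updates the model and is then discarded, so memory use is independent of stream length.

```python
import numpy as np

class OnlineGaussianMixture:
    """Stepwise (online) EM for a 1D Gaussian mixture, updated one
    particle at a time and never storing the particles themselves."""

    def __init__(self, k, seed=0):
        rng = np.random.default_rng(seed)
        pi, mu, var = np.full(k, 1.0 / k), rng.uniform(0, 1, k), np.full(k, 0.1)
        # Sufficient statistics: s0 ~ weights, s1 ~ weighted means,
        # s2 ~ weighted second moments.
        self.s0, self.s1, self.s2 = pi, pi * mu, pi * (var + mu**2)
        self.n = 0

    def params(self):
        pi = self.s0 / self.s0.sum()
        mu = self.s1 / np.maximum(self.s0, 1e-12)
        var = np.maximum(self.s2 / np.maximum(self.s0, 1e-12) - mu**2, 1e-4)
        return pi, mu, var

    def update(self, x):
        """Blend one particle's statistics in with a decaying step size."""
        self.n += 1
        eta = (self.n + 2) ** -0.7   # decay exponent must lie in (0.5, 1]
        pi, mu, var = self.params()
        r = pi * np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = r / r.sum() if r.sum() > 0 else np.full_like(r, 1.0 / r.size)
        self.s0 = (1 - eta) * self.s0 + eta * r
        self.s1 = (1 - eta) * self.s1 + eta * r * x
        self.s2 = (1 - eta) * self.s2 + eta * r * x * x

# Train from a potentially infinite stream (two 'sources' at 0.2 and 0.8):
mix, rng = OnlineGaussianMixture(k=3), np.random.default_rng(1)
for _ in range(100_000):
    x = rng.normal(0.2, 0.05) if rng.random() < 0.3 else rng.normal(0.8, 0.1)
    mix.update(x)
pi, mu, var = mix.params()
print(pi.round(2), mu.round(2))   # means cluster around the two sources
```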
Direct illumination calculation is an important component of any physically-based renderer with a substantial impact on the overall performance. We present a novel adaptive solution for unbiased Monte Carlo direct illumination sampling, based on online learning of the light selection probability distributions. Our main contribution is a formulation of the learning process as Bayesian regression, based on a new, specifically designed statistical model of direct illumination. The net result is a set of regularization strategies to prevent over-fitting and ensure robustness even in early stages of calculation, when the observed information is sparse. The regression model captures spatial variation of illumination, which enables aggregating statistics over relatively large scene regions and, in turn, ensures a fast learning rate. We make the method scalable by adopting a light clustering strategy from the Lightcuts method, and further reduce variance through the use of control variates. As a main design feature, the resulting algorithm is virtually free of any preprocessing, which enables its use for interactive progressive rendering, while the online learning still enables super-linear convergence.
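To make the learning loop concrete, here is a heavily simplified sketch of adaptive light selection with shrinkage toward a uniform prior. It replaces the paper's Bayesian regression and light clustering with a single per-cell table; `LightSelector`, `estimate_direct`, and `prior_strength` are illustrative names, not the paper's API. The prior term plays the role of the regularization that keeps early, data-sparse estimates robust.

```python
import random

class LightSelector:
    """Online learning of light selection probabilities for one scene
    cell. Each light's estimated mean contribution is shrunk toward the
    overall average via `prior_strength` pseudo-observations, so the
    distribution stays near-uniform until real data accumulates."""

    def __init__(self, n_lights, prior_strength=8.0):
        self.sum = [0.0] * n_lights     # accumulated contributions
        self.cnt = [0] * n_lights       # samples drawn per light
        self.prior = prior_strength

    def pdf(self):
        total, total_cnt = sum(self.sum), sum(self.cnt)
        base = total / total_cnt if total > 0 else 1.0
        est = [(self.prior * base + s) / (self.prior + c)
               for s, c in zip(self.sum, self.cnt)]
        z = sum(est)
        return [e / z for e in est]

    def sample(self):
        p = self.pdf()
        i = random.choices(range(len(p)), weights=p)[0]
        return i, p[i]

    def record(self, i, contribution):
        self.sum[i] += contribution
        self.cnt[i] += 1

def estimate_direct(selector, light_contribution):
    """One-sample direct lighting estimate; dividing by the selection
    pdf keeps the estimator unbiased regardless of the learned state."""
    i, p = selector.sample()
    c = light_contribution(i)    # user-supplied shadow-ray evaluation
    selector.record(i, c)
    return c / p

sel = LightSelector(n_lights=4)
true = [5.0, 0.1, 0.0, 1.0]
est = sum(estimate_direct(sel, lambda i: true[i]) for _ in range(20_000)) / 20_000
print(est, sel.pdf())   # estimate -> ~6.1 (sum of true), pdf skews to light 0
```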
We present a method to create personalized anatomical models ready for physics-based animation, using only a set of 3D surface scans. We start by building a template anatomical model of an average male that supports deformations due to both 1) subject-specific variations: shapes and sizes of bones, muscles, and adipose tissues and 2) skeletal poses. Next, we capture a set of 3D scans of an actor in various poses. Our key contribution is formulating and solving a large-scale optimization problem where we compute both subject-specific and pose-dependent parameters such that our resulting anatomical model explains the captured 3D scans as closely as possible. Compared to data-driven body modeling techniques that focus only on the surface, our approach has the advantage of creating physics-based models, which provide realistic 3D geometry of the bones and muscles and naturally support effects such as inertia, gravity, and collisions according to Newtonian dynamics.
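Schematically, the inverse problem can be written as a joint fit over subject and pose parameters. The notation below is ours, not the paper's: beta collects the subject-specific shape parameters of bones, muscles, and adipose tissue, theta_k is the skeletal pose of the k-th scan, S_k^(i) are the scanned surface points of that scan, M(beta, theta_k) is the skin surface predicted by the anatomical model, and R is a regularizer keeping the recovered anatomy plausible.

```latex
\min_{\beta,\;\theta_1,\dots,\theta_K}\;
\sum_{k=1}^{K}\sum_{i}
\operatorname{dist}\!\left(S_k^{(i)},\,\mathcal{M}(\beta,\theta_k)\right)^{2}
\;+\;\lambda\,\mathcal{R}(\beta)
```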
Many existing Monte Carlo methods rely on multiple importance sampling (MIS) to achieve robustness and versatility. Typically, the balance or power heuristics are used, mostly thanks to the seemingly strong guarantees on their variance. We show that these MIS heuristics are oblivious to the effect of certain variance reduction techniques like stratification. This shortcoming is particularly pronounced when unstratified and stratified techniques are combined (e.g., in a bidirectional path tracer). We propose to enhance the balance heuristic by injecting variance estimates of individual techniques, to reduce the variance of the combined estimator in such cases. Our method is simple to implement and introduces little overhead.
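The following toy sketch contrasts the classic balance heuristic with one plausible way of injecting per-technique variance estimates; the exact form used in the paper may differ, so treat `variance_aware` as illustrative only. The intended effect is visible in the example: a well-stratified (low-variance) technique receives more MIS weight than the plain balance heuristic would give it.

```python
def balance(pdfs, counts):
    """Classic balance heuristic: w_i = n_i p_i / sum_k n_k p_k."""
    s = sum(n * p for n, p in zip(counts, pdfs))
    return [n * p / s for n, p in zip(counts, pdfs)]

def variance_aware(pdfs, counts, var_estimates):
    """Hypothetical variance-injected variant: each technique's sample
    count is scaled by the reciprocal of its estimated per-sample
    variance before applying the balance heuristic."""
    eff = [n / max(v, 1e-12) for n, v in zip(counts, var_estimates)]
    s = sum(e * p for e, p in zip(eff, pdfs))
    return [e * p / s for e, p in zip(eff, pdfs)]

# With equal pdfs and counts, only the variance estimates differ:
print(balance([1.0, 1.0], [4, 4]))                      # [0.5, 0.5]
print(variance_aware([1.0, 1.0], [4, 4], [0.01, 1.0]))  # ~[0.99, 0.01]
```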
Multiple Importance Sampling (MIS) is a key technique for achieving robustness of Monte Carlo estimators in computer graphics and other fields. We derive optimal weighting functions for MIS that provably minimize the variance of an MIS estimator, given a set of sampling techniques. We show that the resulting variance reduction over the balance heuristic can be higher than predicted by the variance bounds derived by Veach and Guibas, who assumed only non-negative weights in their proof. We theoretically analyze the variance of the optimal MIS weights and show the relation to the variance of the balance heuristic. Furthermore, we establish a connection between the new weighting functions and control variates as previously applied to mixture sampling. We apply the new optimal weights to integration problems in light transport and show that they allow for new design considerations when choosing the appropriate sampling techniques for a given integration problem.
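Up to notation, the optimal weights reported in the paper combine a balance-heuristic-like term with a correction in which the densities act as control variates:

```latex
w_i(x) \;=\; \frac{\alpha_i\, p_i(x)}{f(x)}
\;+\; \frac{n_i\, p_i(x)}{\sum_k n_k\, p_k(x)}
\left( 1 - \frac{\sum_j \alpha_j\, p_j(x)}{f(x)} \right)
```

Here p_i are the technique densities, n_i the per-technique sample counts, f the integrand, and the coefficient vector alpha is obtained by solving a small linear system (one unknown per technique) built from the densities; see the paper for the precise system. Unlike the balance heuristic, these weights may be negative, which is why the achievable variance can drop below the Veach-Guibas bounds for non-negative weights.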
While Russian roulette (RR) and splitting are considered fundamental importance sampling techniques in neutron transport simulations, they have so far received relatively little attention in light transport. In computer graphics, RR and splitting are most often based solely on local reflectance properties. However, this strategy can be far from optimal in common scenes with non-uniform light distribution as it does not accurately predict the actual path contribution. In our approach, like in neutron transport, we estimate the expected contribution of a path as the product of the path weight and a pre-computed estimate of the adjoint transport solution. We use this estimate to generate a so-called weight window, which keeps the path contribution roughly constant through RR and splitting. As a result, paths in unimportant regions tend to be terminated early, while in the more important regions they are spawned by splitting. This results in substantial variance reduction in both path tracing- and photon tracing-based simulations. Furthermore, unlike the standard computer graphics RR, our approach does not interfere with importance-driven sampling of scattering directions, which results in superior convergence when such a technique is combined with our approach. We provide a justification of this behavior by relating our approach to the zero-variance random walk theory.
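The core control logic of a weight window is compact. Below is a minimal sketch under assumed window bounds `lo` and `hi` and an externally supplied adjoint estimate; the function and parameter names are ours. Paths whose expected contribution falls below the window undergo Russian roulette, paths above it are split, and unbiasedness follows because the expected total weight is preserved in every branch.

```python
import math, random

def weight_window(path_weight, adjoint_estimate, lo=0.5, hi=2.0):
    """Russian roulette / splitting driven by a weight window.
    `adjoint_estimate` approximates the expected future contribution
    at the path's current vertex (the precomputed adjoint solution);
    [lo, hi] brackets the target contribution. Returns a list of
    surviving path weights: empty means terminated by RR, more than
    one entry means the path was split."""
    c = path_weight * adjoint_estimate      # expected path contribution
    if c < lo:
        # Russian roulette: survive with probability c/lo; the
        # survivor's contribution becomes exactly lo, so the
        # expected total weight stays path_weight.
        if random.random() < c / lo:
            return [path_weight * lo / c]
        return []
    if c > hi:
        # Splitting: spawn n copies whose individual contributions
        # fall inside the window (clamped to avoid path explosions).
        n = min(int(math.ceil(c / hi)), 16)
        return [path_weight / n] * n
    return [path_weight]                    # already inside the window
```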
In full-color inkjet 3D printing, a key problem is determining the material configuration for the millions of voxels that a printed object is made of. The goal is a configuration that minimises the difference between the desired target appearance and the result of the printing process. So far, the techniques used to find such a configuration have relied on domain-specific methods or heuristic optimization, which allowed only a limited level of control over the resulting appearance. We propose to use differentiable volume rendering in a continuous material-mixture space, which leads to a framework that can be used as a general tool for optimising inkjet 3D printouts. We demonstrate the technical feasibility of this approach, and use it to attain fine control over the fabricated appearance, and high levels of faithfulness to the specified target.
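To show the optimization structure (not the paper's actual renderer), here is a toy in which the differentiable forward model is a fixed linear blur K applied to a continuous 1D material-mixture field. The operator, loss, and step size are all illustrative stand-ins for the full differentiable volume renderer, but the loop, gradient descent in the continuous mixture space with projection back into valid mixtures, has the same shape.

```python
import numpy as np

# Toy differentiable forward model: K maps a continuous mixture field m
# (values in [0,1] = fraction of dark ink per voxel) to the predicted
# surface appearance via a Gaussian 'scattering' blur.
n = 64
x = np.arange(n)
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 2.0) ** 2)
K /= K.sum(axis=1, keepdims=True)              # rows sum to 1

target = (np.sin(x / 3.0) > 0).astype(float)   # crisp stripe pattern
m = np.full(n, 0.5)                            # initial mixture: all gray

for step in range(500):                        # projected gradient descent
    residual = K @ m - target
    grad = 2.0 * K.T @ residual                # gradient of ||K m - t||^2
    m = np.clip(m - 0.5 * grad, 0.0, 1.0)      # project into mixture space

# Residual after optimization; printing `target` directly would leave a
# much blurrier result. A final halftoning step would quantize m to the
# printer's discrete materials.
print(np.abs(K @ m - target).max())
```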
Efficiently computing light transport in participating media in a manner that is robust to variations in media density, scattering albedo, and anisotropy is a difficult and important problem in realistic image synthesis. While many specialized rendering techniques can efficiently resolve subsets of transport in specific media, no single approach can robustly handle all types of effects. To address this problem we unify volumetric density estimation, using point and beam estimators, and Monte Carlo solutions to the path integral formulation of the rendering and radiative transport equations. We extend multiple importance sampling to correctly handle combinations of these fundamentally different classes of estimators. This, in turn, allows us to develop a single rendering algorithm that correctly combines the benefits and mediates the limitations of these powerful volume rendering techniques.
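Schematically, the extended MIS weight keeps the familiar balance-heuristic form over full paths, but a density-estimation technique enters with an effective pdf that also accounts for its blur-kernel support. The constants below assume a constant 2D kernel of radius r for point estimators and a width of 2r per unit length for beams; the paper derives the exact extended-space densities, so this is only a mnemonic.

```latex
w_i(\bar{x}) \;=\; \frac{n_i\, \hat{p}_i(\bar{x})}{\sum_k n_k\, \hat{p}_k(\bar{x})},
\qquad
\hat{p}_i(\bar{x}) \;\approx\;
\begin{cases}
p_i(\bar{x}) & \text{Monte Carlo estimator,}\\
p_i(\bar{x})\,\pi r^{2} & \text{point (photon) estimator,}\\
p_i(\bar{x})\,2r & \text{beam estimator.}
\end{cases}
```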
Color texture reproduction in 3D printing commonly ignores volumetric light transport (cross-talk) between surface points on a 3D print. Such light diffusion leads to significant blur of details and color bleeding, and is particularly severe for highly translucent resin-based print materials. Given their widely varying scattering properties, this cross-talk between surface points strongly depends on the internal structure of the volume surrounding each surface point. Existing scattering-aware methods use simplified models for light diffusion, and often accept the visual blur as an immutable property of the print medium. In contrast, our work counteracts heterogeneous scattering to obtain the impression of a crisp albedo texture on top of the 3D print, by optimizing for a fully volumetric material distribution that preserves the target appearance. Our method employs an efficient numerical optimizer on top of a general Monte Carlo simulation of heterogeneous scattering, supported by a practical calibration procedure to obtain scattering parameters from a given set of printer materials. Despite the inherent translucency of the medium, we reproduce detailed surface textures on 3D prints. We evaluate our system using a commercial, five-tone 3D print process and compare against the printer's native color texturing mode, demonstrating that our method preserves high-frequency features well without having to compromise on color gamut.
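The shape of such an optimizer can be sketched as a derivative-free fixed-point loop: simulate the print's appearance, compare against the target texture, and push the material field in the direction of the residual. Because no gradients are required, a noisy Monte Carlo simulation can sit inside the loop. In the toy below the simulation is replaced by a cheap deterministic blur so the example runs; `simulate_appearance`, the update rule, and all constants are illustrative, not the paper's.

```python
import numpy as np

def simulate_appearance(volume):
    """Stub for the Monte Carlo heterogeneous-scattering simulation;
    replaced here by a cheap blur of the surface layer only."""
    k = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
    return np.convolve(volume[0], k, mode="same")

target = np.tile(np.repeat([0.9, 0.1], 8), 4)   # crisp checker texture
volume = np.full((8, 64), 0.5)                  # layers x surface voxels

for it in range(50):
    residual = target - simulate_appearance(volume)
    # Fixed-point correction: push the surface-adjacent material toward
    # whatever compensates the observed blur, clamped to valid mixtures.
    volume[0] = np.clip(volume[0] + 0.8 * residual, 0.0, 1.0)

print(np.abs(target - simulate_appearance(volume)).mean())
```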
Accurately controllable shading detail is a crucial aspect of realistic appearance modelling. Two fundamental building blocks for this are microfacet BRDFs, which describe the statistical behaviour of infinitely small facets, and normal maps, which provide user-controllable spatio-directional surface features. We analyse the filtering of the combined effect of a microfacet BRDF and a normal map. By partitioning the half-vector domain into bins we show that the filtering problem can be reduced to evaluation of an integral histogram (IH), a generalization of a summed-area table (SAT). Integral histograms are known for their large memory requirements, which are usually proportional to the number of bins. To alleviate this, we introduce Inverse Bin Maps, a specialised form of IH with a memory footprint that is practically independent of the number of bins. Based on these, we present a memory-efficient, production-ready approach for filtering of high-resolution normal maps with arbitrary Beckmann flake roughness. In the corner case of specular normal maps (zero, or very small roughness values), our method shows similar convergence rates to the current state of the art, and is also more memory efficient.
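For reference, a plain integral histogram over a bin-quantized normal map can be built from per-bin summed-area tables, giving constant-time footprint queries per bin. The sketch below also makes the memory problem visible: it stores one full-resolution SAT per bin, which is exactly the cost that Inverse Bin Maps are designed to avoid. All names are illustrative.

```python
import numpy as np

def build_integral_histogram(bin_idx, n_bins):
    """Integral histogram over a 2D map: one summed-area table per bin.
    sat[b, y, x] = count of texels with bin b in rows 0..y, cols 0..x."""
    onehot = (bin_idx[None, :, :] == np.arange(n_bins)[:, None, None])
    return onehot.astype(np.int64).cumsum(axis=1).cumsum(axis=2)

def query(sat, y0, y1, x0, x1):
    """Histogram of bins inside the rectangle [y0,y1) x [x0,x1) from
    four SAT lookups per bin, independent of the footprint size."""
    s = np.pad(sat, ((0, 0), (1, 0), (1, 0)))   # zero row/col for borders
    return s[:, y1, x1] - s[:, y0, x1] - s[:, y1, x0] + s[:, y0, x0]

# Toy 'normal map' quantized into 16 half-vector bins:
rng = np.random.default_rng(0)
bins = rng.integers(0, 16, size=(128, 128))
sat = build_integral_histogram(bins, 16)
print(query(sat, 10, 50, 20, 60))   # footprint histogram, no per-texel loop
```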