In this paper, we propose two real‐time models for simulating subsurface scattering for a large variety of translucent materials, which execute in under 0.5 ms per frame. This makes them a practical option for real‐time production scenarios. Current state‐of‐the‐art real‐time approaches simulate subsurface light transport by approximating the radially symmetric, non‐separable diffusion kernel with a sum of separable Gaussians, which requires multiple (up to 12) 1D convolutions. In this work we relax the requirement of radial symmetry to approximate a 2D diffuse reflectance profile by a single separable kernel. We first show that low‐rank approximations based on matrix factorization outperform previous approaches, but they still need several passes to achieve good results. To solve this, we present two different separable models: the first yields a high‐quality diffusion simulation, while the second offers an attractive trade‐off between physical accuracy and artistic control. Both allow rendering of subsurface scattering with only two 1D convolutions, reducing both execution time and memory consumption, while delivering results comparable to techniques with higher cost. Using our importance‐sampling and jittering strategies, only seven samples per pixel are required. Our methods can be implemented as simple post‐processing steps without intrusive changes to existing rendering pipelines.
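For illustration only, the following minimal sketch shows how a single separable kernel reduces screen‐space subsurface scattering to two 1D convolutions over the irradiance buffer. It is not the authors' implementation: the NumPy setting, the buffer layout, and the fixed‐width kernel `kernel_1d` are assumptions made for clarity.

```python
import numpy as np

def separable_sss(irradiance, kernel_1d):
    """Apply a separable subsurface-scattering kernel as two 1D convolutions.

    irradiance : (H, W, 3) diffuse irradiance buffer in screen space
    kernel_1d  : (K, 3) per-channel 1D profile; its outer product with itself
                 approximates the 2D diffuse reflectance profile
    """
    irr = irradiance.astype(np.float32)
    h, w, _ = irr.shape
    k = len(kernel_1d)
    pad = k // 2

    # Horizontal pass: convolve every row with the 1D profile.
    padded = np.pad(irr, ((0, 0), (pad, pad), (0, 0)), mode="edge")
    horiz = np.zeros_like(irr)
    for i in range(k):
        horiz += padded[:, i:i + w, :] * kernel_1d[i]

    # Vertical pass: convolve every column of the horizontal result.
    padded = np.pad(horiz, ((pad, pad), (0, 0), (0, 0)), mode="edge")
    result = np.zeros_like(irr)
    for i in range(k):
        result += padded[i:i + h, :, :] * kernel_1d[i]
    return result
```

In a real‐time pipeline the two passes would run as fragment‐shader post‐processes with depth‐aware, importance‐sampled offsets (the seven samples per pixel mentioned above); the fixed‐width loop here only illustrates the separability.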
There has recently been great interest in neural rendering methods. Some approaches use 3D geometry reconstructed with Multi‐View Stereo (MVS) but cannot recover from the errors of this process, while others directly learn a volumetric neural representation but suffer from expensive training and inference. We introduce a general approach that is initialized with MVS, but allows further optimization of scene properties in the space of input views, including depth and reprojected features, resulting in improved novel‐view synthesis. A key element of our approach is our new differentiable point‐based pipeline, based on bi‐directional Elliptical Weighted Average splatting, a probabilistic depth test and effective camera selection. We use these elements together in our neural renderer, which outperforms all previous methods both in quality and speed in almost all scenes we tested. Our pipeline can be applied to multi‐view harmonization and stylization in addition to novel‐view synthesis.
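The probabilistic depth test mentioned above can be pictured with the minimal sketch below: instead of a hard z‐test, each reprojected point receives a soft visibility weight. The Gaussian falloff, the `sigma` parameter, and the feature‐blending step are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def soft_depth_weights(point_depths, surface_depth, sigma=0.01):
    """Probabilistic depth test (illustrative).

    Each reprojected point gets a weight in [0, 1] based on how far it lies
    behind the estimated front surface, so small MVS depth errors do not
    cause hard visibility flips.
    """
    behind = np.maximum(point_depths - surface_depth, 0.0)
    return np.exp(-0.5 * (behind / sigma) ** 2)

def blend_features(features, weights, eps=1e-8):
    """Blend per-point features (N, C) using soft visibility weights (N,)."""
    w = weights / (weights.sum() + eps)
    return (features * w[:, None]).sum(axis=0)
```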
Achieving photorealism when rendering virtual scenes in movies or architecture visualizations often depends on providing realistic illumination and a realistic background. Typically, spherical environment maps serve both as a natural light source from the Sun and the sky, and as a background with clouds and a horizon. In practice, the input is either a static high‐resolution HDR photograph manually captured on location in real conditions, or an analytical clear sky model that is dynamic but cannot model clouds. Our approach bridges these two limited paradigms: a user can control the sun position and cloud coverage ratio, and generate a realistic‐looking environment map for these conditions. It is a hybrid data‐driven analytical model based on a modified state‐of‐the‐art GAN architecture, which is trained on matching pairs of physically accurate clear sky radiance and HDR fisheye photographs of clouds. We demonstrate our results on renders of outdoor scenes under varying time, date and cloud cover. Our source code and a dataset of 39 000 HDR sky images are publicly available at https://github.com/CGGMFF/SkyGAN.
SkyGAN generates cloudy sky images for a user‐chosen sun position that are readily usable as an environment map in any rendering system. We leverage an existing clear sky model to produce the input to our neural network, which enhances the sky with clouds, haze and horizons learned from real photographs.
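As a rough illustration of how such a hybrid analytical/data‐driven model could be driven, the sketch below assembles a conditioning input from an analytical clear‐sky radiance map and a cloud‐coverage scalar. The channel layout and the log encoding are assumptions for illustration only, not SkyGAN's documented interface.

```python
import numpy as np

def make_conditioning(clear_sky_radiance, cloud_cover):
    """Assemble a generator conditioning tensor (illustrative layout).

    clear_sky_radiance : (H, W, 3) HDR radiance from an analytical clear-sky
                         model evaluated for the chosen sun position
    cloud_cover        : scalar in [0, 1], broadcast to an extra channel
    """
    h, w, _ = clear_sky_radiance.shape
    cover_channel = np.full((h, w, 1), cloud_cover, dtype=np.float32)
    # Log-encode the HDR radiance so the network sees a bounded dynamic range.
    encoded = np.log1p(clear_sky_radiance).astype(np.float32)
    return np.concatenate([encoded, cover_channel], axis=-1)
```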
The recent research explosion around implicit neural representations, such as NeRF, shows that there is immense potential for implicitly storing high‐quality scene and lighting information in compact neural networks. However, one major limitation preventing the use of NeRF in real‐time rendering applications is the prohibitive computational cost of excessive network evaluations along each view ray, requiring dozens of petaFLOPS. In this work, we bring compact neural representations closer to practical rendering of synthetic content in real‐time applications, such as games and virtual reality. We show that the number of samples required for each view ray can be significantly reduced when samples are placed around surfaces in the scene without compromising image quality. To this end, we propose a depth oracle network that predicts ray sample locations for each view ray with a single network evaluation. We show that using a classification network around logarithmically discretized and spherically warped depth values is essential to encode surface locations rather than directly estimating depth. The combination of these techniques leads to DONeRF, our compact dual network design with a depth oracle network as its first step and a locally sampled shading network for ray accumulation. With DONeRF, we reduce the inference costs by up to 48× compared to NeRF when conditioning on available ground truth depth information. Compared to concurrent acceleration methods for raymarching‐based neural representations, DONeRF does not require additional memory for explicit caching or acceleration structures, and can render interactively (20 frames per second) on a single GPU.
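A minimal sketch of the logarithmic depth discretization mentioned above is given below: it converts a depth along a ray into a classification target of the kind the depth oracle predicts, and back into a depth around which shading samples can be placed. The bin count, the near/far planes, and the omission of the spherical warping are assumptions made for illustration.

```python
import numpy as np

def log_depth_bin(depth, near, far, num_bins=128):
    """Map a depth value to a logarithmically discretized class index.

    Bins are spaced so nearby surfaces get finer resolution than distant
    ones, matching the classification target of a depth-oracle network.
    """
    depth = np.clip(depth, near, far)
    t = np.log(depth / near) / np.log(far / near)   # normalized to [0, 1]
    return np.minimum((t * num_bins).astype(int), num_bins - 1)

def bin_to_depth(idx, near, far, num_bins=128):
    """Center depth of a bin, used to place shading samples around surfaces."""
    t = (idx + 0.5) / num_bins
    return near * (far / near) ** t
```

The shading network then only needs a handful of samples placed around the predicted depth instead of marching the entire ray, which is where the reported cost reduction comes from.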
White light‐emitting diodes (WLEDs) are promising next‐generation solid‐state light sources. However, the commercialization route for WLED production suffers from challenges in terms of insufficient color‐rendering index (CRI), color instability, and incorporation of rare‐earth elements. Herein, a new two‐component strategy is developed by assembling two broadband emissive materials with self‐trapped excitons (STEs) for high‐CRI and stable WLEDs. The strategy effectively addresses the challenging issues facing current WLEDs. Based on first‐principles thermodynamic calculations, copper‐based ternary halide composites, CsCu2I3@Cs3Cu2I5, are synthesized by a facile one‐step solution approach. The composites exhibit ideal white‐light emission with cold/warm white‐light tuning and robust stability against heat, ultraviolet light, and environmental oxygen/moisture. A series of cold/warm tunable WLEDs is demonstrated with a maximum luminance of 145 cd m⁻² and an external quantum efficiency of 0.15%, and a record‐high CRI of 91.6 is achieved, the highest value reported for lead‐free WLEDs. Importantly, the fabricated device demonstrates excellent operational stability in continuous current mode, exhibiting a long half‐lifetime of 238.5 min. These results promise the use of hybrids of STE‐derived broadband emissive materials for high‐performance WLEDs.
Stable and highly luminescent CsCu2I3@Cs3Cu2I5 composites are synthesized through a one‐step spin‐coating method. They exhibit white‐light emission through self‐trapped excitons, as well as cold/warm white‐light tuning. By using the composites as a white‐light emitter, electrically driven cold/warm tunable WLEDs with a record color‐rendering index of 91.6 are successfully demonstrated, and a long half‐lifetime of 238.5 min is achieved.