How do people edit light fields? Jarabo, Adrian; Masia, Belen; Bousseau, Adrien et al.
ACM Transactions on Graphics, 07/2014, Volume 33, Issue 4
Journal Article
Peer-reviewed
Open access
We present a thorough study to evaluate different light field editing interfaces, tools and workflows from a user perspective. This is of special relevance given the multidimensional nature of light fields, which can make common image editing tasks complex in light field space. We additionally investigate the potential benefits of using depth information when editing, and the limitations imposed by imperfect depth reconstruction using current techniques. We perform two different experiments, collecting both objective and subjective data from a variety of editing tasks of increasing complexity based on local point-and-click tools. In the first experiment, we rely on perfect depth from synthetic light fields and focus on simple edits. This allows us to gain basic insight into light field editing and to design a more advanced editing interface. This interface is then used in the second experiment, which employs real light fields with imperfect reconstructed depth and covers more advanced editing tasks. Our study shows that users can edit light fields with our tested interface and tools, even in the presence of imperfect depth. They follow different workflows depending on the task at hand, mostly relying on a combination of different depth cues. Finally, we confirm our findings by asking a set of artists to freely edit both real and synthetic light fields.
Surfaces in the real world exhibit complex appearance due to spatial variations in both their reflectance and local shading frames (i.e. the local coordinate system defined by the normal and tangent direction). For opaque surfaces, existing fabrication solutions can faithfully reproduce only the spatial variations of isotropic reflectance. In this paper, we present a system for fabricating surfaces with desired spatially-varying reflectance, including anisotropic reflectance, and local shading frames. We approximate each input reflectance, rotated by its local frame, as a small patch of oriented facets coated with isotropic glossy inks. By assigning different ink combinations to facets with different orientations, this bi-scale material can reproduce a wider variety of reflectance than the printer gamut, including anisotropic materials. By orienting the facets appropriately, we control the local shading frame. We propose an algorithm that automatically determines the optimal facet orientations and ink combinations that best approximate a given input appearance, while obeying manufacturing constraints on both geometry and ink gamut. We fabricate the resulting surface with commercially available hardware: a 3D printer to produce the facets and a flatbed UV printer to coat them with inks. We validate our method by fabricating a variety of isotropic and anisotropic materials with rich variations in normals and tangents.
User-Controllable Color Transfer An, Xiaobo; Pellacini, Fabio
Computer Graphics Forum, 05/2010, Volume 29, Issue 2
Journal Article
Peer-reviewed
This paper presents an image editing framework in which users provide reference images to indicate desired color edits. In our approach, users specify pairs of strokes to mark corresponding regions in both the original and the reference image that should share the same color "style". Within each stroke pair, a nonlinear constrained parametric transfer model is used to transfer the reference colors to the original. We estimate the model parameters by matching color distributions, under constraints that ensure no visual artifacts appear in the transfer result. To perform transfer on the whole image, we employ optimization methods to propagate the model parameters defined at each stroke location to spatially-close regions of similar appearance. This stroke-based formulation requires minimal user effort while retaining the high degree of user control necessary to allow artistic interpretations. We demonstrate our approach by performing color transfer on a number of image pairs varying in content and style, and show that our algorithm outperforms state-of-the-art color transfer methods in both user controllability and visual quality of the transfer results.
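The listing does not spell out the paper's constrained nonlinear transfer model, but the underlying idea of matching color distributions between corresponding regions can be illustrated with a minimal per-channel moment-matching sketch (in the spirit of classic statistical color transfer, not the paper's actual model; the function name is illustrative):

```python
import numpy as np

def match_color_stats(src_region, ref_region):
    """Shift and scale each channel of src_region so that its mean and
    standard deviation match those of ref_region.

    A linear stand-in for a stroke-pair transfer model: only the first
    two moments of the color distributions are matched.
    """
    src = np.asarray(src_region, dtype=np.float64)
    ref = np.asarray(ref_region, dtype=np.float64)
    out = np.empty_like(src)
    for c in range(src.shape[-1]):
        s_mu, s_sigma = src[..., c].mean(), src[..., c].std()
        r_mu, r_sigma = ref[..., c].mean(), ref[..., c].std()
        scale = r_sigma / s_sigma if s_sigma > 0 else 1.0
        out[..., c] = (src[..., c] - s_mu) * scale + r_mu
    return out
```

In a stroke-based workflow, `src_region` and `ref_region` would be the pixels covered by one pair of strokes; the resulting per-region parameters would then be propagated to similar nearby pixels.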
Many real-world surfaces exhibit translucent appearance due to subsurface scattering. Although various methods exist to measure, edit and render subsurface scattering effects, no solution exists for manufacturing physical objects with a desired translucent appearance. In this paper, we present a complete solution for fabricating a material volume with a desired surface BSSRDF. We stack layers from a fixed set of manufacturing materials, spatially varying the thickness of each layer to reproduce the heterogeneity of the input BSSRDF. Given an input BSSRDF and the optical properties of the manufacturing materials, our system efficiently determines the optimal order and thickness of the layers. We demonstrate our approach by printing a variety of homogeneous and heterogeneous BSSRDFs using two hardware setups: a milling machine and a 3D printer.
Mimicking the appearance of the real world is a longstanding goal of computer graphics, with several important applications in the feature film, architecture and medical industries. Images with well-designed shading are an important tool for conveying information about the world, be it the shape and function of a computer-aided design (CAD) model, or the mood of a movie sequence. However, authoring this content is often a tedious task, even if undertaken by groups of highly trained and experienced artists. Unsurprisingly, numerous methods to facilitate and accelerate this appearance editing task have been proposed, enabling the editing of scene objects' appearance, lighting and materials, as well as introducing new interaction paradigms and specialized preview rendering techniques. In this review, we provide a comprehensive survey of artistic appearance, lighting and material editing approaches. We organize this complex and active research area in a structure tailored to academic researchers, graduate students and industry professionals alike. In addition to editing approaches, we discuss how user interaction paradigms and rendering back ends combine to form usable systems for appearance editing. We conclude with a discussion of open problems and challenges to motivate and guide future research.
Rendering complex scenes with indirect illumination, high-dynamic-range environment lighting, and many direct light sources remains a challenging problem. Prior work has shown that all these effects can be approximated by many point lights. This paper presents a scalable solution to the many-light problem suitable for a GPU implementation. We view the problem as a large matrix of sample-light interactions; the ideal final image is the sum of the matrix columns. We propose an algorithm for approximating this sum by sampling entire rows and columns of the matrix on the GPU using shadow mapping. The key observation is that the inherent structure of the transfer matrix can be revealed by sampling just a small number of rows and columns. Our prototype implementation can compute the light transfer within a few seconds for scenes with indirect and environment illumination, area lights, complex geometry and arbitrary shaders. We believe this approach can be very useful for rapid previewing in applications like cinematic and architectural lighting design.
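The row-column idea can be illustrated with a toy NumPy sketch: sample a few rows of the sample-light matrix, cluster the columns from that reduced view, and evaluate one full column per cluster, rescaled to conserve the cluster's energy. This deliberately simplifies the paper's method (naive nearest-center clustering, uniform representative choice, no shadow mapping); all names are illustrative:

```python
import numpy as np

def row_column_sampling(A, num_rows, num_clusters, rng):
    """Approximate the column sum of A (pixels x lights).

    Reduced rows reveal the column structure; each column cluster is
    represented by one fully evaluated column, scaled so the cluster's
    total reduced-column norm is preserved.
    """
    n_pixels, n_lights = A.shape
    rows = rng.choice(n_pixels, size=num_rows, replace=False)
    R = A[rows, :]                       # reduced matrix (sampled rows)
    # crude clustering: assign each reduced column to its nearest of
    # num_clusters randomly chosen reduced columns
    centers = R[:, rng.choice(n_lights, size=num_clusters, replace=False)]
    d = ((R.T[:, None, :] - centers.T[None, :, :]) ** 2).sum(-1)
    labels = d.argmin(axis=1)
    image = np.zeros(n_pixels)
    for k in range(num_clusters):
        members = np.flatnonzero(labels == k)
        if members.size == 0:
            continue
        rep = members[rng.integers(members.size)]  # representative light
        norms = np.linalg.norm(R[:, members], axis=0)
        w = norms.sum() / max(np.linalg.norm(R[:, rep]), 1e-12)
        image += w * A[:, rep]           # one full column per cluster
    return image
```

For a rank-1 (perfectly coherent) matrix with non-negative entries, a single cluster already reproduces the exact column sum, which is the intuition behind sampling only a few rows and columns.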
Microfacet models suffer from a significant limitation: they are not energy preserving, resulting in an unexpected darkening of rough specular surfaces. Energy compensation methods address this limitation by adding to the BSDF a secondary component that accounts for multiple-scattering contributions. While these methods are fast, robust and can be added to a renderer with relatively minor modifications, they involve the computation of the directional albedo. This quantity is expressed as an integral with no closed-form solution, so it must be precomputed and stored in tables. These look-up tables are notoriously cumbersome to use, in particular on GPUs. This work obviates the need for look-up tables by fitting an analytic approximation of the directional albedo, which is a more practical solution. We enforce energy preservation by rescaling the specular albedo, thus maintaining the same lobe shape. We propose a 2D rational polynomial of degree three to fit conductors and a 3D rational polynomial of degree three to fit dielectrics and materials composed of a specular layer on top of a diffuse one, such as plastics. As an alternative, multi-layer perceptrons can be used, ensuring a more accurate approximation for dielectrics at the expense of a larger number of parameters to store. We validated our results via the furnace test, showing that materials rendered with our analytic approximations almost exactly match the behavior of those rendered with look-up tables, resulting in an energy-preserving model even at maximum roughness. The software we use to fit coefficients is open source and can be used to fit other BSDF models as well.
• We enforce energy preservation in microfacet models by rescaling the directional albedo.
• We propose rational polynomials of degree three to fit the albedo of conductors, dielectrics and glossy materials.
• We compared approximations made with polynomials, rational polynomials and neural networks.
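The rescaling idea in the highlights can be illustrated numerically: estimate the directional albedo E(ω_o) of a lobe by Monte Carlo, then divide the lobe by E so a furnace-style test integrates to one. The lobe below is an invented toy, not the paper's microfacet model, and E is evaluated at a single outgoing direction rather than fitted with the paper's rational polynomials:

```python
import numpy as np

def directional_albedo(brdf, wo_cos, n_samples=200_000, rng=None):
    """Monte Carlo estimate of E(wo) = ∫ f(wi, wo) cosθ_i dω_i over the
    upper hemisphere, using cosine-weighted sampling (pdf = cosθ / π)."""
    rng = rng or np.random.default_rng(0)
    wi_cos = np.sqrt(rng.random(n_samples))   # cosine-weighted cosθ_i
    # estimator: mean(f * cosθ / pdf) = π * mean(f)
    return np.pi * brdf(wi_cos, wo_cos).mean()

# A toy glossy lobe that loses energy (a stand-in for a rough
# single-scattering lobe; NOT the paper's model):
lossy = lambda wi_cos, wo_cos: 0.6 * (wi_cos * wo_cos) ** 2 / np.pi

wo_cos = 0.8
E = directional_albedo(lossy, wo_cos)         # < 1: energy is lost
# rescale by the directional albedo at this outgoing direction;
# the lobe shape is unchanged, only its magnitude is corrected
compensated = lambda wi_cos, wo_cos: lossy(wi_cos, wo_cos) / E
E_fixed = directional_albedo(compensated, wo_cos)   # ≈ 1: white furnace
```

In practice E varies with the outgoing direction (and roughness, and index of refraction), which is why the paper fits it with low-degree rational polynomials instead of evaluating the integral or storing tables.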
AppProp An, Xiaobo; Pellacini, Fabio
ACM Transactions on Graphics, 08/2008, Volume 27, Issue 3
Journal Article
Peer-reviewed
We present an intuitive and efficient method for editing the appearance of complex spatially-varying datasets, such as images and measured materials. In our framework, users specify rough adjustments that are refined interactively by enforcing the policy that similar edits are applied to spatially-close regions of similar appearance. Rather than proposing a specific user interface, our method allows artists to quickly and imprecisely specify the initial edits with whatever method or workflow they feel most comfortable with. An energy optimization formulation propagates the initial rough adjustments to the final refined ones by enforcing the editing policy over all pairs of points in the dataset. We show that this formulation is equivalent to solving a large linear system defined by a dense matrix. We derive an approximate algorithm to compute such a solution interactively by taking advantage of the inherent structure of the matrix. We demonstrate our approach by editing images, HDR radiance maps, and measured materials. Finally, we show that our framework generalizes prior methods while providing significant improvements in generality, robustness and efficiency.
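The editing policy ("similar edits to spatially-close regions of similar appearance") can be sketched with a dense affinity-weighted average. This is only a naive stand-in: the paper instead minimizes an all-pairs energy and exploits the low rank of the dense matrix to reach interactive rates. All parameter names below are illustrative:

```python
import numpy as np

def propagate_edits(features, positions, edit_mask, edits,
                    sigma_f=0.2, sigma_s=0.3):
    """Spread rough user edits to all points by affinity-weighted
    averaging: a point receives edit values from edited points that are
    both spatially close and similar in appearance.

    features:  (n, df) appearance vectors (e.g. colors)
    positions: (n, ds) spatial coordinates
    edit_mask: (n,) bool, True where the user specified an edit
    edits:     (n,) rough edit values (only masked entries are used)
    """
    f = np.asarray(features, dtype=np.float64)
    x = np.asarray(positions, dtype=np.float64)
    df = ((f[:, None, :] - f[None, :, :]) ** 2).sum(-1)
    dx = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    z = np.exp(-df / sigma_f**2) * np.exp(-dx / sigma_s**2)  # affinity
    w = z[:, edit_mask]                  # affinities to edited points
    return w @ edits[edit_mask] / w.sum(axis=1)
```

This dense formulation is O(n^2) in both time and memory, which is exactly why the approximate low-rank solver in the paper matters for interactive editing of full images and measured materials.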
Version control systems are the foundation of collaborative workflows for text documents. For 3D environments, though, version control is still an open problem due to the size and heterogeneity of 3D scene data. In this paper, we present a practical version control system for 3D scenes comprising shapes, materials, textures, and animations, combined together in scene graphs. We version objects at their finest granularity, to make repositories smaller and to allow artists to work concurrently on the same object. Since, for some scene data, computing an optimal set of changes between versions is not computationally feasible, version control systems use heuristics. Compared to prior work, we propose heuristics that are efficient, robust, and independent of the application. We test our system on a variety of large scenes edited with different workflows, and show that our approach handles all cases well while remaining efficient as scene size increases. Compared to prior work, we are significantly faster and more robust. A user study confirms that our system aids collaboration.
Although real-world surfaces can exhibit significant variation in materials - glossy, diffuse, metallic, etc. - printers are usually used to reproduce color or gray-scale images. We propose a complete system that uses appropriate inks and foils to print documents with a variety of material properties. Given a set of inks with known Bidirectional Reflectance Distribution Functions (BRDFs), our system automatically finds the optimal linear combinations to approximate the BRDFs of the target documents. Novel gamut-mapping algorithms preserve the relative glossiness between different BRDFs, and halftoning is used to produce patterns to be sent to the printer. We demonstrate the effectiveness of this approach with printed samples of a number of measured spatially-varying BRDFs.
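The "optimal linear combinations" step can be sketched as a non-negative least-squares fit of sampled ink BRDFs to a sampled target BRDF. The solver below is a generic projected-gradient stand-in, not the paper's actual optimization, and it omits the gamut-mapping and halftoning stages; all names are illustrative:

```python
import numpy as np

def fit_ink_weights(ink_brdfs, target, iters=2000, lr=None):
    """Find non-negative mixing weights w so that ink_brdfs @ w ≈ target.

    ink_brdfs: (n_samples, n_inks), each column one ink's BRDF sampled
    at the same set of directions; target: (n_samples,) samples of the
    BRDF to reproduce. Projected gradient descent on the least-squares
    objective, projecting onto w >= 0 after each step.
    """
    A = np.asarray(ink_brdfs, dtype=np.float64)
    b = np.asarray(target, dtype=np.float64)
    if lr is None:
        lr = 1.0 / np.linalg.norm(A.T @ A, 2)   # safe step size
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        w -= lr * (A.T @ (A @ w - b))           # gradient step
        np.clip(w, 0.0, None, out=w)            # project onto w >= 0
    return w
```

The non-negativity constraint reflects that ink coverages cannot be negative; in a full pipeline the resulting weights would then be gamut-mapped and converted to halftone patterns per spatial location.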