Empowered by deep learning, recent methods for material capture can estimate spatially-varying reflectance from a single photograph. Such lightweight capture is in stark contrast with the tens or hundreds of pictures required by traditional optimization-based approaches. However, a single image is often simply not enough to observe the rich appearance of real-world materials. We present a deep-learning method capable of estimating material appearance from a variable number of uncalibrated and unordered pictures captured with a handheld camera and flash. Thanks to an order-independent fusing layer, this architecture extracts the most useful information from each picture, while benefiting from strong priors learned from data. The method can handle both view and light direction variation without calibration. We show how our method improves its prediction with the number of input pictures, and reaches high-quality reconstructions with as few as 1 to 10 images, a sweet spot between existing single-image and complex multi-image approaches.
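The order-independent fusing layer mentioned above can be made concrete with a permutation-invariant pooling step. The sketch below is an assumption about how such a layer might look (the tensor shapes and function name are hypothetical, not the authors' code): per-image feature maps are reduced with a symmetric max over the image axis, so the fused result does not depend on the order or the number of input pictures.

```python
import numpy as np

def fuse_features(per_image_features: np.ndarray) -> np.ndarray:
    """Fuse features from a variable number of pictures.

    A symmetric max over the image axis makes the result independent
    of the order and count of the inputs. Assumed shapes:
    (num_images, channels, height, width) -> (channels, height, width).
    """
    return per_image_features.max(axis=0)
```

Because max is commutative and associative, shuffling the input pictures leaves the fused features unchanged, which is what lets such a network accept unordered photo sets of any size.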
Controlling Material Appearance by Examples Hu, Yiwei; Hašan, Miloš; Guerrero, Paul ...
Computer Graphics Forum, July 2022, Volume 41, Issue 4
Journal Article
Peer-reviewed
Open access
Despite the ubiquitous use of material maps in modern rendering pipelines, their editing and control remain a challenge. In this paper, we present an example-based material control method to augment input material maps based on user-provided material photos. We train a tileable version of MaterialGAN and leverage its material prior to guide the appearance transfer, optimizing its latent space using differentiable rendering. Our method transfers the micro- and meso-structure textures of user-provided target photograph(s), while preserving the structure and quality of the input material. We show our method can control existing material maps, increasing realism or generating new, visually appealing materials.
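The latent-space optimization guided by differentiable rendering that this abstract describes can be sketched as a standard gradient loop. The code below is a minimal illustration under assumptions: `generator` stands in for a tileable MaterialGAN mapping a latent code to material maps, and `render` for a differentiable renderer; both names are placeholders, not the authors' API.

```python
import torch

def transfer_appearance(generator, render, target_photo,
                        latent_init, steps=200, lr=0.05):
    """Optimize a latent code so the rendered material matches a photo.

    `generator` and `render` are assumed differentiable callables
    (placeholders for MaterialGAN and a differentiable renderer).
    """
    latent = latent_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([latent], lr=lr)
    for _ in range(steps):
        maps = generator(latent)      # material maps from the latent code
        rendering = render(maps)      # differentiable rendering of the maps
        loss = torch.nn.functional.l1_loss(rendering, target_photo)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return latent.detach()
```

In practice the loss would compare renderings under one or more known lighting conditions, and the GAN prior constrains the optimized maps to stay on the manifold of plausible materials.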
In this paper, we propose a deep learning approach for estimating spatially-varying BRDFs (SVBRDFs) from a single image. Most existing deep learning techniques use pixel-wise loss functions, which limit the flexibility of the networks in handling this highly unconstrained problem. Moreover, since obtaining ground-truth SVBRDF parameters is difficult, most methods typically train their networks on synthetic images and, therefore, do not generalize effectively to real examples. To avoid these limitations, we propose an adversarial framework for this application. Specifically, we estimate the material properties using an encoder-decoder convolutional neural network (CNN) and train it through a series of discriminators that distinguish the output of the network from ground truth. To address the gap between the data distributions of synthetic and real images, we train our network on both synthetic and real examples. Specifically, we propose a strategy to train our network on pairs of real images of the same object under different lighting. We demonstrate that our approach handles a variety of cases better than state-of-the-art methods.
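The discriminator-based supervision described above follows the usual adversarial recipe: a discriminator scores predicted maps against ground truth, and the estimator is trained to fool it. The sketch below shows this pattern with a toy discriminator and a non-saturating generator loss; the architecture is an assumed minimal stand-in, not the paper's networks.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Toy patch discriminator (assumed stand-in, not the paper's)."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)  # real/fake logits per patch

def adversarial_loss(disc, fake_maps):
    """Non-saturating generator loss: push the discriminator's
    scores on predicted maps toward the 'real' label."""
    scores = disc(fake_maps)
    return nn.functional.binary_cross_entropy_with_logits(
        scores, torch.ones_like(scores))
```

In a full training loop one such loss would be computed per predicted SVBRDF map (albedo, normals, roughness, ...), each with its own discriminator, alternating with discriminator updates on real/fake pairs.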
Guided Fine‐Tuning for Large‐Scale Material Transfer Deschaintre, Valentin; Drettakis, George; Bousseau, Adrien
Computer Graphics Forum, July 2020, Volume 39, Issue 4
Journal Article
Peer-reviewed
Open access
We present a method to transfer the appearance of one or a few exemplar SVBRDFs to a target image representing similar materials. Our solution is extremely simple: we fine-tune a deep appearance-capture network on the provided exemplars, such that it learns to extract similar SVBRDF values from the target image. We introduce two novel material capture and design workflows that demonstrate the strength of this simple approach. Our first workflow allows users to produce plausible SVBRDFs of large-scale objects from only a few pictures. Specifically, users need only take a single picture of a large surface and a few close-up flash pictures of some of its details. We use existing methods to extract SVBRDF parameters from the close-ups, and our method to transfer these parameters to the entire surface, enabling the lightweight capture of surfaces several meters wide, such as murals, floors and furniture. In our second workflow, we provide a powerful way for users to create large SVBRDFs from internet pictures by transferring the appearance of existing, pre-designed SVBRDFs. By selecting different exemplars, users can control the materials assigned to the target image, greatly enhancing the creative possibilities offered by deep appearance capture.
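The fine-tuning step at the heart of this approach can be sketched as a short supervised loop: a pretrained capture network is briefly optimized so its predictions on the exemplar images match the exemplar SVBRDF maps. The function below is a hedged illustration; `network`, `exemplar_images`, and `exemplar_maps` are placeholder names, and the loss and optimizer settings are assumptions rather than the authors' exact choices.

```python
import torch

def fine_tune(network, exemplar_images, exemplar_maps,
              steps=100, lr=1e-4):
    """Briefly adapt a pretrained capture network to a few exemplars.

    Each exemplar pairs an input image with its known SVBRDF maps;
    a short optimization specializes the network to this material
    family before it is applied to the target image.
    """
    opt = torch.optim.Adam(network.parameters(), lr=lr)
    for _ in range(steps):
        for image, maps in zip(exemplar_images, exemplar_maps):
            pred = network(image)
            loss = torch.nn.functional.l1_loss(pred, maps)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return network
```

After fine-tuning, running the specialized network on the target photograph yields SVBRDF maps whose statistics follow the exemplars while respecting the target's structure.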
This paper presents a deep-learning-based method for estimating spatially varying surface reflectance properties from a single image of a planar surface under unknown natural lighting. The method is trained using only photographs of exemplar materials, without referencing any artist-generated or densely measured spatially varying surface reflectance training data. Our method is based on an empirical study of Li et al.'s [LDPT17] self-augmentation training strategy, which shows that the main role of the initial approximative network is to provide guidance on the inherent ambiguities in single-image appearance estimation. Furthermore, our study indicates that this initial network can be inexact (i.e., trained from other data sources) as long as it resolves the inherent ambiguities. We show that a single-image estimation network trained without manually labeled data outperforms prior work in terms of both accuracy and generality.
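The self-augmentation idea [LDPT17] referenced above can be sketched in a few lines: an approximate network labels unlabeled photographs, the predicted parameters are re-rendered into consistent input images, and the resulting (image, parameters) pairs become extra supervised training data. The sketch below is an assumed outline of that loop; `network` and `render` are placeholder callables, not code from either paper.

```python
import torch

def self_augment(network, render, unlabeled_photos):
    """Generate pseudo-labeled training pairs from unlabeled photos.

    The approximate network estimates reflectance parameters, and
    re-rendering those parameters yields an input image that is
    exactly consistent with them, giving a supervised pair.
    """
    pairs = []
    with torch.no_grad():
        for photo in unlabeled_photos:
            params = network(photo)   # approximate parameter estimate
            image = render(params)    # synthesize a consistent input
            pairs.append((image, params))
    return pairs
```

Training on these self-generated pairs is what lets the estimation network improve beyond its inexact initial supervision, provided the initial network already resolves the key ambiguities.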