We present SeamlessGAN, a method capable of automatically generating tileable texture maps from a single input exemplar. In contrast to most existing methods, which focus solely on the synthesis problem, our work tackles both problems, synthesis and tileability, simultaneously. Our key idea is the observation that tiling a latent space within a generative network trained using adversarial expansion techniques produces outputs with continuity at the seam intersection, which can then be turned into tileable images by cropping the central area. Since not every value of the latent space is valid for producing high-quality outputs, we leverage the discriminator as a perceptual error metric capable of identifying artifact-free textures during a sampling process. Further, in contrast to previous work on deep texture synthesis, our model is designed and optimized to work with multi-layered texture representations, enabling textures composed of multiple maps such as albedo, normals, etc. We extensively test our design choices for the network architecture, loss function, and sampling parameters. We show qualitatively and quantitatively that our approach outperforms previous methods and works for textures of different types.
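The tile-then-crop idea can be illustrated with a minimal sketch (not the authors' code, and operating on a plain array rather than a trained latent space): repeat a grid 2x2 so its borders meet in the middle, then crop the central region, which now contains the seam intersection.

```python
import numpy as np

def tile_and_crop(latent, crop_frac=0.5):
    """Illustrative sketch: tile a 2D grid 2x2 so the original borders
    meet at the center, then crop the central region containing the
    seam cross. `crop_frac` is the fraction of the tiled extent kept."""
    tiled = np.tile(latent, (2, 2))                 # shape (2H, 2W)
    H, W = latent.shape
    ch, cw = int(2 * H * crop_frac), int(2 * W * crop_frac)
    top, left = (2 * H - ch) // 2, (2 * W - cw) // 2
    return tiled[top:top + ch, left:left + cw]

z = np.arange(16, dtype=float).reshape(4, 4)
center = tile_and_crop(z)   # 4x4 crop centered on the seam intersection
assert center.shape == (4, 4)
```

In the paper's setting the analogous operation happens on latent activations inside the generator, so the network's convolutions smooth the transition before the image is produced.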
We present a deep learning-based method for propagating spatially-varying visual material attributes (e.g. texture maps or image stylizations) to larger samples of the same or similar materials. For training, we leverage images of the material taken under multiple illuminations and a dedicated data augmentation policy, making the transfer robust to novel illumination conditions and affine deformations. Our model relies on a supervised image-to-image translation framework and is agnostic to the transferred domain; we showcase a semantic segmentation, a normal map, and a stylization. Following an image analogies approach, the method only requires the training data to contain the same visual structures as the input guidance. Our approach works at interactive rates, making it suitable for material editing applications. We thoroughly evaluate our learning methodology in a controlled setup, providing quantitative measures of performance. Last, we demonstrate that training the model on a single material is enough to generalize to materials of the same type without the need for massive datasets.
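A key practical point in such a pipeline is that any geometric augmentation must be applied identically to the photograph and to its attribute map so the pair stays aligned. A hypothetical sketch of such a paired policy (the actual augmentations used in the paper may differ):

```python
import numpy as np

def paired_augment(photo, attribute, rng):
    """Hypothetical paired augmentation: the same random rotation and
    flip are applied to the photo and to its attribute map (e.g. a
    normal map or segmentation), keeping the two aligned."""
    k = int(rng.integers(0, 4))          # random 90-degree rotation
    photo, attribute = np.rot90(photo, k), np.rot90(attribute, k)
    if rng.random() < 0.5:               # random horizontal flip
        photo, attribute = np.fliplr(photo), np.fliplr(attribute)
    return photo, attribute

rng = np.random.default_rng(0)
img = np.arange(9.0).reshape(3, 3)
lbl = img * 10                            # toy "attribute map"
a, b = paired_augment(img, lbl, rng)
assert np.allclose(b, a * 10)             # alignment preserved
```

Photometric augmentations (illumination changes), by contrast, would be applied to the photo only, since the attribute map should be illumination-invariant.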
Mortality among hemodialysis patients remains high. An elevated ultrafiltration rate adjusted by weight (UFR/W) has been associated with hypotension and higher risk of death and/or cardiovascular events.
We evaluated the association between UFR/W and mortality in 215 hemodialysis patients. The mean follow-up was 28 ± 6.12 months. We collected patients’ baseline characteristics and mean UFR/W throughout the follow-up.
Mean UFR/W was 9.0 ± 2.4 mL/kg/h, with tertile boundaries at 7.1 and 10.1 mL/kg/h. We divided our population according to the percentage of sessions with UFR/W above the limits described in the literature as associated with increased mortality (10.0 mL/kg/h and 13.0 mL/kg/h). Patients with higher UFR/W were younger, with higher interdialytic weight gain and weight reduction percentage, but lower dry, pre-, and post-dialysis weight. Throughout the follow-up, 46 (21.4%) patients died, the majority over 70 years old, diabetic, or with cardiovascular disease. There were neither differences in mortality between groups nor differences in UFR/W between patients who died and those who did not. Contrary to previous studies, we did not find an association between UFR/W and mortality, possibly due to a higher prevalence of cardiovascular protection drugs and lower UFR/W in our cohort.
The highest UFR/W values were observed in younger patients with lower weight and were not associated with increased mortality.
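For clarity, the quantity studied above can be computed as the fluid volume removed during a session divided by body weight and session duration. A minimal sketch, assuming fluid removed equals the pre-minus-post dialysis weight difference (1 kg ≈ 1 L) and normalization by dry weight (the exact normalization used in the study is an assumption here):

```python
def ufr_per_weight(pre_kg, post_kg, dry_kg, hours):
    """Sketch of the weight-adjusted ultrafiltration rate (UFR/W) in
    mL/kg/h. Assumes removed fluid = pre - post dialysis weight
    (1 kg ~ 1 L) and normalization by dry weight."""
    removed_ml = (pre_kg - post_kg) * 1000.0
    return removed_ml / (dry_kg * hours)

# e.g. 2.5 kg removed over a 4 h session, dry weight 70 kg:
rate = ufr_per_weight(72.5, 70.0, 70.0, 4.0)   # ~8.9 mL/kg/h
```

A value near 9 mL/kg/h, as in this example, matches the cohort mean reported above and sits below the 10-13 mL/kg/h thresholds linked to mortality in the prior literature.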
We introduce TexTile, a novel differentiable metric to quantify the degree to which a texture image can be concatenated with itself without introducing repeating artifacts (i.e., its tileability). Existing methods for tileable texture synthesis focus on general texture quality but lack an explicit analysis of the intrinsic repeatability properties of a texture. In contrast, our TexTile metric effectively evaluates the tileable properties of a texture, opening the door to more informed synthesis and analysis of tileable textures. Under the hood, TexTile is formulated as a binary classifier carefully built from a large dataset of textures of different styles, semantics, and regularities, together with human annotations. Key to our method is a set of architectural modifications to baseline pre-trained image classifiers to overcome their shortcomings at measuring tileability, along with a custom data augmentation and training regime aimed at increasing robustness and accuracy. We demonstrate that TexTile can be plugged into different state-of-the-art texture synthesis methods, including diffusion-based strategies, to generate tileable textures while keeping or even improving the overall texture quality. Furthermore, we show that TexTile can objectively evaluate any tileable texture synthesis method, whereas the current mix of existing metrics produces uncorrelated scores which heavily hinder progress in the field.
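To make the notion of a tileability score concrete, here is a toy hand-crafted stand-in (emphatically not TexTile itself, which is a learned classifier): it penalizes intensity jumps across the borders that would meet if the texture were repeated.

```python
import numpy as np

def seam_error(texture):
    """Toy tileability proxy: mean absolute difference across the
    left/right and top/bottom borders that become adjacent when the
    texture is tiled. Lower is more tileable."""
    lr = np.abs(texture[:, 0] - texture[:, -1]).mean()
    tb = np.abs(texture[0, :] - texture[-1, :]).mean()
    return 0.5 * (lr + tb)

flat = np.full((8, 8), 0.5)                       # trivially tileable
ramp = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))  # strong seam on wrap
assert seam_error(flat) == 0.0
assert seam_error(ramp) > seam_error(flat)
```

Such pixel-level proxies miss perceptual repetition artifacts (visible periodic structure, semantic discontinuities), which is precisely the gap a learned, differentiable metric like TexTile is meant to close.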
The Ecological Metadata Language (EML) is an XML-based metadata specification developed for the description of datasets and their associated context in ecology. The conversion of EML metadata to an ontological form has been addressed in existing observation ontologies, which can provide a degree of computational semantics to the description of the datasets, including the reuse of scientific ontologies to express the observed entities and their characteristics. However, a number of practical issues regarding the automated translation of the available EML datasets to a representation with formal semantics, and its subsequent integration into Research Information Systems (RIS), require separate attention. These issues include expressing meaning by using existing terminologies, mapping EML to models of research information, and mapping to mainstream metadata schemas. This paper describes the approach taken for that purpose in the VOA3R project, detailing the main mapping and translation decisions taken so far and some common pitfalls with metadata records as they are currently available through the Web.
Neural material representations are becoming a popular way to represent materials for rendering. They are more expressive than analytic models and occupy less memory than tabulated BTFs. However, existing neural materials are immutable, meaning that their output for a certain query of UVs, camera, and light vector is fixed once they are trained. While this is practical when there is no need to edit the material, it can become very limiting when the fragment of the material used for training is too small or not tileable, which frequently happens when the material has been captured with a gonioreflectometer. In this paper, we propose a novel neural material representation which jointly tackles the problems of BTF compression, tiling, and extrapolation. At test time, our method uses a guidance image as input to condition the neural BTF to the structural features of this input image. Then, the neural BTF can be queried as a regular BTF using UVs, camera, and light vectors. Every component in our framework is purposefully designed to maximize BTF encoding quality at minimal parameter count and computational complexity, achieving competitive compression rates compared with previous work. We demonstrate the results of our method on a variety of synthetic and captured materials, showing its generality and capacity to learn to represent many optical properties.
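The query interface described above, a function from (UV, camera direction, light direction) to reflectance, can be sketched with a tiny placeholder MLP. The weights below are random stand-ins, not a trained model, and the architecture is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(7, 32)) * 0.1   # placeholder weights; a real
W2 = rng.normal(size=(32, 3)) * 0.1   # neural BTF would be trained

def neural_btf(uv, wo, wi):
    """Sketch of a neural BTF query: map a 2D UV plus the first two
    components of the camera (wo) and light (wi) directions to RGB."""
    x = np.concatenate([uv, wo[:2], wi[:2], [1.0]])  # 7-D input
    h = np.tanh(x @ W1)                               # hidden layer
    return h @ W2                                     # RGB reflectance

rgb = neural_btf(np.array([0.25, 0.75]),
                 np.array([0.0, 0.0, 1.0]),
                 np.array([0.3, 0.3, 0.9]))
assert rgb.shape == (3,)
```

The paper's contribution is what sits around this interface: conditioning the network on a guidance image so the same compressed model can tile and extrapolate beyond the captured fragment.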
To enable light fields of large environments to be captured, they would have to be sparse, i.e. with a relatively large distance between views. Such sparseness, however, makes subsequent processing much more difficult than would be the case with dense light fields. This includes segmentation. In this paper, we address the problem of meaningful segmentation of a sparse planar light field, leading to segments that are coherent between views. In addition, our method uniquely does not assume that all surfaces in the environment are perfect Lambertian reflectors, which further broadens its applicability. Our fully automatic segmentation pipeline leverages scene structure and does not require the user to navigate through the views to fix inconsistencies. The key idea is to combine coarse estimations, given by an over-segmentation of the scene into super-rays, with detailed ray-based processing. We show the merit of our algorithm by means of a novel way to perform intrinsic light field decomposition, outperforming state-of-the-art methods.
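The coarse over-segmentation step can be sketched as a nearest-centroid assignment in a joint position+color space, in the spirit of SLIC-style superpixels generalized to rays. This is an illustrative simplification; the paper's super-ray construction also exploits scene structure across views.

```python
import numpy as np

def assign_to_super_rays(rays, centroids, color_weight=0.5):
    """Illustrative coarse step: assign each ray (x, y, r, g, b) to the
    nearest super-ray centroid under a weighted position+color
    distance."""
    pos = rays[:, :2][:, None, :] - centroids[:, :2][None, :, :]
    col = rays[:, 2:][:, None, :] - centroids[:, 2:][None, :, :]
    dist = (pos ** 2).sum(-1) + color_weight * (col ** 2).sum(-1)
    return dist.argmin(axis=1)            # centroid index per ray

rays = np.array([[0.0, 0.0, 1.0, 0.0, 0.0],
                 [5.0, 5.0, 0.0, 1.0, 0.0]])
cents = np.array([[0.1, 0.1, 1.0, 0.0, 0.0],
                  [5.1, 4.9, 0.0, 1.0, 0.0]])
labels = assign_to_super_rays(rays, cents)
```

The fine, ray-based stage would then refine labels near segment boundaries, where the coarse assignment is least reliable.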
We propose a method to estimate the mechanical parameters of fabrics using a casual capture setup with a depth camera. Our approach enables the creation of mechanically-correct digital representations of real-world textile materials, which is a fundamental step for many interactive design and engineering applications. As opposed to existing capture methods, which typically require expensive setups, video sequences, or manual intervention, our solution can capture at scale, is agnostic to the optical appearance of the textile, and facilitates fabric arrangement by non-expert operators. To this end, we propose a sim-to-real strategy to train a learning-based framework that takes one or multiple images as input and outputs a full set of mechanical parameters. Thanks to carefully designed data augmentation and transfer learning protocols, our solution generalizes to real images despite being trained only on synthetic data, hence successfully closing the sim-to-real loop. Key in our work is to demonstrate that evaluating regression accuracy based on similarity in parameter space leads to inaccurate distances that do not match human perception. To overcome this, we propose a novel metric for fabric drape similarity that operates on the image domain instead of the parameter space, allowing us to evaluate our estimation within the context of a similarity rank. We show that our metric correlates with human judgments about the perception of drape similarity, and that our model predictions produce perceptually accurate results compared to the ground-truth parameters.
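The contrast between parameter-space and image-domain evaluation can be illustrated with a toy stand-in for the latter (not the paper's learned metric): compare two drapes by the overlap of their rendered silhouettes rather than by raw parameter distance.

```python
import numpy as np

def drape_similarity(mask_a, mask_b):
    """Toy image-domain comparison: intersection-over-union of two
    binary drape silhouettes, so fabrics are compared by what the
    drape looks like, not by distance in parameter space."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 1.0

a = np.zeros((4, 4), bool); a[1:3, 1:3] = True   # toy silhouette A
b = np.zeros((4, 4), bool); b[1:3, 1:4] = True   # toy silhouette B
iou = drape_similarity(a, b)                      # 4 / 6
```

Two parameter sets far apart in parameter space can produce nearly identical drapes (and vice versa), which is why an image-domain similarity rank tracks human judgments better.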
The field of image classification has shown outstanding success thanks to the development of deep learning techniques. Despite the great performance obtained, most of the work has focused on natural images, ignoring other domains like artistic depictions. In this paper, we use transfer learning techniques to propose a new classification network with better performance on illustration images. Starting from the deep convolutional network VGG19, pre-trained with natural images, we propose two novel models which learn object representations in the new domain. Our optimized network learns new low-level features of the images (colours, edges, textures) while keeping the knowledge of the objects and shapes that it already learned from the ImageNet dataset, thus requiring much less data for training. We propose a novel dataset of illustration images labelled by content, on which our optimized architecture achieves \(\textbf{86.61\%}\) top-1 and \(\textbf{97.21\%}\) top-5 precision. We additionally demonstrate that our model is still able to recognize objects in photographs.
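The selective fine-tuning idea, adapting low-level features while preserving pre-trained high-level knowledge, can be sketched with a toy two-layer linear model standing in for VGG19 (the actual training setup in the paper differs):

```python
import numpy as np

def fine_tune_step(x, y, W_low, W_high, lr=0.1):
    """Sketch of selective fine-tuning: the low-level layer (W_low) is
    updated for the new illustration domain while the pre-trained
    high-level layer (W_high) stays frozen. Uses the squared-error
    gradient of a toy linear model."""
    h = x @ W_low
    pred = h @ W_high
    err = pred - y
    grad_low = np.outer(x, err @ W_high.T)   # gradient w.r.t. W_low only
    return W_low - lr * grad_low, W_high      # W_high unchanged (frozen)

rng = np.random.default_rng(0)
W_low, W_high = rng.normal(size=(3, 4)), rng.normal(size=(4, 2))
x, y = np.ones(3), np.zeros(2)
new_low, new_high = fine_tune_step(x, y, W_low, W_high)
assert np.allclose(new_high, W_high)      # frozen layer untouched
assert not np.allclose(new_low, W_low)    # low-level features adapt
```

In a deep network the same effect is obtained by setting the frozen layers' parameters to be non-trainable during optimization, so gradients only flow into the layers being adapted.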