Understanding how people explore immersive virtual environments is crucial for many applications, such as designing virtual reality (VR) content, developing new compression algorithms, or learning computational models of saliency or visual attention. Whereas a body of recent work has focused on modeling saliency in desktop viewing conditions, VR is very different from these conditions in that viewing behavior is governed by stereoscopic vision and by the complex interaction of head orientation, gaze, and other kinematic constraints. To further our understanding of viewing behavior and saliency in VR, we capture and analyze gaze and head orientation data of 169 users exploring stereoscopic, static omni-directional panoramas, for a total of 1980 head and gaze trajectories for three different viewing conditions. We provide a thorough analysis of our data, which leads to several important insights, such as the existence of a particular fixation bias, which we then use to adapt existing saliency predictors to immersive VR conditions. In addition, we explore other applications of our data and analysis, including automatic alignment of VR video cuts, panorama thumbnails, panorama video synopsis, and saliency-based compression.
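The fixation-bias idea above can be illustrated with a minimal sketch: re-weighting a baseline equirectangular saliency map with a latitude prior concentrated near the equator. The Gaussian form and the `sigma_deg` value are illustrative assumptions, not parameters reported in the paper.

```python
import numpy as np

def apply_equator_bias(saliency, sigma_deg=20.0):
    """Re-weight a 2D equirectangular saliency map (rows = latitude,
    +90 deg at the top to -90 deg at the bottom) with a Gaussian prior
    centered on the equator. sigma_deg is a hypothetical parameter."""
    h, w = saliency.shape
    lat = np.linspace(90.0, -90.0, h)              # latitude of each row
    prior = np.exp(-0.5 * (lat / sigma_deg) ** 2)  # equator-centered Gaussian
    biased = saliency * prior[:, None]
    total = biased.sum()
    return biased / total if total > 0 else biased  # renormalize to sum to 1

# Example: a uniform map becomes concentrated around the equator.
uniform = np.ones((90, 180))
biased = apply_equator_bias(uniform)
```

Any learned or analytic latitude prior could be substituted for the Gaussian; the essential step is the per-row multiplication followed by renormalization.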
Generational overlap affects the care time demands on parents and grandparents worldwide. Here, we present the first global estimates of the experience of simultaneously having frail older parents and young children (“sandwichness”) or young grandchildren (“grandsandwichness”) for the 1970–2040 cohorts, using demographic methods and microsimulations. We find that sandwichness is more prevalent in the Global South—for example, almost twice as prevalent in sub-Saharan Africa as it is in Europe for the 1970 cohort—but is expected to decline globally by one-third between 1970 and 2040. The Global North might have reached a peak in the simultaneous care time demands from multiple generations, but the duration of the grandsandwich state will increase by up to one year in Africa and Asia. This increasing generational overlap implies more care time demands over the entire adult life course, but also opens up an opportunity for the full potential of grandparenthood to materialize.
We present a novel hyperspectral image reconstruction algorithm, which overcomes the long-standing tradeoff between spectral accuracy and spatial resolution in existing compressive imaging approaches. Our method consists of two steps: First, we learn nonlinear spectral representations from real-world hyperspectral datasets; for this, we build a convolutional autoencoder which allows reconstructing its own input through its encoder and decoder networks. Second, we introduce a novel optimization method, which jointly regularizes the fidelity of the learned nonlinear spectral representations and the sparsity of gradients in the spatial domain, by means of our new fidelity prior. Our technique can be applied to any existing compressive imaging architecture, and has been thoroughly tested both in simulation, and by building a prototype hyperspectral imaging system. It outperforms the state-of-the-art methods from each architecture, both in terms of spectral accuracy and spatial resolution, while its computational complexity is reduced by two orders of magnitude with respect to sparse coding techniques. Moreover, we present two additional applications of our method: hyperspectral interpolation and demosaicing. Lastly, we have created a new high-resolution hyperspectral dataset containing sharper images of more spectral variety than existing ones, available through our project website.
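The core idea of regularizing a compressive reconstruction toward a learned spectral representation can be sketched in a drastically simplified form: here the nonlinear autoencoder is replaced by a linear spectral basis `B`, which turns the joint prior into a closed-form least-squares problem. All sizes and the `alpha` weight are illustrative, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: recover a spectrum x (n_bands values) from fewer coded
# measurements y = Phi @ x, using a low-dimensional spectral subspace B
# as a linear stand-in for the paper's learned autoencoder.
n_bands, n_meas, k = 32, 12, 4
B, _ = np.linalg.qr(rng.normal(size=(n_bands, k)))          # "learned" spectral basis
x_true = B @ rng.normal(size=k)                             # spectrum in the subspace
Phi = rng.normal(size=(n_meas, n_bands)) / np.sqrt(n_meas)  # coded sensing matrix
y = Phi @ x_true                                            # compressive measurement

def reconstruct(y, Phi, B, alpha=1.0):
    """Minimize ||Phi x - y||^2 + alpha * ||(I - B B^T) x||^2:
    data fidelity plus distance to the learned spectral subspace,
    solved in closed form via the normal equations."""
    n = Phi.shape[1]
    P = np.eye(n) - B @ B.T        # projector onto the subspace complement
    return np.linalg.solve(Phi.T @ Phi + alpha * P, Phi.T @ y)

x_hat = reconstruct(y, Phi, B)     # recovers x_true despite n_meas < n_bands
```

The subspace term resolves the underdetermined data term, which is the same division of labor the paper assigns to its learned nonlinear spectral prior.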
Although the uniquely high thermal conductivity of graphene is well known, the exploitation of graphene in thermally conductive nanomaterials and devices is limited by the inefficiency of thermal contacts between the individual nanosheets. A fascinating yet experimentally challenging route to enhancing thermal conductance at contacts between graphene nanosheets is through molecular junctions, which allow covalently connecting nanosheets that otherwise interact only via weak van der Waals forces. Besides the mere existence of covalent connections, the choice of molecular structures to be used as thermal junctions should be guided by their vibrational properties, in terms of phonon transfer through the molecular junction. In this paper, density functional tight-binding combined with the Green's function formalism was applied to calculate the thermal conductance and phonon spectra of several different aliphatic and aromatic molecular junctions between graphene nanosheets. The effects of molecular junction length, conformation, and aromaticity were studied in detail and correlated with phonon tunnelling spectra. The theoretical insight provided by this work can guide future experimental studies in selecting suitable molecular junctions, in order to enhance thermal transport by suppressing interfacial thermal resistances. This is attractive for various systems, including graphene nanopapers and graphene polymer nanocomposites, as well as related devices. In a broader view, the possibility of designing molecular junctions to control phonon transport currently appears to be an efficient way to produce phononic devices and to control heat management in nanostructures.
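Once a phonon transmission spectrum of a junction is available (from Green's function calculations such as those above), the thermal conductance follows from a Landauer-type integral over frequency weighted by the temperature derivative of the Bose-Einstein occupation. The sketch below evaluates that integral numerically for a made-up ballistic transmission window; the cutoff frequency is purely illustrative.

```python
import numpy as np

HBAR = 1.054571817e-34   # reduced Planck constant, J s
KB = 1.380649e-23        # Boltzmann constant, J / K

def thermal_conductance(tau, omega, T):
    """Landauer-style phonon thermal conductance,
    G(T) = (1/2pi) * integral of hbar*w * tau(w) * (d n_BE / dT) dw,
    with n_BE the Bose-Einstein occupation, via trapezoidal quadrature."""
    x = HBAR * omega / (KB * T)
    dn_dT = (x / T) * np.exp(x) / np.expm1(x) ** 2   # d n_BE / dT
    f = HBAR * omega * tau(omega) * dn_dT
    integral = np.sum((f[1:] + f[:-1]) * np.diff(omega)) / 2.0
    return integral / (2.0 * np.pi)

# Toy transmission: perfectly ballistic below a hypothetical cutoff,
# a crude stand-in for a molecular junction's phonon window.
omega_c = 2.0e13                           # rad/s, illustrative cutoff
omega = np.linspace(1e9, omega_c, 20000)
ballistic = lambda w: np.ones_like(w)

G_100 = thermal_conductance(ballistic, omega, 100.0)   # W/K at 100 K
G_300 = thermal_conductance(ballistic, omega, 300.0)   # W/K at 300 K
```

For a ballistic window the conductance grows with temperature toward its classical ceiling of kB * omega_c / (2 pi); a real molecular junction's tau(w) would suppress transmission outside the frequencies where graphene and junction phonon spectra overlap.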
Current HDR acquisition techniques are based on either (i) fusing multibracketed, low dynamic range (LDR) images, (ii) modifying existing hardware and capturing different exposures simultaneously with multiple sensors, or (iii) reconstructing a single image with spatially-varying pixel exposures. In this paper, we propose a novel algorithm to recover high-quality HDR images from a single, coded exposure. The proposed reconstruction method builds on recently introduced ideas of convolutional sparse coding (CSC); this paper demonstrates how to make CSC practical for HDR imaging. We demonstrate that the proposed algorithm achieves higher-quality reconstructions than alternative methods; in addition, we evaluate optical coding schemes, analyze algorithmic parameters, and build a prototype coded HDR camera that demonstrates the utility of convolutional sparse HDR coding with a custom hardware platform.
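Convolutional sparse coding models a signal as a sum of filters convolved with sparse coefficient maps. The sketch below runs CSC inference on a 1D toy signal with ISTA (proximal gradient descent); it is a generic CSC illustration, not the paper's HDR-specific formulation, and the filters, sizes, and step size are assumptions.

```python
import numpy as np

# Two fixed unit-norm filters (a bump and an edge) stand in for a
# learned convolutional dictionary.
N = 64                                              # signal length
d1 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]); d1 /= np.linalg.norm(d1)
d2 = np.array([-1.0, -2.0, 0.0, 2.0, 1.0]); d2 /= np.linalg.norm(d2)
filters = [d1, d2]

# Ground-truth sparse coefficient maps with a few spikes.
z_true = [np.zeros(N), np.zeros(N)]
z_true[0][[10, 40]] = [1.0, -0.8]
z_true[1][[25]] = [1.2]
y = sum(np.convolve(z, d, mode="same") for z, d in zip(z_true, filters))

def ista_csc(y, filters, lam=0.01, eta=0.05, iters=3000):
    """Minimize 0.5*||y - sum_k d_k * z_k||^2 + lam * sum_k ||z_k||_1."""
    zs = [np.zeros_like(y) for _ in filters]
    for _ in range(iters):
        residual = y - sum(np.convolve(z, d, mode="same")
                           for z, d in zip(zs, filters))
        for i, d in enumerate(filters):
            # adjoint of 'same' convolution with an odd-length filter
            grad = -np.convolve(residual, d[::-1], mode="same")
            z = zs[i] - eta * grad
            zs[i] = np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0)
    return zs

zs = ista_csc(y, filters)
y_hat = sum(np.convolve(z, d, mode="same") for z, d in zip(zs, filters))
```

In the HDR setting the data term would additionally account for the per-pixel exposure code and sensor saturation, but the sparse inference loop has this same shape.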
A similarity measure for illustration style. Garces, Elena; Agarwala, Aseem; Gutierrez, Diego. ACM Transactions on Graphics, 07/2014, Volume 33, Issue 4. Journal article, peer reviewed.
This paper presents a method for measuring the similarity in style between two pieces of vector art, independent of content. Similarity is measured by the differences between four types of features: color, shading, texture, and stroke. Feature weightings are learned from crowdsourced experiments. This perceptual similarity enables style-based search. Using our style-based search feature, we demonstrate an application that allows users to create stylistically-coherent clip art mash-ups.
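The combination of per-channel feature distances with learned weights can be sketched as follows. The feature vectors and the weights here are made-up illustrative numbers, not the learned values or feature definitions from the paper.

```python
import numpy as np

def style_distance(feat_a, feat_b, weights):
    """Weighted sum of per-feature-type distances between two artworks."""
    d = {k: np.linalg.norm(np.asarray(feat_a[k]) - np.asarray(feat_b[k]))
         for k in feat_a}
    return sum(weights[k] * d[k] for k in d)

# Hypothetical learned weights over the four feature types.
weights = {"color": 0.4, "shading": 0.2, "texture": 0.25, "stroke": 0.15}

# Hypothetical feature vectors for three clip-art pieces.
a = {"color": [0.8, 0.1], "shading": [0.3], "texture": [0.5, 0.5], "stroke": [0.9]}
b = {"color": [0.7, 0.2], "shading": [0.4], "texture": [0.5, 0.4], "stroke": [0.2]}
c = {"color": [0.1, 0.9], "shading": [0.9], "texture": [0.0, 1.0], "stroke": [0.1]}

d_ab = style_distance(a, b, weights)   # a and b are stylistically close
d_ac = style_distance(a, c, weights)   # a and c are stylistically far
```

Style-based search then amounts to ranking a collection by this distance to a query piece.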
Traditional cinematography has relied for over a century on a well-established set of editing rules, called continuity editing, to create a sense of situational continuity. Despite massive changes in visual content across cuts, viewers in general experience no trouble perceiving the discontinuous flow of information as a coherent set of events. However, Virtual Reality (VR) movies are intrinsically different from traditional movies in that the viewer controls the camera orientation at all times. As a consequence, common editing techniques that rely on camera orientations, zooms, etc., cannot be used. In this paper we investigate key relevant questions to understand how well traditional movie editing carries over to VR, such as: Does the perception of continuity hold across edit boundaries? Under which conditions? Does viewers' observational behavior change after the cuts? To do so, we rely on recent cognition studies and the event segmentation theory, which states that our brains segment continuous actions into a series of discrete, meaningful events. We first replicate one of these studies to assess whether the predictions of such theory can be applied to VR. We next gather gaze data from viewers watching VR videos containing different edits with varying parameters, and provide the first systematic analysis of viewers' behavior and the perception of continuity in VR. From this analysis we make a series of relevant findings; for instance, our data suggests that predictions from the cognitive event segmentation theory are useful guides for VR editing; that different types of edits are equally well understood in terms of continuity; and that spatial misalignments between regions of interest at the edit boundaries favor a more exploratory behavior even after viewers have fixated on a new region of interest. In addition, we propose a number of metrics to describe viewers' attentional behavior in VR. We believe the insights derived from our work can be useful as guidelines for VR content creation.
Decomposing an input image into its intrinsic shading and reflectance components is a long-standing ill-posed problem. We present a novel algorithm that requires no user strokes and works on a single image. Based on simple assumptions about its reflectance and luminance, we first find clusters of similar reflectance in the image, and build a linear system describing the connections and relations between them. Our assumptions are less restrictive than widely-adopted Retinex-based approaches, and can be further relaxed in conflicting situations. The resulting system is robust even in the presence of areas where our assumptions do not hold. We show a wide variety of results, including natural images, objects from the MIT dataset and texture images, along with several applications, proving the versatility of our method.
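The cluster-based linear system can be illustrated on a 1D toy signal: each cluster gets one unknown log-reflectance value, and the equations ask the implied shading (log intensity minus reflectance) to vary smoothly between neighboring pixels. The setup below is an illustrative simplification, not the authors' exact system.

```python
import numpy as np

# 1D toy image: piecewise-constant log-reflectance per cluster plus a
# smooth log-shading ramp.
labels = np.array([0] * 10 + [1] * 10 + [2] * 10)   # cluster of each pixel
r_true = np.array([0.0, 0.7, -0.4])                 # log reflectance per cluster
shading = 0.01 * np.arange(30)                      # smooth log-shading ramp
log_I = r_true[labels] + shading                    # observed log intensity

# Least-squares rows: (log I - r) should change little between neighbors.
rows, rhs = [], []
for p in range(29):
    row = np.zeros(3)
    row[labels[p + 1]] += 1.0
    row[labels[p]] -= 1.0
    rows.append(row)
    rhs.append(log_I[p + 1] - log_I[p])
rows.append(np.array([1.0, 0.0, 0.0]))              # gauge: fix r_0 = 0
rhs.append(0.0)

r_hat, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
```

Shading is then recovered as `log_I - r_hat[labels]`; the global scale ambiguity between reflectance and shading is what the gauge row pins down.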
Many different techniques for measuring material appearance have been proposed in the last few years. These have produced large public datasets, which have been used for accurate, data-driven appearance modeling. However, although these datasets have allowed us to reach an unprecedented level of realism in visual appearance, editing the captured data remains a challenge. In this paper, we present an intuitive control space for predictable editing of captured BRDF data, which allows for artistic creation of plausible novel material appearances, bypassing the difficulty of acquiring novel samples. We first synthesize novel materials, extending the existing MERL dataset up to 400 mathematically valid BRDFs. We then design a large-scale experiment, gathering 56,000 subjective ratings on the high-level perceptual attributes that best describe our extended dataset of materials. Using these ratings, we build and train networks of radial basis functions to act as functionals mapping the perceptual attributes to an underlying PCA-based representation of BRDFs. We show that our functionals are excellent predictors of the perceived attributes of appearance. Our control space enables many applications, including intuitive material editing of a wide range of visual properties, guidance for gamut mapping, analysis of the correlation between perceptual attributes, or novel appearance similarity metrics. Moreover, our methodology can be used to derive functionals applicable to classic analytic BRDF representations. We release our code and dataset publicly, in order to support and encourage further research in this direction.
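A radial-basis-function mapping from attribute ratings to PCA coefficients can be sketched in a few lines. The five materials, three attributes, two PCA dimensions, and the kernel width below are made-up illustrative numbers, not data or settings from the paper.

```python
import numpy as np

# Hypothetical attribute ratings per material (rows) ...
X = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
# ... and hypothetical target PCA coefficients for each material.
Y = np.array([[ 0.2, -0.1],
              [ 0.9,  0.3],
              [-0.4,  0.8],
              [ 0.1,  0.5],
              [ 0.7, -0.6]])

def rbf_fit(X, Y, gamma=2.0, reg=1e-8):
    """Solve for Gaussian-RBF weights W so that K @ W ~= Y at the data."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    return np.linalg.solve(K + reg * np.eye(len(X)), Y)

def rbf_eval(Xq, X, W, gamma=2.0):
    """Evaluate the fitted functional at query attribute vectors Xq."""
    d2 = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2) @ W

W = rbf_fit(X, Y)
Y_hat = rbf_eval(X, X, W)   # the functional interpolates the training ratings
```

Editing then amounts to moving a point in attribute space and evaluating the functional, e.g. `rbf_eval(np.array([[0.5, 0.5, 0.5]]), X, W)` for an intermediate material.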
A framework for transient rendering. Jarabo, Adrian; Marco, Julio; Muñoz, Adolfo. ACM Transactions on Graphics, 11/2014, Volume 33, Issue 6. Journal article, peer reviewed, open access.
Recent advances in ultra-fast imaging have triggered many promising applications in graphics and vision, such as capturing transparent objects, estimating hidden geometry and materials, or visualizing light in motion. There is, however, very little work regarding the effective simulation and analysis of transient light transport, where the speed of light can no longer be considered infinite. We first introduce the transient path integral framework, formally describing light transport in transient state. We then analyze the difficulties arising when considering the light's time-of-flight in the simulation (rendering) of images and videos. We propose a novel density estimation technique that allows reusing sampled paths to reconstruct time-resolved radiance, and devise new sampling strategies that take into account the distribution of radiance along time in participating media. We then efficiently simulate time-resolved phenomena (such as caustic propagation, fluorescence or temporal chromatic dispersion), which can help design future ultra-fast imaging devices using an analysis-by-synthesis approach, as well as achieve a better understanding of the nature of light transport.
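The idea of reusing sampled paths to reconstruct time-resolved radiance can be sketched as a kernel density estimate over path arrival times: each sampled light path carries a contribution and a total optical path length, and its radiance arrives at t = length / c. This is a drastically simplified stand-in for the paper's density estimation, with a made-up path-length distribution, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
C = 1.0                      # speed of light in normalized scene units

# Hypothetical sampled paths: lengths drawn around a mean of 5 scene
# units, each carrying an equal radiance contribution.
n_paths = 5000
lengths = rng.normal(loc=5.0, scale=0.5, size=n_paths)
weights = np.full(n_paths, 1.0 / n_paths)

def transient_radiance(t_grid, lengths, weights, bandwidth=0.1):
    """Kernel density estimate of radiance arriving at each time in
    t_grid, smoothing every path's contribution with a Gaussian kernel
    centered on its arrival time."""
    t_arrival = lengths / C
    diff = (t_grid[:, None] - t_arrival[None, :]) / bandwidth
    kern = np.exp(-0.5 * diff ** 2) / (bandwidth * np.sqrt(2 * np.pi))
    return kern @ weights

t = np.linspace(2.0, 8.0, 200)
L_t = transient_radiance(t, lengths, weights)   # time-resolved radiance
```

Every sampled path contributes to all time bins it overlaps, which is precisely the reuse that makes density estimation attractive when per-frame path budgets would otherwise be prohibitive.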