Daylight vision begins when light activates cone photoreceptors in the retina, creating spatial patterns of neural activity. These cone signals are then combined and processed in downstream neural circuits, ultimately producing visual perception. Recent technical advances have made it possible to deliver visual stimuli to the retina that probe this processing by the visual system at its elementary resolution of individual cones. Physiological recordings from nonhuman primate retinas reveal the spatial organization of cone signals in retinal ganglion cells, including how signals from cones of different types are combined to support both spatial and color vision. Psychophysical experiments with human subjects characterize the visual sensations evoked by stimulating a single cone, including the perception of color. Future combined physiological and psychophysical experiments focusing on probing the elementary visual inputs are likely to clarify how neural processing generates our perception of the visual world.
Color, lightness, and glossiness are perceptual attributes associated with object reflectance. For these perceptual representations to be useful, they must correlate with physical reflectance properties of objects and not be overly affected by changes in illumination or viewing context. We employed a matching paradigm to investigate the perception of lightness and glossiness under geometric changes in illumination. Stimuli were computer simulations of spheres presented on a high-dynamic-range display. Observers adjusted the diffuse and specular reflectance components of a test sphere so that its appearance matched that of a reference sphere simulated under a different light field. Diffuse component matches were close to veridical across geometric changes in light field. In contrast, specular component matches were affected by geometric changes in light field. We tested several independence principles and found (i) that the effect of changing light field geometry on the diffuse component matches was independent of the reference sphere specular component; (ii) that the effect of changing light field geometry on the specular component matches was independent of the reference sphere diffuse component; and (iii) that diffuse and specular components of the match depended only slightly on the roughness of the specular component. Finally, we found that equating simple statistics (i.e., standard deviation, skewness, and kurtosis) computed from the luminance histograms of the spheres did not predict the matches: these statistics differed substantially between spheres that matched in appearance across geometric changes in the light field.
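The histogram statistics named above (standard deviation, skewness, kurtosis) can be computed directly from an image's luminance values. A minimal sketch follows; the gamma-distributed samples are a stand-in assumption, since the actual values would come from the rendered sphere pixels:

```python
import numpy as np
from scipy.stats import skew, kurtosis

# Stand-in luminance samples for one rendered sphere; in the experiment
# these would be the simulated sphere's pixel luminances.
rng = np.random.default_rng(0)
luminance = rng.gamma(shape=2.0, scale=50.0, size=10_000)

stats = {
    "std": float(np.std(luminance)),
    "skewness": float(skew(luminance)),      # third standardized moment
    "kurtosis": float(kurtosis(luminance)),  # excess kurtosis (normal -> 0)
}
```

Equating these three numbers between two images does not, per the result above, guarantee an appearance match.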
Optoretinography has enabled noninvasive visualization of physiological changes in cone photoreceptors exposed to light. Understanding the cone optoretinogram (ORG) in healthy subjects is essential for establishing it as a biomarker for cone function in disease. Here, we measure the population cone intensity ORG in healthy adults, for multiple irradiance/duration combinations of visible stimuli with equal energy. We study the within- and between-session repeatability and reciprocity of the ORG in five healthy subjects. We find that the cone ORG exhibits equivalent amplitudes for equal-energy stimuli. We also find good within-subject repeatability, which allows us to show differences across the five subjects.
Vision provides information about the properties and identity of objects. The ease with which we perceive object properties belies the difficulty of the underlying information-processing task. In the case of object color, retinal information about object reflectance is confounded with information about the illumination as well as about the object's shape and pose. There is no obvious rule that allows transformation of the retinal image to a color representation that depends primarily on object surface reflectance. Under many circumstances, however, object color appearance is remarkably stable across scenes in which the object is viewed. Here, we review a line of experiments and theory that aim to understand how the visual system stabilizes object color appearance. Our emphasis is on models derived from explicit analysis of the computational problem of estimating the physical properties of illuminants and surfaces from the retinal image, and experiments that test these models. We argue that this approach has considerable promise for allowing generalization from simplified laboratory experiments to richer scenes that more closely approximate natural viewing. We discuss the relation between the work we review and other theoretical approaches available in the literature.
There is a large literature characterizing human perception of the lightness and color of matte surfaces arranged in coplanar arrays. In the past ten years researchers have begun to examine perception of lightness and color using wider ranges of stimuli intended to better approximate the conditions of everyday viewing. One emerging line of research concerns perception of lightness and color in scenes that approximate the three-dimensional environment we live in, with objects that need not be matte or coplanar and with geometrically complex illumination. A second concerns the perception of material surface properties other than color and lightness, such as gloss or roughness. This special issue features papers that address the rich set of questions and approaches that have emerged from these new research directions. Here, we briefly describe the articles in the issue and their relation to previous work.
Color constancy is our ability to perceive constant surface colors despite changes in illumination. Although color constancy has been studied extensively, its mechanisms are still largely unknown. Three classic hypotheses are that constancy is mediated by local adaptation, by adaptation to the spatial mean of the image, or by adaptation to the most intense image region. We measure color constancy under nearly natural viewing conditions, by using a design that allows us to test these three hypotheses directly. By suitable stimulus manipulation, we are able to titrate the degree of constancy between 11% and 83%, indicating that we have achieved good laboratory control. Our results rule out all three classic hypotheses and thus suggest that there is more to constancy than can be easily explained by the action of simple visual mechanisms.
Human color constancy has been studied for over 100 years, and there is extensive experimental data for the case where a spatially diffuse light source illuminates a set of flat matte surfaces. In natural viewing, however, three-dimensional objects are viewed in three-dimensional scenes. Little is known about color constancy for three-dimensional objects. We used a forced-choice task to measure the achromatic chromaticity of matte disks, matte spheres, and glossy spheres. In all cases, the test stimuli were viewed in the context of stereoscopically viewed graphics simulations of three-dimensional scenes, and we varied the scene illuminant. We studied conditions both where all cues were consistent with the simulated illuminant change (consistent-cue conditions) and where local contrast was silenced as a cue (reduced-cue conditions). We computed constancy indices from the achromatic chromaticities. To first order, constancy was similar for the three test object types. There was, however, a reliable interaction between test object type and cue condition. In the consistent-cue conditions, constancy tended to be best for the matte disks, while in the reduced-cue conditions constancy was best for the spheres. The presence of this interaction presents an important challenge for theorists who seek to generalize models that account for constancy for flat tests to the more general case of three-dimensional objects.
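A constancy index of the general kind described can be derived from achromatic chromaticities as a ratio of shifts. The formulation below is one common convention, offered as an illustrative assumption rather than the paper's exact definition:

```python
import numpy as np

def constancy_index(a_obs, a_base, a_pred):
    """Illustrative constancy index from achromatic chromaticities.

    a_base: achromatic chromaticity under the baseline illuminant
    a_pred: achromatic chromaticity predicted by perfect constancy
            under the test illuminant
    a_obs:  achromatic chromaticity measured under the test illuminant

    Returns 1 for perfect constancy (a_obs == a_pred) and 0 for none
    (a_obs == a_base).
    """
    a_obs, a_base, a_pred = map(np.asarray, (a_obs, a_base, a_pred))
    residual = np.linalg.norm(a_obs - a_pred)
    full_shift = np.linalg.norm(a_base - a_pred)
    return 1.0 - residual / full_shift

# An achromatic point halfway between no shift and the
# perfect-constancy prediction yields an index of 0.5.
ci = constancy_index(a_obs=[0.31, 0.33],
                     a_base=[0.30, 0.32],
                     a_pred=[0.32, 0.34])
```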
Demosaicing is an important part of the image-processing chain for many digital color cameras. The demosaicing operation converts a raw image acquired with a single sensor array, overlaid with a color filter array, into a full-color image. In this paper, we report the results of two perceptual experiments that compare the perceptual quality of the output of different demosaicing algorithms. In the first experiment, we found that a Bayesian demosaicing algorithm produced the most preferred images. Detailed examination of the data, however, indicated that the good performance of this algorithm was at least in part due to the fact that it sharpened the images while it demosaiced them. In a second experiment, we silenced image sharpness as a factor by applying a sharpening algorithm to the output of each demosaicing algorithm. The optimal amount of sharpening to be applied to each image was chosen using the results of a preliminary experiment. Once sharpness was equated in this way, an algorithm developed by Freeman, based on bilinear interpolation combined with median filtering, gave the best results. An analysis of our data suggests that our perceptual results cannot be easily predicted using an image metric.
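The bilinear-plus-median-filtering approach attributed to Freeman can be sketched as follows. The RGGB mosaic layout, the 3x3 filter sizes, and the normalization scheme are illustrative assumptions, not details taken from the paper:

```python
import numpy as np
from scipy.ndimage import convolve, median_filter

def demosaic_bilinear_median(raw):
    """Sketch: bilinear-style interpolation of each channel from an
    assumed RGGB mosaic, then median filtering of the color-difference
    planes (R-G, B-G) to suppress interpolation fringes."""
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1.0
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1.0
    g_mask = 1.0 - r_mask - b_mask

    kernel = np.array([[1.0, 2.0, 1.0],
                       [2.0, 4.0, 2.0],
                       [1.0, 2.0, 1.0]])

    def interp(mask):
        # Dividing by the interpolated mask normalizes each pixel by the
        # weight of samples actually present, including at the borders.
        return convolve(raw * mask, kernel) / convolve(mask, kernel)

    r, g, b = interp(r_mask), interp(g_mask), interp(b_mask)

    # Median-filter the color differences, then add the green plane back.
    r = g + median_filter(r - g, size=3)
    b = g + median_filter(b - g, size=3)
    return np.stack([r, g, b], axis=-1)

# A flat gray mosaic should demosaic to a flat gray image.
out = demosaic_bilinear_median(np.full((8, 8), 0.5))
```

Median filtering the color differences, rather than the channels themselves, exploits the fact that R-G and B-G vary slowly over most natural image regions, so artifacts appear as outliers that the median removes.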
Many models of color constancy assume that the visual system estimates the scene illuminant and uses this estimate to determine an object's color appearance. A version of this illumination-estimation hypothesis, in which the illuminant estimate is associated with the explicitly perceived illuminant, was tested. Observers made appearance matches between two experimental chambers. Observers adjusted the illumination in one chamber to match that in the other and then adjusted a test patch in one chamber to match the surface lightness of a patch in the other. The illumination-estimation hypothesis, as formulated here, predicted that after both matches the luminances of the light reflected from the test patches would be identical. The data contradict this prediction. A second experiment showed that manipulating the immediate surround of a test patch can affect perceived lightness without affecting perceived illumination. This finding also falsifies the illumination-estimation hypothesis.
Bayesian color constancy
Brainard, D. H.; Freeman, W. T.
Journal of the Optical Society of America A, Optics, Image Science, and Vision, 07/1997, Volume 14, Issue 7
Journal Article, Peer-reviewed
The problem of color constancy may be solved if we can recover the physical properties of illuminants and surfaces from photosensor responses. We consider this problem within the framework of Bayesian decision theory. First, we model the relation among illuminants, surfaces, and photosensor responses. Second, we construct prior distributions that describe the probability that particular illuminants and surfaces exist in the world. Given a set of photosensor responses, we can then use Bayes's rule to compute the posterior distribution for the illuminants and the surfaces in the scene. There are two widely used methods for obtaining a single best estimate from a posterior distribution. These are maximum a posteriori (MAP) and minimum mean-square-error (MMSE) estimation. We argue that neither is appropriate for perception problems. We describe a new estimator, which we call the maximum local mass (MLM) estimate, that integrates local probability density. The new method uses an optimality criterion that is appropriate for perception tasks: It finds the most probable approximately correct answer. For the case of low observation noise, we provide an efficient approximation. We develop the MLM estimator for the color-constancy problem in which flat matte surfaces are uniformly illuminated. In simulations we show that the MLM method performs better than the MAP estimator and better than a number of standard color-constancy algorithms. We note conditions under which even the optimal estimator produces poor estimates: when the spectral properties of the surfaces in the scene are biased.
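The contrast between MAP and maximum-local-mass estimation can be made concrete with a toy one-dimensional posterior. The posterior shape and the local-integration window below are illustrative assumptions, not the paper's construction:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Toy posterior: a narrow, tall spike at x = 2 plus a broad, lower bump
# at x = 7 that carries most of the probability mass.
x = np.linspace(0.0, 10.0, 2001)
posterior = (5.0 * np.exp(-0.5 * ((x - 2.0) / 0.05) ** 2)
             + 1.0 * np.exp(-0.5 * ((x - 7.0) / 1.0) ** 2))
posterior /= posterior.sum()  # discrete normalization

# MAP picks the highest density: the narrow spike.
map_estimate = x[np.argmax(posterior)]

# Maximum local mass integrates probability in a local window around
# each candidate (here a Gaussian window of width 0.5 in x units), so
# it favors the most probable approximately correct answer: the bump.
dx = x[1] - x[0]
local_mass = gaussian_filter1d(posterior, sigma=0.5 / dx)
mlm_estimate = x[np.argmax(local_mass)]
```

The spike wins on peak density, but the broad bump wins on mass within any reasonably sized neighborhood, which is exactly the distinction the MLM criterion formalizes.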