Panum's limiting case is a phenomenon of monocular occlusion in binocular vision: one object occludes the other for one eye, while both objects are visible to the other eye. Although previous studies have identified the vertical gradient of horizontal disparity and cue conflict as two important factors for double fusion, the effect of training on the sensitivity and stability of Panum's limiting case remains unknown. The current study trained 26 participants for 5 days with several Panum configurations (Gilliam, Frisby, and Wang series). The latency and duration of double fusion were recorded to examine the effects of training on the sensitivity and stability of double fusion in Panum's limiting case. For each level of vertical gradient of horizontal disparity and cue conflict, the latency of double fusion decreased and the duration of double fusion increased with each additional training session. The results showed that the vertical gradient of horizontal disparity and cue conflict interacted: the duration under high cue conflict was significantly shorter than under medium and low cue conflict at each level of vertical gradient of horizontal disparity. The findings suggest that training has an effect for both the vertical gradient of horizontal disparity and cue conflict in Panum's limiting case, and that the three factors jointly affect the sensitivity and stability of double fusion.
Structured light projection is a widely adopted approach for depth perception in consumer electronics and other machine vision systems. A diffractive optical element (DOE) is a key component of structured light projection that redistributes a collimated laser beam into a spot array with uniform intensity. Conventional DOEs for laser spot projection are binary-phase gratings, which suffer from low efficiency and low uniformity when designed for a large field of view (FOV). Here, by combining vectorial electromagnetic simulation with interior-point optimization, we experimentally demonstrate polarization-independent silicon-based metasurfaces that project a collimated laser beam into a far-field spot array with an exceedingly large FOV of over 120° × 120°. A metasurface DOE with a large FOV may benefit a number of depth perception-related applications such as face unlock and motion sensing.
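The abstract's claim that binary-phase gratings struggle at large FOV can be motivated with the standard grating equation (this is textbook optics, not a calculation from the paper; the 940 nm wavelength is an assumed value typical of depth sensors):

```python
import math

# The grating equation sin(theta_m) = m * wavelength / period links a DOE's
# period to the diffraction angle of order m.
def diffraction_angle_deg(order, wavelength_um, period_um):
    s = order * wavelength_um / period_um
    if abs(s) > 1:
        return None  # evanescent order: it cannot propagate
    return math.degrees(math.asin(s))

wl = 0.94  # assumed near-infrared wavelength in micrometers
# Steering the first order to 60 deg (a 120 deg full FOV) requires a period
# approaching the wavelength:
period = wl / math.sin(math.radians(60))
print(round(period, 3))  # ~1.085 um: subwavelength-scale features, where scalar designs break down
```

At such small periods, scalar diffraction theory is no longer accurate, which is why the paper resorts to vectorial electromagnetic simulation.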
Abstract Efficient semantic segmentation of large-scale point cloud scenes is a fundamental and essential task for perceiving and understanding the surrounding 3D environment. However, because of the vast amount of point cloud data, it is always challenging to train deep neural networks efficiently, and it is also difficult to establish a unified model that represents different shapes effectively, given the variety and mutual occlusion of scene objects. Taking the scene super-patch as the data representation and guided by its contextual information, we propose a novel multiscale super-patch transformer network (MSSPTNet) for point cloud segmentation, which consists of a multiscale super-patch local aggregation (MSSPLA) module and a super-patch transformer (SPT) module. Given large-scale point cloud data as input, a dynamic region-growing algorithm first extracts scene super-patches with consistent geometric features from the sampled points. The MSSPLA module then aggregates local features and the contextual information of adjacent super-patches at different scales. Owing to its self-attention mechanism, the SPT module exploits the similarity among scene super-patches in a high-level feature space. By combining these two modules, MSSPTNet effectively learns both local and global features from the input point clouds. Finally, interpolating upsampling and multi-layer perceptrons generate semantic labels for the original point cloud data. Experimental results on the public S3DIS dataset demonstrate the efficiency of the proposed network for segmenting large-scale point cloud scenes, especially indoor scenes with many repetitive structures: training MSSPTNet is faster than training other segmentation networks by a factor of tens to hundreds.
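The SPT module's core operation, self-attention over super-patch feature vectors, can be sketched as follows. This is a minimal, NumPy-only illustration of the mechanism, not the paper's implementation: the function name, feature dimensions, and random weight matrices (stand-ins for learned parameters) are all assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def superpatch_self_attention(feats, d_k=16, seed=0):
    """Toy single-head self-attention over N super-patch feature vectors (N, C).
    The similarity matrix attn lets every super-patch aggregate information
    from all others, which is how the SPT module captures global context."""
    rng = np.random.default_rng(seed)
    n, c = feats.shape
    Wq, Wk, Wv = (rng.standard_normal((c, d_k)) / np.sqrt(c) for _ in range(3))
    q, k, v = feats @ Wq, feats @ Wk, feats @ Wv
    attn = softmax(q @ k.T / np.sqrt(d_k))  # (N, N) similarity among super-patches
    return attn @ v                         # context-enriched features (N, d_k)

feats = np.random.default_rng(1).standard_normal((8, 32))  # 8 super-patches, 32-dim features
out = superpatch_self_attention(feats)
print(out.shape)  # (8, 16)
```

Because attention is computed over super-patches rather than raw points, the quadratic cost of self-attention applies to a few hundred patches instead of millions of points, which is consistent with the training-speed claim in the abstract.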
Inhibition of return (IOR) is a phenomenon in which response times (RTs) to a target appearing at a previously cued location are slower than those at an uncued location; IOR can improve visual search efficiency. This study investigated IOR in badminton athletes at different cue depths using a cue-target paradigm in three-dimensional (3-D) static and dynamic scenarios. The study involved 28 badminton athletes (M age = 21.29, SD = 2.39, 14 males) and 25 non-athletes (M age = 21.56, SD = 2.38, 11 males). In the static scenario (Experiment 1), IOR appeared in both the near-cue and far-cue conditions, with no significant difference between them, and badminton athletes responded faster than non-athletes. In the dynamic scenario (Experiment 2), only badminton athletes showed IOR in the far-to-near condition, but not in the near-to-far condition. The present study showed that depth information influenced IOR only in the far-to-near condition and that badminton athletes were more sensitive to depth information than non-athletes. Additionally, the study extends object-based IOR to a 3-D dynamic scenario.
Full text
Available for:
GEOZS, IJS, IMTLJ, KILJ, KISLJ, NLZOH, NUK, OILJ, PNG, SAZU, SBCE, SBJE, UILJ, UL, UM, UPCLJ, UPUK, ZAGLJ, ZRSKP
Stereoscopic, head-tracked display systems can show users realistic, world-locked virtual objects and environments (i.e., rendering perspective-correct binocular images with accurate motion parallax). However, discrepancies between the rendering pipeline and the physical viewing conditions can lead to perceived instability in the rendered content, resulting in reduced immersion and, potentially, visually induced motion sickness. Precise requirements for perceptually stable world-locked rendering (WLR) are unknown because of the challenge of constructing a wide field of view, distortion-free display with highly accurate head and eye tracking. We present a custom-built system capable of rendering virtual objects over real-world references without perceivable drift under these constraints. This platform is used to study acceptable errors in render camera position for WLR in augmented and virtual reality scenarios, where we find an order-of-magnitude difference in perceptual sensitivity. We conclude with an analytic model that examines changes in apparent depth and visual direction in response to camera displacement errors; visual direction is highlighted as a potentially important consideration for WLR alongside depth errors from incorrect disparity.
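The link between camera displacement error and visual-direction error can be illustrated with simple small-angle geometry. This is a back-of-envelope sketch, not the paper's analytic model; the function name and the example numbers (1 mm error, 0.5 m viewing distance) are assumptions for illustration.

```python
import math

def direction_error_arcmin(camera_offset_m, object_distance_m):
    """Angular shift in a virtual object's apparent direction caused by a
    lateral render-camera displacement, for an object straight ahead:
    the rendered ray pivots by atan(offset / distance)."""
    return math.degrees(math.atan2(camera_offset_m, object_distance_m)) * 60.0

# A 1 mm lateral camera-position error for content 0.5 m away:
print(round(direction_error_arcmin(0.001, 0.5), 2))  # ~6.88 arcmin
```

The inverse dependence on object distance shows why near content is the stressing case for WLR: the same millimeter-scale tracking error produces a much larger apparent direction shift for close objects than for distant ones.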
Utilizing the high temporal resolution of event-related potentials (ERPs), we compared the time course of processing incongruent color versus 3D-depth information. Participants were asked to judge whether the food color (color condition) or 3D structure (3D-depth condition) was congruent or incongruent with their previous knowledge and experience. The behavioral results showed that reaction times in the congruent 3D-depth condition were slower than those in the congruent color condition, and reaction times in the incongruent 3D-depth condition were slower than those in the incongruent color condition. The ERP results showed that incongruent color stimuli induced larger N270 and P300 components and a smaller N400 component in the fronto-central region than congruent color stimuli. Incongruent 3D-depth stimuli induced a smaller N1 in the occipital region and a larger P300 and smaller N400 in the parietal-occipital region than congruent 3D-depth stimuli. The time–frequency analysis found that incongruent color stimuli induced larger theta band (360–580 ms) activation in the fronto-central region than congruent color stimuli, whereas incongruent 3D-depth stimuli induced larger alpha and beta band (240–350 ms) activation in the parietal region than congruent 3D-depth stimuli. Our results suggest that the human brain deals with violations of general color and depth knowledge on different time courses. We speculate that the depth perception conflict was resolved predominantly through visual processing, whereas the color perception conflict was resolved predominantly as a semantic violation.
• Incongruent color stimuli induced N270 components, but incongruent depth stimuli did not.
• Incongruent depth stimuli induced N1 components, but incongruent color stimuli did not.
• Both incongruent color and depth stimuli induced P300 and N400 components.
• Incongruent color stimuli induced theta band activation.
• Incongruent depth stimuli induced alpha and beta band activation.
Propagation methods for non-diffractive beams (NDBs) for optical sensing in scattering media have been extensively studied. However, those methods achieve high resolution and a long depth of focus only from the viewpoint of microscopic imaging. In this study, we focus on macroscopic sensing in living tissues at depths of a few tens of centimeters. We report an experimental approach for generating an adequate NDB in dense scattering media based on the linear relationship between propagation distance and transport mean free path. For annular beams with different diameters, the experiments yield the same changes in the center intensity ratio of the NDB, which we discuss alongside a theoretical analysis. As a result, the maximum center intensity ratio of the adequately generated NDB can be estimated at an arbitrary propagation distance in dense scattering media.
Full text
Available for:
IZUM, KILJ, NUK, PILJ, PNG, SAZU, UL, UM, UPUK
Abstract
An object’s identity can influence depth-position judgments, but the mechanistic underpinnings of this phenomenon are largely unknown. Here, we asked whether context-dependent modulations of stereoscopic depth perception are expertise dependent. In 2 experiments, we tested whether training that attaches meaning (i.e. classification labels) to otherwise novel, stereoscopically presented objects changes observers’ sensitivity for judging their depth position. In Experiment 1, observers were randomly assigned to 3 groups, a Greeble-classification training group, an orientation-discrimination training group, or a no-training group, and were tested on their stereoscopic depth sensitivity before and after training. In Experiment 2, participants were tested before and after training while fMRI responses were concurrently imaged. Behaviorally, stereoscopic performance was significantly better following Greeble-classification (but not orientation-discrimination or no-) training. Using the fMRI data, we trained support vector machines to predict whether the data were from the pre- or post-training sessions. Results indicated that classification accuracies in V4 were higher for the Greeble-classification group, whereas accuracies for the orientation-discrimination group were at chance level. Furthermore, classification accuracies in V4 were negatively correlated with response times for Greeble identification. We speculate that V4 is implicated in an expertise-dependent, object-tuning manner that allows it to better guide stereoscopic depth retrieval.
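The decoding logic (predicting pre- vs post-training session from multivoxel patterns) can be sketched on synthetic data. Note the swap: the study used support vector machines, whereas this NumPy-only stand-in uses a leave-one-out nearest-centroid classifier; the data shapes and effect size are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
pre  = rng.normal(0.0, 1.0, size=(40, 50))  # 40 trials x 50 "voxels", pre-training
post = rng.normal(0.6, 1.0, size=(40, 50))  # post-training patterns with a mean shift

X = np.vstack([pre, post])
y = np.array([0] * 40 + [1] * 40)

# Leave-one-out cross-validation: hold out one trial, fit centroids on the rest,
# classify the held-out trial by its nearest class centroid.
correct = 0
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    c0 = X[mask & (y == 0)].mean(axis=0)
    c1 = X[mask & (y == 1)].mean(axis=0)
    pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
    correct += pred == y[i]
accuracy = correct / len(y)
print(accuracy > 0.5)  # patterns that changed with training decode above chance
```

Above-chance accuracy is the signature the study looks for: if training reshapes V4 responses, pre and post patterns become separable, whereas unchanged responses decode at chance.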
Perceived prescription for multifocal contact lenses
Raso, Alicia López; Alonso, Jose Manuel Lopez; Alcocer, Javier Ruiz ...
Acta ophthalmologica (Oxford, England), December 2022, Volume 100, Issue S275
Journal Article, Peer reviewed
Purpose: Multifocal contact lenses have a variable power map that is typically radial. The prescription of these lenses is given as Sphere plus Addition values in diopters, calculated by assigning different radial rings of the lens to different mean power values (near and far vision zones). Depth of focus (DOF) should be high so that both images are focused on the retina. This calculation usually depends on the manufacturer and design type. The purpose of this work is to give a simple common criterion for the perceived prescription, based on visual criteria and the patient's pupil.
Methods: The radial power profile of different multifocal contact lens designs was measured with a NIMO TR1504 (Lambda‐X). Using geometrical optics and MATLAB, ray tracing was performed and the position of the smallest-spot-size plane, denoted Df, was recorded, along with the spot size in that plane. From Df, graphs of perceived diopters (the inverse of Df in meters) versus different diameter portions of the lens were obtained.
Results: The method provides the sphere and addition values for lenses with zonal designs, based on the maximum and minimum effective power seen by the patient as a function of pupil size. The main perceived diopter grows or decreases smoothly depending on centre‐near or centre‐distance design. Calculating the approximate spot size as a function of diameter also allows the DOF to be estimated. For Extended Depth of Focus (EDOF) lenses the method is applicable without changes of criteria.
Conclusions: Determining the prescription perceived by the patient with multifocal contact lenses is important, yet there are no uniform methods across the different designs. Including the diameter used by each patient brings visual criteria into the calculation of the main perceived diopter and enables a common method.
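The core arithmetic in the Methods section, perceived power as the reciprocal of the smallest-spot-plane distance Df, can be shown in a few lines. The pupil diameters and Df values below are hypothetical, chosen only to illustrate the calculation; they are not measurements from the paper.

```python
# Perceived power in diopters is the inverse of Df, the distance (in meters)
# to the smallest-spot-size plane found by ray tracing.
def perceived_diopters(df_m):
    return 1.0 / df_m

# A hypothetical centre-near design sampled at two pupil diameters (mm -> Df in m):
powers = {pupil: perceived_diopters(df) for pupil, df in [(2.0, 0.40), (5.0, 0.50)]}
addition = max(powers.values()) - min(powers.values())
print(round(addition, 2))  # 0.5 D difference between small- and large-pupil readings
```

The spread between the small- and large-pupil readings plays the role of the Addition value, which is how the method ties the prescription to the patient's pupil size.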