Gaze behavior during scene and object recognition can highlight the information relevant to a task. For example, salience maps, which highlight regions of a scene with heightened luminance, contrast, or color, can be used to predict gaze targets. Certain tasks, such as face recognition, result in a typical pattern of fixations on high-salience features. While the local salience of a 2-D feature may contribute to gaze behavior and object recognition, we are perfectly capable of recognizing objects from 3-D depth cues devoid of meaningful 2-D features. Faces can be recognized from pure texture, binocular disparity, or structure-from-motion displays (Dehmoobadsharifabadi & Farivar, 2016; Farivar, Blanke, & Chaudhuri, 2009; Liu, Collin, Farivar, & Chaudhuri, 2005), even though these displays lack locally salient 2-D features. We therefore sought to determine whether gaze behavior is driven by an underlying 3-D representation that is depth-cue invariant or depth-cue specific. Using a face identification task comprising morphs of 3-D facial surfaces, we measured identification thresholds and thereby equated task difficulty across depth cues. We found that gaze behavior for faces defined by shading and texture cues was highly comparable, but we observed some deviations for faces defined by binocular disparity. Interestingly, we found no effect of task difficulty on gaze behavior. The results are discussed in the context of depth-cue-invariant representations of facial surfaces, with gaze behavior constrained by low-level limits on depth extraction from specific cues such as binocular disparity.
Independent of edges and 2-D shape, which can be highly informative of object identity, depth cues alone can give rise to vivid and effective object percepts. The processing of different depth cues engages segregated cortical areas, and an efficient object representation would be one that is invariant to depth cues. Here, we investigated depth-cue invariance of object representations by measuring the category-specific response to faces: the M170 response measured with magnetoencephalography. The M170 response is strongest to faces and is sensitive to adaptation, such that repeated presentation of a face diminishes subsequent M170 responses. We exploited this property of the M170 and measured the degree to which the adaptation effect is affected by variations in depth cue and 3-D object shape. Subjects viewed a rapid presentation of two stimuli: an adaptor and a test stimulus. The adaptor was a face, a chair, or a face-like oval surface, rendered with a single depth cue (shading, structure from motion, or texture). The test stimulus was always a shaded face of a random identity, thus completely controlling for low-level influences on the M170 response to the test stimulus. In the left fusiform face area, we found strong M170 adaptation when the adaptor was a face, regardless of its depth cue. This adaptation was marginal in the right fusiform and negligible in the occipital regions. Our results support the presence of depth-cue-invariant representations in the human visual system, alongside size, position, and viewpoint invariance.
Hippocampal volumetry is a critical biomarker of aging and dementia, and it is widely used as a predictor of cognitive performance; however, automated hippocampal segmentation methods are limited because the algorithms are (a) not publicly available, (b) subject to error with significant brain atrophy, cerebrovascular disease, and lesions, and/or (c) computationally expensive or require parameter tuning. In this study, we trained a 3D convolutional neural network using 259 bilateral manually delineated segmentations collected from three studies, acquired at multiple sites on different scanners with variable protocols. Our training dataset consisted of elderly cases that are difficult to segment due to extensive atrophy, vascular disease, and lesions. Our algorithm, HippMapp3r, was validated against five other publicly available state-of-the-art techniques (HippoDeep, FreeSurfer, SBHV, volBrain, and FIRST). HippMapp3r outperformed the other techniques on all three metrics, generating an average Dice of 0.89 and a correlation coefficient of 0.95. It was two orders of magnitude faster than some of the tested techniques. Further validation was performed on 200 subjects from two other disease populations (frontotemporal dementia and vascular cognitive impairment), highlighting our method's low outlier rate. We finally tested the methods on real and simulated “clinical adversarial” cases to study their robustness to corrupt, low-quality scans. The pipeline and models are available at https://hippmapp3r.readthedocs.io to facilitate the study of the hippocampus in large multisite studies.
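The headline accuracy metric above, the Dice coefficient, measures volumetric overlap between a predicted and a manual segmentation. A minimal sketch of the computation (the masks and values here are invented for illustration and are not taken from the study):

```python
# Dice overlap between two binary segmentation masks, represented here as
# flat lists of 0/1 voxel labels (a toy stand-in for 3D volumes).

def dice(a, b):
    """Dice coefficient: 2*|A ∩ B| / (|A| + |B|) for binary masks."""
    inter = sum(x and y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2.0 * inter / total if total else 1.0

pred  = [1, 1, 0, 1, 0, 0]   # hypothetical predicted mask
truth = [1, 0, 0, 1, 1, 0]   # hypothetical manual delineation
overlap = dice(pred, truth)  # 2*2 / (3+3) ≈ 0.667
```

A Dice of 1.0 means perfect overlap, so the reported average of 0.89 indicates close agreement with the manual delineations.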
The human brain can recognize objects using a variety of depth cues, such as shading, texture, motion, and stereo. The visual system is believed to use different computations to process different types of depth information, and different cortical areas are involved in each process. Yet the perceived depth of an object is consistent regardless of the depth cue, which suggests that objects defined by different depth cues share a similar representation at higher levels of visual processing. The aim of my studies was to investigate to what extent the information from individual depth cues is segregated and to what extent the visual system integrates this information to produce the percept of an object's depth. We used three different approaches to investigate the visual processing of depth cues. In the first project, we recorded cortical responses to different depth cues using functional magnetic resonance imaging (fMRI) and found similar responses to different depth cues in higher levels of the ventral pathway. The second project used magnetoencephalography (MEG) to show that the adaptation effect at higher levels transfers between depth cues. Finally, in the last experiment, we investigated the strategy by which the visual system acquires information from an object by monitoring saccadic eye movements while subjects performed a face recognition task. By tracking gaze, we detected consistent eye movements across different depth cues.
This paper presents an efficient model for optimal planning of electrical energy management in a smart residential microgrid (SRMG) that considers energy interactions among smart homes with the purpose of improving the SRMG's resilience. A two-state linearized mathematical programming framework is developed to capture these notions. The first state represents a regular, disruption-free SRMG, whereas the second represents a disturbed SRMG. The normal state comprises two stages: the first minimizes the energy cost of each smart home, and the second minimizes the deviation of the SRMG load profile. In the disturbed state, energy interactions among in-home energy management systems are considered to enhance resilience. The main objective of these two states is to improve the SRMG's resilience through the modified SRMG load profile and the minimized daily energy cost of the smart homes. Based on the numerical results, energy interactions among smart homes reduced the smart homes' energy cost by about 3.34% in the normal state, reduced the SRMG load profile deviation by about 3.62% in the normal state, and improved the SRMG's resilience by about 32% in the disturbed state.
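As a toy illustration of the first-stage idea, minimizing a home's daily energy cost, the sketch below greedily schedules a shiftable load into the cheapest hours. The prices, energy demand, and per-hour cap are hypothetical, and the paper itself uses a linearized mathematical program rather than this greedy shortcut:

```python
# Toy cost-minimizing scheduler: allocate a shiftable load (kWh) to the
# cheapest hours, subject to a per-hour capacity cap. For this simple
# structure (one load, one cap), cheapest-first greedy is optimal.

def schedule_cheapest(prices, energy_needed, cap_per_hour):
    """Return (allocation per hour, total cost) for a shiftable load."""
    alloc = [0.0] * len(prices)
    for hour in sorted(range(len(prices)), key=lambda h: prices[h]):
        take = min(cap_per_hour, energy_needed)
        alloc[hour] = take
        energy_needed -= take
        if energy_needed <= 0:
            break
    cost = sum(p * a for p, a in zip(prices, alloc))
    return alloc, cost

prices = [0.20, 0.12, 0.30, 0.15]   # hypothetical $/kWh in four hour slots
alloc, cost = schedule_cheapest(prices, energy_needed=5.0, cap_per_hour=3.0)
# Fills the $0.12 hour to the 3 kWh cap, puts the remaining 2 kWh at $0.15.
```

The paper's second stage (minimizing load-profile deviation) would add a penalty on how far the aggregate allocation departs from a target profile, which is what makes a full mathematical program necessary.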