Successfully interpreting and navigating our natural visual environment requires us to track its dynamics constantly. Additionally, we focus our attention on behaviorally relevant stimuli to enhance their neural processing. Little is known, however, about how sustained attention affects the ongoing tracking of stimuli with rich natural temporal dynamics. Here, we used MRI-informed source reconstructions of magnetoencephalography (MEG) data to map the extent to which various cortical areas track concurrent continuous quasi-rhythmic visual stimulation. Further, we tested how top-down visuo-spatial attention influences this tracking process. Our bilaterally presented quasi-rhythmic stimuli covered a dynamic range of 4–20 Hz, subdivided into three distinct bands. As an experimental control, we also included strictly rhythmic stimulation (10 vs. 12 Hz). Using a spectral measure of brain-stimulus coupling, we were able to track the neural processing of left and right stimuli independently, even when both fluctuated within the same frequency range. The fidelity of neural tracking depended on the stimulation frequencies, decreasing for higher frequency bands. Both attended and non-attended stimuli were tracked beyond early visual cortices, in ventral and dorsal streams depending on the stimulus frequency. In general, tracking improved with the deployment of visuo-spatial attention to the stimulus location. Our results provide new insights into how human visual cortices process concurrent dynamic stimuli and point to a potential mechanism, namely increased temporal precision of tracking, for boosting the neural representation of attended input.
• We studied cortical tracking of temporally dynamic stimuli.
• Dynamics were defined as frequency fluctuations in different bands.
• We projected MEG-recorded tracking onto a highly detailed cortical map.
• Extent and magnitude of cortical tracking differed between frequencies.
• Attending to a stimulus location enhanced its cortical tracking response.
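The abstract above refers to a spectral measure of brain-stimulus coupling without spelling it out. One standard option is magnitude-squared coherence between the stimulus time course and the recorded signal. The following Python sketch illustrates the idea on synthetic data (the sampling rate, frequencies, duration and noise level here are illustrative assumptions, not the study's parameters):

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(1)
fs = 250.0                       # sampling rate (Hz), assumed
n = 15000                        # 60 s of data
t = np.arange(n) / fs

# Quasi-rhythmic stimulus: instantaneous frequency drifts within 8-13 Hz
inst_freq = 10.5 + 2.5 * np.sin(2 * np.pi * 0.2 * t)
stimulus = np.sin(2 * np.pi * np.cumsum(inst_freq) / fs)

# Toy "MEG" trace: a stimulus-driven component buried in broadband noise
meg = 0.5 * stimulus + rng.normal(0.0, 1.0, n)

# Magnitude-squared coherence as a spectral measure of brain-stimulus coupling
freqs, coh = coherence(stimulus, meg, fs=fs, nperseg=1024)
in_band = (freqs >= 8) & (freqs <= 13)
coupling = coh[in_band].mean()   # high only where the stimulus carries power
```

Because coupling is computed against each stimulus' own time course, two concurrent stimuli occupying the same frequency band can still be tracked separately, which is what allows the left and right stimuli to be monitored independently.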
Fluctuations in arousal, controlled by subcortical neuromodulatory systems, continuously shape cortical state, with profound consequences for information processing. Yet, how arousal signals influence cortical population activity has so far been characterized in detail only for a few selected brain regions. Traditional accounts conceptualize arousal as a homogeneous modulator of neural population activity across the cerebral cortex. Recent insights, however, point to a higher specificity of arousal effects on different components of neural activity and across cortical regions. Here, we provide a comprehensive account of the relationships between fluctuations in arousal and neuronal population activity across the human brain. Exploiting the established link between pupil size and central arousal systems, we performed concurrent magnetoencephalographic (MEG) and pupillographic recordings in a large number of participants, pooled across three laboratories. We found a cascade of effects relative to the peak timing of spontaneous pupil dilations: decreases in low-frequency (2-8 Hz) activity in temporal and lateral frontal cortex, followed by increased high-frequency (>64 Hz) activity in mid-frontal regions, followed by monotonic and inverted-U relationships with intermediate-frequency (8-32 Hz) activity in occipito-parietal regions. Pupil-linked arousal also coincided with widespread changes in the structure of the aperiodic component of cortical population activity, indicative of changes in the excitation-inhibition balance in underlying microcircuits. Our results provide a novel basis for studying the arousal modulation of cognitive computations in cortical circuits.
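The abstract mentions changes in the aperiodic component of cortical population activity. A common proxy for that component is the exponent of the 1/f-like background of the power spectrum. A toy sketch of estimating it with a simple log-log line fit follows (synthetic spectrum; real analyses use dedicated spectral parameterization tools, and all numbers here are made up):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic power spectrum: 1/f^chi aperiodic background plus an alpha peak
freqs = np.arange(2.0, 64.0, 0.5)
chi_true = 1.5                                    # aperiodic exponent
psd = freqs ** -chi_true + 0.05 * np.exp(-((freqs - 10.0) ** 2) / 2.0)
psd *= np.exp(rng.normal(0.0, 0.05, freqs.size))  # multiplicative noise

# Crude aperiodic estimate: straight-line fit in log-log space,
# excluding the alpha range so the oscillatory peak does not bias the slope
mask = (freqs < 7.0) | (freqs > 14.0)
slope, intercept = np.polyfit(np.log10(freqs[mask]), np.log10(psd[mask]), 1)
chi_est = -slope                                  # should land near chi_true
```

A steeper or flatter exponent is what "changes in the structure of the aperiodic component" would show up as in such a fit.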
Our visual system extracts the emotional meaning of human facial expressions rapidly and automatically. Novel paradigms using fast periodic stimulation have provided insights into the electrophysiological processes underlying emotional content extraction: the regular occurrence of specific identities and/or emotional expressions alone can drive diagnostic brain responses. Consistent with a processing advantage for social cues of threat, we expected angry facial expressions to drive larger responses than neutral expressions. In a series of four EEG experiments, we studied the potential boundary conditions of such an effect: (i) we piloted emotional cue extraction using 9 facial identities and a fast presentation rate of 15 Hz (N = 16); (ii) we reduced the number of facial identities from 9 to 2 to assess whether (low or high) variability across emotional expressions would modulate brain responses (N = 16); (iii) we slowed the presentation rate from 15 Hz to 6 Hz (N = 31), the optimal presentation rate for facial feature extraction; (iv) we tested whether passive viewing instead of a concurrent task at fixation would play a role (N = 30). We consistently observed neural responses reflecting the rate of regularly presented emotional expressions (5 Hz and 2 Hz at presentation rates of 15 Hz and 6 Hz, respectively). Intriguingly, neutral expressions consistently produced stronger responses than angry expressions, contrary to the predicted processing advantage for threat-related stimuli. Our findings highlight the influence of physical differences across facial identities and emotional expressions.
The neural processing of a visual stimulus can be facilitated by attending to its position or by a co-occurring auditory tone. Using frequency-tagging, we investigated whether facilitation by spatial attention and audio-visual synchrony rely on similar neural processes. Participants attended to one of two flickering Gabor patches (14.17 and 17 Hz) located in opposite lower visual fields. Gabor patches further “pulsed” (i.e., showed smooth spatial frequency variations) at distinct rates (3.14 and 3.63 Hz). Frequency-modulating an auditory stimulus at the pulse rate of one of the visual stimuli established audio-visual synchrony. Flicker and pulsed stimulation elicited stimulus-locked rhythmic electrophysiological brain responses that allowed tracking the neural processing of simultaneously presented Gabor patches. These steady-state responses (SSRs) were quantified in the spectral domain to examine visual stimulus processing under conditions of synchronous vs. asynchronous tone presentation and when respective stimulus positions were attended vs. unattended. Strikingly, unique patterns of effects on pulse- and flicker-driven SSRs indicated that spatial attention and audio-visual synchrony facilitated early visual processing in parallel and via different cortical processes. Attention effects resembled the classical top-down gain effect, facilitating both flicker- and pulse-driven SSRs. Audio-visual synchrony, in turn, only amplified synchrony-producing stimulus aspects (i.e., pulse-driven SSRs), possibly highlighting the role of temporally co-occurring sights and sounds in bottom-up multisensory integration.
• Audio-visual (AV) synchrony and spatial attention enhance stimulus processing.
• We tested whether AV synchrony does so by attracting spatial attention.
• Frequency-tagging allowed monitoring simultaneous stimuli in EEG responses.
• This afforded separating AV synchrony and spatial attention gain effects.
• We found AV synchrony to bias processing independent of spatial attention.
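Several of these studies quantify steady-state responses (SSRs) in the spectral domain. In essence, this is a Fourier amplitude readout at the tagging frequency; a minimal Python sketch on synthetic data follows (the epoch length, sampling rate and signal-to-noise ratio are illustrative assumptions):

```python
import numpy as np

def ssr_amplitude(signal, fs, f_tag):
    """Amplitude of the spectral component at the tagging frequency,
    read out from a discrete Fourier transform of the whole epoch."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) / (n / 2)  # scale to signal units
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - f_tag))]

# Synthetic example: a 3.63 Hz "pulse-driven" response buried in noise
fs, f_tag = 250.0, 3.63
n = 25000                            # 100 s -> 0.01 Hz frequency resolution
t = np.arange(n) / fs
rng = np.random.default_rng(0)
eeg = 2.0 * np.sin(2 * np.pi * f_tag * t) + rng.normal(0.0, 1.0, n)
amp = ssr_amplitude(eeg, fs, f_tag)  # recovers an amplitude near 2.0
```

Attention or synchrony effects are then assessed by comparing such amplitudes between conditions, separately for each tagging frequency.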
Many everyday situations require focusing on visual or auditory information while ignoring the other modality. Previous findings suggest an attentional mechanism that operates between sensory modalities and governs such states. To date, evidence is equivocal as to whether this ‘intermodal’ attention relies on a distribution of resources that is either common or specific to sensory modalities. We provide new insights by investigating the consequences of a shift from simultaneous (‘bimodal’) attention to vision and audition to unimodal selective attention. Concurrently presented visual and auditory stimulus streams were frequency-tagged to elicit steady-state responses (SSRs) recorded simultaneously in electro- and magnetoencephalograms (EEG/MEG). After the shift, decreased amplitudes of the SSR corresponding to the unattended sensory stream indicated reduced processing. We did not observe an amplitude increase of the SSR corresponding to the attended sensory stream. These findings are incompatible with a common-resources account, in which a redistribution of attentional resources between vision and audition would produce a simultaneous processing gain in the attended sensory modality and a reduction in the unattended sensory modality. Our results instead favor a modality-specific-resources account, which allows for independent modulation of early cortical processing in each sensory modality.
► SSRs were used to index modality-specific early visual and auditory processing.
► Shifts from bimodal to unimodal selective attention modulated SSRs.
► Processing in the unattended sense was reduced after shifts.
► Processing in the attended sense remained constant after shifts.
► Results are in favor of modality-specific attentional resources.
Visual attention can be focused concurrently on two stimuli at noncontiguous locations while intermediate stimuli remain ignored. Nevertheless, behavioral performance in multifocal attention tasks falters when attended stimuli fall within one visual hemifield, as opposed to when they are distributed across the left and right hemifields. This “different-hemifield advantage” has been ascribed to largely independent processing capacities of each cerebral hemisphere in early visual cortices. Here, we investigated how this advantage influences the sustained division of spatial attention. We presented six isoeccentric light-emitting diodes (LEDs) in the lower visual field, each flickering at a different frequency. Participants attended to two LEDs that were spatially separated by an intermediate LED and responded to synchronous events at to-be-attended LEDs. Task-relevant pairs of LEDs were either located in the same hemifield (“within-hemifield” conditions) or separated by the vertical meridian (“across-hemifield” conditions). Flicker-driven brain oscillations, steady-state visual evoked potentials (SSVEPs), indexed the allocation of attention to individual LEDs. Both behavioral performance and SSVEPs indicated enhanced processing of attended LED pairs during “across-hemifield” relative to “within-hemifield” conditions. Moreover, SSVEPs demonstrated effective filtering of intermediate stimuli in the “across-hemifield” conditions only. Thus, despite identical physical distances between LEDs of attended pairs, the spatial profiles of gain effects differed profoundly between “across-hemifield” and “within-hemifield” conditions. These findings corroborate that early cortical visual processing stages rely on hemisphere-specific processing capacities and highlight their limiting role in the concurrent allocation of visual attention to multiple locations.
Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when respective stimuli were attended versus unattended. We found that both attending to the colour of a stimulus and its synchrony with the tone enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.
Attention filters behaviorally relevant stimuli from the constant stream of sensory information comprising our environment. Research into underlying neural mechanisms in humans suggests that visual attention biases mutual suppression between stimuli resulting from competition for limited processing resources. As a consequence, processing of an attended stimulus is facilitated. This account makes two assumptions: 1) an attended stimulus is released from mutual suppression with competing stimuli, and 2) an attended stimulus experiences greater gain in the presence of competing stimuli than when it is presented alone. Here, we tested these assumptions by recording frequency-tagged potentials elicited in early visual cortex that index stimulus-specific processing. We contrasted the processing of a given stimulus when its location was attended or unattended and in the presence or absence of a nearby competing stimulus. At variance with previous findings, competition similarly suppressed processing of attended and unattended stimuli. Moreover, the magnitude of attentional gain was comparable in the presence or absence of competing stimuli. We conclude that visuospatial selective attention does not directly modulate mutual suppression between stimuli but instead acts as a signal gain, which biases processing toward attended stimuli independent of competition.
Neural processing of dynamic continuous visual input, and cognitive influences thereon, are frequently studied in paradigms employing strictly rhythmic stimulation. However, the temporal structure of natural stimuli is hardly ever fully rhythmic but possesses certain spectral bandwidths (e.g., lip movements in speech, gestures). Periodic brain responses elicited by strictly rhythmic stimulation might thus represent ideal, yet isolated, cases. Here, we used EEG to test how the visual system reflects quasi-rhythmic stimulation with frequencies continuously varying within the ranges of the classical theta (4–7 Hz), alpha (8–13 Hz) and beta (14–20 Hz) bands. Our findings substantiate a systematic and sustained neural phase-locking to stimulation in all three frequency ranges. Further, we found that the allocation of spatial attention enhances EEG-stimulus locking to theta- and alpha-band stimulation. Our results bridge recent findings regarding phase locking (“entrainment”) to quasi-rhythmic visual input and “frequency-tagging” experiments employing strictly rhythmic stimulation. We propose that sustained EEG-stimulus locking can be considered a continuous neural signature of the processing of dynamic sensory input in early visual cortices. Accordingly, EEG-stimulus locking serves to trace the temporal evolution of rhythmic as well as quasi-rhythmic visual input and is subject to attentional bias.
• Dynamic visual stimuli constitute large parts of our perceptual experience.
• Strictly rhythmic dynamics condense in EEG-recorded mass-neural activity.
• We tested how stimuli with fluctuating rhythms reflect in the EEG.
• We found that the EEG allows tracing two quasi-rhythmic stimuli in parallel.
• Dynamics of attended stimuli may be tracked with greater temporal precision.
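The abstract above describes sustained EEG-stimulus phase-locking to quasi-rhythmic input. One way to express such locking is a phase-locking value between the instantaneous phases of the stimulus and the band-passed EEG; the following Python sketch uses synthetic data (the neural lag, band limits and noise level are invented for illustration):

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

rng = np.random.default_rng(2)
fs = 250.0
n = 7500                             # 30 s of data
t = np.arange(n) / fs

# Quasi-rhythmic theta-band stimulus (instantaneous frequency within 4-7 Hz)
inst_freq = 5.5 + 1.5 * np.sin(2 * np.pi * 0.1 * t)
stim = np.sin(2 * np.pi * np.cumsum(inst_freq) / fs)

# Toy EEG: lagged stimulus-driven component plus broadband noise
lag = int(0.06 * fs)                 # assume a ~60 ms neural delay
eeg = 0.8 * np.roll(stim, lag) + rng.normal(0.0, 1.0, n)

# Band-pass the EEG around the stimulation range before phase extraction
b, a = butter(3, [3.0, 8.0], btype="bandpass", fs=fs)
eeg_band = filtfilt(b, a, eeg)

# Phase-locking value between stimulus and EEG phase time courses
phase_diff = np.angle(hilbert(stim)) - np.angle(hilbert(eeg_band))
plv = np.abs(np.mean(np.exp(1j * phase_diff)))   # 0 = no locking, 1 = perfect
```

Because the measure follows the stimulus' own frequency trajectory, it remains applicable when the input is quasi-rhythmic rather than strictly periodic.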
The nestor guideline for preservation planning is the latest in a series of nestor publications. nestor is the German competence network for digital preservation, and it offers all interested parties from the private and public domains the possibility to participate in working groups. The guideline for preservation planning is the result of such a working group, which discussed the conceptual and practical issues of implementing the OAIS Functional Entity “Preservation Planning”. The guideline describes a process model and offers some guidance on potential implementations. It integrates and builds on recognized community concepts such as Significant Properties, the OAIS Designated Community, the National Archives of Australia’s Performance Model, the PREMIS concept of Intellectual Entities and Representations, and the Planets approach to preservation planning. Furthermore, it introduces the concepts “intended use” (Nutzungsziele), “information type” (Informationstyp) and “preservation group” (Erhaltungsgruppe). The purpose of these new categories is that information objects shall be grouped by information type (e.g., audio, video, text…) and intended use (e.g., reading for pleasure, searching for specific information…) into preservation groups for automatic processing. Significant properties can then be derived for whole preservation groups. The file format alone is considered insufficient for such a categorisation. Some exemplary implementation solutions for the new concepts are presented in an annex. The guideline takes into account that resources for preservation planning and preservation actions are limited and has therefore adopted four premises: adequacy, financial viability, automation, and authenticity of archived objects. Its pragmatic approach becomes apparent in the definition and explanation of these dimensions.
The guideline is written from the point of view of representatives of memory institutions, i.e., libraries, archives and museums, and is primarily targeted at this context, although it may be useful for other information-preserving institutions too. This contribution introduces the nestor guideline for preservation planning (currently available only in German; an English translation is envisaged for the first half of 2014) to an international audience for the first time. It also matches the process model and the new concepts of intended use, information type and preservation group to the collection and preservation reality of the German National Library.