Studies of cerebral lateralization often involve participants completing a series of perceptual tasks under laboratory conditions. This has constrained the number of participants recruited in such studies. Online testing can allow for much larger sample sizes but limits the amount of experimental control that is feasible. Here we considered whether online testing could give valid and reliable results on four tasks: a rhyme decision visual half-field task, a dichotic listening task, a chimeric faces task, and a finger tapping task. We recruited 392 participants, oversampling left-handers, who completed the battery twice. Three of the tasks showed evidence of both validity and reliability, insofar as they showed hemispheric advantages in the expected direction and test-retest reliability of at least r = .75. The reliability of the rhyme decision task was less satisfactory (r = .62). We also confirmed a prediction that extreme left-handers were more likely to depart from typical lateralization. Lateralization across the two language tasks (dichotic listening and rhyme judgement) was weakly correlated, but unrelated to lateralization on the chimeric faces task. We conclude that three of the tasks, dichotic listening, chimeric faces, and finger tapping, show considerable promise for online evaluation of cerebral lateralization.
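As an illustration of the kind of scoring such tasks involve, here is a minimal sketch of computing a per-participant laterality index and its test-retest correlation. The trial counts, effect sizes, and variable names are hypothetical, not taken from the study.

```python
# Hypothetical sketch: scoring a dichotic listening task with a
# laterality index (LI) and estimating test-retest reliability.
# All data and parameters are illustrative placeholders.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 392  # sample size reported in the abstract

# Simulated correct-report counts per ear over 36 trials (placeholder).
right_ear = rng.binomial(36, 0.60, size=n)
left_ear = rng.binomial(36, 0.45, size=n)

# A common laterality index: (R - L) / (R + L). Positive values indicate
# a right-ear advantage, i.e. typical left-hemisphere language dominance.
li_session1 = (right_ear - left_ear) / (right_ear + left_ear)

# Simulated retest scores: session 1 plus measurement noise (placeholder).
li_session2 = li_session1 + rng.normal(0, 0.1, size=n)

# Test-retest reliability as the Pearson correlation between sessions.
r, p = pearsonr(li_session1, li_session2)
print(f"mean LI = {li_session1.mean():.3f}, test-retest r = {r:.2f}")
```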
Linear models are becoming increasingly popular for investigating brain activity in response to continuous and naturalistic stimuli. In the context of auditory perception, these predictive models can be 'encoding', when stimulus features are used to predict brain activity, or 'decoding', when neural features are used to reconstruct the audio stimuli. These linear models are a central component of some brain–computer interfaces that can be integrated into hearing assistive devices (e.g., hearing aids). Such advanced neurotechnologies have been widely investigated when listening to speech stimuli but rarely when listening to music. Recent attempts at neural tracking of music show that reconstruction performance is reduced compared with speech decoding. The present study investigates the performance of stimulus reconstruction and electroencephalogram prediction (decoding and encoding models) based on the cortical entrainment of temporal variations of the audio stimuli for both music and speech listening. Three hypotheses that may explain differences between speech and music stimulus reconstruction were tested to assess the importance of speech-specific acoustic and linguistic factors. While the results obtained with encoding models suggest different underlying cortical processing between speech and music listening, no differences were found in terms of reconstruction of the stimuli or the cortical data. The results suggest that envelope-based linear modelling can be used to study both speech and music listening, despite the differences in the underlying cortical mechanisms.
This study compared the performance of cortical linear modelling during naturalistic listening to different types of sound, using both encoding and decoding approaches based on the envelope of speech and music. The results suggest that differences may exist between the underlying listening processes; however, modelling accuracy is comparable between speech and music.
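To make the encoding/decoding distinction concrete, here is a minimal sketch of envelope-based linear modelling with ridge regression on simulated data. The sampling rate, lag range, and regularisation are assumptions for illustration, not the study's actual pipeline.

```python
# Minimal sketch of envelope-based encoding and decoding linear models,
# in the spirit of temporal response functions. All data are simulated
# placeholders; the study's preprocessing, lag range, regularisation,
# and cross-validation scheme will differ.
import numpy as np
from sklearn.linear_model import Ridge

fs = 64                          # assumed post-downsampling rate (Hz)
n_samples, n_channels = 5000, 32
rng = np.random.default_rng(1)

envelope = rng.standard_normal(n_samples)           # stimulus envelope
eeg = rng.standard_normal((n_samples, n_channels))  # placeholder EEG

def lagged(x, n_lags):
    """Stack time-lagged copies of a 1-D signal into a design matrix
    (wrap-around at the edges is ignored in this toy example)."""
    return np.column_stack([np.roll(x, k) for k in range(n_lags)])

X = lagged(envelope, n_lags=int(0.25 * fs))  # 0-250 ms lags (assumed)
split = int(0.75 * n_samples)

# Encoding: predict each EEG channel from the lagged envelope.
enc = Ridge(alpha=1.0).fit(X[:split], eeg[:split])
pred_eeg = enc.predict(X[split:])

# Decoding: reconstruct the envelope from all EEG channels (lags on the
# EEG side are omitted here for brevity).
dec = Ridge(alpha=1.0).fit(eeg[:split], envelope[:split])
recon = dec.predict(eeg[split:])

# Accuracy is typically the Pearson r between predicted and actual.
r_enc = np.corrcoef(pred_eeg[:, 0], eeg[split:, 0])[0, 1]
r_dec = np.corrcoef(recon, envelope[split:])[0, 1]
print(f"encoding r (channel 0) = {r_enc:.3f}, decoding r = {r_dec:.3f}")
```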
A common approach to studying emotional reactions to music is to attempt to obtain direct links between musical surface features such as tempo and a listener's response. However, such an analysis ultimately fails to explain why emotions are aroused in the listener. In this article, we propose an alternative approach, which seeks to explain musical emotions in terms of a set of underlying mechanisms that are activated by different types of information in musical events. We illustrate this approach by reporting a listening experiment, which manipulated a piece of music to activate four mechanisms: brain stem reflex; emotional contagion; episodic memory; and musical expectancy. The musical excerpts were played to 20 listeners, who were asked to rate their felt emotions on 12 scales. Pulse rate, skin conductance, and facial expressions were also measured. Results indicated that target mechanisms were activated and aroused emotions largely as predicted by a multi-mechanism framework.
Relatively few studies have explored the factors influencing the use of listening strategies, despite a growing recognition of their importance in L2 listening comprehension. This study applies expectancy-value theory to investigate whether motivational beliefs (i.e., listening anxiety, intrinsic motivation, extrinsic motivation, and listening self-efficacy) equally predict different levels of listening strategic processing among Chinese tertiary English learners. Responses to a questionnaire solicited from 237 students were analyzed using a structural equation modeling approach. The results provided evidence of the positive effects of intrinsic motivation and listening self-efficacy on English as a foreign language (EFL) learners' exploitation of top-down and bottom-up strategies. However, extrinsic motivation only had a positive effect on the use of bottom-up strategies, and listening anxiety only had a negative effect on the use of top-down strategies. These findings revealed the different predictive effects of motivational beliefs on listening strategy use. The paper concludes with pedagogical implications for listening instruction.
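As a rough illustration of this kind of analysis, the sketch below fits a simplified path model regressing top-down and bottom-up strategy use on the four motivational beliefs, using the semopy library on simulated composite scores. The variable names and data are hypothetical, and the study's actual model (with latent constructs measured by questionnaire items) is richer.

```python
# Hypothetical path-model sketch of the SEM analysis described above,
# using semopy. Composite scores are simulated placeholders; the study's
# actual measurement model is not reproduced here.
import numpy as np
import pandas as pd
from semopy import Model

rng = np.random.default_rng(4)
n = 237  # sample size reported in the abstract
df = pd.DataFrame(
    rng.standard_normal((n, 6)),
    columns=["anxiety", "intrinsic", "extrinsic", "selfeff",
             "topdown", "bottomup"],
)

# lavaan-style description: strategy use regressed on motivational beliefs.
desc = """
topdown ~ anxiety + intrinsic + extrinsic + selfeff
bottomup ~ anxiety + intrinsic + extrinsic + selfeff
"""

model = Model(desc)
model.fit(df)
print(model.inspect())  # path estimates, standard errors, p-values
```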
This study used a pretest-posttest-delayed posttest design at one-week intervals to determine the extent to which written, audio, and audiovisual L2 input contributed to incidental vocabulary learning. Seventy-six university students learning EFL in China were randomly assigned to four groups. Three groups were presented with input from the same television documentary in different modes (reading the printed transcript, listening to the documentary, or viewing the documentary), while the fourth served as a nontreatment control. Checklist and multiple-choice tests were designed to measure knowledge of target words. The results showed that L2 incidental vocabulary learning occurred through reading, listening, and viewing, and that the gains were retained in all modes of input one week after encountering the input. However, no significant differences were found between the three modes on the posttests, indicating that each mode of input yielded similar amounts of vocabulary gain and retention. A significant relationship was found between prior vocabulary knowledge and vocabulary learning, but not between frequency of occurrence and vocabulary learning. The study provides further support for the use of L2 television programs for language learning.
Listening to degraded speech can be challenging and requires a continuous investment of cognitive resources, which is more challenging for those with hearing loss. However, while alpha power (8-12 Hz) and pupil dilation have been suggested as objective correlates of listening effort, it is not clear whether they assess the same cognitive processes, or other sensory and/or neurophysiological mechanisms associated with the task. Therefore, the aim of this study was to compare alpha power and pupil dilation during a sentence recognition task at 15 randomized levels of noise (-7 to +7 dB SNR) using highly intelligible (16-channel vocoded) and moderately intelligible (6-channel vocoded) speech. Twenty young normal-hearing adults participated in the study; however, due to extraneous noise, data from only 16 (10 females, 6 males; aged 19-28 years) were used in the electroencephalography (EEG) analysis and 10 in the pupil analysis. Behavioral testing of perceived effort and speech performance was assessed at 3 fixed SNRs per participant and was comparable to sentence recognition performance assessed in the physiological test session for both 16- and 6-channel vocoded sentences. Results showed a significant interaction between channel vocoding and SNR for both the alpha power and pupil size changes: while both measures significantly decreased with more positive SNRs for the 16-channel vocoding, this was not observed with the 6-channel vocoding. The results suggest that these measures may encode different processes involved in speech perception, which show similar trends for highly intelligible speech but diverge for more spectrally degraded speech. The results to date suggest that these objective correlates of listening effort, and the cognitive processes involved in listening effort, are not yet sufficiently well understood to be used within a clinical setting.
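For reference, here is a minimal sketch of how alpha-band power might be estimated from a single EEG channel, assuming Welch's method and placeholder data; the study's actual EEG processing (epoching, channel selection, baselining) is not reproduced.

```python
# Illustrative sketch: estimating alpha-band (8-12 Hz) power for one EEG
# channel with Welch's method. Sampling rate and signal are placeholders.
import numpy as np
from scipy.signal import welch

fs = 250                                     # assumed sampling rate (Hz)
rng = np.random.default_rng(2)
eeg_channel = rng.standard_normal(fs * 10)   # 10 s of placeholder EEG

freqs, psd = welch(eeg_channel, fs=fs, nperseg=2 * fs)
alpha = (freqs >= 8) & (freqs <= 12)
alpha_power = psd[alpha].mean()              # mean PSD in the alpha band
print(f"alpha power: {alpha_power:.4f} (arbitrary units)")
```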
Decoding training is an approach to teaching listening skills to help learners develop the ability to recognize individual words from speech. Although it has been historically underemphasized, recent empirical studies have pointed to its potential value in listening education. However, instructors and students generally face certain challenges when developing decoding skills. In this study, we used a meta-synthesis approach to examine all available empirical studies and identify five main challenges in decoding training: (a) insufficient time and practice, (b) student disengagement, (c) cognitive overload, (d) undifferentiated learning, and (e) ineffective feedback. We also discuss how technology was used in these studies to address these challenges. Finally, we identify several gaps in technology-assisted decoding training and offer recommendations for future research.
Background: Nonlinear frequency compression (NFC) is a signal processing technique that lowers high-frequency sounds that are inaudible to a listener into a lower, audible frequency range. Because the maximum frequency audible to a listener with hearing loss varies with the input speech level, the input level used to set nonlinear frequency compression could affect speech recognition.
Purpose: The purpose of this study was to determine the influence of the input level used to set nonlinear frequency compression on nonsense syllable recognition.
Research Design: Nonsense syllable recognition was measured for three NFC fitting conditions (i.e., with nonlinear frequency compression set based on speech input levels of 50, 60, and 70 dB SPL, respectively), as well as without nonlinear frequency compression (restricted bandwidth condition).
Study Sample: Twenty-three adults (ages 42-80 years) with hearing loss.
Data Collection and Analysis: Data were collected monaurally using a hearing aid simulator. The start frequency and frequency-compression ratios were set based on the SoundRecover Fitting Assistant. Speech stimuli were 657 consonant-vowel-consonant nonwords presented at 50, 60, and 70 dB SPL, mixed with steady noise (6 dB SNR), and scored on the entire word, initial consonant, vowel, and final consonant. Linear mixed-effects models examined the effects of NFC fitting condition, presentation level, and scoring method on percent-correct recognition. Start frequency and frequency-compression ratio were examined as additional predictors.
Results: Nonsense syllable recognition increased as presentation level increased. At all presentation levels, recognition was highest when nonlinear frequency compression was set based on the 70 dB SPL input level and decreased significantly when set based on the 60 and 50 dB SPL inputs. Vowel recognition was reduced more than consonant recognition. Across NFC fitting conditions, recognition improved as the start frequency increased, with higher start frequencies yielding better nonsense word recognition.
Conclusions: Nonsense syllable recognition was highest when nonlinear frequency compression was set based on a 70 dB SPL presentation level, suggesting that a high presentation level should be used to determine nonlinear frequency compression parameters for an individual patient.
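For intuition about what NFC does to the spectrum, here is a hedged sketch of one common textbook formulation: log-domain compression above a start frequency. This is not Phonak's proprietary SoundRecover algorithm used in the study, and the start frequency and ratio shown are illustrative.

```python
# Hedged sketch of one textbook formulation of nonlinear frequency
# compression: input frequencies above a start frequency are compressed
# on a log scale by a fixed ratio. NOT the proprietary SoundRecover
# implementation; start_freq and ratio are illustrative only.
import numpy as np

def nfc_map(f_in, start_freq=2000.0, ratio=2.0):
    """Map an input frequency (Hz) to its NFC output frequency (Hz)."""
    f_in = np.asarray(f_in, dtype=float)
    compressed = start_freq * (f_in / start_freq) ** (1.0 / ratio)
    return np.where(f_in <= start_freq, f_in, compressed)

for f in (1000, 2000, 4000, 8000):
    print(f"{f} Hz -> {float(nfc_map(f)):.0f} Hz")
```

In this formulation a 4 kHz component lands near 2.8 kHz with a 2:1 ratio, while everything below the start frequency passes through unchanged, which is consistent with the finding above that higher start frequencies (leaving more of the spectrum unaltered) yielded better recognition.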
• L1 and L2 processing in late bilinguals could be captured by brain dynamic states.
• Distinct brain state dynamics were found in L1 and L2 processing in late bilinguals.
• L1 processing was associated with more integrated states and greater transition flexibility.
• L2 processing was associated with more segregated states and insufficient transition flexibility.
The process of complex cognition, which includes language processing, is dynamic in nature and involves various network modes or cognitive modes. This dynamic process can be manifested by a set of brain states and transitions between them. Previous neuroimaging studies have shed light on how bilingual brains support native language (L1) and second language (L2) through a shared network. However, the mechanism through which this shared brain network enables L1 and L2 processing remains unknown. This study examined this issue by testing the hypothesis that L1 and L2 processing is associated with distinct brain state dynamics in terms of brain state integration and transition flexibility. A group of late Chinese-English bilinguals was scanned using functional magnetic resonance imaging (fMRI) while listening to eight short narratives in Chinese (L1) and English (L2). Brain state dynamics were modeled using the leading eigenvector dynamic analysis framework. The results show that L1 processing involves more integrated states and frequent transitions between integrated and segregated states, while L2 processing involves more segregated states and fewer transitions. Our work provides insight into the dynamic process of narrative listening comprehension in late bilinguals and sheds new light on the neural representation of language processing and related disorders.
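To make the analysis concrete, below is a rough sketch of the core steps of a leading-eigenvector dynamics analysis on simulated data: instantaneous phases via the Hilbert transform, per-timepoint phase-coherence matrices, leading eigenvectors, k-means clustering into recurring states, and occupancy/transition summaries. The region count, number of states, and signals are placeholders, not the study's settings.

```python
# Rough sketch of a leading-eigenvector dynamics analysis pipeline on
# simulated data. All sizes and signals are illustrative placeholders.
import numpy as np
from scipy.signal import hilbert
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
n_t, n_regions = 300, 20
bold = rng.standard_normal((n_t, n_regions))   # placeholder BOLD series

phase = np.angle(hilbert(bold, axis=0))        # instantaneous phase/region

eigvecs = np.empty((n_t, n_regions))
for t in range(n_t):
    # Phase-coherence matrix: cosine of pairwise phase differences.
    coh = np.cos(phase[t][:, None] - phase[t][None, :])
    w, v = np.linalg.eigh(coh)
    eigvecs[t] = v[:, -1]  # leading eigenvector (sign ambiguity ignored)

# Cluster eigenvectors into a small set of recurring brain states.
k = 4
states = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(eigvecs)

# Fractional occupancy and state-to-state transition counts: the kind of
# summaries used to quantify integration and transition flexibility.
occupancy = np.bincount(states, minlength=k) / n_t
transitions = np.zeros((k, k), dtype=int)
for a, b in zip(states[:-1], states[1:]):
    transitions[a, b] += 1
print("fractional occupancy:", np.round(occupancy, 2))
```

Fractional occupancy of integrated versus segregated states, and the off-diagonal mass of the transition matrix, are the kinds of summaries that could distinguish L1 from L2 processing in the framework described above.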