This book analyzes 153 languages from a wide variety of families to establish a previously unexplored relationship between phonetically conditioned sound changes, such as lenitions, and functional (meaning-maintenance-related) considerations. Carefully collecting numerous consonant inventories, this collection is likely to become an important resource for future linguistic research. By distinguishing between phonetic and phonological neutralization, and showing that the former does not necessarily result in the latter, Naomi Gurevich uncovers previously unexplored and often surprising trends in the relationship between phonetics and phonology.
One of the most important auditory categorization tasks a listener faces is determining a sound's domain, a process which is a prerequisite for successful within-domain categorization tasks such as recognizing different speech sounds or musical tones. Speech and song are universal in human cultures: how do listeners categorize a sequence of words as belonging to one or the other of these domains? There is growing interest in the acoustic cues that distinguish speech and song, but it remains unclear whether there are cross-cultural differences in the evidence upon which listeners rely when making this fundamental perceptual categorization. Here we use the speech-to-song illusion, in which some spoken phrases perceptually transform into song when repeated, to investigate cues to this domain-level categorization in native speakers of tone languages (Mandarin and Cantonese speakers residing in the United Kingdom and China) and in native speakers of a non-tone language (English). We find that native tone-language and non-tone-language listeners largely agree on which spoken phrases sound like song after repetition, and we also find that the strength of this transformation is not significantly different across language backgrounds or countries of residence. Furthermore, we find a striking similarity in the cues upon which listeners rely when perceiving word sequences as singing versus speech, including small pitch intervals, flat within-syllable pitch contours, and steady beats. These findings support the view that there are certain widespread cross-cultural similarities in the mechanisms by which listeners judge if a word sequence is spoken or sung.
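One of the cues named above, small pitch intervals, is conventionally measured as the semitone distance between successive fundamental-frequency (F0) values. A minimal sketch of that calculation, assuming a per-syllable F0 track (the function name and the F0 values below are illustrative, not taken from the study):

```python
import math

def semitone_interval(f0_a: float, f0_b: float) -> float:
    """Signed pitch interval between two F0 values, in semitones.
    12 semitones = one octave, i.e. a doubling of frequency."""
    return 12 * math.log2(f0_b / f0_a)

# Invented per-syllable F0 contour (Hz), for illustration only.
f0_track = [200.0, 210.0, 205.0, 220.0]
intervals = [semitone_interval(a, b) for a, b in zip(f0_track, f0_track[1:])]
mean_abs_interval = sum(abs(i) for i in intervals) / len(intervals)
```

On this measure, a phrase whose successive syllables move by only a fraction of a semitone would count as having the small pitch intervals associated with song-like percepts.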
Speakers tailor their speech to different types of interlocutors. For example, speech directed to voice technology has different acoustic-phonetic characteristics than speech directed to a human. The present study investigates the perceptual consequences of human- and device-directed registers in English. We compare two groups of speakers: participants whose first language is English (L1) and bilingual L1 Mandarin-L2 English talkers. Participants produced short sentences in several conditions: an initial production and a repeat production after a human or device guise indicated either understanding or misunderstanding. In experiment 1, a separate group of L1 English listeners heard these sentences and transcribed the target words. In experiment 2, the same productions were transcribed by an automatic speech recognition (ASR) system. Results show that transcription accuracy was highest for L1 talkers for both human and ASR transcribers. Furthermore, there were no overall differences in transcription accuracy between human- and device-directed speech. Finally, while human listeners showed an intelligibility benefit for coda repair productions, the ASR transcriber did not benefit from these enhancements. Findings are discussed in terms of models of register adaptation, phonetic variation, and human-computer interaction.
Indicators of letter frequency and similarity have long been available for Indo-European languages. They have not only been pivotal in controlling the design of experimental psycholinguistic studies seeking to determine the factors that underlie reading ability and literacy acquisition, but have also been useful for studies examining the more general aspects of human cognition. Despite their importance, however, such indicators are still not available for Modern Standard Arabic (MSA), a language that, by virtue of its orthographic system, presents an invaluable environment for the experimental investigation of visual word processing. This paper presents for the first time the frequencies of Arabic letters and their allographs based on a 40-million-word corpus, along with their similarity/confusability indicators in three domains: (1) the visual domain, based on human ratings; (2) the auditory domain, based on an analysis of the phonetic features of letter sounds; and (3) the motoric domain, based on an analysis of the stroke features used to write letters and their allographs. Taken together, the frequency and similarity of Arabic letters and their allographs in the visual and motoric domains, as well as the similarities among the letter sounds, will be useful for researchers interested in the processes underpinning orthographic processing, visual word recognition, reading, and literacy acquisition.
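The core frequency computation described here is a straightforward corpus count. A minimal sketch, assuming the corpus is available as plain text and ignoring the allograph distinctions the paper additionally codes (the tiny sample corpus and letter subset below are invented for illustration):

```python
from collections import Counter

def letter_frequencies(corpus: str, alphabet: set) -> dict:
    """Relative frequency of each letter in `alphabet`; other symbols
    (spaces, punctuation, out-of-alphabet characters) are ignored."""
    counts = Counter(ch for ch in corpus if ch in alphabet)
    total = sum(counts.values())
    return {ch: counts[ch] / total for ch in alphabet}

# Tiny illustrative sample; the actual study used a 40-million-word corpus.
sample = "كتب الكاتب كتابا"
letters = set("كتبال")  # small subset of the Arabic alphabet, for the example
freqs = letter_frequencies(sample, letters)
```

A full replication would additionally need to distinguish positional allographs (initial, medial, final, isolated forms), which are not separate code points in normal Arabic text and would have to be inferred from each letter's position and joining context.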
Purpose: Heterogeneous child speech was force-aligned to investigate whether (a) manipulating specific parameters could improve alignment accuracy and (b) forced alignment could be used to replicate published results on acoustic characteristics of /s/ production by children. Method: In Part 1, child speech from 2 corpora was force-aligned with a trainable aligner (Prosodylab-Aligner) under different conditions that systematically manipulated input training data and the type of transcription used. Alignment accuracy was determined by comparing hand and automatic alignments as to how often they overlapped (%-Match) and absolute differences in duration and boundary placements. Using mixed-effects regression, accuracy was modeled as a function of alignment conditions, as well as segment and child age. In Part 2, forced alignments derived from a subset of the alignment conditions in Part 1 were used to extract spectral center of gravity of /s/ productions from young children. These findings were compared to published results that used manual alignments of the same data. Results: Overall, the results of Part 1 demonstrated that using training data more similar to the data to be aligned, as well as phonetic transcription, led to improvements in alignment accuracy. Speech from older children was aligned more accurately than speech from younger children. In Part 2, /s/ center of gravity extracted from force-aligned segments was found to diverge in the speech of male and female children, replicating the pattern found in previous work using manually aligned segments. This was true even for the least accurate forced alignment method. Conclusions: Alignment accuracy of child speech can be improved by using more specific training and transcription. However, poor alignment accuracy was not found to impede acoustic analysis of /s/ produced by even very young children. Thus, forced alignment presents a useful tool for the analysis of child speech.
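The spectral center of gravity used in Part 2 is the amplitude-weighted mean frequency of a sound's spectrum. A minimal stdlib sketch of the measure, assuming a magnitude spectrum has already been computed (the toy spectrum values below are invented; real /s/ analyses would take them from an FFT over the aligned segment):

```python
def spectral_cog(freqs_hz, magnitudes, power=2):
    """Spectral center of gravity: weighted mean frequency of a spectrum.
    With power=2 the weights are energies (squared magnitudes)."""
    weights = [m ** power for m in magnitudes]
    return sum(f * w for f, w in zip(freqs_hz, weights)) / sum(weights)

# Toy spectrum with most energy near 8 kHz, in the range typical of /s/.
freqs = [2000.0, 4000.0, 6000.0, 8000.0, 10000.0]
mags = [0.1, 0.2, 0.5, 1.0, 0.6]
cog = spectral_cog(freqs, mags)
```

Because the measure is a weighted average over the whole segment, it is relatively tolerant of small boundary errors, which is consistent with the paper's finding that even the least accurate alignments supported the /s/ analysis.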
Purpose: The goal of this article (PM I) is to describe the rationale for and development of the Pause Marker (PM), a single-sign diagnostic marker proposed to discriminate early or persistent childhood apraxia of speech from speech delay. Method: The authors describe and prioritize 7 criteria with which to evaluate the research and clinical utility of a diagnostic marker for childhood apraxia of speech, including evaluation of the present proposal. An overview is given of the Speech Disorders Classification System, including extensions completed in the same approximately 3-year period in which the PM was developed. Results: The finalized Speech Disorders Classification System includes a nosology and cross-classification procedures for childhood and persistent speech disorders and motor speech disorders (Shriberg, Strand, & Mabie, 2017). A PM is developed that provides procedural and scoring information, and citations are given to papers and technical reports that include audio exemplars of the PM and the reference data used to standardize PM scores. Conclusions: The PM described here is an acoustic-aided perceptual sign that quantifies one aspect of speech precision in the linguistic domain of phrasing. This diagnostic marker can be used to discriminate early or persistent childhood apraxia of speech from speech delay.
To understand spoken words, listeners must appropriately interpret co-occurring talker characteristics and speech sound content. This ability was tested in 6- to 14-month-olds by measuring their looking to named food and body part images. In the new talker condition (n = 90), pictures were named by an unfamiliar voice; in the mispronunciation condition (n = 98), infants' mothers "mispronounced" the words (e.g., nazz for nose). Six- to 7-month-olds fixated target images above chance across conditions, understanding novel talkers and mothers' phonologically deviant speech equally well. Eleven- to 14-month-olds also understood new talkers, but performed poorly with mispronounced speech, indicating sensitivity to phonological deviation. Between these ages, performance was mixed. These findings highlight the changing roles of acoustic and phonetic variability in early word comprehension, as infants learn which variations alter meaning.
This volume is the first comprehensive handbook of Japanese phonetics and phonology, describing the basic phonetic and phonological structures of modern Japanese with a main focus on standard Tokyo Japanese. Its primary goal is to provide a comprehensive overview and descriptive generalizations of major phonetic and phonological phenomena in modern Japanese by reviewing important studies in the fields over the past century. It also presents a summary of interesting questions that remain unsolved in the literature. The volume consists of eighteen chapters in addition to an introduction to the whole volume. In addition to providing descriptive generalizations of empirical phonetic/phonological facts, this volume also aims to give an overview of major phonological theories including, but not restricted to, traditional generative phonology, lexical phonology, prosodic morphology, intonational phonology, and the more recent Optimality Theory. It also touches on theories of speech perception and production. This book serves as a comprehensive guide to Japanese phonetics and phonology for anyone interested in linguistics and the speech sciences.
This research investigated the concurrent association between early reading skills and phonological awareness (PA), print knowledge, language, cognitive, and demographic variables in 101 five-year-old children with prelingual hearing losses ranging from mild to profound who communicated primarily via spoken language. All participants were fitted with hearing aids (n = 71) or cochlear implants (n = 30). The participants completed standardized assessments of PA, receptive vocabulary, letter knowledge, word and nonword reading, passage comprehension, math reasoning, and nonverbal cognitive ability. Multiple regressions revealed that PA (assessed using judgments of similarity based on words' initial or final sounds) made a significant, independent contribution to children's early reading ability (for both letters and words/nonwords) after controlling for variation in receptive vocabulary, nonverbal cognitive ability, and a range of demographic variables, including gender, degree of hearing loss, communication mode, type of sensory device, age at fitting of sensory devices, and level of maternal education. Importantly, the relationship between PA and reading was specific to reading and did not generalize to another academic ability, math reasoning. Additional multiple regressions showed that letter knowledge (names or sounds) was superior in children whose mothers had undertaken postsecondary education and that better receptive vocabulary was associated with less severe hearing loss, use of a cochlear implant, and earlier age at implant switch-on. Earlier fitting of hearing aids or cochlear implants was not, however, significantly associated with better PA or reading outcomes in this cohort of children, most of whom were fitted with sensory devices before 3 years of age.
Research on aphasia has struggled to identify apraxia of speech (AoS) as an independent deficit affecting a processing level separate from phonological assembly and motor implementation. This is because AoS is characterized by both phonological and phonetic errors and, therefore, can be interpreted as a combination of deficits at the phonological and the motoric level rather than as an independent impairment. We apply novel psycholinguistic analyses to the perceptually phonological errors made by 24 Italian aphasic patients. We show that only patients with a relatively high rate (>10%) of phonetic errors make sound errors which simplify the phonology of the target. Moreover, simplifications are strongly associated with other variables indicative of articulatory difficulties (such as a predominance of errors on consonants rather than vowels) but not with other measures (such as the rate of words reproduced correctly or the rate of lexical errors). These results indicate that sound errors cannot arise at a single phonological level because they are different in different patients. Instead, the different patterns: (1) provide evidence for separate impairments and the existence of a level of articulatory planning/programming intermediate between phonological selection and motor implementation; (2) validate AoS as an independent impairment at this level, characterized by phonetic errors and phonological simplifications; (3) support the claim that linguistic principles of complexity have an articulatory basis, since they apply only in patients with associated articulatory difficulties.
• Phonological simplifications occur in some patients but not others.
• They are linked to phonetic errors and a concentration of errors on consonants.
• They are not associated with slow speech or overall severity.
• They provide evidence for a processing level between phonology and articulation.
• They can be taken as a hallmark of AoS.