Abstract
Park Chan-wook, one of the most internationally acclaimed Korean filmmakers, uses language as an important aspect of characterization in The Handmaiden, his adaptation of Sarah Waters's novel Fingersmith. The historical background and the characters' nationalities are changed, but code-switching between two languages – i.e., Korean and Japanese – recurs throughout the film, thereby enhancing its relevance for the Korean audience. Drawing on the notion of 'proximity' and reader-response theory, this study examines the role of languages in Park's characterization and proximation of the original work for the Korean audience, and the extent to which the shifts in proximity and the use of languages contribute to British audiences' affective experiences when this Korean adaptation is subtitled in English.
Movie and TV subtitles are frequently employed in natural language processing (NLP) applications, but there are limited Japanese-Chinese bilingual corpora accessible as a dataset to train neural machine translation (NMT) models. In our previous study, we effectively constructed a corpus of a considerable size containing bilingual text data in both Japanese and Chinese by collecting subtitle text data from websites that host movies and television series. The unsatisfactory translation performance of the initial corpus, Web-Crawled Corpus of Japanese and Chinese (WCC-JC 1.0), was predominantly caused by the limited number of sentence pairs. To address this shortcoming, we thoroughly analyzed the issues associated with the construction of WCC-JC 1.0 and constructed the WCC-JC 2.0 corpus by first collecting subtitle data from movie and TV series websites. Then, we manually aligned a large number of high-quality sentence pairs. Our efforts resulted in a new corpus that includes about 1.4 million sentence pairs, an 87% increase compared with WCC-JC 1.0. As a result, WCC-JC 2.0 is now among the largest publicly available Japanese-Chinese bilingual corpora in the world. To assess the performance of WCC-JC 2.0, we calculated the BLEU scores relative to other comparative corpora and performed manual evaluations of the translation results generated by translation models trained on WCC-JC 2.0. We provide WCC-JC 2.0 as a free download for research purposes only.
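The abstract above reports BLEU scores for models trained on WCC-JC 2.0. As a rough illustration of what corpus-level BLEU measures, here is a minimal pure-Python sketch of modified n-gram precision with a brevity penalty. The toy sentence pairs are invented for illustration; real evaluations of this kind normally use a standard tool such as sacreBLEU with proper tokenization.

```python
# Minimal corpus-level BLEU sketch (toy data; not a replacement for sacreBLEU).
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(hypotheses, references, max_n=4):
    """BLEU over whitespace-tokenized sentence pairs, orders 1..max_n."""
    clipped = [0] * max_n   # clipped n-gram matches per order
    totals = [0] * max_n    # candidate n-gram counts per order
    hyp_len = ref_len = 0
    for hyp, ref in zip(hypotheses, references):
        h, r = hyp.split(), ref.split()
        hyp_len += len(h)
        ref_len += len(r)
        for n in range(1, max_n + 1):
            h_ngr, r_ngr = ngrams(h, n), ngrams(r, n)
            totals[n - 1] += sum(h_ngr.values())
            # Clip each candidate n-gram count by its count in the reference.
            clipped[n - 1] += sum(min(c, r_ngr[g]) for g, c in h_ngr.items())
    if min(clipped) == 0:
        return 0.0
    log_prec = sum(math.log(c / t) for c, t in zip(clipped, totals)) / max_n
    # Brevity penalty: punish hypotheses shorter than the references.
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / hyp_len)
    return 100 * bp * math.exp(log_prec)

hyps = ["the cat sat on the mat", "a quick brown fox"]
refs = ["the cat sat on the mat", "the quick brown fox"]
print(round(corpus_bleu(hyps, refs), 1))  # → 83.8
```

A perfect match scores 100; the single substituted word in the second pair lowers every n-gram precision at once, which is why BLEU drops sharply for small errors in short sentences.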
Hearing loss (HL) is common among middle-aged and older adults, but hearing aid adoption is low. The purpose of this study was to measure the 10-year incidence of hearing aid adoption in a sample of primarily middle-aged adults with high-frequency HL and identify factors associated with hearing aid adoption.
This study included 579 adults (ages 34-80 years) with a high-frequency pure-tone average > 25 dB HL (3-8 kHz) enrolled in the Beaver Dam Offspring Study. Hearing aid adoption was measured at 5- and 10-year follow-up examinations. Cox discrete-time proportional hazards models were used to evaluate factors associated with hearing aid adoption (presented as hazard ratios (HRs) with 95% confidence intervals (95% CIs)).
The 10-year cumulative incidence of hearing aid adoption was 14 per 1,000 person-years. Factors significantly associated with adoption in a multivariable model were education (relative to 16+ years; 0-12 years: HR 0.36, 95% CI 0.19-0.69; 13-15 years: HR 0.52, 95% CI 0.27-0.98), worse high-frequency pure-tone average (per +1 dB: HR 1.04, 95% CI 1.02-1.06), self-reported hearing handicap (screening versions of the Hearing Handicap Inventory score > 8: HR 1.85, 95% CI 1.02-3.38), answering yes to "Do friends and relatives think you have a hearing problem?" (HR 3.18, 95% CI 1.60-6.33), and using closed captions (HR 2.86, 95% CI 1.08-7.57). Effects of age and sex were not significant.
Hearing aid adoption rates were low. Hearing sensitivity, socioeconomic status, and measures of the impact of HL on daily life were associated with adoption. Provider awareness of associated factors can contribute to timely and appropriate intervention.
Databases containing lexical properties on any given orthography are crucial for psycholinguistic research. In the last ten years, a number of lexical databases have been developed for Greek. However, these lack important part-of-speech information. Furthermore, the need for alternative procedures for calculating syllabic measurements and stress information, as well as for a combination of several metrics to investigate linguistic properties of the Greek language, is highlighted. To address these issues, we present a new extensive lexical database of Modern Greek (GreekLex 2) with part-of-speech information for each word and accurate syllabification and orthographic information predictive of stress, as well as several measurements of word similarity and phonetic information. The addition of detailed statistical information about Greek part-of-speech, syllabification, and stress neighbourhood allowed novel analyses of stress distribution within different grammatical categories and syllabic lengths to be carried out. Results showed that the statistical preponderance of stress position on the pre-final syllable that is reported for the Greek language is dependent upon grammatical category. Additionally, analyses showed that a proportion higher than 90% of the tokens in the database would be stressed correctly solely by relying on stress neighbourhood information. The database and the scripts for orthographic and phonological syllabification as well as phonetic transcription are available at http://www.psychology.nottingham.ac.uk/greeklex/.
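The final claim, that over 90% of tokens could be stressed correctly from stress neighbourhood information alone, can be pictured with a toy sketch: predict a word's stressed syllable as the modal stress position among lexicon entries that share its ending. Everything below (the Romanized toy lexicon, the fixed three-letter ending used to define a neighbourhood, and the penultimate-stress fallback) is an illustrative assumption, not the GreekLex 2 procedure itself.

```python
# Hypothetical sketch of stress prediction from "stress neighbourhood"
# information. Lexicon, ending length, and fallback are illustrative only.
from collections import Counter, defaultdict

def build_neighbourhoods(lexicon, ending_len=3):
    """Group observed stress positions by shared orthographic ending."""
    hoods = defaultdict(Counter)
    for word, stress in lexicon:
        hoods[word[-ending_len:]][stress] += 1
    return hoods

def predict_stress(word, hoods, ending_len=3, default=2):
    """Modal stress among neighbours; fall back to the penultimate syllable."""
    counts = hoods.get(word[-ending_len:])
    if not counts:
        return default
    # Stress position counted from the word end: 1 = final, 2 = penult, 3 = antepenult.
    return counts.most_common(1)[0][0]

# Toy Romanized lexicon: (word, stressed syllable counted from the end).
lexicon = [("anthropos", 3), ("tropos", 2), ("topos", 2), ("logos", 2)]
hoods = build_neighbourhoods(lexicon)
print(predict_stress("skopos", hoods))  # "-pos" neighbours favour penult → 2
```

Note that the neighbourhood vote can override an individual word's true stress ("anthropos" itself is antepenultimate), which is exactly why the reported accuracy of such information is an empirical finding rather than a given.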
Background: With the widespread promotion of COVID-19 vaccination in China, videos about the vaccination have become increasingly available on social video platforms. Under the User-Generated Content model, different creators' interpretations of COVID-19 vaccines may influence attitudes towards the vaccines and vaccination. Objective: To provide an overview of COVID-19 vaccine-related videos on Bilibili and to discuss the communication effects of COVID-19 topic videos and their influencing factors. Methods: A content analysis was applied to the 202 video samples obtained through data mining, covering the creator's information, video presentation, and COVID-19 vaccine-related content. Results: Individuals and medical professionals preferred VLOG videos, media chose to upload informational videos, and enterprises preferred to post showcase videos. Individuals were more likely to discuss adverse reactions in their videos, while medical professionals were more likely to discuss the COVID-19 vaccination process. Videos addressing core issues positively influenced the videos' dissemination breadth. The attitudes toward the COVID-19 vaccine expressed in the videos positively influenced the recognition of the videos. The richness of knowledge points related to the COVID-19 vaccine negatively affected recognition and participation. Conclusion: Social video platforms could play an active role in promoting vaccination among the youth. Health promotion-related departments and individuals could strengthen agenda setting, grasp the characteristics of young groups, and express positive attitudes toward health issues to achieve better health (vaccine) promotion. Keywords: COVID-19 vaccine, social media, Bilibili, health promotion, vaccination
Within instructed second language research, there is growing interest in research focusing on primary school vocabulary learning. Research has emphasized classroom-based learning of vocabulary knowledge, with growing focus on the potential for using captioned videos and increased word encounters. The present study investigated the effects of various captioning conditions (i.e. full captioning, keyword captioning, and no captions), the number of word encounters (one and three), and the combinations of these two variables on incidental learning of new words while viewing a video. Six possible conditions were explored. A total of 257 primary school students learning English as a second language (ESL) were divided into six groups and randomly assigned to a condition in which 15 target lexical items were included. A post-test, measuring the recognition of word form/meaning and recall of word meaning, was administered immediately after participants viewed the video. The post-test was not disclosed to the learners in advance. The group viewing the full captioning video scored significantly higher than the keyword captioning group and the no-captioning group. Repeated encounters with the targeted lexical items led to more successful learning. The combination of full captioning and three encounters was most effective for incidental learning of lexical items. This quasi-experimental study contributes to the literature by providing evidence which suggests that captioned videos coordinate two domains (i.e. auditory and visual components) and help ESL learners to obtain greater depth of word form processing, identify meaning by unpacking language chunks, and reinforce the form-meaning link.
This article explores current creative practices involving the representation of sign languages, sign language interpreting, sign language translation (Napier and Leeson 2016; HBB4ALL 2017; CNLSE 2017; Tamayo 2022), and sign language live translation (Tamayo 2022) in audiovisual content. To that end, a review of the concept "creative sign language" and a review of previous publications on the matter will be provided. Subsequently, the implementation of creativity at different production stages, and the use of different resources when sign languages are present in audiovisual content, will be discussed by analyzing some selected innovative examples (mostly of practices in Spain). Finally, a taxonomy will be proposed that takes into account not only "internal creativity" (that which is inherent to sign languages), but also "collaborative" and "external creativity." Conclusions will focus on how creative practices can expand our understanding of different art expressions, human communication, and inclusion, and can help establish new and meaningful connections among them.
This study evaluates the potential for incidentally learning early reading vocabulary through the extensive viewing (EV) of children's movies/television with subtitles. Recent research has investigated how much exposure to important vocabulary EV and extensive reading (ER) provides. Investigations compute the number of repetitions of target vocabulary in corpora designed to represent EV/ER. Curriculum time estimates are then computed based on the time needed to reach vocabulary repetition thresholds linked to incidental learning. This study focuses on an understudied area of EV, namely children's transition to literacy. It investigates whether early reading vocabulary is available in children's movies/television, a form of compelling, comprehensible input. Recent research has found vocabulary acquisition gains from EV are enhanced by subtitles. Therefore, this study analyses 743 subtitles from children's movies (4.8 million words) and 3174 subtitles from children's series (6.4 million words). Using two recent wordlists representing early reading vocabulary, vocabulary frequency and approximate curriculum time estimates are computed for three thresholds linked to incidental vocabulary acquisition, i.e. 6, 12 and 20 encounters. Results indicate that EV with subtitles could support the development of an oral language vocabulary that contains a pool of words needed for early reading, and provide print exposure to this essential vocabulary.
Understanding the way people watch subtitled films has become a central concern for subtitling researchers in recent years. Both subtitling scholars and professionals generally believe that in order to reduce cognitive load and enhance readability, line breaks in two-line subtitles should follow syntactic units. However, previous research has been inconclusive as to whether syntactic-based segmentation facilitates comprehension and reduces cognitive load. In this study, we assessed the impact of text segmentation on subtitle processing among different groups of viewers: hearing people with different mother tongues (English, Polish, and Spanish) and deaf, hard of hearing, and hearing people with English as a first language. We measured three indicators of cognitive load (difficulty, effort, and frustration) as well as comprehension and eye tracking variables. Participants watched two video excerpts with syntactically and non-syntactically segmented subtitles. The aim was to determine whether syntactic-based text segmentation as well as the viewers' linguistic background influence subtitle processing. Our findings show that non-syntactically segmented subtitles induced higher cognitive load, but they did not adversely affect comprehension. The results are discussed in the context of cognitive load, audiovisual translation, and deafness.