Recently, semantic communication has been brought to the forefront because deep learning (DL)-based methods, such as the Transformer, have achieved great success in semantic extraction. Although semantic communication has been successfully applied to sentence transmission to reduce semantic errors, existing architectures usually fix the codeword length, making them inefficient and inflexible for varying sentence lengths. In this study, we exploit hybrid automatic repeat request (HARQ) to further reduce semantic transmission errors. We combine semantic coding (SC) with Reed-Solomon (RS) channel coding and HARQ (called SC-RS-HARQ), which successfully unites the strengths of SC with the reliability of conventional methods. Although SC-RS-HARQ can be easily applied in existing HARQ systems, we also develop an end-to-end architecture called SCHARQ to pursue enhanced performance. Numerical results demonstrate that SCHARQ significantly reduces both the number of bits required for semantic sentence transmission and the sentence error rate. We also replace cyclic redundancy check (CRC)-based error detection with a similarity detection network called Sim32, which allows the receiver to retain erroneous sentences that carry similar semantic information and thereby conserve transmission resources.
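The core retransmission mechanism the abstract builds on can be illustrated generically. Below is a minimal sketch of a stop-and-wait HARQ loop with CRC-32 error detection (the conventional check that Sim32 is proposed to replace); `transmit`, `channel`, and `MAX_RETX` are illustrative names and values, not details from the paper.

```python
import zlib

MAX_RETX = 4  # maximum transmission attempts (illustrative value)

def transmit(payload, channel):
    """Send payload with a CRC-32 check; retransmit on detected errors."""
    for _attempt in range(MAX_RETX):
        frame = payload + zlib.crc32(payload).to_bytes(4, "big")
        received = channel(frame)  # the channel may corrupt bytes
        body, crc = received[:-4], int.from_bytes(received[-4:], "big")
        if zlib.crc32(body) == crc:
            return body            # ACK: error detection passed
        # NACK: fall through and retransmit the frame
    return None                    # give up after MAX_RETX attempts
```

Swapping the CRC comparison for a learned similarity check is, conceptually, a one-line change in this loop: the receiver accepts any sentence whose decoded semantics are close enough to pass the detector.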
There is an ongoing controversy over whether readers can access the meaning of multiple words simultaneously. To date, different experimental methods have generated seemingly contradictory evidence in support of serial or parallel processing accounts. For example, dual-task studies suggest that readers can process a maximum of one word at a time (White, Palmer & Boynton, 2018), while ERP studies have demonstrated neural priming effects that are more consistent with parallel activation (Wen, Snell & Grainger, 2019). To help reconcile these views, I measured neural responses and behavioral accuracy in a dual-task sentence comprehension paradigm. Participants saw masked sentences and two-word phrases and had to judge whether or not they were grammatical. Grammatically correct sentences (This girl is neat) produced smaller N400 responses than scrambled sentences (Those girl is fled): an N400 sentence superiority effect. Critically, participants' grammaticality judgements on the same trials showed striking capacity limitations, with dual-task deficits closely matching the predictions of a serial, all-or-none processing account. Together, these findings suggest that the N400 sentence superiority effect is fully compatible with serial word recognition, and that readers are unable to process multiple sentence positions simultaneously.
•We propose a new approach for sentence representation.
•Our approach can capture the complex semantic compositionality.
•Extensive experiments show that our proposed approach is effective.
The Recursive Neural Network (RecNN), a type of model that composes words or phrases recursively over syntactic tree structures, has proven to be highly effective at obtaining sentence representations for a variety of NLP tasks. However, RecNN suffers from an inherent limitation: a single compositional function shared across all tree nodes cannot capture complex semantic compositionality, which limits the model's expressive power. In this paper, to address this problem, we propose Tag-Guided HyperRecNN/TreeLSTM (TG-HRecNN/TreeLSTM), which introduces a hypernetwork into RecNNs that takes the Part-of-Speech (POS) tags of words/phrases as input and generates the semantic composition parameters dynamically. Experimental results on five datasets for two typical NLP tasks show that both proposed models consistently obtain significant improvements over RecNN and TreeLSTM. Our TG-HTreeLSTM outperforms all existing RecNN-based models and achieves or is competitive with the state of the art on four sentence classification benchmarks. The effectiveness of our models is also demonstrated by qualitative analysis.
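The key idea, tag-conditioned composition parameters, can be sketched in its simplest form: a lookup-style hypernetwork that produces one composition matrix per POS tag, which then composes two child vectors at a tree node. This is a toy NumPy illustration, not the paper's architecture; `HYPER`, `compose`, and all sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, N_TAGS = 8, 4   # toy hidden size and POS-tag inventory

# Simplest hypernetwork: one generated composition matrix per POS tag.
# (TG-HRecNN generates these from tag embeddings via a network instead.)
HYPER = rng.standard_normal((N_TAGS, DIM, 2 * DIM)) * 0.1

def compose(left, right, tag_id):
    """Compose two child vectors with tag-conditioned parameters."""
    W = HYPER[tag_id]                        # weights for this node's tag
    return np.tanh(W @ np.concatenate([left, right]))

left = rng.standard_normal(DIM)
right = rng.standard_normal(DIM)
phrase = compose(left, right, tag_id=1)      # e.g., tag 1 stands for "NP"
```

The point of the design is visible even in this toy: two nodes covering the same children but carrying different tags compose through different parameters, instead of one shared function for the whole tree.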
In this paper, we propose a novel deep Efficient Relational Sentence Ordering Network (referred to as ERSON), leveraging a pre-trained language model in both the encoder and decoder architectures to strengthen the coherence modeling of the entire model. Specifically, we first introduce a divide-and-fuse BERT (referred to as DF-BERT), a new refactoring of the BERT network, in which the lower layers encode each sentence in the paragraph independently and are shared across sentence pairs, while the higher layers jointly learn cross-attention between sentence pairs. This enables us to capture the semantic concepts and contextual information between the sentences of the paragraph while significantly reducing runtime and memory consumption without sacrificing model performance. Besides, a Relational Pointer Decoder (referred to as RPD) is developed, which utilizes the pre-trained Next Sentence Prediction (NSP) task of BERT to capture useful relative ordering information between sentences and thereby enhance order predictions. In addition, a variety of knowledge distillation based losses are added as auxiliary supervision to further improve ordering performance. Extensive evaluations on Sentence Ordering, Order Discrimination, and Multi-Document Summarization tasks show the superiority of ERSON over state-of-the-art ordering methods.
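The runtime saving of the divide-and-fuse pattern comes from caching: each sentence passes through the shared lower layers once, and only the pairwise stage runs per pair. A toy NumPy sketch of this caching pattern follows, with `lower_encode` and `upper_cross` as stand-ins (not DF-BERT's actual layers).

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
N, D = 5, 16                       # toy paragraph: 5 sentences, 16-dim vectors

def lower_encode(sent_vec):
    """Shared lower layers: encode one sentence independently (cacheable)."""
    return np.tanh(sent_vec)       # stand-in for BERT's lower layers

def upper_cross(enc_a, enc_b):
    """Higher layers: joint pair interaction (scaled dot product as a proxy)."""
    return float(enc_a @ enc_b / np.sqrt(D))

sentences = rng.standard_normal((N, D))
# Divide: N lower-layer passes, computed once and reused across all pairs.
cache = [lower_encode(s) for s in sentences]
# Fuse: the pairwise stage touches only the cached encodings.
pair_scores = {(i, j): upper_cross(cache[i], cache[j])
               for i, j in combinations(range(N), 2)}
```

With N sentences there are N(N-1)/2 pairs, so running the (expensive) lower layers per pair would cost O(N^2) encodings; caching reduces that to N, which is where the reported runtime and memory savings come from.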
In individuals with Down syndrome (DS), deficits in verbal short-term memory (VSTM) and deficits in sentence comprehension co-occur, suggesting that deficits in VSTM might be causal for the deficits in sentence comprehension. The present study aims to explore this presumed relationship between VSTM and sentence comprehension in individuals with DS by specifically targeting the influence of task demands. The authors assessed VSTM skills in 18 German-speaking children/adolescents with DS using a nonword repetition (NWR) test and elicited data from three tasks on the comprehension of complex sentence structures: two sentence-picture-matching tasks (TROG-D and a passive test) and one picture-pointing task on object wh-questions. Whereas performance in NWR yielded a significant degree of prediction for scores obtained in the TROG-D and in passive comprehension, no significant degree of prediction was found between NWR and object wh-question comprehension. Moreover, implicational scaling analyses indicated that mental-age-adequate performance in sentence comprehension did not imply adequate performance in NWR. Research is needed that specifies the relation between memory systems and sentence comprehension while considering the influence of task demands.
Learning effective text representations, and especially sentence-level features, is increasingly important for building intelligent systems. Numerous previous studies have concentrated on sentence representation learning based on deep learning approaches. However, existing approaches are mostly built around a single task or rely on labeled corpora when learning sentence embeddings. In this paper, we assess the factors involved in learning sentence representations and propose an efficient unsupervised learning framework with multi-task learning (USR-MTL), in which various text learning tasks are merged into a unified framework. Drawing on the syntactic and semantic features of sentences, three factors are reflected in sentence representation learning: the wording, the word order, and the ordering of the sentences neighboring a target sentence. Hence, we integrate a word-order learning task, a word prediction task, and a sentence-order learning task into the proposed framework to attain meaningful sentence embeddings; the process of sentence embedding learning is thus reformulated as multi-task learning over one sentence-level task and two word-level tasks. Moreover, the framework is trained with an unsupervised learning algorithm on an unlabeled corpus. Experimental results show that our approach achieves state-of-the-art performance on downstream natural language processing tasks compared with popular unsupervised representation learning techniques. Experiments on representation visualization and task analysis further demonstrate the effectiveness of each task in producing reasonable sentence representations, confirming the capacity of the proposed unsupervised multi-task framework for sentence representation learning.
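The abstract does not give the exact training objective; a common way to merge several tasks over a shared encoder is a weighted sum of per-task losses. The sketch below shows that pattern with hypothetical uniform weights, purely as an illustration of the multi-task setup.

```python
def multitask_loss(word_order_loss, word_pred_loss, sent_order_loss,
                   weights=(1.0, 1.0, 1.0)):
    """Combine the three task losses into one training objective.

    The uniform weights here are an assumption, not values from the paper;
    in practice they would be tuned or learned.
    """
    w1, w2, w3 = weights
    return w1 * word_order_loss + w2 * word_pred_loss + w3 * sent_order_loss

# Per-batch losses from the shared encoder's three task heads (made-up values).
total = multitask_loss(0.8, 1.3, 0.4)
```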
Negation is frequently used in natural language, yet relatively little is known about its processing. More importantly, what is known regarding the neurophysiological processing of negation is mostly based on results of studies using written stimuli (the word-by-word paradigm). While the results of these studies have suggested processing costs in connection with negation (increased negativities in brain responses), it is difficult to know how this translates to the processing of spoken language. We therefore developed an auditory paradigm based on a previous visual study investigating the processing of affirmatives, sentential negation ( ), and prefixal negation ( -). The findings of processing costs were replicated but differed in the details. Importantly, the pattern of ERP effects suggested less effortful processing for auditorily presented negated forms (restricted to increased anterior and posterior positivities) in comparison to visually presented negated forms. We suggest that the natural flow of spoken language reduces variability in processing and therefore results in clearer ERP patterns.
Response onset latencies for sentences that start with a conjoined noun phrase are typically longer than for sentences starting with a simple noun phrase. This suggests that advance planning has phrasal scope, which may or may not be lexically driven. All previous studies have involved spoken production, leaving open the possibility that the effects are, in part, modality-specific. In 3 image-description experiments (Ns = 32), subjects produced sentences with conjoined (e.g., Peter and the hat) and simple initial noun phrases (e.g., Peter) in both speech and writing. Production onset latencies and participants' eye movements were recorded. Ease of lexical retrieval of the sentences' second noun was assessed by manipulating codability (Experiment 1) and by gaze-contingent name priming (Experiments 2 and 3). Findings confirmed a modality-independent phrasal scope for advance planning but did not support obligatory lexical retrieval beyond the sentence-initial noun. This research represents the first direct experimental comparison of sentence planning in speech and writing.
The main focus of this study is a syntactic analysis of the simple sentence structure and word-order patterns of Standard Arabic. The main methods are description and comparison of the observed word-order patterns. The study primarily addresses differences in the terminology for sentence types and word-order patterns as described by medieval grammarians and by modern linguists. Moreover, the so-called Sībawayhian theory of ʿamil also provides explanations of sentence structures and word-order patterns in Standard Arabic. Simple sentences are highlighted to examine when different patterns are used and where they are commonly found, along with examples that illustrate the explanation and use of these patterns. It is essential to point out that Standard Arabic is considered a language with flexible word order, which is why word-order patterns of both VSO and SVO languages exist, though the latter is more frequently used.
Recently processed syntactic information is likely to play a fundamental role in online sentence comprehension. For example, there is now a good deal of evidence that the processing of a syntactic structure (the target) is facilitated if the same structure was processed on the immediately preceding trial (the prime), a phenomenon known as structural priming. However, compared with structural priming in production, structural priming in comprehension remains relatively understudied. We investigate an aspect of structural priming in comprehension that is comparatively well understood in production but has received little attention in comprehension: the cumulative effect of structural primes on subsequently processed sentences. We further ask whether this effect is modulated by lexical overlap between preceding primes and the target. In 3 self-paced reading experiments, we find that structural priming effects in comprehension are cumulative and of similar magnitude both with and without lexical overlap. We discuss the relevance of our results to questions about the relationship between recent experience and online language processing.