We investigated age-related differences in syntactic comprehension in young and older adults. Most previous research has found no evidence of age-related decline in syntactic processing. We investigated elementary syntactic comprehension of minimal sentences (e.g., I cook), minimizing the influence of working memory. We also investigated the contribution of semantic processing by comparing sentences containing real verbs (e.g., I cook) versus pseudoverbs (e.g., I spuff). We measured the speed and accuracy of detecting syntactic agreement errors (e.g., I cooks, I spuffs). We found that older adults were slower and less accurate than younger adults in detecting syntactic agreement errors for both real-verb and pseudoverb sentences, suggesting that there is age-related decline in syntactic comprehension. Compared to real-verb sentences, the age-related decline in accuracy was smaller for the pseudoverb sentences, whereas the decline in speed was larger. We suggest that the decline in syntactic comprehension is stronger in the absence of semantic information, leading older adults to slow their responses in order to make more accurate decisions. In line with these findings, older adults' performance was positively related to a measure of processing speed. Taken together, we found evidence that elementary syntactic processing abilities decline in healthy aging.
• Two experiments examined structural priming in aphasic and unimpaired speakers.
• Both groups exhibited structural priming and thematic priming.
• Both groups exhibited lasting structural priming (2–4 intervening sentences).
• Greater aphasia severity was associated with larger priming effects.
The present study addressed open questions about the nature of sentence production deficits in agrammatic aphasia. In two structural priming experiments, 13 aphasic and 13 age-matched control speakers repeated visually- and auditorily-presented prime sentences, and then used visually-presented word arrays to produce dative sentences. Experiment 1 examined whether agrammatic speakers form structural and thematic representations during sentence production, whereas Experiment 2 tested the lasting effects of structural priming in lags of two and four sentences. Results of Experiment 1 showed that, like unimpaired speakers, the aphasic speakers evinced intact structural priming effects, suggesting that they are able to generate such representations. Unimpaired speakers also showed reliable thematic priming effects in all conditions; agrammatic speakers did so as well in most experimental conditions, suggesting that access to thematic representations may be intact. Results of Experiment 2 showed structural priming effects of comparable magnitude for aphasic and unimpaired speakers. In addition, both groups showed lasting structural priming effects in both lag conditions, consistent with implicit learning accounts. In both experiments, aphasic speakers with more severe language impairments exhibited larger priming effects, consistent with the “inverse preference” prediction of implicit learning accounts. The findings indicate that agrammatic speakers are sensitive to structural priming across levels of representation and that such effects are lasting, suggesting that structural priming may be beneficial for the treatment of sentence production deficits in agrammatism.
We report a novel transposed-word effect in speeded grammaticality judgments made about five-word sequences. The critical ungrammatical test sequences were formed by transposing two adjacent words from either a grammatical base sequence (e.g., “The white cat was big” became “The white was cat big”) or an ungrammatical base sequence (e.g., “The white cat was slowly” became “The white was cat slowly”). These were intermixed with an equal number of correct sentences for the purpose of the grammaticality judgment task. In a laboratory experiment (N = 57) and an online experiment (N = 94), we found that ungrammatical decisions were harder to make when the ungrammatical sequence originated from a grammatically correct base sequence. This provides the first demonstration that the encoding of word order retains a certain amount of uncertainty. We further argue that the novel transposed-word effect reflects parallel processing of words during written sentence comprehension combined with top-down constraints from sentence-level structures.
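The construction of the critical test sequences can be sketched as follows (a hypothetical helper for illustration; the authors' actual materials pipeline is not described in the abstract):

```python
def transpose_adjacent(words, i):
    """Return a copy of `words` with the words at positions i and i+1 swapped."""
    out = list(words)
    out[i], out[i + 1] = out[i + 1], out[i]
    return out

# Grammatical base sequence -> ungrammatical test sequence
base = "The white cat was big".split()
test = transpose_adjacent(base, 2)  # swap "cat" and "was"
print(" ".join(test))  # The white was cat big
```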
This paper develops a model that addresses sentence embedding, a central topic in current natural language processing research, using recurrent neural networks (RNNs) with Long Short-Term Memory (LSTM) cells. The proposed LSTM-RNN model sequentially takes each word in a sentence, extracts its information, and embeds it into a semantic vector. Owing to its ability to capture long-term dependencies, the LSTM-RNN accumulates increasingly richer information as it goes through the sentence, and when it reaches the last word, the hidden layer of the network provides a semantic representation of the whole sentence. The LSTM-RNN is trained in a weakly supervised manner on user click-through data logged by a commercial web search engine. Visualization and analysis are performed to understand how the embedding process works. The model is found to automatically attenuate unimportant words and detect the salient keywords in the sentence. Furthermore, these detected keywords are found to activate different cells of the LSTM-RNN, where words belonging to a similar topic activate the same cell. As a semantic representation of the sentence, the embedding vector can be used in many different applications. These automatic keyword detection and topic allocation abilities allow the network to perform document retrieval, a difficult language processing task, in which the similarity between a query and a document is measured by the distance between their corresponding sentence embedding vectors computed by the LSTM-RNN. On a web search task, the LSTM-RNN embedding is shown to significantly outperform several existing state-of-the-art methods. We emphasize that the proposed model generates sentence embedding vectors that are especially useful for web document retrieval tasks. A comparison with a well-known general sentence embedding method, Paragraph Vector, is performed; the results show that the proposed method significantly outperforms Paragraph Vector on the web document retrieval task.
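The embedding-and-retrieval idea described in this abstract can be sketched in a few lines: an LSTM consumes a sentence word by word, the hidden state after the last word serves as the sentence embedding, and query-document similarity is the cosine between embeddings. This is a minimal toy sketch with random stand-in weights and tiny dimensions, not the trained model from the paper:

```python
import math
import random

random.seed(0)
D_IN, D_H = 4, 6  # toy dimensions, stand-ins for the paper's settings

# Random stand-in weights: one matrix per gate (input, forget, output, cell),
# each mapping the concatenation [x; h] to the hidden dimension.
W = [[[random.gauss(0, 0.1) for _ in range(D_IN + D_H)] for _ in range(D_H)]
     for _ in range(4)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matvec(M, v):
    return [sum(m * u for m, u in zip(row, v)) for row in M]

def lstm_embed(word_vectors):
    """Run the LSTM over a sentence; the last hidden state is the embedding."""
    h, c = [0.0] * D_H, [0.0] * D_H
    for x in word_vectors:                       # one step per word
        zi, zf, zo, zg = (matvec(Wg, x + h) for Wg in W)
        i = [sigmoid(v) for v in zi]             # input gate
        f = [sigmoid(v) for v in zf]             # forget gate
        o = [sigmoid(v) for v in zo]             # output gate
        g = [math.tanh(v) for v in zg]           # candidate cell state
        c = [fj * cj + ij * gj for fj, cj, ij, gj in zip(f, c, i, g)]
        h = [oj * math.tanh(cj) for oj, cj in zip(o, c)]
    return h

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den

# Query-document similarity = cosine between their sentence embeddings.
query = [[random.gauss(0, 1) for _ in range(D_IN)] for _ in range(3)]
doc = [[random.gauss(0, 1) for _ in range(D_IN)] for _ in range(5)]
sim = cosine(lstm_embed(query), lstm_embed(doc))
```

In a real system the word vectors and gate weights would be learned from the click-through data; here they are random, so `sim` is only illustrative of the retrieval mechanics.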
Writing is one of the most important English skills, and producing good writing is difficult; errors inevitably arise in the writing process. The researcher was therefore interested in analyzing the kinds of errors in student writing. This study identifies the errors made by third-semester students of the English education department at Universitas PGRI Madiun in the 2021/2022 academic year. A descriptive qualitative method was used. The researcher analyzed subject-verb agreement errors, verb tense errors, verb form errors, singular/plural noun ending errors, and word form errors. The data were obtained in five steps: collecting the sources of the data, understanding the content of the writing, selecting the tests that contain errors, analyzing the collected data, and drawing conclusions. The results, from lowest to highest frequency, are as follows: singular/plural noun ending errors (3.40%), subject-verb agreement errors (12.24%), verb form errors (13.61%), verb tense errors (30.61%), and, most frequent, word form errors (40.14%). For future research, the researcher suggests that many other kinds of errors remain to be analyzed, and that error analysis could be extended to other English skills such as reading, speaking, and listening.
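The reported percentages are straightforward to compute from raw error tallies. The counts below are purely hypothetical, chosen only because they reproduce the study's percentages (under the assumption of 147 total errors); the study itself reports only percentages:

```python
# Hypothetical raw counts (NOT from the study) that reproduce its percentages.
counts = {
    "singular/plural noun ending": 5,
    "subject-verb agreement": 18,
    "verb form": 20,
    "verb tense": 45,
    "word form": 59,
}
total = sum(counts.values())
percentages = {k: round(100 * v / total, 2) for k, v in counts.items()}
print(percentages)
```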
Massively multilingual sentence representation models, e.g., LASER, SBERT-distill, and LaBSE, significantly improve cross-lingual downstream tasks. However, the use of large amounts of data or inefficient model architectures makes it computationally heavy to train a new model for one's preferred languages and domains. To resolve this issue, we introduce an efficient and effective massively multilingual sentence embedding model (EMS), using cross-lingual token-level reconstruction (XTR) and sentence-level contrastive learning as training objectives. Compared with related studies, the proposed model can be trained efficiently using significantly fewer parallel sentences and GPU computation resources. Empirical results show that the proposed model yields significantly better or comparable results on cross-lingual sentence retrieval, zero-shot cross-lingual genre classification, and sentiment classification. Ablation analyses demonstrate the efficiency and effectiveness of each component of the proposed model.
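The sentence-level contrastive objective can be sketched as follows. This is an InfoNCE-style formulation with in-batch negatives, a common choice for parallel-sentence training; the paper's exact loss, temperature, and batching are assumptions here, not confirmed details:

```python
import math

def info_nce(src_embs, tgt_embs, temperature=0.1):
    """Average -log softmax probability of each sentence's true translation."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    def cos(a, b):
        return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))
    loss = 0.0
    for i, s in enumerate(src_embs):
        # Every target in the batch is a candidate; index i is the true pair.
        logits = [cos(s, t) / temperature for t in tgt_embs]
        log_z = math.log(sum(math.exp(l) for l in logits))
        loss += log_z - logits[i]
    return loss / len(src_embs)

# Toy embeddings: aligned translation pairs yield a much lower loss than
# mismatched ones, which is exactly what the objective pushes the model toward.
src = [[1.0, 0.0], [0.0, 1.0]]
aligned = [[1.0, 0.0], [0.0, 1.0]]
shuffled = [[0.0, 1.0], [1.0, 0.0]]
print(info_nce(src, aligned), info_nce(src, shuffled))
```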
In the emerging geopolitics of the modern world, English has assumed the undisputed status of the preferred international language of communication. Thus, although cultures across the world are keen on self-preservation, that English will make inroads into people's everyday lives is a foregone conclusion. Albanian and English belong to the same language family (Indo-European) and hence share many commonalities. At the same time, they also depart from these shared characteristics in many ways, and research into these differences is of great significance from the language learner's vantage point. This paper analyses compound and complex sentences in English and Albanian. Both languages have compound sentences; however, these sentences show certain similarities as well as dissimilarities between the two languages, and we contrast their structures.
In Colombia, case law is an auxiliary source of law, as enshrined in Article 230 of the 1991 Political Constitution, making it a fundamental factor through which jurists of all kinds attempt to understand the judicial spirit. A judgment may thus give rise to a summary, commentary, or analysis, so it is essential to know the different models of jurisprudential analysis in use, in particular the methodology proposed by Professor Diego López Medina (2006) in his work El derecho de los juegos, El derecho de los jueces, which makes it possible to classify the considerations of a judgment as either ratio decidendi or obiter dicta, contributing theoretical and empirical foundations that accompany the solution of the legal problem posed. The objective of this article is therefore to propose a reconstruction of the method of jurisprudential analysis, together with a schematization of the ways of carrying out that analysis, and finally to provide various tools that contribute to its development, thereby bringing academics, students, professors, and legal professionals closer to the construction of jurisprudential analysis.
Evidence from 3 experiments reveals interference effects from structural relationships that are inconsistent with any grammatical parse of the perceived input. Processing disruption was observed when items occurring between a head and a dependent overlapped with either (or both) syntactic or semantic features of the dependent. Effects of syntactic interference occur in the earliest online measures in the region where the retrieval of a long-distance dependent occurs. Semantic interference effects occur in later online measures at the end of the sentence. Both effects endure in offline comprehension measures, suggesting that interfering items participate in incorrect interpretations that resist reanalysis. The data are discussed in terms of a cue-based retrieval account of parsing, which reconciles the fact that the parser must violate the grammar in order for these interference effects to occur. Broader implications of this research indicate a need for a precise specification of the interface between the parsing mechanism and the memory system that supports language comprehension.
Recently, owing to their ability to handle sequences of different lengths, neural networks, and long short-term memory (LSTM) networks in particular, have achieved great success in sentiment classification and are widely used for it. However, one remaining challenge is modeling long texts so as to exploit the semantic relations between sentences in document-level sentiment classification. Existing neural network models are not powerful enough to capture sufficient sentiment information over relatively long spans of time-steps. To address this problem, we propose a new neural network model (SR-LSTM) with two hidden layers. The first layer learns sentence vectors that represent the semantics of sentences with a long short-term memory network, and in the second layer the relations between sentences are encoded in a document representation. We also propose an approach to improve the model that first cleans the datasets by removing sentences with little emotional polarity, providing a better input. The proposed models outperform state-of-the-art models on three publicly available document-level review datasets.
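The cleaning step described above can be sketched as a simple lexicon-based filter. The lexicon, scoring rule, and threshold here are all hypothetical stand-ins for whatever polarity measure the authors actually used:

```python
# Toy polarity lexicon (hypothetical); real systems would use a full
# sentiment lexicon or a trained polarity classifier.
POLARITY = {"great": 1.0, "terrible": -1.0, "good": 0.7, "bad": -0.7}

def polarity_score(sentence):
    """Mean absolute polarity of the words in the sentence."""
    words = sentence.lower().split()
    return sum(abs(POLARITY.get(w, 0.0)) for w in words) / len(words)

def clean_document(sentences, threshold=0.1):
    """Drop sentences that carry little emotional polarity."""
    return [s for s in sentences if polarity_score(s) >= threshold]

doc = ["The food was great", "We arrived at seven", "Service was terrible"]
print(clean_document(doc))  # the neutral middle sentence is filtered out
```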