Meaning and Grammar of Nouns and Verbs. Doris Gerland, Christian Horn, Anja Latrouite, Albert Ortmann
2014, 2021, 2014-11-05
eBook
Open access
The papers collected in this book cover contemporary and original research on semantic and grammatical issues of nouns and noun phrases, verbs and sentences, and aspects of the combination of nouns and verbs, in a great variety of languages. A special focus is put on noun types, tense and aspect semantics, granularity of verb meaning, and subcompositionality. The investigated languages and language groups include Austronesian, East Asian, Slavic, German, English, Hungarian and Lakhota. The collection provided in this book will be of interest to researchers and advanced students specialising in the fields of semantics, morphology, syntax, typology, and cognitive sciences.
Corpus linguistics investigates language using extensive text databases. Tools assist researchers in analyzing, extracting, and interpreting linguistic information efficiently. If researchers rely only on traditional tools in corpus linguistic analysis, however, they will lack the comprehensiveness and efficiency required to navigate language data effectively and derive valuable insights from it. This paper employed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) approach to find the primary data, based on the keywords corpus linguistics, corpus analysis, computational linguistics, text corpora, and tool support. Using this method, we applied advanced search techniques on Scopus and Web of Science (WoS) and identified 28 records (N = 28) pertinent to the study. Expert scholars decided on themes based on the research problem: (i) types of corpus tools and their uses; (ii) their contributions and capabilities; and (iii) limitations of corpus tools. All the tools were used in interdisciplinary studies. In summary, this systematic review uncovers pivotal findings at the intersection of computational tools and corpus analysis, enriching linguistic knowledge. It highlights the interdisciplinary potential of corpus-based analysis in advancing linguistic tools, their applications, and language analysis.
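As a minimal illustration of the kind of frequency analysis the reviewed corpus tools automate (this sketch is not one of the surveyed tools, just an assumption-free standard-library example), a few lines of Python:

```python
import re
from collections import Counter

def word_frequencies(text, top_n=5):
    """Tokenize a text into lowercase words and count occurrences."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(tokens).most_common(top_n)

# A toy "corpus"; real corpus tools operate on millions of tokens.
corpus = "The cat sat on the mat. The dog sat on the log."
print(word_frequencies(corpus, top_n=3))
```

Dedicated corpus tools add concordancing, collocation statistics, and annotation layers on top of this basic counting step.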
This paper shows new possibilities for managing the tourism lexicon based on the analysis of promotional websites for tourist accommodation, with which a database has been populated. It illustrates the decisive influence of the Internet channel on tourism promotion and the distinctive characteristics of the new tourism discourse generated on the web 2.0. Marketing experts and linguists are adapting to a radical change of paradigm and genre (cybergenre, macrogenre) with competitive products based on the continuous attribution of new meanings to the lexicon. Finally, a working model is presented based on the corpus resulting from a representative group of tourist accommodations in the Balearic Islands.
My focus is the 'logico-rhetorical module' (Sperber, 2000). This mental module, Sperber hypothesizes, is an evolved ability of human beings to examine critically what someone is saying, for example, to detect inconsistency or inadequate evidence in an argument. On the assumption that we have this natural ability, Chilton (2005) questions the need for Critical Discourse Analysis; in contrast, on his reading of Sperber's work, Hart (this issue) argues the opposite. In this article, I agree with Chilton's (2005) stance to the extent that the competence of the logico-rhetorical module is, generally speaking, adequate for enabling critical engagement with verbal input. That said, I highlight two (non-competence-related) limitations of the logico-rhetorical module for detecting inconsistency in arguments. To address these limitations, I hold that a new approach is needed in Critical Discourse Analysis: one which draws on the corpus linguistic method and which I refer to as Electronic Deconstruction.
This article tackles multilingual automatic alignment. Alignment refers to the process by which segments that are translations of one another are automatically matched. Instead of comparing only pairs of languages at the sentence level, as is usually done to mirror the human process of translation, the computer is used here for its capacity to infer semantic alignment from a collection of texts that are translations of the same content. The corpus contains press releases from Europa, the European Community website, available in up to 23 languages. The alignment process takes advantage of frequency similarity between different linguistic versions of a document by computing matching features for each repeated string in all versions, in order to find reliable anchors for linking the versions. This raises the question of the best granularity for bringing out semantic equivalences when comparing two linguistic versions: character N-grams or word N-grams. Alignment systems are traditionally based on word N-gram splitting. The morphological variety of languages, even within a single linguistic family, quickly shows that word granularity is inadequate for a widely multilingual system, i.e. a language-independent system able to handle inflectional as well as positional languages. Instead, when starting from a multilingual collection to focus on pairs of texts, we argue that character N-gram alignment is more efficient than word N-gram alignment.
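The character N-gram comparison favoured above can be sketched in a few lines of Python. The abstract does not specify the exact matching features, so the use of trigram counts and cosine similarity here is an assumption; the point is only that related linguistic versions share many character N-grams even when word forms differ:

```python
from collections import Counter
from math import sqrt

def char_ngrams(text, n=3):
    """Count overlapping character n-grams of a text."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine_similarity(a, b):
    """Cosine similarity between two n-gram count vectors (Counters)."""
    dot = sum(count * b[gram] for gram, count in a.items())
    norm = sqrt(sum(c * c for c in a.values())) * sqrt(sum(c * c for c in b.values()))
    return dot / norm if norm else 0.0

# Two linguistic versions of the same segment (illustrative English/German),
# plus an unrelated segment for contrast:
en = char_ngrams("the european commission announced")
de = char_ngrams("die europaeische kommission kuendigte an")
unrelated = char_ngrams("quick brown foxes jump repeatedly")

print(cosine_similarity(en, de))
print(cosine_similarity(en, unrelated))
```

Shared stems such as "europ-" and "-mission" give the translated pair a higher profile similarity than the unrelated pair, which is the property an anchor-finding step can exploit, independently of word segmentation.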
Contemporary crime novels often contain detailed literary representations of urban life worlds. These stagings can provide access to city-specific patterns and structures of thought, action and feeling, as well as to locally established bodies of knowledge and processes of sense-making. Their systematic analysis can therefore generate insights into the intrinsic logic of cities. To grasp such patterns at the city level, a broad empirical basis is needed, but the study of large amounts of literary works poses a methodological challenge. This article presents a mix of methods that permits the analysis of vast quantities of (literary) texts by combining classical qualitative close reading with elements of computer-aided qualitative content analysis, basic instruments from corpus linguistics, and the methodology of distant reading in an iterative research process. It illustrates how to analyze qualitative data both quantitatively and on different levels with regard to social and spatial aspects of the depicted life worlds, thereby showing how novels can serve as a data basis for urban sociology and for interdisciplinary research questions about the distinctiveness of cities.
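One of the basic corpus-linguistic instruments mentioned above is the keyword-in-context (KWIC) concordance, which lets a close reader jump to every passage where a term of interest occurs. A minimal Python sketch (the excerpt, keyword, and window size are purely illustrative):

```python
def kwic(text, keyword, window=3):
    """Return keyword-in-context lines: `window` words of context per side."""
    words = text.split()
    lines = []
    for i, w in enumerate(words):
        if w.lower().strip(".,;:!?") == keyword.lower():
            left = " ".join(words[max(0, i - window):i])
            right = " ".join(words[i + 1:i + 1 + window])
            lines.append(f"{left} [{w}] {right}")
    return lines

novel_excerpt = ("The city breathed at night. Every street in the city "
                 "kept its own secrets, and the city never forgot them.")
for line in kwic(novel_excerpt, "city"):
    print(line)
```

Run over many novels at once, such concordances support the iterative movement between distant reading (counting and locating occurrences) and close reading (interpreting each passage in context) that the mixed-methods design describes.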