With its diameter of 17 m, the MAGIC telescope is the largest Cherenkov detector for gamma-ray astrophysics. It is sensitive to photons above an energy of 30 GeV. MAGIC started operations in October 2003 and is currently taking data. This report summarizes its main characteristics, its first results and its potential for physics.
In this article we discuss the possibility of using GLAST observations of standard gamma-ray sources, such as the Crab Nebula, to calibrate Imaging Air Cherenkov detectors, MAGIC in particular, and to optimise their energy resolution. We show that at around 100 GeV the absolute energy calibration uncertainty of Cherenkov telescopes can be reduced to <10% by means of such a cross-calibration procedure.
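The cross-calibration idea above can be illustrated numerically. The following sketch is ours, not the paper's procedure: it assumes GLAST provides a reference Crab power-law spectrum, injects a hypothetical energy-scale error into a Cherenkov telescope's reconstructed energies, and recovers that scale factor by a grid search. All parameter values (amplitude, spectral index, the 1.15 scale) are illustrative.

```python
# Illustrative cross-calibration sketch (not the paper's actual procedure).
# Assume GLAST measures the Crab flux as a power law dN/dE = A * E**-gamma,
# and a Cherenkov telescope reconstructs energies with an unknown scale
# factor s (E_true = s * E_reco). We recover s by a simple grid search.

def crab_flux(energy_gev, amplitude=3.0e-7, index=2.6):
    """Reference power-law flux (illustrative parameters)."""
    return amplitude * energy_gev ** (-index)

def observed_flux(energy_reco, true_scale=1.15):
    """Telescope flux evaluated at mis-calibrated energies."""
    return crab_flux(true_scale * energy_reco)

def fit_energy_scale(energies):
    """Grid-search the scale factor that best matches the reference flux."""
    best_s, best_chi2 = None, float("inf")
    for step in range(80, 151):          # scan s in [0.80, 1.50]
        s = step / 100.0
        chi2 = sum(
            (observed_flux(e) - crab_flux(s * e)) ** 2 / crab_flux(s * e) ** 2
            for e in energies
        )
        if chi2 < best_chi2:
            best_s, best_chi2 = s, chi2
    return best_s

energies = [50.0, 80.0, 100.0, 150.0, 200.0]   # GeV, illustrative
print(fit_energy_scale(energies))              # recovers the injected 1.15
```

In practice the comparison would be done on binned event counts with statistical errors, but the structure is the same: the overlap region around 100 GeV constrains the telescope's absolute energy scale against the satellite reference.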
Argument mining is the automatic identification of argumentative structures in texts. In this work we leverage existing discourse-level annotations to facilitate the identification of argumentative components and relations in scientific texts, which has been recognized as a particularly challenging task. We propose a new annotation schema and use it to augment a corpus of computational linguistics abstracts that had previously been annotated with discourse units and relations. Our initial experiments with the enriched corpus confirm the potential value of incorporating discourse information in argument mining tasks. In order to tackle the limitations posed by the lack of corpora containing both discourse and argumentative annotations, we explore two transfer learning approaches in which discourse parsing is used as an auxiliary task when training argument mining models. In this case, as no discourse information is used as input, the resulting models could be used to predict the argumentative structure of unannotated texts.
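The auxiliary-task setup described above can be sketched in miniature. This is our illustration, not the paper's architecture: a single shared weight vector is updated by losses from both the main argument-mining task and the auxiliary discourse task during training, while inference uses only the main-task input. The toy data and model are invented for the example.

```python
# Minimal multi-task sketch (illustrative, not the paper's models): a shared
# weight vector is updated by the main argument-mining loss and by an
# auxiliary discourse-parsing loss, so discourse supervision shapes the
# shared representation without being needed at prediction time.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sgd_step(weights, features, label, lr=0.1):
    """One logistic-regression gradient step on the shared weights."""
    pred = sigmoid(sum(w * x for w, x in zip(weights, features)))
    grad = pred - label
    return [w - lr * grad * x for w, x in zip(weights, features)]

# Toy data: (features, label) pairs for each task.
argmine_data = [([1.0, 0.0], 1), ([0.0, 1.0], 0)]
discourse_data = [([1.0, 1.0], 1), ([0.0, 0.0], 0)]

weights = [0.0, 0.0]
for _ in range(200):                      # alternate main and auxiliary batches
    for feats, label in argmine_data:
        weights = sgd_step(weights, feats, label)
    for feats, label in discourse_data:   # auxiliary task, training only
        weights = sgd_step(weights, feats, label)

# At inference time no discourse annotation is required:
score = sigmoid(sum(w * x for w, x in zip(weights, [1.0, 0.0])))
print(score > 0.5)   # the argumentative example is classified positive
```

Real systems would use a shared neural encoder with separate task heads, but the key property shown here carries over: the auxiliary labels influence only training, so the trained model can be applied to texts with no discourse annotation.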
•Superhydrophobicity can be achieved by silanization reaction on silica nanoparticles.
•High-performance oil-water separation meshes present good salt resistance.
•Anodization of the meshes is necessary to improve the interaction with silica nanoparticles.
To achieve highly efficient separation of oil/water mixtures, superhydrophobic/oleophilic membranes based on covalent silanization of silica nanoparticles on metallic meshes were obtained. The membranes were prepared in a scalable two-step process. The characterization of the modified nanoparticles as well as the membranes includes XPS, ATR, solid-state NMR, SEM, EDS, TGA and tensile strength assays. In addition, several experiments were conducted in order to characterize the superhydrophobic behavior (water contact angle (WCA) measurements, oil flux, maximum water pressure on the membrane, affinity of the particles for the organic phase, etc.). Although the silica nanoparticles are highly hydrophilic, after the modification they become strongly hydrophobic, providing chemical resistance to hard water. The system allowed the separation of oil-water mixtures, being washable and reusable.
•The use of text summarization and fuzzy logic in automated text assessment.
•The use of fuzzy logic summarization in Virtual Learning Environments.
•A tool implemented as a prototype to evaluate the proposed method.
•Experimental results showing improvements over existing methods.
In the last two decades, the text summarization task has gained much importance because of the large amount of online data, and its potential to extract useful information and knowledge in a way that could be easily handled by humans and used for a myriad of purposes, including expert systems for text assessment. This paper presents an automatic process for text assessment that relies on fuzzy rules over a variety of extracted features to find the most important information in the assessed texts. The automatically produced summaries of these texts are compared with reference summaries created by domain experts. Unlike other proposals in the literature, our method summarizes text by investigating correlated features to reduce dimensionality, and consequently the number of fuzzy rules used for text summarization. Thus, the proposed approach for text summarization with a relatively small number of fuzzy rules can benefit the development and use of future expert systems able to automatically assess writing. The proposed summarization method has been trained and tested in experiments using a dataset of Brazilian Portuguese texts provided by students in response to tasks assigned to them in a Virtual Learning Environment (VLE). The proposed approach was compared with other methods, including a naive baseline, Score, Model and Sentence, using ROUGE measures. The results show that the proposal provides better f-measure (with 95% CI) than the aforementioned methods.
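To make the fuzzy-rule scoring idea concrete, here is a minimal sketch of our own, not the paper's rule base: each sentence receives fuzzy memberships for two invented features (position in the text and keyword density), a single Mamdani-style rule combines them with min, and the top-scoring sentences form the extractive summary.

```python
# Illustrative fuzzy-rule sentence scorer (a sketch of the general idea, not
# the paper's rule base). Each sentence gets fuzzy memberships for two
# features; a Mamdani-style rule combines them with min, and the top-scoring
# sentences form the extractive summary.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_score(position, keyword_ratio):
    """Rule: IF position is early AND keyword_ratio is high THEN important."""
    early = tri(position, -0.5, 0.0, 0.6)             # membership "early"
    keyword_rich = tri(keyword_ratio, 0.2, 1.0, 1.8)  # membership "high"
    return min(early, keyword_rich)                   # fuzzy AND -> min

def summarize(sentences, keywords, n=1):
    """Score every sentence and return the n best, in document order."""
    scored = []
    for i, sent in enumerate(sentences):
        pos = i / max(len(sentences) - 1, 1)
        words = sent.lower().split()
        ratio = sum(w in keywords for w in words) / len(words)
        scored.append((fuzzy_score(pos, ratio), i, sent))
    top = sorted(scored, reverse=True)[:n]
    return [s for _, _, s in sorted(top, key=lambda t: t[1])]

sents = [
    "Fuzzy logic scores each sentence by importance",
    "The weather was pleasant that day",
    "Scores select sentences for the summary",
]
print(summarize(sents, {"fuzzy", "sentence", "summary", "scores"}))
```

The paper's contribution of reducing the rule count via correlated features would show up here as fewer antecedent features per rule; the scoring-and-selection loop itself stays the same.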
In the field of automatic text simplification, assessing whether or not the meaning of the original text has been preserved during simplification is of paramount importance. Metrics relying on n-gram overlap may struggle to deal with simplifications which replace complex phrases with their simpler paraphrases. Evaluation metrics for meaning preservation based on large language models (LLMs), such as BertScore in machine translation or QuestEval in summarization, have been proposed. However, none has a strong correlation with human judgment of meaning preservation. Moreover, such metrics have not been assessed in the context of text simplification research. In this study, we present a meta-evaluation of several metrics we apply to measure content similarity in text simplification. We also show that these metrics are unable to pass two trivial, inexpensive content preservation tests. Another contribution of this study is MeaningBERT (https://github.com/GRAAL-Research/MeaningBERT), a new trainable metric designed to assess meaning preservation between two sentences in text simplification, and we show how it correlates with human judgment. To demonstrate its quality and versatility, we also present a compilation of datasets used to assess meaning preservation and benchmark our study against a large selection of popular metrics.
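Sanity checks of the kind the abstract alludes to can be sketched generically. The two tests below are our illustrative reading, not necessarily the paper's exact tests, and the token-overlap metric is a toy stand-in; MeaningBERT's actual API is not assumed here.

```python
# Sketch of two trivial content-preservation sanity checks (our illustrative
# versions): a metric should give a maximal score to a sentence paired with
# itself, and a near-zero score to a pair of unrelated sentences. A toy
# token-overlap metric stands in for a real learned metric.

def overlap_metric(src, simplified):
    """Toy meaning-preservation score in [0, 100] via token-set overlap."""
    a, b = set(src.lower().split()), set(simplified.lower().split())
    return 100.0 * len(a & b) / len(a | b) if a | b else 100.0

def passes_identity_test(metric, sentence, threshold=95.0):
    """Check 1: a sentence compared with itself should score ~100."""
    return metric(sentence, sentence) >= threshold

def passes_unrelated_test(metric, src, unrelated, threshold=5.0):
    """Check 2: two unrelated sentences should score ~0."""
    return metric(src, unrelated) <= threshold

src = "the cat sat on the mat"
print(passes_identity_test(overlap_metric, src))            # True
print(passes_unrelated_test(overlap_metric, src,
                            "quarterly revenue grew fast")) # True
```

The point of such checks is that they cost nothing to run, yet, per the abstract, several popular learned metrics fail them, which makes them a useful gate before any expensive meta-evaluation against human judgments.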
•We built state-of-the-art lexical simplification systems for Spanish.
•We produced new resources for lexical simplification for Spanish.
•New resources improve grammaticality of simplified output.
•New resources improve meaning preservation during simplification.
•New resources increase the number and correctness of lexical changes.
The current bottleneck of all data-driven lexical simplification (LS) systems is the scarcity and small size of the parallel corpora (original sentences and their manually simplified versions) used for training. This is especially pronounced for languages other than English. We address this problem, taking Spanish as an example of such a language, by building new simplification-specific datasets of synonyms and paraphrases using freely available resources. We test their usefulness in the LS task by adding them, in various combinations, to the existing text simplification (TS) training dataset in a phrase-based statistical machine translation (PBSMT) approach. Our best systems significantly outperform the state-of-the-art LS systems for Spanish in terms of the number of transformations performed and the grammaticality, simplicity and meaning preservation of the output sentences. The results of a detailed manual analysis show that some of the newly built TS resources, although they have good lexical coverage and lead to a high number of transformations, often change the original meaning and do not generate simpler output when used in this PBSMT setup. Good combinations of these additional resources with the TS training dataset and a good choice of language model, in contrast, improve the lexical coverage and produce sentences which are grammatical, simpler than the original, and preserve the original meaning well.
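A full PBSMT pipeline is beyond a short example, but the role of a simplification-specific synonym resource can be illustrated with a minimal dictionary-driven substitution sketch. This is our toy stand-in, not the paper's system, and the dictionary entries are invented examples of complex-to-simple Spanish pairs.

```python
# Minimal illustration of how a simplification-specific synonym resource can
# drive lexical substitutions (a toy stand-in for the PBSMT setup, not the
# paper's system; the dictionary entries are invented examples).

SIMPLE_SYNONYMS = {
    "utilizar": "usar",        # Spanish: "to utilize" -> "to use"
    "finalizar": "terminar",   # "to finalize" -> "to finish"
    "adquirir": "comprar",     # "to acquire" -> "to buy"
}

def simplify(sentence, lexicon=SIMPLE_SYNONYMS):
    """Replace each known complex word with its simpler synonym."""
    out = []
    for token in sentence.split():
        # Strip trailing punctuation so the dictionary lookup still matches.
        core = token.rstrip(".,;")
        tail = token[len(core):]
        out.append(lexicon.get(core.lower(), core) + tail)
    return " ".join(out)

print(simplify("Debes utilizar la tarjeta para adquirir el billete."))
# -> "Debes usar la tarjeta para comprar el billete."
```

The PBSMT approach in the paper goes well beyond such one-to-one substitution (phrase tables, a language model to keep output fluent), which is exactly why, as the abstract notes, resource quality and language-model choice determine whether the added synonyms help or hurt meaning preservation.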