Abstract
If trolley problems are accused of simplifying, does this mean anthropologists only ever make things more complex? This afterword focuses on anthropology's own techniques of simplification, in particular comparison.
Making It Simplext
Saggion, Horacio; Štajner, Sanja; Bott, Stefan ...
ACM Transactions on Accessible Computing, 06/2015, Volume 6, Issue 4
Journal Article
Peer-reviewed
The way in which a text is written can be a barrier for many people. Automatic text simplification is a natural language processing technology that, when mature, could be used to produce texts adapted to the specific needs of particular users. Most research in the area of automatic text simplification has dealt with the English language. In this article, we present results from the Simplext project, which is dedicated to automatic text simplification for Spanish. We present a modular system with dedicated procedures for syntactic and lexical simplification, grounded in the analysis of a corpus manually simplified for people with special needs. We carried out an automatic evaluation of the system's output, taking into account the interaction between three modules dedicated to different simplification aspects. One evaluation is based on readability metrics for Spanish and shows that the system is able to reduce the lexical and syntactic complexity of the texts. We also show, by means of a human evaluation, that sentence meaning is preserved in most cases. Although our work represents the first automatic text simplification system for Spanish to address several linguistic aspects, our results are comparable to the state of the art in English automatic text simplification.
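The readability evaluation mentioned above rests on surface statistics such as sentence length and word length. As an illustration only (the Simplext project uses Spanish-specific metrics not detailed in the abstract), a minimal sketch of a Flesch-style reading-ease score with a naive vowel-group syllable counter might look like:

```python
import re

def count_syllables(word):
    # Naive heuristic: one syllable per run of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch reading ease (English formula): higher scores = easier text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

A simplified sentence with short words scores far higher than a long, polysyllabic one, which is the kind of lexical/syntactic complexity reduction the evaluation checks for.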
In the field of automatic text simplification, assessing whether the meaning of the original text has been preserved during simplification is of paramount importance. Metrics relying on n-gram overlap may struggle with simplifications that replace complex phrases with simpler paraphrases. Evaluation metrics for meaning preservation based on large language models (LLMs), such as BERTScore in machine translation or QuestEval in summarization, have been proposed, but none correlates strongly with human judgments of meaning preservation. Moreover, such metrics have not been assessed in the context of text simplification research. In this study, we present a meta-evaluation of several metrics we apply to measure content similarity in text simplification. We also show that these metrics fail two trivial, inexpensive content preservation tests. Another contribution of this study is MeaningBERT (
https://github.com/GRAAL-Research/MeaningBERT
), a new trainable metric designed to assess meaning preservation between two sentences in text simplification, and we show how it correlates with human judgment. To demonstrate its quality and versatility, we also present a compilation of datasets used to assess meaning preservation and benchmark our study against a large selection of popular metrics.
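The "trivial, inexpensive content preservation tests" referred to above include checks such as scoring identical sentence pairs near the maximum and unrelated pairs near the minimum. A bag-of-words cosine similarity (a deliberately simple baseline, not MeaningBERT itself) illustrates the kind of metric these sanity tests probe:

```python
import math
from collections import Counter

def cosine_similarity(a, b):
    """Bag-of-words cosine similarity between two sentences, in [0, 1]."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0
```

This baseline passes the identical-pair and unrelated-pair tests by construction, but, exactly as the abstract notes for n-gram overlap, it scores valid paraphrases poorly because it has no notion of word meaning.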
A point cloud, as an information-intensive 3D representation, usually requires a large amount of transmission, storage and computing resources, which seriously hinders its use in many emerging fields. In this paper, we propose a novel point cloud simplification method, Approximate Intrinsic Voxel Structure (AIVS), to meet the diverse demands of real-world application scenarios. The method includes point cloud pre-processing (denoising and down-sampling), AIVS-based realization for isotropic simplification, and flexible simplification with intrinsic control of point distance. To demonstrate the effectiveness of the proposed AIVS-based method, we conducted extensive experiments comparing it with several relevant point cloud simplification methods on three public datasets: Stanford, SHREC, and RGB-D scene models. The experimental results indicate that AIVS has great advantages over its peers in terms of moving least squares (MLS) surface approximation quality, curvature-sensitive sampling, sharp-feature keeping and processing speed. The source code of the proposed method is publicly available.
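The AIVS algorithm itself is not detailed in the abstract, but its pre-processing stage mentions down-sampling. A generic voxel-grid down-sampling sketch (a standard technique, not the AIVS method) replaces all points that fall in the same cubic cell by their centroid:

```python
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    """Collapse all points sharing a cubic voxel into that voxel's centroid,
    reducing the point count while preserving spatial coverage."""
    buckets = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        buckets[key].append((x, y, z))
    # Centroid per voxel: average each coordinate axis over the bucket.
    return [tuple(sum(axis) / len(bucket) for axis in zip(*bucket))
            for bucket in buckets.values()]
```

The voxel size directly trades density for fidelity, which is the same knob the paper's "intrinsic control of point distance" generalizes.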
An integrative view of foveated rendering
Mohanto, Bipul; Islam, ABM Tariqul; Gobbetti, Enrico ...
Computers & Graphics, February 2022, Volume 102
Journal Article
Peer-reviewed
Open access
Foveated rendering adapts the image synthesis process to the user's gaze. By exploiting the human visual system's limitations, in particular its reduced acuity in peripheral vision, it strives to deliver high-quality visual experiences at greatly reduced computational, storage, and transmission costs. Despite the very substantial progress made in the past decades, the solution landscape is still fragmented, and several research problems remain open. In this work, we present an up-to-date integrative view of the domain from the point of view of the rendering methods employed, discussing general characteristics, commonalities, differences, advantages, and limitations. We cover, in particular, techniques based on adaptive resolution, geometric simplification, shading simplification, chromatic degradation, as well as spatio-temporal deterioration. Next, we review the main areas where foveated rendering is already in use today. We finally point out relevant research issues and analyze research trends.
• A comprehensive survey of foveated rendering.
• General characteristics, commonalities, differences, advantages, and limitations.
• Exploit reduced acuity in peripheral vision for rendering at reduced costs.
• Classification and discussion of different techniques.
• We discuss application areas where foveated rendering is already in use today.
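As a toy illustration of the adaptive-resolution family of techniques the survey covers, a renderer can map a pixel's angular eccentricity from the gaze point to a shading rate. The thresholds below are hypothetical placeholders, not values from the survey:

```python
import math

def shading_rate(pixel, gaze, deg_per_unit=1.0):
    """Pick a coarse/medium/fine shading rate from the pixel's angular
    eccentricity relative to the gaze point (hypothetical thresholds)."""
    ecc = math.hypot(pixel[0] - gaze[0], pixel[1] - gaze[1]) * deg_per_unit
    if ecc < 5.0:      # foveal region: shade every pixel
        return 1
    elif ecc < 15.0:   # parafoveal ring: shade every 2nd pixel
        return 2
    else:              # periphery: shade every 4th pixel
        return 4
```

Real systems blend rates smoothly and account for display geometry, but the principle is the same: spend shading work where acuity is high.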
This paper proposes a new Padé-via-Arnoldi algorithm with single-size matrix simplification for electromagnetic (EM) fast frequency sweep. New equations are derived to reduce the double-size system matrix to a single-size system matrix. We also propose a systematic algorithm to calculate S-parameters using the simplified single-size system matrix. Using the proposed algorithm, the EM responses can be obtained with the same accuracy while consuming much less time than the existing double-size-matrix Padé via Lanczos. The proposed algorithm is demonstrated on two microwave examples.
This article identifies current problems in the legislative regulation of priority areas of innovative activity in Ukraine and develops proposals for their solution. It emphasizes the need for rapid development of a state target program for forecasting the scientific, technological and innovative development of Ukraine for 2023-2032, which would contribute to the formation of financial opportunities for the development of the national innovation system. Arguments are given that raising the level of innovative development of Ukraine in the war and post-war period will be aided by: simplification of review and approval procedures; aligning the content of legislative acts regulating the determination of priority areas of innovative activity; and rapid development of a state target program identifying the most promising directions for scientific, technological and innovative activity. The results obtained support the expediency of deploying a single national strategy for the development of innovative activity, which would clearly define its objectives, priorities, resources, mechanisms of implementation and control, as well as a "Strategic plan for overcoming the economic crisis in Ukraine for 2023-2025".
Text is by far the most ubiquitous source of knowledge and information and should be made easily accessible to as many people as possible; however, texts often contain complex words that hinder reading comprehension and accessibility. Therefore, suggesting simpler alternatives for complex words without compromising meaning would help convey the information to a broader audience. This paper proposes mTLS, a multilingual controllable Transformer-based Lexical Simplification (LS) system fine-tuned from the T5 model. The novelty of this work lies in the use of language-specific prefixes, control tokens, and candidates extracted from pretrained masked language models to learn simpler alternatives for complex words. Evaluation results on three well-known LS datasets (LexMTurk, BenchLS, and NNSEval) show that our model outperforms previous state-of-the-art models such as LSBert and ConLS. Further evaluation on part of the recent TSAR-2022 multilingual LS shared-task dataset shows that our model performs competitively against the participating systems for English LS and even outperforms the GPT-3 model on several metrics. Our model also obtains performance gains for Spanish and Portuguese.
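The exact control-token input format used by mTLS is not specified in the abstract. The following sketch shows, with entirely hypothetical prefix and token names, how a T5-style input string for lexical simplification could be assembled from a sentence, a target complex word, and candidates proposed by a masked language model:

```python
def build_ls_input(sentence, complex_word, lang="en", candidates=None):
    """Assemble a control-token input for a T5-style LS model.
    The prefix and token names ("simplify <lang>:", <COMPLEX>, <CANDS>)
    are illustrative placeholders, not mTLS's actual scheme."""
    parts = [f"simplify {lang}:", sentence, "<COMPLEX>", complex_word]
    if candidates:
        parts += ["<CANDS>"] + list(candidates)
    return " ".join(parts)
```

Swapping the language prefix is what makes a single fine-tuned model serve English, Spanish, and Portuguese inputs alike.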
Polyline and building simplification remain challenging in cartography. Most proposed algorithms are geometry-based and rely on specific rules. In this study, we propose a deep learning approach to simplifying polylines and buildings based on a graph autoencoder (GAE). The model receives the coordinates of line vertices as inputs and obtains a simplified representation by reconstructing the original inputs with fewer vertices through pooling, in which graph convolution based on the graph Fourier transform is used for layer-by-layer feature computation. By adjusting the loss functions, constraints such as area and shape preservation and angle-characteristic enhancement can be flexibly configured under a unified learning framework. Our results confirm the applicability of the GAE approach to the multi-scale simplification of land-cover boundaries and contours by adjusting the number of output nodes. Compared with existing Douglas-Peucker, Fourier transform, and Delaunay triangulation approaches, the GAE approach was superior in achieving morphological abstraction while producing reasonably low position, area, and shape changes. Furthermore, we applied it to simplify buildings and demonstrated its potential for preserving the diversified characteristics of different types of lines.
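Douglas-Peucker, one of the rule-based baselines mentioned above, is compact enough to sketch directly: it keeps the vertex farthest from the chord joining the endpoints whenever that distance exceeds a tolerance, and recurses on both halves.

```python
def perpendicular_distance(p, a, b):
    """Distance from point p to the infinite line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    return abs(dy * px - dx * py + bx * ay - by * ax) / (dx * dx + dy * dy) ** 0.5

def douglas_peucker(points, epsilon):
    """Classic recursive polyline simplification with tolerance epsilon."""
    if len(points) < 3:
        return list(points)
    # Find the vertex farthest from the chord joining the endpoints.
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax > epsilon:  # keep the salient vertex, recurse on both halves
        left = douglas_peucker(points[: index + 1], epsilon)
        right = douglas_peucker(points[index:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]  # everything within tolerance of the chord
```

Its single global tolerance is exactly the rigidity the GAE approach replaces with learnable, loss-configured constraints.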
An effective discrete element modelling strategy for triangular-mesh-represented spherical harmonic particles is proposed. It features: (1) using a golden spiral lattice on the unit sphere to generate an initial triangular mesh with any number of vertices/triangles for a star-shaped surface; (2) applying an edge-contraction mesh simplification algorithm to reduce the mesh size to any desired level; and (3) adopting an energy-conserving linear normal contact model to compute the contact geometry and force features of contacting particles. In particular, the edge contraction algorithm is applicable to any triangular mesh. It is algorithmically very simple and highly effective, and can be easily incorporated into existing discrete element frameworks. Numerical experiments demonstrate that a mesh simplified by edge contraction not only has a very low geometric approximation error but also achieves the expected mechanical responses. This mesh simplification approach can therefore serve as an ideal pre-processing tool to optimise a large input triangular mesh, significantly reducing the computational cost of discrete element simulations without compromising modelling accuracy.
• Established an effective triangular mesh representation procedure for star-shaped particles.
• Introduced a highly effective edge-contraction based mesh simplification approach.
• Described an energy-conserving contact model for triangulated particles.
• Conducted numerical simulations to validate the proposed methodology.
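Step (1) above, the golden spiral lattice, corresponds to the well-known Fibonacci sphere construction: successive points step down the sphere in evenly spaced latitudes while the longitude advances by the golden angle, yielding near-uniform coverage for any point count.

```python
import math

def fibonacci_sphere(n):
    """Place n near-uniform points on the unit sphere via the golden spiral
    (Fibonacci lattice)."""
    golden_angle = math.pi * (3.0 - math.sqrt(5.0))
    points = []
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n      # evenly spaced latitudes in z
        r = math.sqrt(1.0 - z * z)          # radius of that latitude circle
        theta = golden_angle * i            # longitude steps by the golden angle
        points.append((r * math.cos(theta), r * math.sin(theta), z))
    return points
```

Triangulating these points (e.g. via a convex hull) gives the initial mesh that the edge-contraction stage then coarsens; the hull step is omitted here.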