Ocular references on ancient coins
Sanchez, Juan Luis
Acta Ophthalmologica, December 2022, Volume 100, Issue S275
Journal Article, Peer-reviewed
According to the dictionary of the Royal Academy of the Spanish Language, numismatics is the discipline that studies coins and medals, mainly ancient ones. Elsewhere, this definition extends to the study and collection of paper money and banknotes. The information we can obtain from even a brief study of the features that appear on coins is surprising. In relation to vision and ophthalmology, coins show us important figures in the field and ocular symbolism, and they tell us about mythology, religion and curious stories that we would hardly have known without looking at them. Finally, we discuss an important 19th-century Valencian ophthalmologist, Rafael Cervera y Royo, and the collection of ancient coins that bears his name. This work is not intended as an exhaustive description of all the coins and medals that touch on vision, but rather as a sample of the valuable information that numismatics contributes to our speciality, and as a stimulus to the public's curiosity about this fascinating science.
Visual place recognition (VPR) is the process of recognising a previously visited place using visual information, often under varying appearance conditions and viewpoint changes and with computational constraints. VPR is related to the concepts of localisation, loop closure and image retrieval, and is a critical component of many autonomous navigation systems, ranging from autonomous vehicles to drones and computer vision systems. While the concept of place recognition has been around for many years, VPR research has grown rapidly as a field over the past decade, owing to improving camera hardware and the potential of deep learning-based techniques, and has become a widely studied topic in both the computer vision and robotics communities. This growth, however, has led to fragmentation and a lack of standardisation in the field, especially concerning performance evaluation. Moreover, the notion of viewpoint and illumination invariance of VPR techniques has largely been assessed qualitatively, and hence ambiguously, in the past. In this paper, we address these gaps through a new comprehensive open-source framework for assessing the performance of VPR techniques, dubbed “VPR-Bench”. VPR-Bench (open-sourced at:
https://github.com/MubarizZaffar/VPR-Bench
) introduces two much-needed capabilities for VPR researchers: firstly, it contains a benchmark of 12 fully-integrated datasets and 10 VPR techniques, and secondly, it integrates a comprehensive variation-quantified dataset for quantifying viewpoint and illumination invariance. We apply and analyse popular evaluation metrics for VPR from both the computer vision and robotics communities, and discuss how these different metrics complement and/or replace each other, depending upon the underlying applications and system requirements. Our analysis reveals that no universal state-of-the-art (SOTA) VPR technique exists, since: (a) SOTA performance is achieved by 8 out of the 10 techniques on at least one dataset; and (b) the SOTA technique in one community does not necessarily yield SOTA performance in the other, given the differences in datasets and metrics. Furthermore, we identify key open challenges, since: (c) all 10 techniques suffer greatly in perceptually-aliased and less-structured environments; (d) all techniques suffer from viewpoint variance, where lateral change has less effect than 3D change; and (e) directional illumination change has a more adverse effect on matching confidence than uniform illumination change. We also present detailed meta-analyses regarding the roles of varying ground truths, platforms, application requirements and technique parameters. Finally, VPR-Bench provides a unified implementation to deploy these VPR techniques, metrics and datasets, and is extensible through templates.
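Precision-recall analysis is one of the metric families such evaluations rest on. As an illustration only, a minimal sweep over match-confidence thresholds might look like the sketch below; all scores and labels are hypothetical, and VPR-Bench's actual implementation is far more complete.

```python
# Minimal sketch of a precision-recall sweep for place matching, as used
# when evaluating VPR techniques. All data here is hypothetical.

def precision_recall(scores, ground_truth, thresholds):
    """For each threshold, a match is accepted if its confidence score
    reaches the threshold; ground_truth marks which matches are correct."""
    curve = []
    for t in thresholds:
        tp = sum(1 for s, g in zip(scores, ground_truth) if s >= t and g)
        fp = sum(1 for s, g in zip(scores, ground_truth) if s >= t and not g)
        fn = sum(1 for s, g in zip(scores, ground_truth) if s < t and g)
        precision = tp / (tp + fp) if tp + fp else 1.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        curve.append((t, precision, recall))
    return curve

# Hypothetical per-query match confidences and correctness labels.
scores = [0.9, 0.8, 0.75, 0.6, 0.4, 0.3]
truth = [True, True, False, True, False, False]
for t, p, r in precision_recall(scores, truth, [0.5, 0.7]):
    print(f"threshold={t}: precision={p:.2f}, recall={r:.2f}")
```

Raising the threshold trades recall for precision; which operating point matters depends on the application, which is exactly why the two communities weight these metrics differently.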
With the rise and development of deep learning, computer vision has been tremendously transformed and reshaped. As an important research area in computer vision, scene text detection and recognition has inevitably been influenced by this wave of revolution, consequently entering the era of deep learning. In recent years, the community has witnessed substantial advances in mindset, methodology and performance. This survey aims to summarize and analyze the major changes and significant progress in scene text detection and recognition in the deep learning era. Through this article, we aim to: (1) introduce new insights and ideas; (2) highlight recent techniques and benchmarks; and (3) look ahead to future trends. Specifically, we emphasize the dramatic differences brought by deep learning and the remaining grand challenges. We expect this review to serve as a reference for researchers in this field. Related resources are also collected in our GitHub repository (
https://github.com/Jyouhou/SceneTextPapers
).
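The benchmarks such a survey covers typically score detections by intersection-over-union (IoU) against ground-truth boxes. A simplified sketch with axis-aligned boxes follows; real scene-text benchmarks use polygons or rotated boxes and more involved matching rules, so this is only an illustration of the principle.

```python
# Sketch of IoU scoring for axis-aligned text boxes (x1, y1, x2, y2).
# Real scene-text benchmarks match polygons; this is a simplification.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A prediction usually counts as a hit if IoU with some ground-truth
# box reaches a threshold such as 0.5 (values below are made up).
gt = (0, 0, 10, 4)
pred = (1, 0, 11, 4)
print(iou(gt, pred))  # overlap 36, union 44
```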
This paper presents a method for the 3D reconstruction of a piecewise‐planar surface from range images, typically laser scans with millions of points. The reconstructed surface is a watertight polygonal mesh that conforms to observations at a given scale in the visible planar parts of the scene, and that is plausible in hidden parts. We formulate surface reconstruction as a discrete optimization problem based on detected and hypothesized planes. One of our major contributions, besides a treatment of data anisotropy and novel surface hypotheses, is a regularization of the reconstructed surface w.r.t. the length of edges and the number of corners. Compared to classical area‐based regularization, it better captures surface complexity and is therefore better suited to man‐made environments, such as buildings. To handle the underlying higher‐order potentials, which are problematic for MRF optimizers, we formulate minimization as a sparse mixed‐integer linear programming problem and obtain an approximate solution using a simple relaxation. Experiments show that the method is fast and reaches near‐optimal solutions.
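The "detected planes" that such a discrete optimization builds on are commonly found with a RANSAC-style search. The following self-contained toy version (pure Python, synthetic point cloud) illustrates the idea; the paper's pipeline handles millions of laser points and data anisotropy, which this ignores entirely.

```python
# Toy RANSAC plane detection for 3D points. Illustrative only: the
# point cloud, tolerance and iteration count below are all made up.
import random

def plane_from_points(p, q, r):
    """Plane (unit normal n, offset d) through three points: n.x = d."""
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0:          # degenerate (collinear) sample
        return None
    n = [c / norm for c in n]
    return n, sum(n[i] * p[i] for i in range(3))

def ransac_plane(points, iters=200, tol=0.05, seed=0):
    """Return the sampled plane with the most inliers within distance tol."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iters):
        plane = plane_from_points(*rng.sample(points, 3))
        if plane is None:
            continue
        n, d = plane
        inliers = [pt for pt in points
                   if abs(sum(n[i] * pt[i] for i in range(3)) - d) < tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = plane, inliers
    return best, best_inliers

# Toy cloud: a 10x10 grid on the plane z = 1, plus two outliers.
pts = [(x * 0.1, y * 0.1, 1.0) for x in range(10) for y in range(10)]
pts += [(0.5, 0.5, 3.0), (0.2, 0.8, -2.0)]
(n, d), inliers = ransac_plane(pts)
print(len(inliers), "of", len(pts), "points are inliers")
```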
Many causes of vision impairment can be prevented or treated. With an ageing global population, the demands on eye health services are increasing. We estimated the prevalence and relative contribution of avoidable causes of blindness and vision impairment globally from 1990 to 2020. We aimed to compare the results with the World Health Assembly Global Action Plan (WHA GAP) target of a 25% global reduction in avoidable vision impairment, defined as cataract and undercorrected refractive error, from 2010 to 2019.
We did a systematic review and meta-analysis of population-based surveys of eye disease from January, 1980, to October, 2018. We fitted hierarchical models to estimate prevalence (with 95% uncertainty intervals [UIs]) of moderate and severe vision impairment (MSVI; presenting visual acuity from <6/18 to 3/60) and blindness (<3/60, or less than 10° visual field around central fixation) by cause, age, region, and year. Because of data sparsity at younger ages, our analysis focused on adults aged 50 years and older.
Global crude prevalence of avoidable vision impairment and blindness in adults aged 50 years and older did not change between 2010 and 2019 (percentage change −0·2% [95% UI −1·5 to 1·0]; 2019 prevalence 9·58 cases per 1000 people [95% UI 8·51 to 10·8]; 2010 prevalence 96·0 cases per 1000 people [86·0 to 107·0]). Age-standardised prevalence of avoidable blindness decreased by −15·4% [−16·8 to −14·3], while avoidable MSVI showed no change (0·5% [−0·8 to 1·6]). However, the number of cases increased for both avoidable blindness (10·8% [8·9 to 12·4]) and MSVI (31·5% [30·0 to 33·1]). The leading global causes of blindness in those aged 50 years and older in 2020 were cataract (15·2 million cases [95% UI 12·7–18·0]), followed by glaucoma (3·6 million cases [2·8–4·4]), undercorrected refractive error (2·3 million cases [1·8–2·8]), age-related macular degeneration (1·8 million cases [1·3–2·4]), and diabetic retinopathy (0·86 million cases [0·59–1·23]). Leading causes of MSVI were undercorrected refractive error (86·1 million cases [74·2–101·0]) and cataract (78·8 million cases [67·2–91·4]).
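The distinction between crude and age-standardised prevalence drives these findings: crude rates depend on a population's age mix, whereas age-standardised rates apply fixed standard-population weights to age-specific rates. A toy direct-standardisation example (all numbers hypothetical; the study itself uses hierarchical models and GBD population weights):

```python
# Toy direct age-standardisation. Every number below is hypothetical.

def crude_rate(cases, population):
    """Total cases over total population, regardless of age structure."""
    return sum(cases) / sum(population)

def age_standardised_rate(rates_by_age, standard_weights):
    """Age-specific rates averaged with fixed weights that sum to 1."""
    return sum(r * w for r, w in zip(rates_by_age, standard_weights))

# Two age bands, e.g. 50-69 and 70+ (hypothetical cases and populations).
cases = [40, 160]
population = [8000, 2000]
rates = [c / p for c, p in zip(cases, population)]  # [0.005, 0.08]

weights = [0.7, 0.3]  # hypothetical standard population shares
print(round(crude_rate(cases, population) * 1000, 1), "per 1000 (crude)")
print(round(age_standardised_rate(rates, weights) * 1000, 1), "per 1000 (age-standardised)")
```

Because an ageing population shifts weight toward high-prevalence age bands, crude counts can rise even while age-standardised rates fall, which is the pattern the abstract reports.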
These results suggest that eye care services contributed to the observed reduction in age-standardised rates of avoidable blindness, but not of MSVI, and that in an ageing global population the target was not reached.
Brien Holden Vision Institute, Fondation Théa, The Fred Hollows Foundation, Bill & Melinda Gates Foundation, Lions Clubs International Foundation, Sightsavers International, and University of Heidelberg.
Image representations, from SIFT and bag-of-visual-words to convolutional neural networks (CNNs), are a crucial component of almost all computer vision systems. However, our understanding of them remains limited. In this paper we study several landmark representations, both shallow and deep, using a number of complementary visualization techniques. These visualizations are based on the concept of the “natural pre-image”, namely a natural-looking image whose representation has some notable property. We study in particular three such visualizations: inversion, in which the aim is to reconstruct an image from its representation; activation maximization, in which we search for patterns that maximally stimulate a representation component; and caricaturization, in which the visual patterns that a representation detects in an image are exaggerated. We pose these as a regularized energy-minimization framework and demonstrate its generality and effectiveness. In particular, we show that this method can invert representations such as HOG more accurately than recent alternatives while also being applicable to CNNs. Among our findings, we show that several layers in CNNs retain photographically accurate information about the image, with different degrees of geometric and photometric invariance.
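As a schematic of the regularized energy-minimization idea, the toy below inverts a lossy linear "representation" (a 1D moving average, standing in for HOG or a CNN layer) by gradient descent on ||Phi(x) − y||² + λ||x||². Because several signals share the same representation, the regularizer selects one pre-image, loosely mirroring the pre-image notion above. The operator, regularizer and step size are all illustrative choices, not the paper's.

```python
# Toy pre-image recovery by regularized energy minimization:
# minimise E(x) = ||phi(x) - y||^2 + lam * ||x||^2 by gradient descent.
# phi is a width-2 moving average; a crude stand-in for a real feature map.

def phi(x):
    return [(x[i] + x[i + 1]) / 2 for i in range(len(x) - 1)]

def invert(y, n, lam=1e-3, lr=0.4, steps=2000):
    x = [0.0] * n
    for _ in range(steps):
        r = [p - t for p, t in zip(phi(x), y)]   # residual phi(x) - y
        grad = [0.0] * n
        for i, ri in enumerate(r):               # chain rule through phi
            grad[i] += ri
            grad[i + 1] += ri
        x = [xi - lr * (gi + 2 * lam * xi) for xi, gi in zip(x, grad)]
    return x

target = [1.0, 2.0, 3.0, 4.0]
y = phi(target)                  # the observed representation
x_hat = invert(y, len(target))
print([round(v, 2) for v in x_hat])
```

Note that `x_hat` matches the representation of `target` without equaling it: the averaging operator has a null space, and the L2 regularizer picks the minimum-norm pre-image.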
Rain fills the atmosphere with water particles, which breaks the common assumption that light travels unaltered from the scene to the camera. While it is well known that rain affects computer vision algorithms, quantifying its impact is difficult. In this context, we present a rain rendering pipeline that enables the systematic evaluation of common computer vision algorithms under controlled amounts of rain. We present three different ways to add synthetic rain to existing image datasets: fully physics-based, fully data-driven, and a combination of both. The physics-based rain augmentation combines a physical particle simulator with accurate rain photometric modeling. We validate our rendering methods with a user study, demonstrating that our rain is judged as much as 73% more realistic than the state of the art. Using our generated rain-augmented KITTI, Cityscapes, and nuScenes datasets, we conduct a thorough evaluation of object detection, semantic segmentation, and depth estimation algorithms and show that their performance decreases in degraded weather, on the order of 15% for object detection and 60% for semantic segmentation, with a 6-fold increase in depth estimation error. Finetuning on our augmented synthetic data yields improvements of 21% on object detection, 37% on semantic segmentation, and 8% on depth estimation.
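The most basic photometric step behind any rain rendering is compositing a translucent streak over the image. The minimal sketch below alpha-blends a made-up streak mask into a grayscale image; the paper's physics-based pipeline models drop dynamics and photometry far more faithfully than this.

```python
# Minimal alpha compositing of a rain streak over a grayscale image.
# Streak shape, alpha and color below are all hypothetical.

def composite_streak(image, streak_alpha, streak_color=0.9):
    """Blend a per-pixel streak alpha mask over image values in [0, 1]."""
    h, w = len(image), len(image[0])
    return [[image[y][x] * (1 - streak_alpha[y][x])
             + streak_color * streak_alpha[y][x]
             for x in range(w)] for y in range(h)]

# 4x4 dark image with one diagonal, half-transparent streak.
img = [[0.2] * 4 for _ in range(4)]
alpha = [[0.5 if x == y else 0.0 for x in range(4)] for y in range(4)]
wet = composite_streak(img, alpha)
print([round(v, 2) for v in wet[0]])  # streak pixel brightened to 0.55
```

Real rain rendering additionally motion-blurs drops over the exposure time and attenuates the background with distance; this sketch shows only the per-pixel blending.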
We present a novel, integrated system for content-aware video retargeting. A simple and interactive framework combines key-frame-based constraint editing with numerous automatic algorithms for video analysis. This combination gives content producers high-level control over the retargeting process. The central component of our framework is a non-uniform, pixel-accurate warp to the target resolution that considers automatic as well as interactively defined features. Automatic features comprise video saliency, edge preservation at the pixel resolution, and scene cut detection to enforce bilateral temporal coherence. Additional high-level constraints can be added by the producer to guarantee a consistent scene composition across arbitrary output formats. For high-quality video display we adopt a 2D version of EWA splatting, eliminating aliasing artifacts known from previous work. Our method seamlessly integrates into postproduction and computes the reformatting in real time. This allows us to retarget annotated video streams at high quality to arbitrary aspect ratios while retaining the intended cinematographic scene composition. For evaluation we conducted a user study, which revealed a strong viewer preference for our method.
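The core intuition of a saliency-driven non-uniform warp can be shown in one dimension: salient columns keep their width while low-saliency columns are squeezed. The toy below is a drastic simplification of such a pixel-accurate 2D warp with EWA splatting (nearest-neighbour lookup instead of splatting, made-up saliency values), meant only to convey the mapping.

```python
# Toy non-uniform 1D retargeting: each source column gets output width
# proportional to its saliency. Saliency values below are made up.

def retarget_row(row, saliency, target_w):
    """Map target columns into the source via the cumulative saliency."""
    total = sum(saliency)
    cum, acc = [], 0.0
    for s in saliency:
        acc += s / total          # cumulative importance in [0, 1]
        cum.append(acc)
    out = []
    for j in range(target_w):
        t = (j + 0.5) / target_w  # centre of target column in [0, 1]
        i = next(k for k, c in enumerate(cum) if c >= t)
        out.append(row[i])        # nearest-neighbour instead of splatting
    return out

row = ['a', 'b', 'C', 'D', 'e', 'f']   # capitals mark "salient" columns
saliency = [1, 1, 4, 4, 1, 1]
print(retarget_row(row, saliency, 4))  # salient C and D survive the squeeze
```

A real implementation warps in 2D, enforces temporal coherence across frames, and resamples with EWA splatting rather than nearest-neighbour lookup to avoid aliasing.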