Interactive and immersive technologies can significantly enhance the visitor experience in museums and exhibits. Several studies have shown that multimedia installations can attract visitors, presenting cultural and scientific information in an appealing way. In this article, we present our workflow for achieving gaze-based interaction with artwork imagery. We designed both a tool for creating interactive “gaze-aware” images and an eye tracking application conceived to interact with those images through gaze alone. Users can display different pictures, perform pan and zoom operations, and search for regions of interest with associated multimedia content (text, image, audio, or video). Besides being an assistive technology for motor-impaired people (like most gaze-based interaction applications), our solution can also be a valid alternative to the touch screen panels commonly found in museums, in accordance with the new safety guidelines imposed by the COVID-19 pandemic. Experiments carried out with a panel of volunteer testers have shown that the tool is usable, effective, and easy to learn.
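The core interaction described above, selecting a region of interest by looking at it long enough, is commonly implemented with a dwell-time rule. The following is a minimal sketch of that idea only; the rectangle-based ROI model, the coordinates, and the dwell threshold are illustrative assumptions, not the authors' implementation.

```python
# Sketch of dwell-time selection: a gaze-aware ROI fires when the gaze
# stays inside it for a given number of consecutive samples.
# ROI geometry and thresholds are hypothetical.

def dwell_select(gaze_samples, rois, dwell_samples=10):
    """Return the name of the ROI fixated for `dwell_samples`
    consecutive gaze samples, or None.
    rois: {name: (x0, y0, x1, y1)} axis-aligned rectangles."""
    count = {name: 0 for name in rois}
    for gx, gy in gaze_samples:
        for name, (x0, y0, x1, y1) in rois.items():
            if x0 <= gx <= x1 and y0 <= gy <= y1:
                count[name] += 1
                if count[name] >= dwell_samples:
                    return name
            else:
                count[name] = 0  # gaze left the ROI: restart the dwell
    return None

rois = {"detail": (100, 100, 200, 200)}
hit = dwell_select([(150, 150)] * 12, rois, dwell_samples=10)
```

A real application would also smooth the raw gaze signal and give visual feedback (e.g. a filling circle) while the dwell accumulates.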
Hand detection and gesture recognition are two of the most studied topics in human–computer interaction (HCI). The increasing availability of sensors able to provide real-time depth measurements, such as time-of-flight cameras or the more recent Kinect, has helped researchers find increasingly efficient solutions to these problems. With the main aim of implementing effective gesture-based interaction systems, this study presents an approach to hand detection and tracking that exploits two different video streams: depth and colour. Both hand and gesture recognition are based only on geometric and colour constraints, and no learning phase is needed. The use of a Kalman filter to track hands keeps the system robust even when several people are in the scene. The entire procedure is designed to maintain a low computational cost and is optimised to execute HCI tasks efficiently. As use cases, two common applications are described: a virtual keyboard and a three-dimensional object manipulation virtual environment. These applications have been tested with a representative sample of non-trained users to assess the usability and flexibility of the system.
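To illustrate the Kalman-filter tracking mentioned above, here is a minimal per-axis sketch that smooths a noisy hand-centroid coordinate. The actual system tracks the full hand state; the noise variances and sample values below are illustrative assumptions.

```python
# Minimal sketch: scalar Kalman filter smoothing one coordinate of a
# detected hand centroid. q and r are assumed noise variances.

class ScalarKalman:
    def __init__(self, q=1e-2, r=1.0):
        self.q = q      # process noise variance
        self.r = r      # measurement noise variance
        self.x = None   # state estimate (position)
        self.p = 1.0    # estimate variance

    def update(self, z):
        if self.x is None:          # first measurement initialises the state
            self.x = z
            return self.x
        self.p += self.q            # predict: uncertainty grows
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)  # correct toward the measurement
        self.p *= (1.0 - k)
        return self.x

kf = ScalarKalman()
# A spurious detection (150) among plausible measurements is damped:
smoothed = [kf.update(z) for z in [100, 102, 98, 101, 150, 103]]
```

In practice one filter per axis (or a 2D constant-velocity model) is run per tracked hand, which also helps re-associate detections when several people are in view.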
This article aims to illustrate the contribution of 3D survey and modeling techniques to the documentation and cataloguing of movable heritage and museum collections. The work reported in this study focuses on the digitization and 3D modeling of some popular musical instruments (belonging to the collection of the Museo del Paesaggio Sonoro in Italy) in the framework of the SAMIC project (Sound Archives & Musical Instruments Collection). Different sensors and strategies were applied during this research in order to obtain high-resolution digital replicas of the objects (a LiDAR system and a photogrammetric approach were tested). The digital datasets modeled with the different techniques were compared in order to evaluate the quality of the models and their metric accuracy.
Eye tracking technology is now mature enough to be exploited in various areas of human–computer interaction. In this paper, we consider the use of gaze-based communication in museums and exhibitions to make the visitor experience more engaging and attractive. While immersive and interactive technologies are now relatively widespread in museums, the use of gaze interaction is still in its infancy, despite the benefits it could provide, for example, to visitors with motor disabilities. Apart from some pioneering early works, only the last few years have seen an increase in gaze-based museum applications. This literature review discusses the state of the art on this topic, highlighting advantages, limitations, and current and future trends.
• A large publicly available dataset of simulated fresco fragments.
• High variety of subjects and conditions.
• Different painters and historical periods.
• Missing and spurious elements have been considered.
Restoring artworks that are seriously damaged or completely destroyed is a challenging task. In particular, the reconstruction of frescoes has to deal with problems such as very small fragments, irregular shapes, and missing pieces. Several attempts have been made to develop new techniques to help restorers in the matching process, from traditional image processing methods to more recent deep learning approaches. However, as often happens in the Cultural Heritage field, the availability of labeled data for testing new strategies is limited, and publicly available datasets contain only a few samples. For this reason, in this paper we introduce DAFNE, a large dataset that includes hundreds of thousands of images of fresco fragments artificially generated to guarantee high variability in terms of shapes and dimensions. Fragments were obtained starting from 62 images of famous frescoes by various artists and from various historical periods, in order to cover different artistic styles, subjects, and colors.
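One simple way to generate irregular artificial fragments, sketched below, is a Voronoi-style partition: each pixel is assigned to its nearest random seed point. DAFNE's actual generation pipeline is more elaborate (it also models missing and spurious pieces); the grid size and seed count here are illustrative assumptions.

```python
# Sketch: partition an image grid into irregular fragments by nearest
# random seed point (Voronoi-style). Purely illustrative of the idea
# of artificial fragmentation, not the DAFNE pipeline.
import random

def fragment_masks(width, height, n_fragments, seed=0):
    rng = random.Random(seed)
    seeds = [(rng.uniform(0, width), rng.uniform(0, height))
             for _ in range(n_fragments)]
    masks = [set() for _ in range(n_fragments)]
    for y in range(height):
        for x in range(width):
            # the nearest seed decides which fragment owns this pixel
            i = min(range(n_fragments),
                    key=lambda k: (x - seeds[k][0]) ** 2
                                  + (y - seeds[k][1]) ** 2)
            masks[i].add((x, y))
    return masks

masks = fragment_masks(32, 32, 5)
```

Each pixel belongs to exactly one mask, so the masks tile the image; dropping some masks simulates missing pieces, and mixing masks from different images simulates spurious ones.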
Gaze-based text entry is undoubtedly one of the most useful applications of eye-tracking technology for human-machine interaction, both in the assistive context (users with severe motor disabilities can exploit such writing modalities to communicate with the world) and as a way to allow touchless text input in everyday life. Different eye-driven text entry methods have been developed to date, and almost all of them require preliminary calibration procedures to work correctly. When a short text, such as a password or a PIN, needs to be entered without using hands or voice, calibration may be perceived as an unnecessary nuisance (and may not be properly maintained in public places due to "ambient noise" caused, for example, by nearby people). Inadequate calibration may also be a problem in assistive uses. In this article we present SPEye, a calibration-free eye-controlled writing technique based on smooth pursuit. Although its writing speed is significantly lower than that of ordinary calibrated methods, the absence of an initial calibration makes it suitable for short text entry. The technique has been tested through several experiments, obtaining good performance in terms of keystrokes per character and total error rate, and receiving positive feedback from test participants.
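Smooth-pursuit interfaces work without calibration because they match the *relative motion* of the gaze against moving on-screen targets rather than absolute gaze position. A minimal sketch of that matching step follows; the Pearson-correlation rule and the sample trajectories are assumptions for illustration, not SPEye's actual selection logic.

```python
# Sketch: pick the moving target whose trajectory best correlates with
# the gaze trace over the same time window. Data are hypothetical.

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb) if sa and sb else 0.0

def select_target(gaze_x, targets_x):
    # targets_x: {label: x-trajectory of that target}
    return max(targets_x, key=lambda t: pearson(gaze_x, targets_x[t]))

gaze = [10, 20, 31, 39, 52]             # noisy rightward pursuit
targets = {"A": [10, 20, 30, 40, 50],   # target moving right
           "B": [50, 40, 30, 20, 10]}   # target moving left
selected = select_target(gaze, targets)
```

A full system correlates both axes over a sliding window and requires the correlation to exceed a threshold for a minimum duration before committing a selection, which is why pursuit-based entry is slower than calibrated dwell typing.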
In the automotive field, the development of new vehicles is a challenge for manufacturers, who try to find a trade-off between "beauty of design" and driver safety. Modern cars often have touch displays and hardly any physical buttons. Touch interaction, however, may distract the driver from the road, which is a major safety issue. For the evaluation of the ergonomics of in-vehicle infotainment interfaces, this paper proposes a test protocol and an evaluation method based on eye tracking technology. To the best of our knowledge, there are currently no standard protocols or methodologies for the objective assessment of in-vehicle infotainment systems. Unlike what often happens, we were able to test our proposals not merely with simulators but in actual experiments with real cars, which we believe is an important aspect that characterizes our work.
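Eye-tracking evaluations of in-vehicle interfaces typically reduce the gaze stream to glance metrics such as total and longest eyes-off-road time. The sketch below computes those two metrics from labelled samples; the 50 ms sample period, the labels, and the metric choice are illustrative assumptions, since the abstract does not specify the paper's protocol.

```python
# Sketch: total and longest eyes-off-road glance from labelled gaze
# samples. Sample period dt is an assumed 50 ms (20 Hz tracker).

def glance_stats(samples, dt=0.05):
    # samples: sequence of True (gaze on road) / False (off road)
    total_off = 0.0   # cumulative eyes-off-road time (s)
    longest = 0.0     # longest single off-road glance (s)
    current = 0.0     # duration of the ongoing off-road glance
    for on_road in samples:
        if on_road:
            longest = max(longest, current)
            current = 0.0
        else:
            current += dt
            total_off += dt
    return total_off, max(longest, current)

total, longest = glance_stats([True, False, False, False, True, False, True])
```

Thresholds from driving-safety guidelines (e.g. flagging single glances longer than 2 s) can then be applied to such metrics to compare interface designs objectively.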
The adoption of multimedia and multimodal applications in museums and exhibitions is becoming common practice. These installations have proved particularly effective at attracting visitors, especially younger ones, and at conveying complex information about the exhibited artworks. A similar approach can also be useful to explain scientific analyses conducted on artworks, which commonly involve complex analytical techniques that are difficult for non-experts to understand. In this work, we describe a multimodal workflow for the creation of interactive presentations of 360° spin images of historical violins. The workflow involves the acquisition, classification, and visualization of the data. In particular, an ad hoc photographic set was built to achieve fast acquisition of images under both visible and UV illumination, with the aim of studying the surface of the instruments. Acquired images are classified and labeled using UVAnalyzer, an interactive application that supplies a set of tools for the analysis of UV fluorescence (UVF) images. Finally, KVN (Kinect Violin Navigator), a Kinect-based application, provides comfortable visualization of and navigation within the data. During development, we adopted a user-driven approach: user studies and suggestions from experts in UVF analysis were taken into account to improve the usability of the various steps of the workflow; then, a qualitative evaluation of the resulting presentation was conducted with 22 volunteers. As a test set, we used images of violins exhibited in the “Museo del Violino” in Cremona (Italy).