Radar sensing technologies now offer new opportunities for gesturally interacting with a smart environment by capturing microgestures via a chip embedded in a wearable device, such as a smartwatch or a ring worn on a finger. Such microgestures are issued at a very small distance from the device, regardless of whether they are contact-based, such as on the skin, or contactless. As this category of microgestures remains largely unexplored, this paper reports the results of a gesture elicitation study conducted with twenty-five participants who expressed their preferred user-defined gestures for interacting with a radar-based sensor on nineteen referents representing frequent Internet-of-Things tasks. This study clustered the 25 × 19 = 475 initially elicited gestures into four categories of microgestures, namely micro, motion, combined, and hybrid, and thirty-one classes of distinct gesture types, and produced a consensus set of the nineteen most preferred microgestures. In a confirmatory study, twenty new participants selected gestures from this classification for thirty referents representing tasks of various orders; they reached a high rate of agreement and did not identify any new gestures. This classification of radar-based gestures provides researchers and practitioners with a larger basis for exploring gestural interaction with radar-based sensors, such as for hand gesture recognition.
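Consensus in such elicitation studies is typically quantified with an agreement rate per referent. A minimal sketch of the standard formula, AR(r) = Σᵢ |Pᵢ|(|Pᵢ| − 1) / (|P|(|P| − 1)), where the gesture labels and the referent are hypothetical examples, not data from the study:

```python
from collections import Counter

def agreement_rate(proposals):
    """Agreement rate AR(r) for one referent.

    proposals: list of gesture-class labels, one per participant.
    P_i are the groups of identical proposals; the rate is the share of
    participant pairs that proposed the same gesture class.
    """
    n = len(proposals)
    if n < 2:
        return 0.0
    counts = Counter(proposals)
    return sum(k * (k - 1) for k in counts.values()) / (n * (n - 1))

# Hypothetical example: 5 participants propose gestures for one referent
print(agreement_rate(["tap", "tap", "tap", "swipe", "tap"]))  # 0.6
```

A rate of 1.0 means every participant proposed the same gesture; 0.0 means no two participants agreed.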
The expansion of touch-sensitive technologies, ranging from smartwatches to wall screens, triggered a wider use of gesture-based user interfaces and encouraged researchers to invent recognizers that are fast and accurate for end users while being simple enough for practitioners. Since the pioneering work on two-dimensional (2D) stroke gesture recognition based on feature extraction and classification, numerous approaches and techniques have been introduced to classify uni- and multi-stroke gestures, satisfying various properties of articulation, rotation, scale, and translation invariance. As the domain abounds in different recognizers, it becomes difficult for the practitioner to choose the right recognizer, depending on the application, and for the researcher to understand the state of the art. To address these needs, a targeted literature review identified 16 significant 2D stroke gesture recognizers, which were submitted to a descriptive analysis discussing their algorithm, performance, and properties, and a comparative analysis discussing their similarities and differences. Finally, some opportunities for expanding 2D stroke gesture recognition are drawn from these analyses.
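Most recognizers in this family share a preprocessing step before classification: resampling the stroke to a fixed number of equidistant points, then normalizing it for translation and scale. A minimal sketch of that common pipeline (illustrative; not the code of any single published recognizer):

```python
import math

def resample(points, n=64):
    """Resample a stroke to n equidistant points, as in $-family recognizers."""
    interval = sum(math.dist(a, b) for a, b in zip(points, points[1:])) / (n - 1)
    out, acc = [points[0]], 0.0
    pts = list(points)
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= interval and d > 0:
            t = (interval - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # continue walking from the interpolated point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:  # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def normalize(points):
    """Translate the centroid to the origin and scale to a unit bounding box."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    pts = [(x - cx, y - cy) for x, y in points]
    w = max(x for x, _ in pts) - min(x for x, _ in pts)
    h = max(y for _, y in pts) - min(y for _, y in pts)
    s = max(w, h) or 1.0
    return [(x / s, y / s) for x, y in pts]
```

After this step, a candidate stroke can be compared to templates point by point (or cloud to cloud), which is where the surveyed recognizers differ.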
Retailers develop personalized websites with the aim of improving customer experience. However, we still have limited knowledge about the effect of personalization on customer experience and the underlying processes. With a lab experiment, this research specifically examines the effect of actual personalization and perceived personalization on playful customer experience using both subjective and objective measures, with the support of eye-tracking techniques. We show that personalization, regardless of whether it is perceived or not, enhances the playful customer experience of a retailing website. In addition, we highlight the presence of two concomitant processes. Content needs to be perceived as personalized to influence the subjective playful customer experience, but actual personalization does influence the objective playful customer experience. Although customers spend the same time on the website, they focus more of their attention on their favorite products when content is personalized. Such focused attention leads them to select their favorite products for purchase.
4. Gelicit — Magrofuoco, Nathan; Vanderdonckt, Jean. Proceedings of the ACM on Human-Computer Interaction, 06/2019, Volume 3, Issue EICS. Journal Article, peer reviewed.
A gesture elicitation study, as originally defined, consists of gathering a sample of participants in a room, instructing them to produce the gestures they would use for a particular set of tasks, materialized through a representation called a referent, and asking them to fill in a series of tests, questionnaires, and feedback forms. Until now, this procedure has been conducted manually in a single, physical, and synchronous setup. To relax the constraints imposed by this manual procedure and to support stakeholders in defining and conducting such studies in multiple contexts of use, this paper presents Gelicit, a cloud computing platform that supports gesture elicitation studies distributed in time and space, structured into six stages: (1) define a study: a designer defines a set of tasks with their referents for eliciting gestures and specifies an experimental protocol by parameterizing its settings; (2) conduct a study: any participant receiving the invitation to join the study conducts the experiment anywhere, anytime, anyhow, by eliciting gestures and filling in forms; (3) classify gestures: an experimenter classifies elicited gestures according to selected criteria and a vocabulary; (4) measure gestures: an experimenter computes gesture measures, such as agreement and frequency, to understand their configuration; (5) discuss gestures: a designer discusses the resulting gestures with the participants to reach a consensus; (6) export gestures: the consensus set of gestures resulting from the discussion is exported to be used with a gesture recognizer. The paper discusses Gelicit's advantages and limitations with respect to three main contributions: as a conceptual model for gesture management, as a method for distributed gesture elicitation based on this model, and as a cloud computing platform supporting this distributed elicitation. We illustrate Gelicit through a study eliciting 2D gestures for executing Internet of Things tasks on a smartphone.
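Stage (4) mentions frequency among the computed measures. A minimal sketch of how a consensus set could be derived from classified gestures by taking the most frequent gesture class per referent (the data layout, referents, and labels are hypothetical illustrations, not Gelicit's API):

```python
from collections import Counter

def consensus_set(classified):
    """classified: dict mapping referent -> list of gesture-class labels,
    one label per participant, as produced by the classification stage.

    Returns, per referent, the most frequent gesture class and its
    relative frequency among participants.
    """
    result = {}
    for referent, labels in classified.items():
        gesture, count = Counter(labels).most_common(1)[0]
        result[referent] = (gesture, count / len(labels))
    return result

# Hypothetical classified data for two referents
study = {
    "turn light on": ["tap", "tap", "swipe-up", "tap"],
    "raise volume":  ["swipe-up", "swipe-up", "circle", "swipe-up"],
}
print(consensus_set(study))
# {'turn light on': ('tap', 0.75), 'raise volume': ('swipe-up', 0.75)}
```

In the platform's workflow, such a ranking would feed the discussion stage (5) before the final set is exported in stage (6).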
Finger-based gesture input has become a major interaction modality for surface computing. Due to the low precision of the finger and the variation in gesture production, multistroke gestures are still challenging to recognize in various setups. In this paper, we present µV, a multistroke gesture recognizer that addresses the properties of articulation, rotation, scaling, and translation invariance by combining $P+'s cloud matching for articulation invariance with !FTL's local shape distance for RST invariance. We evaluate µV against five competitive recognizers on MMG, an existing gesture set, and on two new versions for smartphones and tablets, MMG+ and RMMG+, a randomly rotated version on both platforms. µV is significantly more accurate than its predecessors when rotation invariance is required, and not significantly inferior when it is not. µV is also significantly faster than the others with many samples, and not significantly slower with few samples.
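The cloud matching that µV borrows from $P+ treats a gesture as an unordered point cloud and greedily pairs candidate points with template points. A simplified sketch of that idea, without $P+'s per-point weights or !FTL's local shape distance (the template labels are hypothetical):

```python
import math

def cloud_distance(a, b, start=0):
    """Greedy one-directional cloud distance between two equal-length
    point clouds, simplified from the $P family."""
    n = len(a)
    matched = [False] * n
    total, i = 0.0, start
    for step in range(n):
        # match point a[i] to its nearest unmatched point in b
        best_j, best_d = -1, float("inf")
        for j in range(n):
            if not matched[j]:
                d = math.dist(a[i], b[j])
                if d < best_d:
                    best_j, best_d = j, d
        matched[best_j] = True
        total += (1 - step / n) * best_d  # earlier matches count more
        i = (i + 1) % n
    return total

def match(candidate, templates):
    """Return the label of the template cloud closest to the candidate.
    templates: dict mapping label -> point cloud of the same length."""
    return min(templates, key=lambda lbl: cloud_distance(candidate, templates[lbl]))
```

Because the matching ignores stroke order and direction, the same distance works regardless of how the strokes were articulated, which is the articulation invariance the abstract refers to.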
Arm-and-hand tracking by technological means allows gathering data that can be elaborated for determining gesture meaning. To this aim, machine learning (ML) algorithms have been mostly investigated, looking for a balance between the highest recognition rate and the lowest recognition time. However, this balance comes mainly from statistical models, which are challenging to interpret. In contrast, we present μC¹ and μC², two geometric model-based approaches to gesture recognition that support the visualization and geometrical interpretation of the recognition process. We compare μC¹ and μC² with two classical ML algorithms, k-nearest neighbor (k-NN) and support vector machine (SVM), and two state-of-the-art deep learning (DL) models, bidirectional long short-term memory (BiLSTM) and gated recurrent unit (GRU), on an experimental dataset of ten gesture classes from the Italian Sign Language (LIS), each repeated 100 times by five inexperienced non-native signers and gathered with wearable technology (a sensory glove and inertial measurement units). As a result, we achieve a compromise between high recognition rates (>90%) and low recognition times (<0.1 s) that is adequate for human-computer interaction. Moreover, we elaborate on the algorithms' geometric interpretation based on geometric algebra, which supports some understanding of the recognition process.
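For comparison, the k-NN baseline mentioned above is straightforward over fixed-length feature vectors extracted from the glove and IMU streams. A minimal sketch (the feature vectors and labels are hypothetical, not the paper's experimental setup):

```python
import math
from collections import Counter

def knn_classify(sample, training, k=3):
    """k-nearest-neighbor classification over fixed-length feature vectors.

    training: list of (feature_vector, label) pairs.
    Returns the majority label among the k closest training samples.
    """
    neighbours = sorted(training, key=lambda t: math.dist(sample, t[0]))[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]
```

Such a classifier is accurate but opaque: it offers no geometric model of the gesture class, which is the interpretability gap the abstract contrasts with μC¹ and μC².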
While end users can acquire full 3D gestures with many input devices, they often capture only 3D trajectories, which are 3D uni-path, uni-stroke single-point gestures performed in thin air. Such trajectories, with their (x, y, z) coordinates, could be interpreted as three 2D stroke gestures projected on three planes, i.e., XY, YZ, and ZX, thus making them admissible for established 2D stroke gesture recognizers. To investigate whether 3D trajectories could be effectively and efficiently recognized, four 2D stroke gesture recognizers, i.e., $P, $P+, $Q, and Rubine, are extended to the third dimension: $P^3, $P+^3, $Q^3, and Rubine-Sheng, an extension of Rubine for 3D with more features. Two new variations are also introduced: $F for flexible cloud matching and FreeHandUni for uni-path recognition. Rubine3D, another extension of Rubine for 3D, which projects the 3D gesture on three orthogonal planes, is also included. These seven recognizers are compared against three challenging datasets containing 3D trajectories, i.e., SHREC2019 and 3DTCGS, in a user-independent scenario, and 3DMadLabSD with its four domains, in both user-dependent and user-independent scenarios, with varying numbers of templates and sampling. Individual recognition rates and execution times per dataset, and aggregated ones over all datasets, show a highly significant difference of $P+^3 over its competitors. The potential effects of the dataset, the number of templates, and the sampling are also studied.
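The projection idea behind Rubine3D can be sketched directly: split each (x, y, z) trajectory into its XY, YZ, and ZX projections and hand each to a 2D stroke distance. The combination rule below, summing the three per-plane distances, is an assumption for illustration, not necessarily the paper's method:

```python
def project(trajectory):
    """Split a 3D trajectory into its XY, YZ, and ZX 2D projections."""
    xy = [(x, y) for x, y, z in trajectory]
    yz = [(y, z) for x, y, z in trajectory]
    zx = [(z, x) for x, y, z in trajectory]
    return xy, yz, zx

def distance3d(a, b, stroke_distance):
    """Compare two 3D trajectories by summing a 2D stroke distance over
    the three projection planes (combination rule assumed, for illustration).

    stroke_distance: any 2D stroke distance function, e.g. the matching
    step of an established 2D recognizer.
    """
    return sum(stroke_distance(pa, pb) for pa, pb in zip(project(a), project(b)))
```

This is what makes 3D trajectories "admissible" for 2D recognizers: each plane sees an ordinary uni-stroke 2D gesture, and only the combination of the three scores is new.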