This paper reviews the state of the art in the field of lock-in time-of-flight (ToF) cameras: their advantages, their limitations, the existing calibration methods, and the ways they are being used, sometimes in combination with other sensors. Even though lock-in ToF cameras provide neither higher resolution nor a larger ambiguity-free range than other range-map estimation systems, advantages such as registered depth and intensity data at a high frame rate, compact design, low weight, and reduced power consumption have motivated their increasing use in several research areas, such as computer graphics, machine vision, and robotics.
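As background for the ambiguity-free range mentioned above, the following is a minimal sketch of the standard four-phase (four-bucket) demodulation behind lock-in ToF depth estimation; the exact sign convention of the arctangent varies between sensors, and the function names are illustrative.

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def tof_depth(a0, a1, a2, a3, f_mod):
    """Depth from the four phase-stepped correlation samples
    (0, 90, 180, 270 degrees) of a lock-in ToF pixel modulated at f_mod Hz."""
    phase = np.arctan2(a3 - a1, a0 - a2) % (2 * np.pi)  # phase shift in [0, 2*pi)
    return C * phase / (4 * np.pi * f_mod)              # depth in metres

def ambiguity_free_range(f_mod):
    """Depths beyond this wrap around to small values (phase ambiguity)."""
    return C / (2 * f_mod)

print(ambiguity_free_range(20e6))  # ~7.49 m at a typical 20 MHz modulation
```

Raising the modulation frequency improves depth precision but shrinks this unambiguous range, which is one of the trade-offs the review discusses.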
In this paper we introduce SMPLicit, a novel generative model to jointly represent body pose, shape, and clothing geometry. In contrast to existing learning-based approaches that require training specific models for each type of garment, SMPLicit can represent different garment topologies in a unified manner (e.g. from sleeveless tops to hoodies and open jackets), while controlling other properties such as garment size or tightness/looseness. We show our model to be applicable to a large variety of garments, including T-shirts, hoodies, jackets, shorts, pants, skirts, shoes, and even hair. The representational flexibility of SMPLicit builds upon an implicit model conditioned on the SMPL human body parameters and a learnable latent space that is semantically interpretable and aligned with the clothing attributes. The proposed model is fully differentiable, allowing its use within larger end-to-end trainable systems. In the experimental section, we demonstrate that SMPLicit can be readily used for fitting 3D scans and for 3D reconstruction from images of dressed people. In both cases we go beyond the state of the art by retrieving complex garment geometries, handling situations with multiple clothing layers, and providing a tool for easy outfit editing. To stimulate further research in this direction, we will make our code and model publicly available at http://www.iri.upc.edu/people/ecorona/smplicit/.
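The core idea, an implicit surface conditioned on body parameters and a clothing latent code, can be sketched as follows; the network size, the dimensions, and all names are illustrative assumptions, not the authors' released API.

```python
import torch
import torch.nn as nn

class ClothImplicitNet(nn.Module):
    """Sketch of an implicit clothing model in the spirit of SMPLicit: it maps
    a 3D query point, the SMPL body parameters, and a garment latent code to a
    distance-to-cloth value whose level set defines the garment surface."""
    def __init__(self, smpl_dim=82, latent_dim=18, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + smpl_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, points, smpl_params, z_cloth):
        # points: (N, 3); every query is conditioned on the body and on the
        # garment code, so one network covers different garment topologies.
        cond = torch.cat([smpl_params, z_cloth]).expand(points.shape[0], -1)
        return self.mlp(torch.cat([points, cond], dim=-1))

net = ClothImplicitNet()
pts = torch.rand(1024, 3)
d = net(pts, torch.zeros(82), torch.zeros(18))  # (1024, 1) distance values
# Gradients flow into z_cloth and smpl_params, which is what makes fitting
# the model to scans or images by gradient descent possible.
```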
Recent studies have revealed the key importance of modelling personality in robots to improve interaction quality by empowering them with social-intelligence capabilities. Most research relies on verbal and non-verbal features related to personality traits that are highly context-dependent. Hence, analysing how humans behave in a given context is crucial for evaluating which of those social cues are effective. For this purpose, we designed an assistive memory game in which participants played while obtaining support from an introverted or extroverted helper, either human or robot. In this context, we aim to (i) explore whether selected verbal and non-verbal social cues related to personality can be modelled in a robot, (ii) evaluate the efficiency of a statistical decision-making algorithm employed by the robot to provide adaptive assistance, and (iii) assess the validity of the similarity-attraction principle. Specifically, we conducted two user studies. In the human–human study (N=31), we explored the effects of the helper's personality on participants' performance and extracted distinctive verbal and non-verbal social cues from the human helper. In the human–robot study (N=24), we modelled the extracted social cues in the robot and evaluated their effect on participants' performance. Our findings showed that participants were able to distinguish between the robots' personalities, but not between levels of robot autonomy (Wizard-of-Oz vs. fully autonomous). Finally, we found that participants achieved better performance with a robot helper that had a personality similar to their own, or a human helper that had a different personality.
Within the next decades, robots will need to be able to execute a large variety of tasks autonomously in a large variety of environments. To reduce the resulting programming effort, a knowledge-enabled approach to robot programming can be adopted to organize information in reusable knowledge pieces. However, for ease of reuse, there needs to be an agreement on the meaning of terms. A common approach is to represent these terms using ontology languages that conceptualize the respective domain. In this work, we review projects that use ontologies to support robot autonomy. We systematically search for projects that fulfill a set of inclusion criteria and compare them with each other with respect to the scope of their ontology, the types of cognitive capabilities supported by the use of ontologies, and their application domain.
Teaching a Robot the Semantics of Assembly Tasks. Savarimuthu, Thiusius Rajeeth; Buch, Anders Glent; Schlette, Christian; et al. IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 48, no. 5, May 2018. Journal article, peer reviewed, open access.
We present a three-level cognitive system in a learning-by-demonstration context. The system allows for learning and transfer on the sensorimotor level as well as on the planning level. The fundamentally different data structures associated with these two levels are connected by an efficient mid-level representation based on so-called "semantic event chains." We describe details of the representations and quantify the effect of the associated learning procedures for each level under different amounts of noise. Moreover, we demonstrate the performance of the overall system through three demonstrations performed at a project review. The described system has a technology readiness level (TRL) of 4, which in an ongoing follow-up project will be raised to TRL 6.
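As an illustration of the mid-level representation, a semantic event chain can be thought of as a table of spatial relations between object pairs, compressed to the frames where some relation changes; the following toy encoding is our own sketch, not the paper's exact format.

```python
# Illustrative "semantic event chain" (SEC): for each pair of tracked segments,
# record its spatial relation (N = not touching, T = touching, A = absent) at
# the key frames of a demonstration. Two manipulations can then be compared by
# their relation transitions, independently of the raw sensorimotor data.
event_chain = {
    ("hand", "peg"):  ["N", "T", "T", "N"],   # grasp ... release
    ("peg",  "hole"): ["N", "N", "T", "T"],   # insertion happens at frame 3
    ("hand", "hole"): ["N", "N", "N", "N"],
}

def transitions(row):
    """Compress a relation row to its changes, the part a SEC actually keeps."""
    return [(a, b) for a, b in zip(row, row[1:]) if a != b]

print(transitions(event_chain[("hand", "peg")]))  # [('N', 'T'), ('T', 'N')]
```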
Compliant and soft hands have gained a lot of attention in the past decade because of their ability to adapt to the shape of objects, increasing their effectiveness for grasping. However, when it comes to grasping highly flexible objects such as textiles, we face the dual problem: it is the object that adapts to the shape of the hand or gripper. In this context, classic grasp analysis and grasping taxonomies are not suitable for describing grasps of textile objects. This article proposes a novel definition of textile object grasps that abstracts from the robotic embodiment or hand shape and recovers concepts from the early neuroscience literature on hand prehension skills. This framework enables us to identify which grasps have been used in the literature so far to perform robotic cloth manipulation, and allows for a precise definition of all the tasks that have been tackled in terms of manipulation primitives based on regrasps. In addition, we also review which grippers have been used. Our analysis shows that the vast majority of cloth manipulations have relied on only one type of grasp, and at the same time we identify several tasks that need a greater variety of grasp types to be executed successfully. Our framework is generic, provides a classification of cloth manipulation primitives, and can inspire gripper design and benchmark construction for cloth manipulation.
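To make this kind of embodiment-agnostic description concrete, here is a toy data model in the spirit of the proposed framework; the category and field names are ours, not the article's actual taxonomy.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List

class GraspedGeometry(Enum):
    """Illustrative embodiment-agnostic grasp categories: what part of the
    cloth is constrained, not how a particular hand or gripper is shaped."""
    POINT = "point"      # e.g. pinching a corner of a towel
    EDGE = "edge"        # e.g. holding along a hem
    SURFACE = "surface"  # e.g. pressing the cloth flat against a table

@dataclass
class ManipulationPrimitive:
    """One step of a cloth task expressed as a regrasp: release some grasps,
    acquire others. Whole tasks decompose into chains of such primitives."""
    release: List[GraspedGeometry]
    acquire: List[GraspedGeometry]

# A corner-based unfolding task as a chain of primitives:
unfold = [
    ManipulationPrimitive([], [GraspedGeometry.POINT, GraspedGeometry.POINT]),
    ManipulationPrimitive([GraspedGeometry.POINT], [GraspedGeometry.EDGE]),
]
```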
• We propose an algorithm that first identifies the type of garment and then searches for the two grasping points that allow a robot to bring the garment to a known pose.
• Using Maya, we generate a database of depth images from simulated garments. The whole process is automated by code we make public.
• We combine depth images of real garments with simulated data to train a Convolutional Neural Network that significantly improves state-of-the-art results in cloth recognition.
• To detect the visibility and Cartesian location of the reference points, we use two more Convolutional Neural Networks per garment. The garment manipulation we propose differs from the classical approach based on re-grasping the lowest hanging parts.
Identification and bi-manual handling of deformable objects, like textiles, is one of the most challenging tasks in the field of industrial and service robotics. Their unpredictable shape and pose make it very difficult to identify the type of garment and locate the most relevant parts that can be used for grasping. In this paper, we propose an algorithm that first identifies the type of garment and then searches for the two grasping points that allow a robot to bring the garment to a known pose. We show that with an active search strategy it is possible to grasp a garment directly at predefined grasping points, as opposed to the usual approach based on multiple re-graspings of the lowest hanging parts. Our approach uses a hierarchy of three Convolutional Neural Networks (CNNs) with different levels of specialization, trained with both synthetic and real images. The results obtained in the three steps (recognition, first grasping point, second grasping point) are promising. Experiments with real robots show that most of the errors are due to unsuccessful grasps and not to the localization of the grasping points, so a more robust grasping strategy is required.
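The three-level cascade described above might be wired together as in the following sketch; the tiny architectures, class counts, and names are placeholders rather than the released models.

```python
import torch
import torch.nn as nn

# Illustrative sketch of a three-level CNN cascade: one network recognises the
# garment type from a depth image of the hanging garment, then two
# type-specific networks predict the visibility and 3D location of the first
# and second grasping points.
class TinyCNN(nn.Module):
    def __init__(self, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, out_dim),
        )
    def forward(self, x):
        return self.net(x)

classifier = TinyCNN(out_dim=3)                 # 3 garment classes, for example
point_nets = {g: {"first": TinyCNN(4), "second": TinyCNN(4)} for g in range(3)}

depth = torch.rand(1, 1, 128, 128)              # depth image of a hanging garment
g = classifier(depth).argmax(dim=-1).item()     # step 1: garment type
p1 = point_nets[g]["first"](depth)              # step 2: (visibility, x, y, z)
# ... the robot grasps at p1, the garment is re-imaged, then:
p2 = point_nets[g]["second"](depth)             # step 3: second grasping point
```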
The rise of deep learning has brought remarkable progress in estimating hand geometry from images where the hands are part of the scene. This paper focuses on a new problem not explored so far: predicting how a human would grasp one or several objects, given a single RGB image of these objects. This problem has enormous potential in, e.g., augmented reality, robotics, or prosthetic design. In order to predict feasible grasps, we need to understand the semantic content of the image, its geometric structure, and all potential interactions with a hand physical model. To this end, we introduce a generative model that jointly reasons at all these levels and 1) regresses the 3D shape and pose of the objects in the scene; 2) estimates the grasp types; and 3) refines the 51 DoF of a 3D hand model to minimize a graspability loss. To train this model we build the YCB-Affordance dataset, which contains more than 133k images of 21 objects from the YCB-Video dataset. We have annotated these images with more than 28M plausible 3D human grasps according to a 33-class taxonomy. A thorough evaluation on synthetic and real images shows that our model can robustly predict realistic grasps, even in cluttered scenes with multiple objects in close contact.
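The final refinement step can be illustrated with a toy optimisation over the 51 hand parameters (e.g. 45 joint angles plus 6 global DoF in MANO-style models); the placeholder kinematics, the sphere-shaped object, and the simple contact/penetration objective below are our assumptions, not the paper's actual graspability loss.

```python
import torch

hand_pose = torch.zeros(51, requires_grad=True)  # 45 joint angles + 6 global DoF

def sdf_object(points):
    """Placeholder signed distance to the reconstructed object (negative inside);
    here a 5 cm sphere stands in for the regressed object shape."""
    return points.norm(dim=-1) - 0.05

def hand_surface_points(pose):
    """Placeholder forward kinematics returning sampled hand-surface points."""
    return pose[:45].reshape(15, 3) * 0.01 + torch.tensor([0.06, 0.0, 0.0])

opt = torch.optim.Adam([hand_pose], lr=1e-2)
for _ in range(200):
    d = sdf_object(hand_surface_points(hand_pose))
    contact = d.clamp(min=0).mean()          # pull the hand onto the surface
    penetration = (-d).clamp(min=0).mean()   # push it back out of the object
    loss = contact + 10.0 * penetration      # a simple "graspability" proxy
    opt.zero_grad(); loss.backward(); opt.step()
```

Because every term is differentiable in the pose parameters, the refinement slots into the end-to-end generative model described above.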
Human-object interaction is of great relevance for robots operating in human environments. However, state-of-the-art robotic hands are far from replicating human skills. It is, therefore, essential to study how humans use their hands in order to develop similar robotic capabilities. This article presents a deep dive into hand-object interaction and human demonstrations, highlighting the main challenges in this research area and suggesting desirable future developments. To this end, the article presents a general definition of the hand-object interaction problem together with a concise review of each of the main subproblems involved, namely sensing, perception, and learning. Furthermore, the article discusses the interplay between these subproblems and describes how their interaction in learning from demonstration contributes to the success of robot manipulation. In this way, the article provides a broad overview of the interdisciplinary approaches necessary for a robotic system to learn new manipulation skills by observing human behavior in the real world.
Socially assistive robots have the potential to augment and enhance a therapist's effectiveness in repetitive tasks such as cognitive therapies. However, their contribution has generally been limited because domain experts have not been fully involved in the entire design pipeline or in the automation of the robots' behaviour. In this article, we present aCtive leARning agEnt aSsiStive bEhaviouR (CARESSER), a novel framework that actively learns robotic assistive behaviour by leveraging the therapist's expertise (knowledge-driven approach) and their demonstrations (data-driven approach). By exploiting that hybrid approach, the presented method enables fast in situ learning, in a fully autonomous fashion, of personalised patient-specific policies. To evaluate our framework, we conducted two user studies in a daily care centre in which older adults affected by mild dementia and mild cognitive impairment (N = 22) were asked to solve cognitive exercises with the support of a therapist and later of a robot endowed with CARESSER. Results showed that: (i) the robot managed to keep the patients' performance stable during the sessions, even more so than the therapist; (ii) the assistance offered by the robot during the sessions eventually matched the therapist's preferences. We conclude that CARESSER, with its stakeholder-centric design, can pave the way for new AI approaches that learn by leveraging human–human interactions along with human expertise, which has the benefits of speeding up the learning process, eliminating the need to design complex reward functions, and avoiding undesired states.
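The hybrid knowledge-plus-demonstration idea can be illustrated, in a very reduced form, by seeding a preference over assistance levels with therapist-given pseudo-counts and updating it from observed demonstrations; the action set and the update rule below are our assumptions, not CARESSER's actual algorithm.

```python
import numpy as np

# Knowledge-driven prior: pseudo-counts per assistance level, elicited from
# the therapist. Data-driven evidence: how often each level helped in the
# therapist's demonstrations. A Dirichlet-style update blends the two.
ASSISTANCE = ["encouragement", "hint", "partial_solution", "full_solution"]

therapist_prior = np.array([4.0, 3.0, 2.0, 1.0])  # expert pseudo-counts
demo_successes = np.array([1, 5, 2, 0])           # observed in demonstrations

posterior = therapist_prior + demo_successes      # combine both sources
policy = posterior / posterior.sum()              # probability of each action

print(dict(zip(ASSISTANCE, policy.round(3))))
```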