Keyframe-based Learning from Demonstration
Akgun, Baris; Cakmak, Maya; Jiang, Karl ...
International Journal of Social Robotics, 11/2012, Volume 4, Issue 4
Journal Article
Peer reviewed
We present a framework for learning skills from novel types of demonstrations that have been shown to be desirable from a Human–Robot Interaction perspective. Our approach—Keyframe-based Learning from Demonstration (KLfD)—takes demonstrations that consist of keyframes: a sparse set of points in the state space that produces the intended skill when visited in sequence. Conventional trajectory demonstrations, or a hybrid of the two, are also handled by KLfD through a conversion to keyframes. Our method produces a skill model that consists of an ordered set of keyframe clusters, which we call Sequential Pose Distributions (SPD). The skill is reproduced by splining between clusters. We present results from two domains: mouse gestures in 2D and scooping, pouring and placing skills on a humanoid robot. KLfD performs similarly to existing LfD techniques when applied to conventional trajectory demonstrations. Additionally, we demonstrate that KLfD may be preferable when the demonstration type is suited to the skill.
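As an illustration, the cluster-then-spline pipeline described in this abstract can be sketched in a few lines. The equal-keyframe-count alignment, the Catmull–Rom spline, and the toy 2D "gesture" data are simplifying assumptions of this sketch, not details from the paper (the SPD model also keeps per-cluster covariances, omitted here).

```python
import numpy as np

def cluster_keyframes(demos):
    """Average aligned keyframes across demonstrations.

    Sketch assumption: every demo has the same number of keyframes in
    the same order, so clusters are formed by position-wise averaging.
    """
    demos = np.asarray(demos, dtype=float)   # (n_demos, n_keyframes, dim)
    return demos.mean(axis=0)                # cluster means, (n_keyframes, dim)

def catmull_rom(points, samples_per_segment=20):
    """Spline through the ordered cluster means (skill reproduction)."""
    p = np.asarray(points, dtype=float)
    p = np.vstack([p[0], p, p[-1]])          # pad so each segment has 4 controls
    out = []
    for i in range(1, len(p) - 2):
        p0, p1, p2, p3 = p[i - 1], p[i], p[i + 1], p[i + 2]
        for t in np.linspace(0.0, 1.0, samples_per_segment, endpoint=False):
            out.append(0.5 * ((2 * p1)
                              + (-p0 + p2) * t
                              + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t**2
                              + (-p0 + 3 * p1 - 3 * p2 + p3) * t**3))
    out.append(p[-2])
    return np.array(out)

# Two noisy 2D "mouse gesture" demos, three keyframes each (toy data).
demos = [[[0.0, 0.0], [1.0, 2.0], [3.0, 0.0]],
         [[0.2, 0.0], [1.1, 1.8], [2.8, 0.2]]]
means = cluster_keyframes(demos)
path = catmull_rom(means)                    # passes through every cluster mean
```

The spline interpolates (rather than approximates) the cluster means, matching the abstract's requirement that the skill is produced by visiting the keyframes in sequence.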
Full text
Available for:
EMUNI, FIS, FZAB, GEOZS, GIS, IJS, IMTLJ, KILJ, KISLJ, MFDPS, NLZOH, NUK, OILJ, PNG, SAZU, SBCE, SBJE, SBMB, SBNM, UKNU, UL, UM, UPUK, VKSCE, ZAGLJ
A method to learn and reproduce robot force interactions in a human–robot interaction setting is proposed. The method allows a robotic manipulator to learn to perform tasks that require exerting forces on external objects by interacting with a human operator in an unstructured environment. This is achieved by learning two aspects of a task: positional and force profiles. The positional profile is obtained from task demonstrations via kinesthetic teaching. The force profile is obtained from additional demonstrations via a haptic device. A human teacher uses the haptic device to input the desired forces that the robot should exert on external objects during the task execution. The two profiles are encoded as a mixture of dynamical systems, which is used to reproduce the task satisfying both the positional and force profiles. An active control strategy based on task-space control with variable stiffness is then proposed to reproduce the skill. The method is demonstrated with two experiments in which the robot learns an ironing task and a door-opening task.
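The core of a variable-stiffness, task-space control law of the kind this abstract describes can be sketched in one line. The function name, the scalar gains, and the superposition of a desired interaction force on a positional tracking term are illustrative assumptions; the paper derives its gains from the learned mixture of dynamical systems, which is omitted here.

```python
import numpy as np

def impedance_command(x, xdot, x_des, f_des, k, d):
    """Task-space force command: track a positional profile while
    superimposing a desired interaction force.

    k (stiffness) and d (damping) may vary over time, giving the
    "variable stiffness" behavior; here they are plain scalars.
    """
    return k * (x_des - x) - d * xdot + f_des

# At the target pose with zero velocity, only the desired force remains
# (e.g. pressing down on the ironing board with 5 N).
cmd = impedance_command(np.zeros(3), np.zeros(3), np.zeros(3),
                        np.array([0.0, 0.0, -5.0]), k=200.0, d=10.0)
```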
We present a framework that allows a robot manipulator to learn how to execute structured tasks from human demonstrations. The proposed system combines physical human–robot interaction with attentional supervision in order to support kinesthetic teaching, incremental learning, and cooperative execution of hierarchically structured tasks. In the proposed framework, the human demonstration is automatically segmented into basic movements, which are related to a task structure by an attentional system that supervises the overall interaction. The attentional system makes it possible to track the human demonstration at different levels of abstraction and supports implicit non-verbal communication during both the teaching and the execution phase. Attention manipulation mechanisms (e.g. object and verbal cueing) can be exploited by the teacher to facilitate the learning process. On the other hand, the attentional system enables flexible and cooperative task execution. The paper describes the overall system architecture and details how cooperative tasks are learned and executed. The proposed approach is evaluated in a human–robot co-working scenario, showing that the robot is able to rapidly learn and flexibly execute structured tasks.
In an industrial production context, when a robotic arm is assigned a new task, it must be re-programmed by specialized personnel with the necessary skills and knowledge. In addition, the robot remains offline while being re-programmed, negatively affecting productivity. Mixed reality opens up the opportunity for a production worker with no programming skills to teach a robot by interacting with the hologram of its digital twin, thus not interfering with the physical robot during re-programming. We describe the design and implementation of a mixed-reality interface for specifying trajectories for a robot end-effector by moving the hologram of its digital twin by hand. The interface supports integrated control of position and orientation, so that the user can configure the end-effector location and orientation simultaneously. We also report a user study (n = 14) characterizing this interface in terms of usability, user experience, and the estimated temporal efficiency it can provide when programming robot trajectories.
We present an approach for learning sequential robot skills through kinesthetic teaching. In our work, finding the transitions between consecutive movement primitives is treated as a multiclass classification problem. We show how the goal parameters of linear attractor movement primitives can be learned from manually segmented and labeled demonstrations, and how the observed movement primitive order can help to improve the movement reproduction. The improvement is achieved by restricting the classification result to the currently activated movement primitive and its possible successors in a graph representation of the sequence, which is also learned from the demonstrations. The approach is validated with three experiments using a Barrett WAM robot.
•We present an approach for learning sequential skills from kinesthetic demonstrations.
•A sequential skill involves the ability to sequence movement primitives (MPs) correctly.
•Learning the transition behavior between MPs is treated as a classification problem.
•The goals of the MPs are learned from demonstrations.
•The approach is validated in three experiments using a Barrett WAM robot.
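The graph-restriction step from this abstract, restricting the classifier's output to the current movement primitive and its successors, can be sketched as follows. The MP names, the score dictionary, and the classifier assumed to produce it are hypothetical.

```python
def restricted_predict(scores, current_mp, successors):
    """Pick the best-scoring movement primitive among those the
    sequence graph allows from the current one.

    scores: dict MP name -> classifier score (higher is better);
    successors: dict MP name -> set of MPs reachable next. Staying
    in the current MP is always allowed.
    """
    allowed = {current_mp} | successors.get(current_mp, set())
    return max(allowed, key=lambda mp: scores.get(mp, float("-inf")))

# Hypothetical classifier output and demonstration-derived graph.
scores = {"reach": 0.9, "grasp": 0.5, "pour": 0.7}
successors = {"reach": {"grasp"}, "grasp": {"pour"}}
choice = restricted_predict(scores, "grasp", successors)
```

Here the unrestricted classifier would pick "reach" (highest score), but the learned graph rules it out after "grasp", so the prediction falls to "pour", which is how the sequence knowledge improves reproduction.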
Smart manufacturing requires easily reconfigurable robotic systems that increase flexibility in the presence of market uncertainties by reducing set-up times for new tasks. One enabler of fast reconfigurability is intuitive robot programming methods. On the one hand, offline skill-based programming (OSP) allows the definition of new tasks by sequencing pre-defined, parameterizable building blocks, termed skills, in a graphical user interface. On the other hand, programming by demonstration (PbD) is a well-known technique that uses kinesthetic teaching for intuitive robot programming; this work presents an approach to automatically recognize skills from the human demonstration and parameterize them using the recorded data. The approach further unifies both programming modes of OSP and PbD with the help of an ontological knowledge base and empowers the end user to choose the preferred mode for each phase of the task. In the experiments, we evaluate two scenarios with different sequences of programming modes being selected by the user to define a task. In each scenario, skills are recognized by a data-driven classifier and automatically parameterized from the recorded data. The fully defined tasks consist of both manually added and automatically recognized skills and are executed in the context of a realistic industrial assembly environment.
Learning from demonstration (LfD) is a well-established method of movement demonstration; however, the performance of different LfD approaches during fine movement generation is still unknown. In this study, we compare kinesthetic teaching, teleoperation, and cooperative robot tool approaches on two different tasks where submillimeter accuracy is required. Additionally, we analyze the influence of a visual enhancement feature on each of the approaches and the influence of a spatial scaling feature on the teleoperation approach. The participants are a well-balanced group (regarding age, gender, and expertise), with 65% having no previous experience using robots. In our study, we found that all approaches achieved a submillimeter median positioning error. However, when no additional features are used, the cooperative robot tool (CRT) approach outperforms the other approaches, since it consistently achieves the lowest positioning error. Besides the positioning error, the generated velocity and the participants' feedback (via a questionnaire) also indicate that it is the most suitable approach for accurate submillimeter movement generation. We also conclude that the visual enhancement feature and the spatial scaling feature have a significant influence on the performance of all approaches. When the two features are used, the generated positioning error drops considerably. When the visual enhancement feature is used, kinesthetic teaching performs in some cases as well as the CRT approach, while the teleoperation approach with the spatial scaling feature in some cases even outperforms the CRT approach. However, we still consider the CRT to be the best approach for fine movement generation, since these features cannot be used in every possible scenario.
Programming by Demonstration (PbD) is used to transfer a task from a human teacher to a robot, where it is of high interest to understand the underlying structure of what has been demonstrated. Such a demonstrated task can be represented as a sequence of so-called actions or skills. This work focuses on the recognition part of the task transfer. We propose a framework that recognizes skills online during a kinesthetic demonstration by means of position and force–torque (wrench) sensing; it therefore works independently of visual perception. The recognized skill sequence constitutes a task representation that lets the user intuitively understand what the robot has learned. The skill recognition algorithm combines symbolic skill segmentation, which makes use of pre- and post-conditions, with data-driven prediction, which uses support vector machines for skill classification. This combines the advantages of both techniques: the inexpensive evaluation of symbols and the data-driven classification of complex observations. The framework is thus able to detect a larger variety of skills, such as manipulation and force-based skills that can be used in assembly tasks. The applicability of our framework is proven in a user study that achieves 96% accuracy in online skill recognition and highlights the benefits of the generated task representation in comparison to a baseline representation. The results show that the task load could be reduced, trust and explainability could be increased, and that the users were able to debug the robot program using the generated task representation.
•Task segmentation divides a demonstrated task into a sequence of skills
•Symbolic skill recognition evaluates predefined pre- and postconditions
•Data-driven (sub-symbolic) skill recognition uses a trained classifier
•Recognition pipelines run concurrently to improve segmentation accuracy
•Online segmentation immediately constructs a visual task representation
•A user study evaluates the approach and its online task representation
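The hybrid recognition scheme from this record, cheap symbolic filtering followed by data-driven scoring, might be sketched like this. The predicates, skill names, wrench summary, and the `classify` callable are placeholders standing in for the paper's pre-/post-conditions and SVM.

```python
def recognize_skill(wrench_window, candidates, preconditions, classify):
    """Combine symbolic filtering with a data-driven classifier.

    preconditions: dict skill -> predicate over the sensed window
    (the inexpensive symbolic check); classify: scorer mapping the
    window to {skill: score} (the data-driven part, e.g. an SVM
    decision function). Only symbolically feasible skills compete.
    """
    feasible = [s for s in candidates if preconditions[s](wrench_window)]
    if not feasible:
        return None
    scores = classify(wrench_window)
    return max(feasible, key=lambda s: scores.get(s, float("-inf")))

def classify(window):                     # stand-in for a trained classifier
    return {"insert": 0.6, "transport": 0.9}

preconditions = {
    "insert": lambda w: w["force_z"] < -5.0,        # pressing down hard
    "transport": lambda w: abs(w["force_z"]) < 5.0,  # free motion
}
window = {"force_z": -12.0}               # hypothetical sensed wrench summary
skill = recognize_skill(window, ["insert", "transport"], preconditions, classify)
```

Although the classifier alone favors "transport", the force reading violates its precondition, so the symbolic stage keeps the prediction at "insert": this filtering is what the abstract credits for handling force-based assembly skills without visual perception.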
In human–robot comanipulation, virtual guides are an important tool used to assist the human worker, as they constrain the movement of the robot to improve task accuracy and to avoid undesirable effects such as collisions with the environment. Consequently, the physical effort and cognitive overload are reduced during the accomplishment of comanipulative tasks. However, the construction of virtual guides often requires expert knowledge and modeling of the task, which restricts their usefulness to scenarios with fixed constraints. Moreover, few approaches have addressed the implementation of virtual guides enforcing orientation constraints, and those that have treat translation and orientation separately, so there is no synchronization of the translational and rotational motions. To overcome these challenges and enhance the programming flexibility of virtual guides, we present a new framework that allows the user to create 6D virtual guides through XSplines, which we define as a combination of Akima splines for the translation component and spherical cubic interpolation of quaternions for the orientation component. For complex tasks, the user is able to initially define a 3D virtual guide and then use this assistance in translational motion to concentrate only on defining the orientations along the path. It is also possible for the user to modify a particular point or portion of a guide while being assisted by it. We demonstrate in an industrial scenario that these innovations provide an intuitive solution to extend the use of virtual guides to 6 degrees of freedom and increase the human worker’s comfort during the programming phase of these guides in an assisted human–robot comanipulation context.
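The orientation component of such a guide rests on spherical interpolation of quaternions. As a minimal illustration, plain slerp, the building block that spherical cubic interpolation chains together, can be written as follows; the quaternion convention (w, x, y, z) is an assumption of this sketch.

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions (w, x, y, z)."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = np.dot(q0, q1)
    if dot < 0.0:                 # take the shorter arc on the 4-sphere
        q1, dot = -q1, -dot
    if dot > 0.9995:              # nearly parallel: lerp and renormalize
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1 - t) * theta) * q0
            + np.sin(t * theta) * q1) / np.sin(theta)

# 90-degree rotation about z, interpolated halfway from the identity.
q0 = np.array([1.0, 0.0, 0.0, 0.0])
q1 = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
q_half = slerp(q0, q1, 0.5)       # a 45-degree rotation about z
```

Interpolating on the quaternion sphere (rather than componentwise) is what keeps the rotational motion at constant angular velocity, which is why the abstract pairs it with Akima splines for translation instead of reusing the same interpolant.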
•A controller for the kinesthetic modification of a kinematic behavior encoded by Dynamic Movement Primitives (DMP) is proposed.
•The controller enables the human teacher to haptically “inspect” the spatial properties of the learned behavior in SE(3).
•The user can significantly modify segments of the behavior.
•Experiments on teaching a variant of an emulated milling task with a KUKA LWR4+ manipulator are performed.
•Results are compared with the case of using a gravity-compensated robot agnostic of the previously learned task.
•It is shown that the time duration of teaching and the user’s cognitive load are reduced.
Precise programming of robots for industrial tasks is inflexible to variations and time-consuming. Teaching a kinematic behavior by demonstration and encoding it with dynamical systems that are robust with respect to perturbations is proposed to address this issue. Given a kinematic behavior encoded by Dynamic Movement Primitives (DMP), this work proposes a passive control scheme for assisting kinesthetic modifications of the learned behavior in task variations. It employs penetrable spherical Virtual Fixtures (VFs) around the DMP’s virtual evolution that follows the teacher’s motion. The controller enables the user to haptically ‘inspect’ the spatial properties of the learned behavior in SE(3) and significantly modify it at any required segment, while facilitating the following of already learned segments. A demonstration within the VFs could signify that the kinematic behavior is taught correctly and could lead to autonomous execution, with the DMP generating the newly learned reference commands. The proposed control scheme is theoretically proved to be passive and experimentally validated with a KUKA LWR4+ robot. Results are compared with the case of using a gravity-compensated robot agnostic of the previously learned task. It is shown that the time duration of teaching and the user’s cognitive load are reduced.
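For readers unfamiliar with the DMPs that several of these records build on, a minimal 1-D discrete DMP rollout is sketched below. The gain choices and radial-basis parameterization follow common DMP conventions and are not this paper's SE(3) formulation; with zero forcing weights the system simply converges to the goal.

```python
import numpy as np

def rollout_dmp(y0, g, w, tau=1.0, dt=0.01, alpha=25.0):
    """Integrate a 1-D discrete Dynamic Movement Primitive.

    Transformation system: tau*zdot = alpha*(beta*(g - y) - z) + f(x),
    tau*ydot = z; canonical system: tau*xdot = -ax*x. The forcing term
    f is a radial-basis mixture weighted by w (learned from a demo in
    a full implementation; zero weights give a plain goal attractor).
    """
    beta, ax = alpha / 4.0, 1.0              # critically damped convention
    centers = np.exp(-ax * np.linspace(0, 1, len(w)))
    widths = 1.0 / (np.diff(centers) ** 2 + 1e-6)
    widths = np.append(widths, widths[-1])
    y, z, x = float(y0), 0.0, 1.0
    traj = [y]
    for _ in range(int(1.0 / dt)):
        psi = np.exp(-widths * (x - centers) ** 2)
        f = x * (g - y0) * (psi @ w) / (psi.sum() + 1e-10)
        zdot = (alpha * (beta * (g - y) - z) + f) / tau
        z += zdot * dt
        y += (z / tau) * dt
        x += (-ax * x / tau) * dt
        traj.append(y)
    return np.array(traj)

# With zero forcing weights the DMP behaves as a smooth goal attractor.
traj = rollout_dmp(0.0, 1.0, np.zeros(10))
```

It is this attractor structure that makes DMPs "robust with respect to perturbations" in the abstract's sense: the state is pulled back toward the encoded path, while the teacher's kinesthetic corrections reshape the forcing weights.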