Common machine-learning (ML) approaches for scene classification require a large amount of training data. However, for the classification of depth sensor data, in contrast to image data, relatively few databases are publicly available, and the manual generation of semantically labeled 3D point clouds is an even more time-consuming task. To simplify the training-data generation process for a wide range of domains, we have developed an add-on package for the open-source 3D modeling software Blender, which enables a largely automated generation of semantically annotated point-cloud data in virtual 3D environments. In this paper, we focus on the classical depth-sensing techniques Light Detection and Ranging (LiDAR) and Sound Navigation and Ranging (Sonar). Within the add-on, different depth sensors can be loaded from presets, customized sensors can be implemented, and different environmental conditions (e.g., the influence of rain or dust) can be simulated. The semantically labeled data can be exported to various 2D and 3D formats and are thus optimized for different ML applications and visualizations. In addition, semantically labeled images can be exported using the rendering functionalities of Blender.
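To give a flavor of what such a simulation produces, here is a minimal, self-contained sketch of a virtual 2D LiDAR scan with per-point semantic labels and Gaussian range noise. It uses plain numpy and a hard-coded toy scene (a pillar inside a circular wall) rather than Blender's ray-casting API; all scene parameters and names are invented for illustration, not taken from the add-on.

```python
import numpy as np

def simulate_lidar_scan(n_rays=360, noise_sigma=0.01, seed=0):
    """Cast horizontal rays from the origin against a toy labeled scene
    (a circular pillar and a surrounding circular wall) and return hit
    points with per-point semantic labels plus Gaussian range noise."""
    rng = np.random.default_rng(seed)
    angles = np.linspace(0.0, 2 * np.pi, n_rays, endpoint=False)
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)

    wall_r = 10.0                                   # wall around the sensor
    pillar_c, pillar_r = np.array([4.0, 0.0]), 1.0  # labeled obstacle

    points, labels = [], []
    for d in dirs:
        # ray-circle intersection with the pillar: |t*d - c|^2 = r^2
        b = -2.0 * d @ pillar_c
        c = pillar_c @ pillar_c - pillar_r**2
        disc = b * b - 4 * c
        t_pillar = (-b - np.sqrt(disc)) / 2 if disc >= 0 else np.inf
        t = t_pillar if 0 < t_pillar < wall_r else wall_r
        label = 1 if t < wall_r else 0              # 1 = pillar, 0 = wall
        t_noisy = t + rng.normal(0.0, noise_sigma)  # simulated range noise
        points.append(t_noisy * d)
        labels.append(label)
    return np.array(points), np.array(labels)

points, labels = simulate_lidar_scan()
```

Environmental effects such as rain or dust could be mimicked in the same spirit by increasing `noise_sigma` or randomly dropping returns.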
Physical human-robot interaction tasks require robots that can detect and react to external perturbations caused by the human partner. In this contribution, we present a machine learning approach for detecting, estimating, and compensating for such external perturbations using only input from standard sensors. This machine learning approach makes use of Dynamic Mode Decomposition (DMD), a data processing technique developed in the field of fluid dynamics, which is applied to robotics for the first time. DMD is able to isolate the dynamics of a nonlinear system and is therefore well suited for separating noise from regular oscillations in sensor readings during cyclic robot movements. In a training phase, a DMD model for behavior-specific parameter configurations is learned. During task execution, the robot must estimate the external forces exerted by a human interaction partner. We compare the DMD-based approach to other interpolation schemes. A variant, sparsity-promoting DMD, is particularly well suited for high-noise sensors. The results of a user study show that our DMD-based machine learning approach can be used to design physical human-robot interaction techniques that not only result in robust robot behavior but also offer high usability.
Close physical interaction between robots and humans is a particularly challenging aspect of robot development. For successful interaction and cooperation, the robot must have the ability to adapt its behavior to the human counterpart. Based on our earlier work, we present and evaluate a computationally efficient machine learning algorithm that is well suited for such close-contact interaction scenarios. We show that this algorithm helps to improve the quality of the interaction between a robot and a human caregiver. To this end, we present two human-in-the-loop learning scenarios that are inspired by human parenting behavior, namely, an assisted standing-up task and an assisted walking task.
Programming of complex motor skills for humanoid robots can be a time-intensive task, particularly within conventional textual or GUI-driven programming paradigms. Addressing this drawback, we propose a new programming-by-demonstration method called Kinesthetic Bootstrapping for teaching motor skills to humanoid robots by means of intuitive physical interactions. Here, “programming” simply consists of manually moving the robot’s joints so as to demonstrate the skill in mind. The bootstrapping algorithm then generates a low-dimensional model of the demonstrated postures. To find a trajectory through this posture space that corresponds to a robust robot motion, a learning phase takes place in a physics-based virtual environment. The virtual robot’s motion is optimized via a genetic algorithm and the result is transferred back to the physical robot. The method has been successfully applied to the learning of various complex motor skills such as walking and standing up.
In many settings, e.g. physical human-robot interaction, robotic behavior must be made robust against more or less spontaneous application of external forces. Typically, this problem is tackled by means of special-purpose force sensors which are, however, not available on many robotic platforms. In contrast, we propose a machine learning approach suitable for more common, although often noisy, sensors. This machine learning approach makes use of Dynamic Mode Decomposition (DMD), which is able to extract the dynamics of a nonlinear system. It is therefore well suited to separate noise from regular oscillations in sensor readings during cyclic robot movements under different behavior configurations. We demonstrate the feasibility of our approach with an example where physical forces are exerted on a humanoid robot during walking. In a training phase, a snapshot-based DMD model for behavior-specific parameter configurations is learned. During task execution, the robot must detect and estimate the external forces exerted by a human interaction partner. We compare the DMD-based approach to other interpolation schemes and show that the former outperforms the latter, particularly in the presence of sensor noise. We conclude that DMD, which has so far been used mostly in other fields of science, particularly fluid mechanics, is also a highly promising method for robotics.
Visualization of the melt immersion process at time step t = 1.2 s in the XSITE CAVE. The image shows the upper half of the filter in its original orientation. Entrapped gas bubbles are visualized as blue blobs on the surfaces of the filter and inside pores. Photo credit: Amjad Asad (article number 2100753), Institute of Mechanics and Fluid Dynamics, and Henry Lehmann, Virtual Reality and Multimedia Group, TU Bergakademie Freiberg.
Herein, the immersion process of a ceramic foam filter in a steel melt is investigated by means of numerical simulations, which are mainly based on the volume-of-fluid approach. The geometry of the filters used is modeled using an artificially generated beam model, which is convolved with a Gaussian kernel. This modeling approach enables the generation of filter geometries with, e.g., pore density and strut thickness similar to real ceramic foam filters. The main scope of the article is to show the effect of the immersion velocity of the filter on the formation of gas bubbles inside the pore cavities of the ceramic filter. Moreover, the influence of the contact angle on the volume fraction of gas bubbles that remain in the filter is investigated. For better understanding, the numerical results are supported by 3D visualization and virtual reality.
Herein, the immersion process of a ceramic foam filter is investigated. The main aim of the article is to show the effect of the immersion velocity and the contact angle on the bubble entrapment on the filter walls.
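The geometry-generation idea, rasterizing an artificial beam model, convolving it with a Gaussian kernel, and thresholding to obtain struts of controllable thickness, can be sketched as follows. This is a toy, numpy-only version; the grid size, beam count, kernel width, and threshold are illustrative choices, not values from the article.

```python
import numpy as np

def gaussian_blur_3d(vol, sigma):
    """Separable 3D Gaussian convolution implemented with numpy only."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    for axis in range(3):
        vol = np.apply_along_axis(
            lambda m: np.convolve(m, k, mode="same"), axis, vol)
    return vol

def make_foam(n=48, n_beams=30, sigma=1.0, threshold=0.08, seed=0):
    """Rasterize random line segments ("beams") into a voxel grid, blur
    with a Gaussian kernel and threshold: the kernel width and threshold
    together control the strut thickness of the resulting skeleton."""
    rng = np.random.default_rng(seed)
    vol = np.zeros((n, n, n))
    for _ in range(n_beams):
        p0, p1 = rng.random((2, 3)) * (n - 1)
        for s in np.linspace(0, 1, 4 * n):   # sample points along the beam
            i, j, k = (p0 + s * (p1 - p0)).astype(int)
            vol[i, j, k] = 1.0
    solid = gaussian_blur_3d(vol, sigma) > threshold
    porosity = 1.0 - solid.mean()
    return solid, porosity

solid, porosity = make_foam()
```

In the article's setting the beam model mimics real foam topology (pore density, strut connectivity), whereas the beams here are placed uniformly at random.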
Objective assessment in long-term rehabilitation under real-life recording conditions is a challenging task. We propose a data-driven method to evaluate changes in motor function under uncontrolled, long-term conditions with the low-cost Microsoft Kinect sensor. Instead of using human ratings as ground-truth data, we propose kinematic features of hand motion, healthy reference trajectories derived by principal component regression, and methods taken from machine learning to analyze the progression of motor function. We demonstrate the capability of this approach on datasets of repetitive, unrestrained bimanual drumming movements in three-dimensional space, recorded from stroke survivors, patients suffering from Parkinson's disease, and a healthy control group. We present processing steps to eliminate the influence of varying recording setups under real-life conditions and offer visualization methods to support clinicians in the evaluation of treatment effects.
The combination of additive manufacturing and the replication technique enables the development of new ceramic foam filters (CFFs) for the filtration of metal melts based on computer‐generated templates. This article presents a numerical study on the sensitivity of filtration performance with respect to different geometric modifications applied to an artificial monodisperse base structure. Three different geometric modifications are implemented, namely, elliptical elongation and flattening of the strut cross section with respect to the flow direction, additional finger‐like struts protruding into the pore cavity, and the addition of deliberately closed windows. All modifications are implemented for overall porosities of 70–90%. The performance of the new structures is evaluated for continuous casting of aluminum by comparing the hydraulic tortuosity, the permeability, the Forchheimer coefficient, and the filtration coefficient, which are obtained from detailed pore‐scale simulations of the melt flow and inclusion transport using an Euler–Lagrange approach. For the fast determination of the permeability coefficients, a novel and remarkably simple model for the prediction of the Forchheimer coefficient is described. The investigation shows that geometric modifications to open‐cell foams can potentially improve the filtration performance without a significant decrease in filter porosity and can be considered as templates for the design of efficient CFFs.
Exploring novel designs of ceramic foam filters for metal melt filtration, the article presents a method to add 3D‐printable geometric modifications to random foams and investigates their performance in numerical simulations.
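For the permeability characterization mentioned above, the underlying Darcy–Forchheimer relation and the least-squares recovery of its coefficients from pressure-drop data can be illustrated as follows. The parameterization dp/L = (mu/k1)·v + (rho/k2)·v² and all material constants are illustrative; the article's own fast prediction model for the Forchheimer coefficient is not reproduced here.

```python
import numpy as np

# Darcy-Forchheimer law: dp_per_length = (mu/k1)*v + (rho/k2)*v**2.
# Given pressure-drop measurements at several superficial velocities,
# recover the permeability k1 and Forchheimer coefficient k2 by a
# linear least-squares fit in the coefficients (mu/k1, rho/k2).
mu, rho = 1.3e-3, 2350.0            # viscosity, density (illustrative)
k1_true, k2_true = 5e-8, 5e-4       # "ground truth" for the synthetic data

v = np.linspace(0.01, 0.5, 20)                      # superficial velocities
dp = mu / k1_true * v + rho / k2_true * v**2        # synthetic measurements

A = np.stack([v, v**2], axis=1)                     # design matrix
(c1, c2), *_ = np.linalg.lstsq(A, dp, rcond=None)
k1_fit, k2_fit = mu / c1, rho / c2
```

In the article the pressure drops come from pore-scale flow simulations on the candidate geometries rather than from an analytic curve, but the fitting step is the same in spirit.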
We propose an imitation learning methodology that allows robots to seamlessly retrieve and pass objects to and from human users. Instead of hand-coding interaction parameters, we extract relevant information, such as joint correlations and spatial relationships, from a single task demonstration by two humans. At the center of our approach is an interaction model that enables a robot to generalize an observed demonstration spatially and temporally to new situations. To this end, we propose a data-driven method for generating interaction meshes that link both interaction partners to the manipulated object. The feasibility of the approach is evaluated in a within-subject user study, which shows that human–human task demonstration can lead to more natural and intuitive interactions with the robot.