MoFTaG - Model-free, Flexible Trajectory Generation

The ever-increasing demand for just-in-time and custom-tailored production calls for fast and flexibly reconfigurable workcells and human-robot co-worker applications. Redundant robot arms, such as the KUKA lightweight robot IV, will play a key role in their realization, as they allow for very flexible motion control. However, the gained flexibility comes at the cost of increased engineering effort, because it requires more complex control modes and explicit modeling steps, e.g., to define criteria for redundancy resolution. For frequently changing tasks and/or environmental constraints, current technology requires costly and tedious re-programming by experts for each new task.

The ECHORD experiment MoFTaG aims to facilitate and speed up this inefficient process by using model-free learning methods that only require non-expert users to record a limited number of data points in task-relevant areas of the workspace through kinesthetic interaction. Inverse kinematics, including task-specific redundancy resolution, and trajectory generation between several parts of the workspace are learned fully automatically (a minimal sketch of such a learned mapping is given after the list below). During the experiment we systematically test and evaluate the method in a concrete co-worker scenario with respect to the following aspects:

  • the stability of learning relative to the size of the workspace and the desired amount of generalization,
  • the feasibility of the teaching procedure for non-expert users, and
  • the inclusion of online feedback from a perceptual component that adapts trajectories for online obstacle avoidance. 
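
The following minimal sketch (illustrative only, not the project's actual learning architecture) shows how such a model-free inverse kinematics mapping could be learned from a small set of kinesthetically recorded samples: because each recorded joint configuration already encodes the posture chosen by the teacher, a simple regression from end-effector positions to joint angles implicitly reproduces that redundancy resolution. File names and parameter values are hypothetical.

    # Minimal sketch: learn a direct mapping from end-effector positions to
    # joint configurations from a few kinesthetically recorded samples.
    # The recorded joint angles already encode the taught null-space posture,
    # so the regression reproduces that redundancy resolution implicitly.
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    # Hypothetical recorded data: each sample pairs an end-effector position
    # (x, y, z) with the seven joint angles of the redundant arm at that pose.
    ee_positions = np.load("taught_positions.npy")   # shape (N, 3), assumed file
    joint_angles = np.load("taught_joints.npy")      # shape (N, 7), assumed file

    # Fit one smooth, model-free regressor on the taught workspace region.
    ik_model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=10.0)
    ik_model.fit(ee_positions, joint_angles)

    # Query the learned inverse kinematics for a new target inside that region.
    target = np.array([[0.45, 0.10, 0.30]])
    q = ik_model.predict(target)[0]   # joint configuration with the taught posture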

 

FlexIRob at Harting: A user study on physical human-robot interaction

Thanks to our collaboration partner Harting, we were able to assess the robustness and feasibility of our method in a large user study with N=49 participants from Harting's production site. It turned out that physical guidance of a redundant robot in the form of kinesthetic teaching is much more than just a “teach-in”: it can provide both a task trajectory and a smart, task-appropriate redundancy resolution. Moreover, users benefit from the increased flexibility of redundant arms only if they are supported by a structured and helpful interaction design, such as the one proposed in this project. 

 

Online trajectory adaptation with 3D scene interpretation

For online obstacle avoidance we combine previous work on 3D scene interpretation with existing work on dynamical-systems-based online trajectory modulation. The articulated scene model approach segments the visible 3D scene into a static background layer and a dynamic obstacle layer. Movement generation is implemented using dynamical systems, which realize stable attractor dynamics with fixed points or limit cycles learned from human demonstrations, as in the kinesthetic teaching scenario above. During execution, they generalize to new situations and can be modulated to incorporate detected obstacles, generating smooth movements that avoid them.
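
As an illustration of the modulation idea (a simplified sketch, not the project's implementation), the following code generates motion from a stable point-attractor dynamical system and damps the velocity component that points towards a detected obstacle, so the executed trajectory bends smoothly around it. Obstacle position and radius are assumed to be provided by the scene-interpretation component; all names and gains are hypothetical.

    # Simplified sketch: point-attractor dynamics with online obstacle modulation.
    import numpy as np

    def attractor_velocity(x, goal, k=2.0):
        """Stable linear point-attractor dynamics towards the goal."""
        return -k * (x - goal)

    def modulate(v, x, obstacle, radius, safety=1.5):
        """Damp the velocity component pointing into a nearby obstacle."""
        d = x - obstacle
        dist = np.linalg.norm(d)
        if dist > safety * radius:
            return v                      # far away: leave the dynamics untouched
        n = d / (dist + 1e-9)             # unit vector pointing away from the obstacle
        v_n = np.dot(v, n)                # velocity component along that direction
        if v_n < 0.0:                     # moving towards the obstacle: damp it
            scale = (dist - radius) / (safety * radius - radius)
            v = v - (1.0 - max(scale, 0.0)) * v_n * n
        return v

    # Integrate the modulated dynamics online (hypothetical scene data).
    x, goal = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
    obstacle, radius, dt = np.array([0.5, 0.05, 0.0]), 0.1, 0.01
    trajectory = [x.copy()]
    for _ in range(500):
        v = modulate(attractor_velocity(x, goal), x, obstacle, radius)
        x = x + dt * v
        trajectory.append(x.copy())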

 


Contact:

Prof. Jochen Steil
Dr.-Ing. Agnes Swadzba
Dr.-Ing. Christian Emmerich