ML for movement modelling

Based on measured data, we create machine-learning models of the kinematic and dynamic properties of movement in robots and humans. Our methodologies focus on recurrent neural networks, dynamic movement primitives, autoencoders, and deep learning.

Apart from these models, we also create biomechanical models of hands and arms, based on measurements of the human movement system.

Movement in latent space

In the standard formulation, dynamic movement primitives (DMPs) suffer from suboptimal generalisation when used in configuration space, or from the curse of dimensionality when used in task space. To address this problem we propose a model called the autoencoded dynamic movement primitive (AEDMP), which uses deep autoencoders to find a representation of movement in a latent feature space, in which DMPs can generalise well. The architecture embeds the DMP into such an autoencoder and allows the whole model to be trained as one unit.
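
A minimal sketch of this idea in Python follows, assuming joint-angle trajectories of shape (T, n_joints); the network sizes, DMP parameters, and toy data are illustrative, and for brevity the autoencoder and the DMP are fitted in two stages rather than trained as one unit as in AEDMP proper.

    import numpy as np
    import torch
    import torch.nn as nn

    class Autoencoder(nn.Module):
        def __init__(self, n_joints, n_latent=2):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(n_joints, 64), nn.Tanh(),
                                     nn.Linear(64, n_latent))
            self.dec = nn.Sequential(nn.Linear(n_latent, 64), nn.Tanh(),
                                     nn.Linear(64, n_joints))

        def forward(self, q):
            z = self.enc(q)
            return self.dec(z), z

    def fit_dmp(z, dt, n_basis=20, alpha=25.0, beta=6.25):
        # fit DMP forcing-term weights to one latent trajectory z of shape (T,)
        # by locally weighted regression (tau = 1)
        zd = np.gradient(z, dt)
        zdd = np.gradient(zd, dt)
        g, z0 = z[-1], z[0]
        f_target = zdd - alpha * (beta * (g - z) - zd)
        x = np.exp(-3.0 * np.linspace(0.0, 1.0, len(z)))   # canonical phase
        c = np.exp(-3.0 * np.linspace(0.0, 1.0, n_basis))  # basis centres
        psi = np.exp(-(n_basis ** 1.5 / c) * (x[:, None] - c) ** 2)
        xi = x * (g - z0)                                  # standard DMP scaling
        num = (psi * (xi * f_target)[:, None]).sum(0)
        den = (psi * (xi ** 2)[:, None]).sum(0) + 1e-10
        return num / den

    # toy run: train the autoencoder on one demonstration, then fit one DMP
    # per latent dimension
    T, n_joints, dt = 200, 7, 0.01
    traj = np.sin(np.linspace(0.0, 3.0, T))[:, None] * np.linspace(0.5, 1.0, n_joints)
    demo = torch.tensor(traj, dtype=torch.float32)
    ae = Autoencoder(n_joints)
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    for _ in range(500):
        recon, _ = ae(demo)
        loss = ((recon - demo) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    z = ae.enc(demo).detach().numpy()          # latent movement, shape (T, 2)
    weights = [fit_dmp(z[:, k], dt) for k in range(z.shape[1])]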

The objectives are:

  • Integrate dimensionality reduction with the DMP-based movement representation; this integration is a major strength of our method.
  • Generate new movements which are not in the training data set by simply switching one hidden unit on or off, or by interpolating it (see the snippet below).
  • Reconstruct missing joints and missing subsequences.
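
Continuing the sketch above as a toy illustration of the second objective, a hidden unit of the trained autoencoder can be switched off (or its latent trajectory interpolated) and the result decoded into a movement that was not in the training data; z and ae are the variables from that sketch.

    z_off = z.copy()
    z_off[:, 0] = 0.0                          # "switch off" hidden unit 0
    q_new = ae.dec(torch.tensor(z_off, dtype=torch.float32)).detach().numpy()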

The figure shows movement encoded in two hidden neurons (the value of each shown on the vertical and horizontal axis, respectively).

In 2016, we published a new method based on Deep Variational Bayes Filters (DVBF).

Grip force measurement

Estimating human fingertip forces is required to understand force distribution in grasping and manipulation.

Human grasping behaviour can then be used to develop force- and impedance-based grasping and manipulation strategies for robotic hands. However, measuring human grip force naturally has so far only been possible with instrumented objects or unnatural gloves, greatly limiting the types of objects that can be used. In this project we develop approaches which use images of the human fingertip to reconstruct the grip force and torque at the finger.

The approaches include the following steps:

  • Image alignment: align 2D fingertip images to a 3D finger model using convolutional neural networks (CNNs) and non-rigid image registration.
  • Force predictor: predict the forces from fingertip images using Gaussian processes or neural networks (see the sketch below).
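
A minimal sketch of the force-predictor step, assuming PCA-compressed pixel features plus the finger-joint angle as inputs to a Gaussian process; the random data, feature choice, and kernel are stand-ins rather than our actual pipeline.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    n, h, w = 200, 32, 32                      # samples and (aligned) image size
    images = rng.random((n, h * w))            # stand-in for aligned nail images
    angle = rng.uniform(0.0, 90.0, (n, 1))     # finger-joint angle in degrees
    force = rng.uniform(0.0, 10.0, n)          # measured normal force in newtons

    feats = PCA(n_components=10).fit_transform(images)   # compress the pixels
    X = np.hstack([feats, angle])                        # append the joint angle

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(),
                                  normalize_y=True)
    gp.fit(X[:150], force[:150])
    mean, std = gp.predict(X[150:], return_std=True)     # force with uncertainty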

The objectives are:

  • Our approach does not use finger-mounted equipment, but instead a steady camera observing the fingers of the hand from a distance. This allows for finger force estimation without any physical interference with the hand or the object, and is therefore universally applicable.
  • Moving away from a constrained lab setting with perfect conditions and comfortable restrictions (e.g., a finger brace), we deal with an additional factor that strongly influences the measurement: the bending of the finger. We therefore use the optically measured angles of the finger joints as an extra input to our estimator, which markedly improves the accuracy of predicting the finger-exerted force from the nail image.

Another major constraint is the contact surface. Our approach is robust across various contact surfaces and can furthermore predict the type of contact surface.

Kinematics of the human hand

The amazing manipulation capabilities that we humans develop clearly show the versatility of the human hand. Even everyday tasks like picking a coin from a wallet are, from a robotics point of view, utterly impressive. What is so special about our hands?

Pose estimation of a bone. The points are extracted from the MRI images. The bone shown in blue on the left is taken from one MRI image and the one in red on the right from another. The pose estimation algorithm determines the movement that is necessary to match the blue and red points. The blue points on the right show the result of the pose estimation.

In cooperation with Rechts der Isar hospital, Munich, we took a large series (~50 images) of magnetic resonance images (MRI) of a healthy human hand in different postures. MRI allows three-dimensional views of the inside of the human body. The method works by measuring the response of hydrogen atoms inside the body to magnetic stimulation and is – unlike CT imaging – non-ionising.

To derive a kinematic model from the MRI images, we conducted the following steps:

  • Segmentation: Highlight the data that belongs to each individual bone. 
  • Pose estimation: Determine the position and orientation ("pose") of each bone with respect to a reference pose. 
  • Identification of joint axes: Numerically determine the position and orientation of joint axes that best explain the measured bone poses, using different joint models (one or two axes, non-intersecting or intersecting axes); see the sketch below for the one-axis case.
  • Build the hand model: Select joint types that appropriately balance accuracy against complexity, and combine the joints into so-called kinematic chains.

Resulting hand model with 24 degrees of freedom. The index finger metacarpal bone (marked by a black square) is taken as the base of the model. In joints with two axes, the first axis is shown in red and the second in green.
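
For the joint-axis identification step, a minimal sketch of the one-axis (hinge) case, assuming the relative bone rotations from the pose estimation are given as rotation matrices. It recovers only the axis orientation (locating the axis in space also requires the translational part), and the spread of the per-pose axes indicates how well a single-axis joint model explains the data.

    import numpy as np

    def rotation_axis(R):
        # unit rotation axis of a rotation matrix (valid away from 0 and 180 deg)
        v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
        return v / np.linalg.norm(v)

    def fit_hinge_axis(rotations):
        # average the per-pose axes after making their signs consistent
        axes = np.array([rotation_axis(R) for R in rotations])
        axes *= np.sign(axes @ axes[0])[:, None]
        mean = axes.mean(0)
        return mean / np.linalg.norm(mean), axes.std(0)

    def rot_z(a):                              # toy data: rotations about (0, 0, 1)
        return np.array([[np.cos(a), -np.sin(a), 0.0],
                         [np.sin(a), np.cos(a), 0.0],
                         [0.0, 0.0, 1.0]])

    axis, spread = fit_hinge_axis([rot_z(a) for a in (0.2, 0.5, 1.0)])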

For the pose estimation we used an algorithm that the robot Justin uses to identify the location of objects on a table. (The task is similar: Matching three-dimensional point clouds.) 
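
The core of such point-cloud matching can be sketched with the classical Kabsch/Procrustes solution, which assumes known point correspondences; ICP-style matching, as used in practice, re-estimates the correspondences iteratively around this step. This is an illustration, not the algorithm running on Justin.

    import numpy as np

    def estimate_pose(P, Q):
        # rigid transform (R, t) minimising ||(P @ R.T + t) - Q|| for (N, 3) arrays
        p0, q0 = P.mean(0), Q.mean(0)
        H = (P - p0).T @ (Q - q0)              # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, q0 - R @ p0

    # toy check: recover a known rotation and translation
    rng = np.random.default_rng(1)
    P = rng.random((100, 3))
    a = 0.3
    R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a), np.cos(a), 0.0],
                       [0.0, 0.0, 1.0]])
    Q = P @ R_true.T + np.array([0.1, -0.2, 0.05])
    R_est, t_est = estimate_pose(P, Q)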

The resulting hand model is shown to the right. The base of the model is the index finger metacarpal bone ("palm bone"), marked by a black square. From there, the kinematic chains extend, indicated by black lines. A kinematic chain is a series of joints, where the position of the last link (in this case the fingertip) depends on the joint angles of all joints in the chain.

The first joint of the thumb is modeled by two non-intersecting axes of rotation, connected by a thick line. The second joint of the thumb also exhibits significant sideways movement and is therefore also modeled by two joint axes, in this case intersecting ones.

The four fingers each have one axis of rotation that allows for sideways movement and three axes for bending and stretching. The arching of the palm takes place around three axes pointing roughly in the direction of the long axes of the palm bones.

Apart from kinematics, other aspects of the human hand are also important for its fine manipulation abilities, for example touch sensing, motion planning, and motion control.

Impedance of the human arm

Defining the Cartesian stiffness matrix of variable-impedance robots is a rather heuristic task. Furthermore, depending on the desired task, the stiffness behaviour must be adapted during movement. Humans learn to control limb stiffness through interaction, and we indeed exhibit fine variation of impedance depending on the task and environment. But how? We want to understand the mechanisms for setting and varying impedance in the human arm and hand, and transfer such models to the robotic domain.

Our main goals are

  • to understand according to which cost functions biological systems adjust their impedance, what role intrinsic impedance (defined by the skeletomuscular structure) plays, and, conversely, how and why the nervous system varies impedance;
  • to use the gained knowledge to improve body-machine interfaces and to pave the way towards modern impedance-teleoperated systems (including prosthetic devices, rehabilitation devices, tele-surgical robotic systems, and so on).

We have developed several measurement methods for identifying the impedance of human fingers, arms, and legs. We combine classical perturbation approaches with EMG-based identification, using force/torque sensors and optical tracking systems.
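
As a minimal sketch of the perturbation-based part, assume a linear mass-spring-damper model F = M*xdd + D*xd + K*x and fit its parameters by least squares; the signals below are synthetic, noise-free stand-ins for measured force and displacement.

    import numpy as np

    dt = 0.001
    t = np.arange(0.0, 1.0, dt)
    x = 0.01 * np.sin(2 * np.pi * 5 * t)       # measured displacement (m)
    xd = np.gradient(x, dt)                    # velocity
    xdd = np.gradient(xd, dt)                  # acceleration
    M_true, D_true, K_true = 1.5, 8.0, 300.0   # ground-truth parameters
    F = M_true * xdd + D_true * xd + K_true * x  # measured force (N)

    A = np.column_stack([xdd, xd, x])          # regressor matrix
    M_est, D_est, K_est = np.linalg.lstsq(A, F, rcond=None)[0]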


People

Daniela Korhammer

alumni

Hannes Höppner

DLR: postdoc
human impedance
hannes.hoeppner@dlr.de, +49 8153 28-1062

Jörn Vogel

DLR: PhD candidate
BCI robot control
joern.vogel@dlr.de, +49 8153 28-2166

Justin Bayer

alumni
bayer@sensed.io

Markus Kühne

TUM: PhD candidate
MR-compatible haptic interfaces
markus.kuehne@tum.de

Marvin Ludersdorfer

fortiss: PhD candidate
anomaly detection
ludersdorfer@fortiss.org

Nutan Chen

TUM: PhD candidate
hand modelling
nutan@in.tum.de

Rachel Hornung

DLR: PhD candidate
rehabilitation robotics
rachel.hornung@dlr.de

Patrick van der Smagt

current: Head of AI Research, data lab, VW Group

previous: Director of BRML labs
fortiss, An-Institut der Technischen Universität München
Professor for Biomimetic Robotics and Machine Learning, TUM

Chairman of Assistenzrobotik e.V.
smagt@brml.org


