ML for movement modelling

Based on measured data, we create machine-learning-based models of the kinematic and dynamic properties of movement in robots and humans.  Our methodologies focus on recurrent neural networks, dynamic movement primitives, autoencoders, and deep learning.

Apart from these models, we also create biomechanical models of hands and arms, based on measurements of the human movement system. 




movement in latent space

In their standard formulation, dynamic movement primitives (DMPs) suffer from suboptimal generalisation when used in configuration space, or fall victim to the curse of dimensionality when used in task space. To solve this problem we propose a model called the autoencoded dynamic movement primitive (AEDMP), which uses deep autoencoders to find a representation of movement in a latent feature space in which DMPs generalise well. The architecture embeds the DMP into the autoencoder and allows the whole to be trained as a unit.

The objectives are:

  • Integrate dimensionality reduction with the DMP-based movement representation; this integration is a major strength of our method.
  • Generate new movements that are not in the training data set by simply switching a hidden unit on or off, or by interpolating its value.
  • Facilitate the reconstruction of missing joints and missing subsequences.
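The movement representation underneath the AEDMP can be sketched as a minimal one-dimensional DMP that learns a forcing term from a single demonstration and reproduces it. This is an illustrative sketch with arbitrarily chosen parameter values, not the published implementation; in the AEDMP, such a DMP operates on the latent coordinates produced by the autoencoder rather than directly on joint angles.

```python
import numpy as np

class DMP:
    """Minimal 1-D dynamic movement primitive (illustrative sketch)."""
    def __init__(self, n_basis=20, alpha_z=25.0, alpha_x=4.0, dt=0.01):
        self.n_basis, self.alpha_z, self.beta_z = n_basis, alpha_z, alpha_z / 4.0
        self.alpha_x, self.dt = alpha_x, dt
        # Gaussian basis functions placed along the canonical phase variable x
        self.c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))   # centres
        self.h = 1.0 / np.gradient(self.c) ** 2                  # widths
        self.w = np.zeros(n_basis)

    def _psi(self, x):
        return np.exp(-self.h * (x - self.c) ** 2)

    def fit(self, y):
        """Learn forcing-term weights from one demonstration y(t)."""
        T = len(y)
        yd = np.gradient(y, self.dt)
        ydd = np.gradient(yd, self.dt)
        self.y0, self.g = y[0], y[-1]
        x = np.exp(-self.alpha_x * np.arange(T) * self.dt)       # canonical system
        # target forcing term, from the transformation-system equation
        f_target = ydd - self.alpha_z * (self.beta_z * (self.g - y) - yd)
        for i in range(self.n_basis):                            # locally weighted fit
            psi = np.exp(-self.h[i] * (x - self.c[i]) ** 2)
            self.w[i] = np.sum(x * psi * f_target) / (np.sum(x**2 * psi) + 1e-10)

    def rollout(self, T):
        """Integrate the DMP forward to reproduce (or generalise) the movement."""
        y, yd, x, traj = self.y0, 0.0, 1.0, []
        for _ in range(T):
            psi = self._psi(x)
            f = x * psi @ self.w / (psi.sum() + 1e-10)
            ydd = self.alpha_z * (self.beta_z * (self.g - y) - yd) + f
            yd += ydd * self.dt
            y += yd * self.dt
            x += -self.alpha_x * x * self.dt
            traj.append(y)
        return np.array(traj)
```

Because the forcing term is scaled by the decaying phase variable, the rollout is guaranteed to converge to the goal even for a new start or goal position, which is what makes the primitive generalise.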

The figure shows a movement encoded in two hidden neurons (the value of each shown on the vertical and horizontal axis, respectively).  

In 2016, we published a new method based on Deep Variational Bayes Filters (DVBF).



grip force measurement

Estimating human fingertip forces is required to understand force distribution in grasping and manipulation.

Human grasping behaviour can then be used to develop force- and impedance-based grasping and manipulation strategies for robotic hands. However, measuring human grip force in natural conditions has so far only been possible with instrumented objects or unnatural gloves, greatly limiting the types of objects that can be used. In this project we develop approaches which use images of the human fingertip to reconstruct grip force and torque at the finger.

The approaches include the following steps:

  • Image alignment: develop a method for aligning 2D images to a 3D finger model using convolutional neural networks (CNNs), employing non-rigid image registration.
  • Force prediction: predict forces from fingertip images using Gaussian processes or neural networks.
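The force-prediction step can be sketched as standard Gaussian-process regression from image-derived feature vectors to force values. Everything below is an illustrative placeholder, not the project's actual pipeline: the features would in practice come from the CNN alignment stage, and the kernel hyperparameters would be learned from data.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential kernel between row-vector sets A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / length_scale**2)

class GPForcePredictor:
    """GP regressor mapping fingertip-image features to grip force (sketch)."""
    def __init__(self, length_scale=1.0, noise=1e-4):
        self.ls, self.noise = length_scale, noise

    def fit(self, X, y):
        """Condition the GP on training features X (N x d) and forces y (N,)."""
        self.X = X
        K = rbf_kernel(X, X, self.ls) + self.noise * np.eye(len(X))
        self.L = np.linalg.cholesky(K)
        self.alpha = np.linalg.solve(self.L.T, np.linalg.solve(self.L, y))

    def predict(self, Xs):
        """Posterior mean and variance of the force at new feature vectors."""
        Ks = rbf_kernel(Xs, self.X, self.ls)
        mean = Ks @ self.alpha
        v = np.linalg.solve(self.L, Ks.T)
        var = 1.0 - np.sum(v**2, axis=0)   # prior variance of the RBF kernel is 1
        return mean, var
```

A practical advantage of the GP over a plain neural network here is the predictive variance, which flags fingertip images far from the training distribution.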

The key advantages are:

  • Our approach does not use finger-mounted equipment, but instead a steady camera observing the fingers of the hand from a distance. This allows for finger force estimation without any physical interference with the hand or object itself, and is therefore universally applicable.
  • Moving away from a constrained lab setting (e.g., a finger brace) with perfect conditions and comfortable restrictions, we deal with an additional factor that strongly influences measurement results: the bending of the finger. We use the (optically measured) angles of the finger joints as an extra input to our estimator, leading to much improved accuracy when predicting the finger-exerted force from the nail image.

Another major constraint is the contact surface. Our approach is robust to various contact surfaces and can furthermore predict the type of contact surface.




Kinematics of the human hand

The amazing manipulation capabilities that we develop show us clearly the versatility of the human hand. But even everyday tasks like picking a coin from a wallet are---from a robotics point of view---utterly impressive. What is so special about our hands?

Pose estimation of a bone. The points are extracted from the MRI images. The bone shown in blue on the left is taken from one MRI image and the one in red on the right from another. The pose estimation algorithm determines the movement that is necessary to match the blue and red points. The blue points on the right show the result of the pose estimation.

In cooperation with Rechts der Isar hospital, Munich, we took a large series (~50 images) of magnetic resonance images (MRI) of a healthy human hand in different postures. MRI allows three-dimensional views of the inside of the human body. The method works by measuring the response of hydrogen atoms inside the body to magnetic stimulation and is – unlike CT imaging – non-ionising.

To derive a kinematic model from the MRI images, we conducted the following steps:

  • Segmentation: Highlight the data that belongs to each individual bone. 
  • Pose estimation: Determine the position and orientation ("pose") of each bone with respect to a reference pose. 
  • Identification of joint axes: Numerically determine the position and orientation of joint axes that optimally incorporate the measured bone poses, using different joint models (one or two axes, non-intersecting or intersecting axes). 
  • Build the hand model: Select joint types that appropriately balance accuracy and complexity, and combine the joints into so-called kinematic chains.

Resulting hand model with 24 degrees of freedom. The index finger metacarpal bone (marked by a black square) is taken as the base of the model. In joints with two axes, the first axis is shown in red and the second in green.

For the pose estimation we used an algorithm that the robot Justin uses to identify the location of objects on a table. (The task is similar: Matching three-dimensional point clouds.) 
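Rigid registration of two point clouds has a closed-form least-squares solution via the singular value decomposition (the Kabsch method). The sketch below illustrates this kind of pose estimation; it is not necessarily the exact algorithm used on Justin or on the bone data, where correspondences first have to be established.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid registration: find rotation R and translation t
    such that R @ p + t best maps point set P (N x 3) onto Q (N x 3),
    assuming known point correspondences."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)          # centroids
    H = (P - cP).T @ (Q - cQ)                        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

Applied to the bone data, P and Q would be the points of the same bone segmented from two different MRI postures; R and t together are the bone's pose change.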

The resulting hand model is shown to the right. The base of the model is the index finger metacarpal bone ("palm bone"), marked by a black square. From there, the kinematic chains extend, indicated by black lines. A kinematic chain is a series of joints, where the position of the last link (in this case the fingertip) depends on the joint angles of all joints in the chain.
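The dependence of the last link's position on all joint angles in the chain can be illustrated with planar forward kinematics. This is a deliberately simplified two-dimensional sketch; the actual hand model uses three-dimensional joints with one or two axes each.

```python
import numpy as np

def fk_planar(link_lengths, joint_angles):
    """Forward kinematics of a planar serial chain: accumulate each joint
    rotation along the chain, so the fingertip position depends on the
    angles of all joints before it."""
    x = y = theta = 0.0
    for length, q in zip(link_lengths, joint_angles):
        theta += q                       # orientation accumulates along the chain
        x += length * np.cos(theta)
        y += length * np.sin(theta)
    return np.array([x, y])
```

For example, a two-link finger stretched out lies along the x-axis, while rotating only the base joint by 90 degrees moves the whole chain, fingertip included.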

The first joint of the thumb is modelled by two non-intersecting axes of rotation, connected by a thick line. The second joint of the thumb also exhibits significant sideways movement and is therefore also modelled by two joint axes, in this case intersecting ones.

The four fingers each have one axis of rotation that allows sideways movement and three axes for bending and stretching. The arching of the palm takes place around three axes pointing roughly in the direction of the long axes of the palm bones.

Apart from kinematics, other aspects of the human hand are also important for its fine manipulation abilities, for example touch sensing, motion planning and motion control.



Impedance of the human arm

Defining the Cartesian stiffness matrix of variable-impedance robots is a rather heuristic task. Furthermore, depending on the desired task, the stiffness behaviour must be adapted during movement. Humans learn to control limb stiffness from interaction, and indeed exhibit fine variation of impedance depending on the task and environment. But how? We want to understand the mechanisms for setting and varying impedance in the human arm and hand, and transfer such models to the robotic domain.

Our main goals are

  • to understand according to which cost functions biological systems adjust their impedance, what role intrinsic impedance (defined by the skeletomuscular structure) plays, and how and why the nervous system modulates impedance;
  • to use the gained knowledge to improve body-machine interfaces and to pave the way towards modern impedance-teleoperated systems (including prosthetic devices, rehabilitation devices, tele-surgical robotic systems, and so on).

We have developed various measurement methods for identifying the impedance of human fingers, arms, and legs.  We combine classical perturbation approaches with EMG-based identification, using force-torque sensors and optical tracking systems.
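The stiffness part of a classical perturbation approach can be sketched as a least-squares fit of the quasi-static relation F = -Kx to perturbation data. This is illustrative only: real identification must also account for damping, inertia, and reflex responses, which is why the perturbation devices and EMG methods above are needed.

```python
import numpy as np

def identify_stiffness(X, F):
    """Estimate a 2-D endpoint stiffness matrix K from small position
    perturbations X (N x 2, metres) and measured restoring forces
    F (N x 2, newtons), assuming the quasi-static relation F = -K x.
    Least squares: F = -X @ K.T, so K.T solves lstsq(X, -F)."""
    Kt, *_ = np.linalg.lstsq(X, -F, rcond=None)
    return Kt.T
```

With enough perturbation directions, X is well conditioned and the fit recovers the full (generally non-symmetric) stiffness matrix, including the cross-coupling terms between the two Cartesian directions.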


Daniela Korhammer


Hannes Höppner

DLR: postdoc
human impedance
hannes.hoeppner@dlr.de, +49 8153 28-1062

Jörn Vogel

DLR: PhD candidate
BCI robot control
joern.vogel@dlr.de, +49 8153 28-2166

Justin Bayer


Markus Kühne

TUM: PhD candidate
MR-compatible haptic interfaces

Marvin Ludersdorfer

fortiss: PhD candidate
anomaly detection

Nutan Chen

TUM: PhD candidate
hand modelling

Rachel Hornung

DLR: PhD candidate
rehabilitation robotics

Patrick van der Smagt

current: Head of AI Research, data lab, VW Group

Previous: Director of BRML labs
fortiss, an affiliated institute (An-Institut) of Technische Universität München
Professor for Biomimetic Robotics and Machine Learning, TUM

Chairman of Assistenzrobotik e.V.


    Joern Vogel, N Takemura, Hannes Höppner, Patrick van der Smagt, Gowrishankar Ganesh (2017). Hitting the sweet spot: Automatic optimization of energy transfer during tool-held hits. IEEE International Conference on Robotics and Automation (ICRA) 1549-1556.
    Rui Zhao, Ali Haider, Patrick van der Smagt (2017). Two-Stream RNN/CNN for Action Recognition in 3D Videos. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017)


    Maximilian Karl, Maximilian Soelch, Justin Bayer, Patrick van der Smagt (2016). Deep Variational Bayes Filters: Unsupervised Learning of State Space Models from Raw Data. arXiv.
    Nutan Chen, Maximilian Karl, Patrick van der Smagt (2016). Dynamic Movement Primitives in Latent Space of Time-Dependent Variational Autoencoders. Proc. 16th IEEE-RAS International Conference on Humanoid Robots
    Herke van Hoof, Nutan Chen, Maximilian Karl, Patrick van der Smagt, Jan Peters (2016). Stable Reinforcement Learning with Autoencoders for Tactile and Visual Data. Proc. IEEE International Conference on Intelligent Robots and Systems (IROS)
    Maximilian Sölch, Justin Bayer, Marvin Ludersdorfer, Patrick van der Smagt (2016). Variational Inference for On-line Anomaly Detection in High-Dimensional Time Series. International Conference on Learning Representations (ICLR)


    Nutan Chen, Justin Bayer, Sebastian Urban, Patrick van der Smagt (2015). Efficient movement representation by embedding Dynamic Movement Primitives in Deep Autoencoders. Proc. 2015 IEEE-RAS International Conference on Humanoid Robots
    Nutan Chen, Sebastian Urban, Justin Bayer, Patrick van der Smagt (2015). Measuring Fingertip Forces from Camera Images for Random Finger Poses. Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
    Hannes Höppner, Markus Grebenstein, Patrick van der Smagt (2015). Two-dimensional orthoglide mechanism for revealing areflexive human arm mechanical properties. Proc. IEEE International Conference on Intelligent Robots and Systems (IROS 2015)


    Nutan Chen, Sebastian Urban, Christian Osendorfer, Justin Bayer, Patrick van der Smagt (2014). Estimating finger grip force from an image of the hand using Convolutional Neural Networks and Gaussian Processes. Robotics and Automation (ICRA), 2014 IEEE International Conference on
    Hannes Höppner, Wolfgang Wiedmeyer, Patrick van der Smagt (2014). A new biarticular joint mechanism to extend stiffness ranges. Robotics and Automation (ICRA), 2014 IEEE International Conference on
    Georg Stillfried, Ulrich Hillenbrand, Marcus Settles, Patrick van der Smagt (2014). MRI-based skeletal hand movement model. In Ravi Balasubramanian and Veronica J. Santos (Eds.) The Human Hand as an Inspiration for Robot Hand Development 95 49-75.


    Hannes Höppner, Joseph McIntyre, Patrick van der Smagt (2013). Task Dependency of Grip Stiffness---A Study of Human Grip Force and Grip Stiffness Dependency during Two Different Tasks with Same Grip Forces. PLOS ONE. 8 (12), e80889.
    N. Fligge, H. Urbanek, P. van der Smagt (2013). Relation between object properties and EMG during reaching to grasp. Journal of Electromyography and Kinesiology. 23 (2), 402-410.
    David J. Braun, Florian Petit, Felix Huber, Sami Haddadin, Patrick van der Smagt, Alin Albu-Schäffer, Sethu Vijayakumar (2013). Robots Driven by Compliant Actuators: Optimal Control under Actuation Constraints. IEEE Transactions on Robotics. 99 (5), 1--17.
    Dominic Lakatos, Daniel Rüschen, Justin Bayer, Jörn Vogel, Patrick van der Smagt (2013). Identification of Human Limb Stiffness in 5 DoF and Estimation via EMG. In Desai, Jaydev P. and Dudek, Gregory and Khatib, Oussama and Kumar, Vijay (Eds.) Experimental Robotics 88 89-99.
    Sebastian Urban, Justin Bayer, Christian Osendorfer, Göran Wesling, Benoni B. Edin, Patrick van der Smagt (2013). Computing grip force and torque from finger nail images using Gaussian processes. Proc. 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 4034--4039.


    Leigh R. Hochberg, Daniel Bacher, Beata Jarosiewicz, Nicolas Y. Masse, John D. Simeral, Joern Vogel, Sami Haddadin, Jie Liu, Sydney S. Cash, Patrick van der Smagt, John P. Donoghue (2012). Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature. 485 372-377.
    Agneta Gustus, Georg Stillfried, Judith Visser, Henrik Jorntell, Patrick van der Smagt (2012). Human hand modelling: kinematics, dynamics, applications. Biological Cybernetics. 106 (11-12), 741-755.
    D. J. Braun, F. Petit, S. Haddadin, P. van der Smagt, A. Albu-Schäffer, S. Vijayakumar (2012). Optimal Torque and Stiffness Control in Compliantly Actuated Robots. Proc. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems 2801--2808.
    Francesca Cordella, Francesco Di Corato, Loredana Zollo, Bruno Siciliano, Patrick van der Smagt (2012). Patient performance evaluation using Kinect and Monte Carlo-based finger tracking. IEEE International Conference on Biomedical Robotics and Biomechatronics 1967--1972.
    Dominikus Gierlach, Agneta Gustus, Patrick van der Smagt (2012). Generating marker stars for 6D optical tracking. IEEE International Conference on Biomedical Robotics and Biomechatronics 147--152.
    Nadine Fligge, Joe McIntyre, Patrick van der Smagt (2012). Minimum jerk for human catching movements in 3D. Proc. IEEE International Conference on Biomedical Robotics and Biomechatronics 581--586.


    Smagt P van der (2011). Neue Entwicklungen in der Rehabilitation von Handfunktionsstörungen: Humanrobotik. In Dennis A. Novak (Eds.) Handfunktionsstörungen in der Neurologie: Klinik und Rehabilitation 433-451.
    Vogel J, Castellini C, Smagt P van der (2011). EMG-Based Teleoperation and Manipulation with the DLR LWR-III. Proc. IROS---International Conference on Intelligent Robots and Systems 672-678.
    Lakatos D, Petit F, Smagt P van der (2011). Conditioning vs. Excitation Time for Estimating Impedance Parameters of the Human Arm. Proceedings of the 11th IEEE-RAS International Conference on Humanoid Robots 636-642.
    Höppner H, Lakatos D, Urbanek H, Castellini C, Smagt P van der (2011). The Grasp Perturbator: Calibrating human grasp stiffness during a graded force task. Proc. ICRA---International Conference on Robotics and Automation 3312-3316 .
    Castellini C, Smagt P van der (2011). Preliminary evidence of dynamic muscular synergies in human grasping. Proceedings of ICAR - International Conference on Advanced Robotics 28-33.


    Höppner H, Lakatos D, Urbanek H, Smagt P van der (2010). The Arm-Perturbator: Design of a Wearable Perturbation Device to measure Limb Impedance. International Conference on Applied Bionics and Biomechanics (ICABB)
    Stillfried G, Smagt P van der (2010). Movement model of a human hand based on magnetic resonance imaging (MRI). International Conference on Applied Bionics and Biomechanics (ICABB)


    Gruijl J de, Smagt P van der, Zeeuw C de (2009). Anticipatory grip force control using a cerebellar model. Neuroscience. 162 (3), 777-786.
    Smagt P van der, Grebenstein M, Urbanek H, Fligge N, Strohmayr M, Stillfried G, Parrish J, Gustus A (2009). Robotics of human movements. Journal of physiology, Paris. 103 (3-5), 119-132.
    Castellini C, Smagt P van der (2009). Surface EMG in Advanced Hand Prosthetics. Biological Cybernetics. 100 (1), 35--47.


    Grebenstein M, Smagt P van der (2008). Antagonism for a highly anthropomorphic hand-arm system. Advanced Robotics. 22 (1), 39-55.
    Smagt P van der, Stillfried G (2008). Using MRT data to compute a hand kinematic model. Proc. 9th International Conference on Motion and Vibration Control (MOVIC)
    Panzer H, Eiberger O, Grebenstein M, Wolf S, Schaefer P, Smagt P van der (2008). Human motion range data optimizes anthropomorphic robotic hand-arm system design. Proc. 9th International Conference on Motion and Vibration Control (MOVIC)
    Arbib M, Metta G, Smagt P van der (2008). Neurorobotics: From Vision to Action. In B. Siciliano and O. Khatib (Eds.) Springer Handbook of Robotics 1453--1480.


    Bitzer S, Smagt P van der (2006). Learning EMG control of a robotic hand: towards active prostheses. Proceedings 2006 IEEE International Conference on Robotics and Automation 2819-2823.


    Urbanek H, Albu-Schäffer A, Smagt P van der (2004). Learning from demonstration: Repetitive movements for autonomous service robotics. 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 3495-3500.


    Peters J, Smagt P van der (2002). Searching a Scalable Approach to Cerebellar Based Control. Applied Intelligence. 17 (1), 11-33.


    Smagt P van der (2000). Benchmarking Cerebellar Control. Robotics and Autonomous Systems. 32 237--251.


    Smagt P van der (1998). Cerebellar Control of Robot Arms. Connection Science. 10 301--320.
    Fischer M, Smagt P van der, Hirzinger G (1998). Learning Techniques in a Dataglove Based Telemanipulation System for the DLR Hand. Transactions of the IEEE International Conference on Robotics and Automation 1603--1608.


    Smagt P van der (1997). Teaching a robot to see how it moves. In Antony Browne (Eds.) Neural Network Perspectives on Cognition and Adaptive Robotics 195--219.


    Smagt P van der (1995). Visual Robot Arm Guidance using Neural Networks. Ph.D. thesis: Dept of Computer Systems, University of Amsterdam


    Hesselroth T, Sarkar K, Smagt P van der, Schulten K (1994). Neural network control of a pneumatic robot arm. IEEE Transactions on Systems, Man, and Cybernetics. 24 (1), 28--38.


    Kröse B, Smagt P van der, Groen F (1993). A one-eyed self-learning robot manipulator. In G. Bekey and K. Goldberg (Eds.) Neural networks in robotics 19-28.