machine learning

Natural data, as found in biological signals or images, is usually highly redundant and noisy. Classical models of the stochasticity in such processes break down in many of these cases. For example, because of the presence of edges in images, image gradients follow fat-tailed distributions. Likewise, EMG signals are readily seen to be highly non-Gaussian.

In machine learning, we investigate methods for finding useful representations of natural data. For this, we use non-linear parametric models. These are combined into deep and recurrent architectures which are subsequently optimised with classical and novel optimisation techniques on a wide variety of objectives.

The objectives typically encourage the representations to fulfil some numerical criterion: sparsity, independence, clustering of similar items, or the ability to reconstruct the input. The models we use include, but are not limited to, deep belief networks, recurrent neural networks, convolutional neural networks, variational autoencoders, and Gaussian processes.

Fast Adaptive Weight Noise

We developed an efficient calculation of the marginal likelihood of neural networks under a distribution over their weights. We use a technique called variance propagation to compute the mean and variance that result from propagating a Gaussian distribution through a neural network. Wang & Manning (2013) provide rules for propagating mean and variance through linear transformations and nonlinear transfer functions. By placing Gaussian distributions on the network weights and propagating this uncertainty through the network, we can efficiently calculate the marginal likelihood. Optimising it directly with respect to the parameters of the weight distribution leads to a maximum-likelihood approach. By adding a KL divergence between the distribution over the weights and a prior, we prevent the model from overfitting the data.
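
As an illustration, here is a minimal numpy sketch of the two propagation rules for a single layer, assuming independent Gaussian weights and inputs; the function names are ours, and we use the standard Gaussian moment formulas for a rectifier as the example transfer function:

    import numpy as np
    from scipy.stats import norm

    def linear_vp(mu_x, var_x, M, S):
        # Mean and variance of y = W @ x for independent Gaussian weights
        # W_ij ~ N(M_ij, S_ij) and independent inputs x_j ~ N(mu_x[j], var_x[j]).
        mu_y = M @ mu_x
        var_y = (M ** 2) @ var_x + S @ (mu_x ** 2 + var_x)
        return mu_y, var_y

    def relu_vp(mu, var):
        # First two moments of max(0, a) for a ~ N(mu, var).
        s = np.sqrt(var + 1e-12)  # small epsilon for numerical stability
        z = mu / s
        m1 = mu * norm.cdf(z) + s * norm.pdf(z)                    # E[relu(a)]
        m2 = (mu ** 2 + var) * norm.cdf(z) + mu * s * norm.pdf(z)  # E[relu(a)^2]
        return m1, m2 - m1 ** 2

Chaining these two functions layer by layer yields the mean and variance of the network output in a single deterministic forward pass, from which the approximate marginal likelihood of the data can be read off.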

A slight variant of this uses variance propagation to approximate Bayesian learning of neural networks: we optimise the variational upper bound on the negative log-likelihood of the data. This makes it possible to exploit model uncertainty in a wide range of scenarios, such as active learning or reinforcement learning. Apart from being able to model uncertainty, this approach also requires very little data.
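
Concretely, the quantity being minimised is the standard variational bound (the negative evidence lower bound), written here with a variational weight posterior q(w) and prior p(w):

    \mathcal{L}(q) = \mathbb{E}_{q(w)}\!\left[-\log p(\mathcal{D} \mid w)\right]
                   + \mathrm{KL}\!\left(q(w) \,\|\, p(w)\right)
                   \;\geq\; -\log p(\mathcal{D})

Variance propagation makes the expected log-likelihood term cheap to approximate, replacing weight sampling with a closed-form forward pass.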


Hybrid addition-multiplication networks using parameterisable transfer functions

Can the performance of neural networks be improved by a novel, parameterisable transfer function that allows each neuron to smoothly adjust the operation it performs on its inputs between summation and multiplication?

In artificial neural networks, the value of a neuron is given by a weighted sum of its inputs propagated through a non-linear transfer function; however, some tasks greatly benefit from units that compute the product, rather than the sum, of their inputs.

To allow neurons to autonomously determine whether they are additive or multiplicative, we propose a parameterisable transfer function based on the fractionally iterated exponential function generated from a solution to Schröder's functional equation. This class of transfer functions makes it possible to continuously interpolate the operation a neuron performs between addition and multiplication. Since it is also differentiable, the operation can be learned using standard backpropagation.
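
The endpoints of this interpolation are easy to verify in code. Below is a minimal numpy sketch (function names ours) covering integer iterates only; the fractional iterates needed for real-valued n come from the Schröder-equation construction in the paper and are omitted here:

    import numpy as np

    def iter_exp(x, n):
        # Integer iterates of exp: n = 1 applies exp once, n = -1 applies
        # log once, n = 0 is the identity.
        for _ in range(abs(n)):
            x = np.exp(x) if n > 0 else np.log(x)
        return x

    def hybrid_neuron(x, w, n):
        # n = 0: ordinary weighted sum of the inputs.
        # n = 1: weighted product, since exp(sum_i w_i * log(x_i))
        #        equals prod_i x_i ** w_i (for x > 0).
        return iter_exp(iter_exp(x, -n) @ w, n)

    x = np.array([2.0, 3.0])
    w = np.array([1.0, 1.0])
    print(hybrid_neuron(x, w, 0))  # 5.0 = 2 + 3
    print(hybrid_neuron(x, w, 1))  # 6.0 = 2 * 3

With n learned per neuron via the differentiable fractional iterate, each unit can settle anywhere between these two operations during training.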

So far, the mathematical theory has been established (Urban & van der Smagt, 2015) and an implementation is under way. Next steps include testing this novel transfer function on regression networks.


Daniela Korhammer

alumni

Justin Bayer

alumni
bayer@sensed.io

Maximilian Karl

TUM: PhD candidate
efficient inference

Nutan Chen

TUM: PhD candidate
hand modelling
nutan@in.tum.de

Patrick van der Smagt

Current: Head of AI Research, Data Lab, Volkswagen Group

Previous: Director of BRML labs,
fortiss, an affiliated institute (An-Institut) of Technische Universität München
Professor for Biomimetic Robotics and Machine Learning, TUM

Chairman of Assistenzrobotik e.V.
smagt@brml.org

Sebastian Urban

TUM: PhD candidate
learning skin data
surban@tum.de, +49 89 289-25794



[20]
Kröse B, van der Smagt P, Groen F (1993). A one-eyed self-learning robot manipulator. In: Neural Networks in Robotics, G. Bekey and K. Goldberg (eds.), Kluwer Academic Publishers, Dordrecht, pp. 19-28.

Abstract: A self-learning, adaptive control system for a robot arm using a vision system in a feedback loop is described. The task of the control system is to position the end-effector as accurately as possible directly above a target object, so that the object can be grasped. The camera of the vision system is mounted in the end-effector, and the visual information is used directly to control the robot. Two strategies are presented to solve the problem of obtaining 3D information from a single camera: (a) using the size of the target object, and (b) using information from a sequence of images taken by the moving camera. In both cases a neural network is trained to perform the desired mapping.