machine learning

Natural data, as found in biological signals or images, is usually highly redundant and noisy. Classical models of the stochasticity in such processes break down in many of these cases. For example, edges in images give rise to fat-tailed distributions of image gradients; similarly, EMG signals are highly non-Gaussian.

In machine learning, we investigate methods for finding useful representations of natural data. For this, we use non-linear parametric models. These are combined into deep and recurrent architectures which are subsequently optimised with classical and novel optimisation techniques on a wide variety of objectives.

The objectives typically encourage the representations to fulfil some numerical criterion: sparsity, independence, clustering of similar items or the ability to reconstruct the input. The models we use include but are not limited to deep belief networks, recurrent neural networks, convolutional neural networks, variational autoencoders, and Gaussian Processes.
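As a concrete illustration of such objectives, a reconstruction criterion combined with a sparsity penalty can be written as the squared reconstruction error plus an L1 term on the hidden representation. This is a minimal sketch; the function name and the weighting `lam` are illustrative, not taken from our models:

```python
def sparse_reconstruction_loss(x, x_hat, h, lam=0.1):
    # squared reconstruction error between input x and reconstruction x_hat,
    # plus an L1 sparsity penalty on the hidden representation h
    recon = sum((xi - xh) ** 2 for xi, xh in zip(x, x_hat))
    sparsity = lam * sum(abs(hi) for hi in h)
    return recon + sparsity
```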



Fast Adaptive Weight Noise

We developed an efficient calculation of the marginal likelihood of a neural network under a distribution over its weights. We use a technique called variance propagation to compute the mean and variance of a Gaussian distribution as it is propagated through the network. Wang & Manning (2013) provide rules for propagating mean and variance through linear transformations and nonlinear transfer functions. By placing Gaussian distributions on the network weights and propagating this uncertainty through the network, we can efficiently calculate the marginal likelihood. Optimising it directly with respect to the parameters of the weight distribution yields a maximum-likelihood approach. By adding a KL divergence between the weight distribution and a prior, we prevent the model from overfitting the data.
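The idea can be sketched for a single linear unit followed by a ReLU, using the exact Gaussian moments of max(0, x). This is a minimal illustration only; the rules of Wang & Manning (2013) cover further transfer functions:

```python
import math

def linear_moments(w, b, mean, var):
    # mean and variance of w·x + b for independent Gaussian inputs
    m = sum(wi * mi for wi, mi in zip(w, mean)) + b
    v = sum(wi * wi * vi for wi, vi in zip(w, var))
    return m, v

def relu_moments(m, v):
    # exact first two moments of max(0, x) for x ~ N(m, v)
    s = math.sqrt(v)
    phi = math.exp(-0.5 * (m / s) ** 2) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1 + math.erf(m / (s * math.sqrt(2))))
    mean_out = m * Phi + s * phi
    var_out = (m * m + v) * Phi + m * s * phi - mean_out ** 2
    return mean_out, max(var_out, 0.0)
```

Chaining these two steps layer by layer propagates a Gaussian approximation of the activations through the whole network.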

A slight variant of this uses variance propagation to approximate Bayesian learning of neural networks: we optimise a variational upper bound on the negative log-likelihood of the data. This makes it possible to exploit model uncertainty in a wide range of scenarios, such as active learning or reinforcement learning. Apart from modelling uncertainty, this approach also requires very little data.
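For a diagonal Gaussian posterior over the weights and a standard-normal prior, the KL term appearing in such a variational bound has a closed form. The sketch below parameterises the variance through its logarithm, which is one common choice and an assumption here, not necessarily our implementation:

```python
import math

def kl_diag_gauss(mean, log_var):
    # KL( N(mean, exp(log_var)) || N(0, 1) ), summed over all weights
    return sum(0.5 * (math.exp(lv) + m * m - 1.0 - lv)
               for m, lv in zip(mean, log_var))
```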



Hybrid addition-multiplication networks using parameterisable transfer functions

Can the performance of neural networks be improved by a novel, parameterisable transfer function that allows each neuron to smoothly adjust the operation it performs on its inputs between summation and multiplication?

In artificial neural networks the value of a neuron is given by a weighted sum of its inputs passed through a non-linear transfer function; however, some tasks greatly benefit from units that compute the product, rather than the sum, of their inputs.

To allow neurons to autonomously determine whether they are additive or multiplicative, we propose a parameterisable transfer function based on the fractionally iterated exponential function, generated from a solution to Schröder's functional equation. This class of transfer functions makes it possible to continuously interpolate the operation a neuron performs between addition and multiplication. Since it is also differentiable, the operation can be learned using standard backpropagation training for neural networks.
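The endpoints of the interpolation are easy to illustrate: writing x ⊕_a y = exp^[a](exp^[-a](x) + exp^[-a](y)), a = 0 gives addition and a = 1 gives multiplication, since exp(log x + log y) = x·y. The sketch below handles only integer iterates; fractional iterates, which realise the smooth interpolation, require the Schröder-equation construction of Urban & van der Smagt (2015):

```python
import math

def iter_exp(x, n):
    # n-fold iterate of exp for n >= 0, of log for n < 0
    for _ in range(abs(n)):
        x = math.exp(x) if n > 0 else math.log(x)
    return x

def interp_op(x, y, a):
    # x ⊕_a y = exp^[a](exp^[-a](x) + exp^[-a](y)); integer a only in this sketch
    return iter_exp(iter_exp(x, -a) + iter_exp(y, -a), a)
```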

So far the mathematical theory has been established (Urban & van der Smagt, 2015) and an implementation is under way. Next steps include testing this transfer function on regression networks.


Daniela Korhammer


Justin Bayer


Maximilian Karl

TUM: PhD candidate
efficient inference

Nutan Chen

TUM: PhD candidate
hand modelling

Patrick van der Smagt

Current: Head of AI Research, Data Lab, VW Group

Previous: Director of BRML labs
fortiss, An-Institut der Technischen Universität München
Professor for Biomimetic Robotics and Machine Learning, TUM

Chairman of Assistenzrobotik e.V.

Sebastian Urban

TUM: PhD candidate
learning skin data
surbantumde, +49 89 289-25794

Title: The locally linear nested network for robot manipulation
Written by: van der Smagt P, Groen F, van het Groenewoud F
in: Proceedings of the IEEE International Conference on Neural Networks, 1994
on pages: 2787--2792



Abstract: We present a method for accurate representation of a high-dimensional unknown function from random samples drawn from its input space. The method builds representations of the function by recursively splitting the input space into smaller subspaces, while in each of these subspaces a linear approximation is computed. The representations of the function at all levels (i.e., depths in the tree) are retained during the learning process, such that good generalisation is available as well as more accurate representations in some subareas. Therefore, fast and accurate learning are combined in this method. The method, which is applied to hand-eye coordination of a robot arm, is shown to be superior to other neural networks.
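The recursive construction can be sketched in one dimension: each node stores a linear fit over its interval, the interval is split in half, and children are fitted on the corresponding samples; prediction descends to the deepest subspace containing the query point. This is a hypothetical toy reconstruction of the scheme described in the abstract (the halving split rule and the stopping criterion are assumptions), not the paper's algorithm:

```python
def fit_line(xs, ys):
    # least-squares slope and intercept for 1-D data
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx if sxx else 0.0
    return a, my - a * mx

def build(xs, ys, lo, hi, depth):
    # node: linear fit on [lo, hi], optionally two children on the halves
    node = {"fit": fit_line(xs, ys), "lo": lo, "hi": hi, "children": None}
    if depth > 0 and len(xs) >= 4:
        mid = (lo + hi) / 2
        left = [(x, y) for x, y in zip(xs, ys) if x < mid]
        right = [(x, y) for x, y in zip(xs, ys) if x >= mid]
        if left and right:
            node["children"] = (build(*zip(*left), lo, mid, depth - 1),
                                build(*zip(*right), mid, hi, depth - 1))
    return node

def predict(node, x):
    # descend to the deepest subspace containing x, use its linear fit
    while node["children"]:
        mid = (node["lo"] + node["hi"]) / 2
        node = node["children"][0] if x < mid else node["children"][1]
    a, b = node["fit"]
    return a * x + b
```

For example, fitting y = |x| on four samples with one split recovers the two linear pieces exactly, while the root node retains the coarse global fit.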