machine learning

Natural data, as found in biological signals or images, is usually highly redundant and noisy. Classical models of the stochasticity in such processes often break down: the presence of edges in images, for example, gives rise to fat-tailed distributions of image gradients, and EMG signals are likewise highly non-Gaussian.

In machine learning, we investigate methods for finding useful representations of natural data. For this, we use non-linear parametric models. These are combined into deep and recurrent architectures which are subsequently optimised with classical and novel optimisation techniques on a wide variety of objectives.

The objectives typically encourage the representations to fulfil some numerical criterion: sparsity, independence, clustering of similar items or the ability to reconstruct the input. The models we use include but are not limited to deep belief networks, recurrent neural networks, convolutional neural networks, variational autoencoders, and Gaussian Processes.
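As a toy illustration of one such objective (our own simplified example with hypothetical names and shapes, not a model from our work), the following Python sketch combines the reconstruction error of a one-layer autoencoder with an L1 sparsity penalty on its hidden code:

import numpy as np

def sparse_autoencoder_loss(x, W_enc, W_dec, sparsity_weight=0.1):
    # Hidden representation and reconstruction of the input.
    h = np.tanh(x @ W_enc)
    x_hat = h @ W_dec
    # Reconstruction term plus an L1 penalty encouraging sparse codes.
    reconstruction = np.mean((x - x_hat) ** 2)
    sparsity = sparsity_weight * np.mean(np.abs(h))
    return reconstruction + sparsity

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 10))            # a batch of 32 ten-dimensional inputs
W_enc = 0.1 * rng.normal(size=(10, 5))
W_dec = 0.1 * rng.normal(size=(5, 10))
print(sparse_autoencoder_loss(x, W_enc, W_dec))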


Fast Adaptive Weight Noise

We developed an efficient calculation of the marginal likelihood of a distribution over the weights of a neural network. We use a technique called Variance Propagation to compute mean and variance when propagating a Gaussian distribution through a neural network. Wang & Manning (2013) provide rules for propagating mean and variance through linear transformations and nonlinear transfer functions. By choosing Gaussian distributions for the network weights and propagating this uncertainty through the network, we can efficiently calculate the marginal likelihood. Optimising it directly with respect to the parameters of the weight distribution leads to a maximum-likelihood approach. By adding a KL divergence between the distribution over the weights and a prior, we prevent the model from overfitting the data.
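A minimal sketch of these propagation rules, assuming independent Gaussian weights and inputs and a ReLU transfer function (all names and shapes are hypothetical; this illustrates the moment-matching rules, not our actual implementation):

import numpy as np
from scipy.stats import norm

def linear_vp(mu, var, M, S):
    # For y = W x with independent W_ij ~ N(M_ij, S_ij) and x_i ~ N(mu_i, var_i):
    #   E[y]   = M mu
    #   Var[y] = S (var + mu^2) + M^2 var   (squares taken elementwise)
    return M @ mu, S @ (var + mu**2) + (M**2) @ var

def relu_vp(m, s):
    # Closed-form mean and variance of max(0, z) for z ~ N(m, s).
    sd = np.sqrt(s)
    a = m / sd
    mean = m * norm.cdf(a) + sd * norm.pdf(a)
    second_moment = (m**2 + s) * norm.cdf(a) + m * sd * norm.pdf(a)
    return mean, second_moment - mean**2

# Propagate a three-dimensional Gaussian input through one stochastic layer.
mu, var = np.array([1.0, -0.5, 0.2]), np.array([0.1, 0.1, 0.1])
M, S = 0.3 * np.ones((2, 3)), 0.01 * np.ones((2, 3))
m, v = linear_vp(mu, var, M, S)
print(relu_vp(m, v))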

A slight variant of this is to use variance propagation to approximate Bayesian learning of neural networks: we can optimise a variational upper bound on the negative log-likelihood of the data. This allows us to exploit model uncertainty in a wide range of scenarios, such as active learning or reinforcement learning. Apart from modelling uncertainty, this approach also requires very little data.
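In sketch form (with hypothetical names, a diagonal Gaussian posterior and prior, and the variance-propagated data term stubbed out), the bound is the expected negative log-likelihood plus a KL penalty:

import numpy as np

def kl_gaussian(mu, sigma, prior_sigma=1.0):
    # KL( N(mu, sigma^2) || N(0, prior_sigma^2) ), summed over all weights.
    return np.sum(np.log(prior_sigma / sigma)
                  + (sigma**2 + mu**2) / (2 * prior_sigma**2) - 0.5)

def variational_bound(expected_nll, mu, sigma, n_data):
    # Upper bound on the negative log marginal likelihood per data point;
    # the KL term is scaled by the dataset size, so the prior's influence
    # shrinks as more data becomes available.
    return expected_nll + kl_gaussian(mu, sigma) / n_data

mu, sigma = np.zeros(100), np.full(100, 0.5)
print(variational_bound(expected_nll=1.2, mu=mu, sigma=sigma, n_data=1000))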


Hybrid addition-multiplication networks using parameterisable transfer functions

Can the performance of neural networks be improved by a novel, parameterisable transfer function that allows each neuron to smoothly adjust the operation it performs on its inputs between summation and multiplication?

In artificial neural networks, the value of a neuron is given by a weighted sum of its inputs propagated through a non-linear transfer function; however, some tasks greatly benefit from units that compute the product instead of the sum of their inputs.

To allow neurons to autonomously determine whether they are additive or multiplicative, we propose a parameterisable transfer function based on the fractionally iterated exponential function generated from a solution to Schröder's functional equation. This class of transfer functions allows the operation a neuron performs to be interpolated continuously between addition and multiplication. Since it is also differentiable, the operation can be determined using standard backpropagation training for neural networks.

So far, the mathematical theory has been established (Urban & van der Smagt, 2015) and an implementation is under way. Next steps include testing this novel transfer function in regression networks.
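The published construction relies on fractional iterates of the exponential function obtained from Schröder's equation, which we do not reproduce here. The toy sketch below (hypothetical names, positive inputs only) merely illustrates the interpolation idea with a convex blend between a weighted sum and a weighted product, using the identity prod_i x_i^{w_i} = exp(sum_i w_i log x_i):

import numpy as np

def hybrid_neuron(x, w, alpha):
    # alpha = 0 gives a weighted sum, alpha = 1 a weighted product;
    # intermediate alpha smoothly blends the two operations. Because the
    # blend is differentiable in alpha, backpropagation can adjust it.
    additive = np.dot(w, x)
    multiplicative = np.exp(np.dot(w, np.log(x)))  # requires x > 0
    return (1 - alpha) * additive + alpha * multiplicative

x = np.array([1.5, 2.0, 0.5])
w = np.array([0.3, 0.4, 0.3])
for alpha in (0.0, 0.5, 1.0):
    print(alpha, hybrid_neuron(x, w, alpha))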


Daniela Korhammer

alumni

Justin Bayer

alumni
bayer@sensed.io

Maximilian Karl

TUM: PhD candidate
efficient inference

Nutan Chen

TUM: PhD candidate
hand modelling
nutan@in.tum.de

Patrick van der Smagt

Current: Head of AI Research, Data Lab, VW Group

Previous: Director of BRML labs
fortiss, an affiliated institute (An-Institut) of Technische Universität München
Professor for Biomimetic Robotics and Machine Learning, TUM

Chairman of Assistenzrobotik e.V.
smagt@brml.org

Sebastian Urban

TUM: PhD candidate
learning skin data
surban@tum.de, +49 89 289-25794



[17]
Jansen A, van der Smagt P, Groen F: "High-precision robot control: The nested network." In: Artificial Neural Networks 2, edited by I. Aleksander and J. Taylor, pp. 583-586. Amsterdam, The Netherlands: North-Holland/Elsevier Science Publishers, Sep. 1992.

Abstract: Traditional robot control is based on precise models of sensors and manipulators. The increasing complexity of required tasks, sensors, and manipulators results in models which are extremely complex to build at the required precision. In the realm of pick-and-place problems, we aim at designing highly adaptive controllers which require minimum knowledge of the manipulator and its sensors. In this area, several models have proven successful in one respect or another. The use of a single feed-forward network trained with conjugate gradient back-propagation gives fast and highly adaptive approximation, but needs up to ten feedback steps to obtain high-precision results. Kohonen networks give a precision of up to 0.5 cm with only two steps, but need thousands of iterations to attain reasonable results. In this paper we introduce the nested network method, based on search trees, which adapts in real time and reaches a grasping precision of up to 1 mm in only three steps.