Machine Learning

Natural data, as found in biological signals or images, is usually highly redundant and noisy. Classical models of the stochasticity in such processes break down in many of these cases. For example, because of the edges present in images, image gradients follow fat-tailed distributions. Similarly, multi-channel EMG signals are highly non-Gaussian.

In machine learning, we investigate methods for finding useful representations of natural data. For this, we use non-linear parametric models. These are combined into deep and recurrent architectures which are subsequently optimised with classical and novel optimisation techniques on a wide variety of objectives.

The objectives typically encourage the representations to fulfil some numerical criterion: sparsity, independence, clustering of similar items or the ability to reconstruct the input. The models we use include but are not limited to deep belief networks, recurrent neural networks, convolutional neural networks, variational autoencoders, and Gaussian Processes.
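As a toy illustration of such an objective, the following sketch combines a reconstruction term with an L1 sparsity penalty on the code. The layer sizes, the tanh encoder and the penalty weight lam are illustrative choices only, not a description of any particular model used here:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_autoencoder_loss(x, W_enc, W_dec, lam=0.1):
    """Reconstruction error plus an L1 sparsity penalty on the code."""
    h = np.tanh(x @ W_enc)            # the learned representation (code)
    x_hat = h @ W_dec                 # linear reconstruction of the input
    recon = np.mean((x - x_hat) ** 2)
    sparsity = lam * np.mean(np.abs(h))
    return recon + sparsity

x = rng.normal(size=(8, 16))          # a toy batch of 8 inputs
W_enc = rng.normal(scale=0.1, size=(16, 32))
W_dec = rng.normal(scale=0.1, size=(32, 16))
loss = sparse_autoencoder_loss(x, W_enc, W_dec)
```

Other criteria from the list above (independence, clustering) would simply swap the penalty term; the overall recipe of "reconstruction plus a numerical criterion on the code" stays the same.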



Fast Adaptive Weight Noise

We developed an efficient calculation of the marginal likelihood of a neural network under a distribution over its weights. We use a technique called variance propagation to compute the mean and variance of a Gaussian distribution as it passes through the network. Wang & Manning (2013) provide rules for propagating mean and variance through linear transformations and nonlinear transfer functions. By placing Gaussian distributions on the network weights and propagating this uncertainty through the network, we can efficiently calculate the marginal likelihood. Optimising it directly with respect to the parameters of the weight distribution yields a maximum-likelihood approach. Adding a KL divergence between the weight distribution and a prior prevents the model from overfitting the data.
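For a single linear layer, the propagation rules take a closed form under the usual independence assumptions (independent Gaussian inputs and independent Gaussian weights). A minimal sketch, with a Monte-Carlo check of the moments; all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def linear_vp(m_x, v_x, M_w, V_w):
    """Propagate the mean and variance of an independent-Gaussian input
    through a linear layer whose weights are independent Gaussians.
    For y_i = sum_j W_ij x_j with W_ij ~ N(M_ij, V_ij), x_j ~ N(m_j, v_j):
      E[y_i]   = sum_j M_ij m_j
      Var[y_i] = sum_j ( M_ij^2 v_j + V_ij (m_j^2 + v_j) )
    """
    m_y = M_w @ m_x
    v_y = (M_w ** 2) @ v_x + V_w @ (m_x ** 2 + v_x)
    return m_y, v_y

# toy layer: 3 inputs -> 2 outputs
m_x = np.array([0.5, -1.0, 2.0]); v_x = np.array([0.1, 0.2, 0.05])
M_w = rng.normal(size=(2, 3));    V_w = np.full((2, 3), 0.01)
m_y, v_y = linear_vp(m_x, v_x, M_w, V_w)

# Monte-Carlo check of the closed-form moments
S = 200_000
xs = rng.normal(m_x, np.sqrt(v_x), size=(S, 3))
Ws = rng.normal(M_w, np.sqrt(V_w), size=(S, 2, 3))
ys = np.einsum('sij,sj->si', Ws, xs)
```

Stacking such layers, with matching moment rules for the transfer functions in between, gives the efficient marginal-likelihood computation described above.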

A slight variant of this is to use variance propagation to approximate Bayesian learning of neural networks: we can optimise a variational upper bound on the negative log-likelihood of the data. This allows us to exploit model uncertainty in a wide range of scenarios, such as active learning or reinforcement learning. Besides modelling uncertainty, this approach also requires very little data.
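For a diagonal-Gaussian weight posterior and a standard-normal prior, the complexity term of such a bound has a closed form. A minimal sketch, where the standard-normal prior and the toy numbers are illustrative assumptions:

```python
import numpy as np

def kl_diag_gauss_std_normal(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over weights."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

def variational_bound(nll, mu, log_var):
    """Upper bound on the negative log marginal likelihood: the expected
    data NLL under the weight posterior plus the KL complexity penalty."""
    return nll + kl_diag_gauss_std_normal(mu, log_var)

mu = np.array([0.3, -0.2]); log_var = np.array([-1.0, -2.0])
bound = variational_bound(nll=5.0, mu=mu, log_var=log_var)
```

The KL term vanishes exactly when the posterior equals the prior, which is what keeps the model from overfitting when data is scarce.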



Hybrid addition-multiplication networks using parameterisable transfer functions

Can the performance of neural networks be improved by a novel, parameterisable transfer function that allows each neuron to smoothly adjust the operation it performs on its inputs between summation and multiplication?

In artificial neural networks the value of a neuron is given by a weighted sum of its inputs propagated through a non-linear transfer function; however, some tasks greatly benefit from units that compute the product rather than the sum of their inputs.

To allow neurons to autonomously determine whether they are additive or multiplicative, we propose a parameterisable transfer function based on the fractionally iterated exponential function generated from a solution to Schröder's functional equation. This class of transfer functions makes it possible to continuously interpolate the operation a neuron performs between addition and multiplication. Since it is also differentiable, the operation can be learned with standard backpropagation training for neural networks.
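The two endpoints of the interpolation are easy to see with f = exp: a unit computing y = f^[n]( sum_j w_j f^[-n](x_j) ) is an ordinary weighted sum for n = 0 and a product unit for n = 1. A minimal sketch of just these endpoints (fractional n, the actual contribution, requires the Schröder-equation machinery and is not reproduced here; positive inputs are assumed for the n = 1 case):

```python
import numpy as np

def hybrid_unit(x, w, n):
    """Endpoints of the addition/multiplication interpolation
    y = f^[n]( sum_j w_j f^[-n](x_j) ) with f = exp.
    n = 0: ordinary weighted sum.
    n = 1: exp(sum_j w_j log x_j) = prod_j x_j ** w_j, a product unit.
    Fractional n would require a solution of Schroeder's equation."""
    if n == 0:
        return np.sum(w * x)
    if n == 1:
        return np.exp(np.sum(w * np.log(x)))
    raise NotImplementedError("fractional iterates not sketched here")

x = np.array([2.0, 3.0]); w = np.array([1.0, 1.0])
s = hybrid_unit(x, w, 0)   # weighted sum: 2 + 3
p = hybrid_unit(x, w, 1)   # product unit: 2 * 3
```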

So far the mathematical theory has been established (Urban & van der Smagt, 2015) and an implementation is under way. Next steps include testing this novel transfer function on regression networks.


Daniela Korhammer


Justin Bayer


Maximilian Karl

TUM: PhD candidate
efficient inference

Nutan Chen

TUM: PhD candidate
hand modelling

Patrick van der Smagt

current: Head of AI Research, data lab, VW Group

Previous: Director of BRML labs
fortiss, An-Institut der Technischen Universität München
Professor for Biomimetic Robotics and Machine Learning, TUM

Chairman of Assistenzrobotik e.V.

Sebastian Urban

TUM: PhD candidate
learning skin data
surban@tum.de, +49 89 289-25794

Title: Minimisation methods for training feed-forward networks
Written by: Smagt, P. van der
in: Neural Networks, 1994
Volume: 7, Number: 1, pages: 1–11



Abstract: Minimisation methods for training feed-forward networks with back-propagation are compared. Feed-forward neural network training is a special case of function minimisation, where no explicit model of the data is assumed. Therefore, and due to the high dimensionality of the data, linearisation of the training problem through use of orthogonal basis functions is not desirable. The focus is on function minimisation on any basis. Quasi-Newton and conjugate gradient methods are reviewed, and the latter are shown to be a special case of error back-propagation with momentum term. Three feed-forward learning problems are tested with five methods. It is shown that, due to the fixed stepsize, standard error back-propagation performs well in avoiding local minima. However, by using not only the local gradient but also the second derivative of the error function, a much shorter training time is required. Conjugate gradient with Powell restarts proves to be the superior method.
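The correspondence the abstract draws between conjugate gradients and back-propagation with momentum is visible in the update rule itself: each search direction is the negative gradient plus a multiple beta of the previous direction, i.e. a momentum term with an adaptively chosen coefficient. A minimal sketch for the linear (quadratic-objective) case, with an illustrative 2x2 system rather than a neural network:

```python
import numpy as np

def conjugate_gradient(A, b, x0, iters):
    """Minimise 0.5 x'Ax - b'x (A symmetric positive definite) with linear
    conjugate gradients. The direction update d = r + beta * d is a gradient
    step plus a 'momentum' term with adaptive coefficient beta."""
    x = x0.copy()
    r = b - A @ x                       # negative gradient at x
    d = r.copy()
    for _ in range(iters):
        Ad = A @ d
        alpha = (r @ r) / (d @ Ad)      # exact line search along d
        x = x + alpha * d
        r_new = r - alpha * Ad
        beta = (r_new @ r_new) / (r @ r)  # Fletcher-Reeves coefficient
        d = r_new + beta * d
        r = r_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b, np.zeros(2), iters=2)  # exact in n = 2 steps
```

On an n-dimensional quadratic, exact arithmetic reaches the minimum in at most n iterations, which is why the method needs restarts (e.g. Powell's) on the non-quadratic error surfaces of neural networks.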