Patrick van der Smagt

Current: Head of AI Research, Data Lab, VW Group

Previous: Director of BRML labs
fortiss, an affiliated institute (An-Institut) of the Technische Universität München
Professor of Biomimetic Robotics and Machine Learning, TUM

Chairman of Assistenzrobotik e.V.
address: Germany
email: smagt@brml.org

As head of a research lab focussing on machine learning and its application in robotics, biomimetics and sensory data processing, my goal is to develop techniques to model and use (human) movement.

The slides of my keynote on end-to-end learning at the 2015 IROS conference are available here.

Visit my blog.

awards

Best Paper Award, Int Conf on Neural Information Processing (ICONIP 2014)
King-Sun Fu Best 2013 Transactions on Robotics Paper Award (2014)
Harvard Medical School/MGH Martin Research Prize (2013)
Erwin Schrödinger Award, Helmholtz Association (2012)
SfN BCI Award Finalist (2012)
TUM Leonardo da Vinci Award (2008)
IEEE Best Paper Awards
Beckman Institute Fellowship (1995)
NACEE Fellowship

in the press

various sources, e.g. NY Times, May 16, 2012: on a brain-controlled robotics experiment
Bayerischer Rundfunk, May 2012: "Wenn Rechner immer intelligenter werden" ("When computers get ever smarter"), Radio Wissen
n-tv, June 10, 2010: EMG-controlled robotics
pinc, May 18, 2009, "biorobotics"
Het Financieele Dagblad, May, 2009
Discovery Channel, April 2009: "Future Homes"
3Sat, April 22, 2007: "Z wie Zukunft"
RTLII, March, 2007: "Welt der Wunder"
Pro7, Nov. 5, 2006: "Wunderwelt Wissen"
Abendzeitung, Oct. 28, 2006: "Bestnoten für Forscher und Unternehmen" ("Top marks for researchers and companies")
ZDF: Heute Journal, Sep. 15, 2006: Interview
ORF: "Newton" Science report, April 30, 2006: report on advanced prostheses
Süddeutsche Zeitung, Mar. 03, 2006: "Künstliche Hand am Computer entwickelt" ("Artificial hand developed on the computer")
Süddeutsche Zeitung, Jan. 26, 2006: "Direkter Draht zum Hirn" ("A direct line to the brain")

Our work has been, and continues to be, funded by various sources, including:
DFG project "SPP autonomous learning" (2012-2015)
NEUROBOTICS (EC project, 2005-2009)
NINAPRO (Swiss project, 2010-2013)
SENSOPAC (EC project, 2006-2010)
STIFF (EC project, 2009-2011)
THE (EC project, 2010-2014)
VIACTORS (EC project, 2009-2012)

on publishing

In June 2012 I resigned as editor of Neural Networks (Elsevier). Having worked with Neural Networks for almost 20 years, I have come to realise that the publication model propagated by behind-paywall publishers is no longer compatible with my own views of publication DOs and DON'Ts. In particular, I have decided to move away from classical publication methods towards open-access publishing, now that such alternatives are maturing.

After so many years of research and publishing, it is clear that only a peer-to-peer (double-open) review system with open access to the publications can be fair and unbiased.

I am currently still an editor of Biological Cybernetics, since Springer supports the open-access model there. However, that large publishing house, too, will have to rethink its approach to scientific publication before long.

Reviewing is good, but open publication is an alternative. My blog is an attempt to solve this publication issue, for my own benefit.




[27]
Title: Minimisation methods for training feed-forward networks
Written by: Smagt P van der
in: Neural Networks 1994
Volume: 7 Number: 1
on pages: 1--11

pdf bibtex


Abstract: Minimisation methods for training feed-forward networks with back-propagation are compared. Feed-forward neural network training is a special case of function minimisation, where no explicit model of the data is assumed. Therefore, and due to the high dimensionality of the data, linearisation of the training problem through use of orthogonal basis functions is not desirable. The focus is on function minimisation on any basis. Quasi-Newton and conjugate gradient methods are reviewed, and the latter are shown to be a special case of error back-propagation with momentum term. Three feed-forward learning problems are tested with five methods. It is shown that, due to the fixed stepsize, standard error back-propagation performs well in avoiding local minima. However, by using not only the local gradient but also the second derivative of the error function, a much shorter training time is required. Conjugate gradient with Powell restarts is shown to be the superior method.
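
As a small illustration of the comparison described in the abstract (my own sketch, not code from the paper), the Python snippet below trains a one-hidden-layer network on a toy regression problem in two ways: standard error back-propagation with a fixed step size and momentum term, and conjugate-gradient minimisation of the same error function via scipy.optimize (whose "CG" method is a Polak-Ribière variant rather than the Powell-restart variant the paper finds superior). Network size, data, step size and momentum value are illustrative assumptions, not values from the paper.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (64, 1))   # toy 1-D regression data (illustrative)
y = np.sin(np.pi * X)

n_hidden = 8                           # small single-hidden-layer network
shapes = [(1, n_hidden), (n_hidden,), (n_hidden, 1), (1,)]
sizes = [int(np.prod(s)) for s in shapes]

def unpack(w):
    # split the flat parameter vector into weight matrices and bias vectors
    parts, i = [], 0
    for s, n in zip(shapes, sizes):
        parts.append(w[i:i + n].reshape(s))
        i += n
    return parts

def loss_and_grad(w):
    # forward pass
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    err = out - y
    loss = 0.5 * np.mean(err ** 2)
    # back-propagate the error to obtain the gradient
    d_out = err / len(X)
    gW2, gb2 = h.T @ d_out, d_out.sum(0)
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)
    gW1, gb1 = X.T @ d_h, d_h.sum(0)
    grad = np.concatenate([g.ravel() for g in (gW1, gb1, gW2, gb2)])
    return loss, grad

w0 = rng.normal(scale=0.5, size=sum(sizes))

# 1) standard error back-propagation with a fixed step size and momentum term
w, v = w0.copy(), np.zeros_like(w0)
for _ in range(2000):
    loss, g = loss_and_grad(w)
    v = 0.9 * v - 0.1 * g
    w += v
print("back-propagation with momentum, final error:", loss)

# 2) conjugate gradient on the same error function and gradient
res = minimize(loss_and_grad, w0, jac=True, method="CG", options={"maxiter": 200})
print("conjugate gradient, final error:", res.fun)

The parallel the paper draws is visible in the update rules: the momentum step v = alpha*v - eta*grad has the same form as the conjugate-gradient search direction d = -grad + beta*d_prev, except that conjugate gradient chooses the step size by line search and beta from the gradients instead of keeping both fixed.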