Title: Why Feed-Forward Networks are in a Bad Shape
In: Proceedings of the 8th International Conference on Artificial Neural Networks
Written by: van der Smagt P, Hirzinger G
Year: 1998
Pages: 159--164
Editor: L. Niklasson and M. Bod{\'e}n and T. Ziemke
Publisher: Springer

Abstract: It has often been noted that the learning problem in feed-forward neural networks is very badly conditioned. Although the particular form of the transfer function is usually taken to be the cause of this ill-conditioning, we show that it is instead caused by the manner in which the neurons are connected. By analyzing the expected values of the Hessian in a feed-forward network, it is shown that even in a network where all the learning samples are well chosen and the transfer function is not in its saturated state, the system is still badly conditioned. We subsequently propose a change to the feed-forward network structure which alleviates this problem. Finally, we demonstrate the positive effect of this approach.
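The conditioning the abstract refers to can be probed numerically. The following is a minimal sketch, not the paper's own analysis: it estimates the Hessian of the squared-error loss of a tiny one-hidden-unit tanh network by central finite differences and reports its condition number (ratio of largest to smallest eigenvalue magnitude). The network, data, and operating point are made-up assumptions for illustration only.

```python
import numpy as np

def loss(w, X, y):
    # Tiny feed-forward net: one tanh hidden unit.
    # w[0] is the input->hidden weight, w[1] the hidden->output weight.
    h = np.tanh(X * w[0])
    return 0.5 * np.mean((h * w[1] - y) ** 2)

def hessian(f, w, eps=1e-4):
    # Central finite-difference estimate of the Hessian of f at w.
    n = len(w)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            w_pp = w.copy(); w_pp[i] += eps; w_pp[j] += eps
            w_pm = w.copy(); w_pm[i] += eps; w_pm[j] -= eps
            w_mp = w.copy(); w_mp[i] -= eps; w_mp[j] += eps
            w_mm = w.copy(); w_mm[i] -= eps; w_mm[j] -= eps
            H[i, j] = (f(w_pp) - f(w_pm) - f(w_mp) + f(w_mm)) / (4 * eps ** 2)
    return H

rng = np.random.default_rng(0)
X = rng.standard_normal(100)
y = 1.3 * np.tanh(0.7 * X)          # realizable target (hypothetical)
w = np.array([0.5, 1.0])            # non-saturated operating point

H = hessian(lambda w: loss(w, X, y), w)
eig = np.linalg.eigvalsh(H)
cond = np.abs(eig).max() / np.abs(eig).min()
print(f"Hessian condition number: {cond:.2f}")
```

Even in this well-behaved setting (well-spread samples, tanh far from saturation), the eigenvalue spread of the Hessian can be inspected directly; the paper's point is that such spread arises from the connection structure itself, not from transfer-function saturation.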