Abstract
Deriving a supervised training algorithm for a neural network entails selecting a norm criterion that gives a suitable global measure of the particular distribution of errors. The author addresses this problem and proposes a correspondence between the error distribution at the output of a layered feedforward neural network and L(p) norms. The generalized delta rule is examined to determine how its structure can be modified to perform minimization in a generic L(p) norm. The particular case of the Chebyshev norm is developed and tested.
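As a rough illustration of the modification the abstract describes (the notation, function names, and choice of a logistic activation below are my assumptions, not taken from the paper): replacing the usual squared-error measure with an L(p) measure changes only the error factor in the output-layer delta, since for E_p = (1/p) * sum_j |t_j - o_j|^p the derivative of the cost with respect to the error e_j = t_j - o_j is |e_j|^(p-1) * sign(e_j).

```python
# A minimal sketch (not the paper's code) of an output-layer delta under an
# L(p) error measure, generalizing the standard generalized delta rule.
import numpy as np

def lp_output_delta(target, output, p=2.0):
    """Output delta under E_p = (1/p) * sum_j |t_j - o_j|**p.

    p = 2 recovers the standard delta rule term (t - o) * f'(net);
    increasing p weights the largest errors more heavily, approaching
    Chebyshev-norm (max-error) minimization as p grows.
    """
    err = target - output
    # dE_p/de_j = |e_j|**(p-1) * sign(e_j): only this factor differs from L2.
    err_term = np.abs(err) ** (p - 1.0) * np.sign(err)
    fprime = output * (1.0 - output)  # derivative of an assumed logistic unit
    return err_term * fprime
```

For p = 2 this reduces exactly to the familiar (t - o) * f'(net) term, while large p concentrates the weight update on the worst-case output error, which is the intuition behind the Chebyshev-norm case the paper develops.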
