Abstract
The author reports on how information can be loaded into a multilayer perceptron using methods of optimal estimation theory. Initial results indicate that optimal estimate training (OET) is a supervised learning technique that is faster and more accurate than backward error propagation. Further, because optimal estimation is well characterized mathematically, the information content loaded into a set of network interconnection weights is also well characterized. Starting with a multilayer network and a set of (input/desired output) correlation vectors, the data are expressed in matrix form. Training occurs as a simultaneous calculation in which an optimal set of interconnection weights is determined, under a least-squares criterion, by standard pseudoinverse matrix techniques. This technique has previously been applied to a single-layer network. The author has extended the pseudoinverse method to multiple-layer networks, and the results are significant. Initial results show that optimal estimation methods are promising techniques for loading information into perceptrons.
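
The abstract does not give the details of the multilayer extension, but the single-layer pseudoinverse solution it builds on can be sketched briefly. The following is a minimal illustration, assuming a purely linear single-layer mapping; the function name `pseudoinverse_train` and the random test data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def pseudoinverse_train(X, D):
    """Least-squares weight estimation for a single-layer network.

    Solves for W minimizing ||X @ W - D|| (Frobenius norm) in one
    simultaneous calculation via the Moore-Penrose pseudoinverse.

    X : (n_samples, n_inputs)  matrix whose rows are input vectors
    D : (n_samples, n_outputs) matrix whose rows are desired outputs
    Returns W : (n_inputs, n_outputs) interconnection weight matrix.
    """
    return np.linalg.pinv(X) @ D

# Hypothetical usage on synthetic data (not from the paper):
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))        # 100 training inputs, 8 features each
W_true = rng.normal(size=(8, 3))     # underlying linear map to recover
D = X @ W_true                       # desired outputs
W = pseudoinverse_train(X, D)
print(np.allclose(W, W_true))        # True: exact least-squares fit here
```

In contrast to backward error propagation, which adjusts weights iteratively, this formulation computes the entire weight matrix in a single step once the training pairs are stacked into matrices.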