Abstract
In this contribution, a new stochastically motivated random weight initialization scheme for pattern-classifying Multi-Layer Perceptrons (MLPs) is presented. Its first aim is to ensure that all training examples and all nodes have an equal opportunity to contribute to the improvement of the network during Error Back-Propagation (EBP) training. In addition, it pursues input scale invariance: if the network inputs were replaced with rescaled versions, the initialization procedure should produce an equally well-performing network. Finally, the new algorithm can initialize MLPs comprising both concentric (e.g., Gaussian) and squashing (e.g., sigmoidal) nodes. Experiments demonstrate that networks initialized with the proposed method train better than networks initialized with a standard random initialization scheme.
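The abstract does not spell out the initialization formulas, so the following is only a minimal, hypothetical sketch of one way the input scale invariance it describes can be achieved: tying each input-to-hidden weight's scale to the corresponding input feature's statistics, so that rescaling a feature leaves the initial net activations unchanged. The function name `scale_invariant_init` and all variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def scale_invariant_init(X, n_hidden, rng=None):
    """Sketch: draw input-to-hidden weights whose magnitude is tied to
    per-feature statistics of the training inputs X, one way to make the
    initialization invariant to a rescaling of any input feature."""
    rng = np.random.default_rng() if rng is None else rng
    mu = X.mean(axis=0)              # per-feature mean
    sigma = X.std(axis=0) + 1e-12    # per-feature spread (guard against /0)
    n_in = X.shape[1]
    # Weights inversely proportional to each feature's spread: if feature j
    # is rescaled by c > 0, sigma[j] grows by c and W[j, :] shrinks by 1/c,
    # so the initial pre-activations x @ W are unaffected.
    W = rng.uniform(-1.0, 1.0, size=(n_in, n_hidden)) / (sigma[:, None] * np.sqrt(n_in))
    # Biases centre the pre-activations on the data mean, which is likewise
    # invariant under feature rescaling.
    b = -(mu @ W)
    return W, b

# Usage sketch: identical initial pre-activation statistics whether the
# inputs are given in their original units or rescaled.
X = np.random.default_rng(0).normal(size=(100, 4))
W1, b1 = scale_invariant_init(X, n_hidden=8, rng=np.random.default_rng(1))
W2, b2 = scale_invariant_init(X * 1000.0, n_hidden=8, rng=np.random.default_rng(1))
assert np.allclose(X @ W1 + b1, (X * 1000.0) @ W2 + b2)
```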
