Training neural networks with additive noise in the desired signal

Abstract
A new global optimization strategy for training adaptive systems such as neural networks and adaptive filters [finite or infinite impulse response (FIR or IIR)] is proposed in this paper. Instead of adding random noise to the weights, as proposed in the past, additive random noise is injected directly into the desired signal. Experimental results show that this procedure also greatly speeds up the backpropagation algorithm. The method is very easy to implement in practice, since it preserves the backpropagation algorithm and requires only a single random noise generator with a monotonically decreasing step size per output channel. Hence, it is an ideal strategy to speed up supervised learning and to avoid entrapment in local minima when the noise variance is appropriately scheduled.
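
The sketch below illustrates the idea stated in the abstract: during standard backpropagation training, additive random noise with a monotonically decreasing variance is injected into the desired (target) signal rather than into the weights. It is a minimal illustration, not the authors' implementation; the network size, learning rate, and the particular annealing schedule sigma0 / (1 + epoch) are assumptions chosen for the example.

```python
# Minimal sketch: backpropagation with annealed noise added to the desired signal.
# All hyperparameters below are illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem: learn y = sin(x) on [-pi, pi].
X = rng.uniform(-np.pi, np.pi, size=(256, 1))
Y = np.sin(X)

# One-hidden-layer network with tanh units.
W1 = rng.normal(scale=0.5, size=(1, 16))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1))
b2 = np.zeros(1)

lr = 0.05       # learning rate (assumed)
sigma0 = 0.5    # initial noise standard deviation (assumed)
epochs = 200

for epoch in range(epochs):
    # Monotonically decreasing noise step size (annealing schedule, assumed form).
    sigma = sigma0 / (1.0 + epoch)

    # Inject additive noise into the desired signal (one generator per output channel).
    Y_noisy = Y + rng.normal(scale=sigma, size=Y.shape)

    # Standard forward pass.
    H = np.tanh(X @ W1 + b1)
    Y_hat = H @ W2 + b2

    # Standard backpropagation on the noisy targets (MSE loss).
    err = Y_hat - Y_noisy
    dW2 = H.T @ err / len(X)
    db2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H ** 2)
    dW1 = X.T @ dH / len(X)
    db1 = dH.mean(axis=0)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

    if epoch % 50 == 0:
        mse = float(np.mean((Y_hat - Y) ** 2))
        print(f"epoch {epoch:3d}  sigma {sigma:.4f}  clean MSE {mse:.5f}")
```

Note that the weight-update equations are the unmodified backpropagation rules; only the target vector changes from epoch to epoch, which is why the method is described as easy to implement on top of an existing trainer.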
