Implementing Neural Network Models on Parallel Computers

Abstract
The remarkable processing capabilities of the nervous system must derive from the large numbers of neurons participating (roughly 10^10), since the time-scales involved are of the order of a millisecond, rather than the nanoseconds of modern computers. The neural network models which attempt to capture this behaviour are inherently parallel. We review the implementation of a range of neural network models on SIMD and MIMD computers. On the ICL Distributed Array Processor (DAP), a 4096-processor SIMD machine, we have studied training algorithms in the context of the Hopfield net, with specific applications including the storage of words and continuous text in content-addressable memory. The Hopfield and Tank analogue neural net has been used for image restoration with the Geman and Geman algorithm. We compare the performance of this scheme on the DAP and on a Meiko Computing Surface, a reconfigurable MIMD array of transputers. We also describe the strategies which we have used to implement the Durbin and Willshaw elastic net model on the Computing Surface.
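To make the content-addressable-memory idea concrete, the following is a minimal serial sketch of a Hopfield net with Hebbian storage and asynchronous recall. It is purely illustrative and bears no relation to the DAP implementation described in the paper; all function names and parameters here are our own.

```python
import numpy as np

def train(patterns):
    """Hebbian weight matrix for +/-1 patterns, with zero self-connections."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)
    return w

def recall(w, state, steps, rng):
    """Asynchronous updates: flip one randomly chosen unit at a time."""
    state = state.copy()
    for _ in range(steps):
        i = rng.integers(len(state))
        state[i] = 1 if w[i] @ state >= 0 else -1
    return state

rng = np.random.default_rng(1)
patterns = rng.choice([-1, 1], size=(3, 64))  # store 3 random 64-unit patterns
w = train(patterns)

cue = patterns[0].copy()
cue[:8] *= -1                                 # corrupt 8 of the 64 units
restored = recall(w, cue, steps=2000, rng=rng)
print(int(restored @ patterns[0]))            # overlap with the stored pattern
```

At this low loading (3 patterns in 64 units) the corrupted cue typically relaxes back to the stored pattern; the SIMD interest in the paper lies in the fact that the weight-matrix products above parallelise naturally over the processor array.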