Abstract
Nerve net models are designed that can learn up to $\gamma $ of the $\alpha ^{n}$ possible sequences of stimuli, where $\gamma $ is large but $\alpha ^{n}$ much larger still. The models proposed store information in modifiable synapses. Their connexions need to be specified only in a general way, a large part being random. They resist destruction of a good many cells. When built with Hebb synapses (or any other class B or C synapses whose modification depends on the conjunction of activities in two cells) they demand a number of inputs to each cell that agrees well with known anatomy. The number of cells required to perform tasks of the kind considered, as well as the human brain can perform them, is only a small fraction of the number of cells in the brain. It is suggested that the models proposed are likely to be the most economical possible for their tasks, components and constructional constraints, and that any others that approach them in economy must share with them certain observable features, in particular an abundance of cells with many independent inputs and low thresholds.
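The abstract's central mechanism is the conjunction rule: a synapse is modified only when activity in its input line coincides with activity in the cell it contacts. The sketch below is a minimal illustration of that rule for a single binary threshold cell with many independent inputs and a low threshold; it is not Brindley's construction, and all names and parameter values (`n_inputs`, `threshold`, the forced-firing training step) are assumptions made for the example.

```python
import numpy as np

# Minimal sketch (not from the paper) of one binary threshold cell whose
# synapses switch from ineffective to effective when pre- and post-synaptic
# activity coincide -- the "conjunction" rule the abstract attributes to
# Hebb synapses. All parameter values are illustrative assumptions.
rng = np.random.default_rng(0)

n_inputs = 100                        # many independent inputs per cell
threshold = 3                         # low threshold, as the abstract suggests
modified = np.zeros(n_inputs, bool)   # which synapses have been modified

def fires(pattern, forced=False):
    """Cell output: forced during training, otherwise the thresholded count
    of active inputs arriving through already-modified synapses."""
    return forced or (pattern & modified).sum() >= threshold

def learn(pattern):
    """Conjunction rule: a synapse becomes modified when its input line
    and the (forced) post-synaptic cell are active together."""
    global modified
    if fires(pattern, forced=True):
        modified |= pattern.astype(bool)

# Store one sparse random stimulus, then test recall on it and on noise.
stored = rng.random(n_inputs) < 0.05
learn(stored)
print(fires(stored))                        # True: the stored stimulus is recognised
print(fires(rng.random(n_inputs) < 0.05))   # usually False for an unlearned stimulus
```

A random, mostly unspecified pattern of connexions suffices here because recognition depends only on enough modified synapses being driven together, which is also why the cell tolerates the loss of a fair number of its inputs.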
