THE EFFECTS OF QUANTIZATION ON MULTI-LAYER FEEDFORWARD NEURAL NETWORKS
- 1 June 2003
- journal article
- Published by World Scientific Pub Co Pte Ltd in International Journal of Pattern Recognition and Artificial Intelligence
- Vol. 17 (4), 637-661
- https://doi.org/10.1142/s0218001403002514
Abstract
In this paper we investigate the combined effect of quantization and clipping on multi-layer feedforward neural networks (MLFNN). Statistical models are used to analyze the effects of quantization in a digital implementation. We analyze the performance degradation as a function of the number of fixed-point and floating-point quantization bits in the MLFNN. To analyze a true nonlinear neuron, we adopt the uniform and normal probability distributions, compare the training performance with and without weight clipping, and derive in detail the effect of the quantization error on forward and backward propagation. Regardless of the distribution of the initial weights, the weight distribution approximates a normal distribution during training with floating-point or high-precision fixed-point quantization. Only when the number of quantization bits is very low does the weight distribution cluster toward ±1 during training with fixed-point quantization. For a true nonlinear neuron, we establish and analyze the relationships among input and output bit resolution, the number of network layers, and the performance degradation, based on statistical models of on-chip and off-chip training. Our experimental simulation results verify the presented theoretical analysis.
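To make the fixed-point scheme discussed in the abstract concrete, the following is a minimal sketch of uniform fixed-point quantization with weight clipping to [-1, 1]; the function name, step-size rule, and round-to-nearest choice are illustrative assumptions, not the paper's exact formulation. It also illustrates the clustering effect: at very low bit counts, many quantized weights land on the clip boundaries ±1.

```python
import numpy as np

def quantize_fixed_point(w, bits, clip=1.0):
    """Uniformly quantize weights to `bits` bits with clipping to [-clip, clip].

    A sketch under assumed conventions: signed uniform quantizer with step
    size clip / 2**(bits - 1) and round-to-nearest.
    """
    step = clip / (2 ** (bits - 1))           # quantization step for signed values
    w_clipped = np.clip(w, -clip, clip)       # clipping confines weights to [-1, 1]
    return np.round(w_clipped / step) * step  # round-to-nearest quantization

# Example: as the bit count drops, a growing fraction of weights
# collapses onto the clip boundaries +/-1.
w = np.random.normal(0.0, 0.8, size=10000)
for b in (16, 8, 3):
    wq = quantize_fixed_point(w, b)
    print(b, "bits -> fraction at +/-1:", np.mean(np.abs(wq) == 1.0))
```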