Energy-efficient ConvNets through approximate computing
- 1 March 2016
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE) in 2016 IEEE Winter Conference on Applications of Computer Vision (WACV)
Abstract
Recently, convolutional neural networks (ConvNets) have emerged as state-of-the-art classification and detection algorithms, achieving near-human performance in visual detection. However, ConvNet algorithms are typically very computation- and memory-intensive. To embed ConvNet-based classification into wearable platforms and embedded systems such as smartphones or ubiquitous electronics for the Internet of Things, their energy consumption must be reduced drastically. This paper proposes methods based on approximate computing to reduce the energy consumption of state-of-the-art ConvNet accelerators. By combining techniques at both the system and circuit level, the energy of the system's arithmetic can be reduced by up to 30× without losing classification accuracy, and by more than 100× at 99% classification accuracy, compared to the commonly used 16-bit fixed-point number format.
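The baseline the abstract compares against is a 16-bit fixed-point number format, and the core lever of the approximate-computing approach is shrinking the word length of weights and activations. The snippet below is a minimal sketch of that idea only, not the paper's actual accelerator-level method: the function `quantize_fixed_point`, the bit splits, and the random filter bank are all illustrative assumptions, used here to show how quantization error grows as the fixed-point word length is reduced.

```python
import numpy as np

def quantize_fixed_point(x, total_bits=16, frac_bits=8):
    """Round a floating-point tensor onto a signed fixed-point grid.

    total_bits: word length including the sign bit; frac_bits: fractional bits.
    Values outside the representable range are saturated.
    """
    scale = 2.0 ** frac_bits
    max_int = 2 ** (total_bits - 1) - 1
    min_int = -2 ** (total_bits - 1)
    q = np.clip(np.round(x * scale), min_int, max_int)
    return q / scale

# Hypothetical convolution filter bank; shorter word lengths trade a small
# quantization error for a large reduction in arithmetic energy.
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.1, size=(64, 3, 3, 3))
for bits in (16, 10, 6):
    w_q = quantize_fixed_point(weights, total_bits=bits, frac_bits=bits - 2)
    err = np.abs(weights - w_q).mean()
    print(f"{bits}-bit fixed point: mean abs quantization error = {err:.5f}")
```

In practice, the tolerable word length differs per layer, which is why approaches like this one evaluate classification accuracy (rather than raw quantization error) when deciding how far the precision can be lowered.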