A Runtime Reconfigurable Design of Compute-in-Memory–Based Hardware Accelerator for Deep Learning Inference
- 28 June 2021
- research article
- Published by Association for Computing Machinery (ACM) in ACM Transactions on Design Automation of Electronic Systems
- Vol. 26 (6), 1-18
- https://doi.org/10.1145/3460436
Abstract
Compute-in-memory (CIM) is an attractive solution to the "memory wall" challenge posed by the extensive computation in deep learning hardware accelerators. In a custom ASIC design, a specific chip instance is restricted to a specific network during runtime, yet the hardware development cycle normally lags far behind the emergence of new algorithms. Although some reported CIM-based architectures claim to adapt to different deep neural network (DNN) models, few disclose the dataflow or control details needed to substantiate such adaptability. An instruction set architecture (ISA) could provide high flexibility, but its complexity would be an obstacle to efficiency. In this article, a runtime reconfigurable design methodology for CIM-based accelerators is proposed to support a class of convolutional neural networks running on one prefabricated chip instance with ASIC-like efficiency. First, several design aspects are investigated: (1) the reconfigurable weight mapping method; (2) the input side of data transmission, mainly concerning weight reloading; and (3) the output side of data processing, mainly concerning reconfigurable accumulation. Then, a system-level performance benchmark is performed for the inference of different DNN models, such as VGG-8 on the CIFAR-10 dataset and AlexNet, GoogLeNet, ResNet-18, and DenseNet-121 on the ImageNet dataset, to measure the trade-offs between runtime reconfigurability, chip area, memory utilization, throughput, and energy efficiency.
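To make the reconfigurable weight mapping and accumulation ideas concrete, below is a minimal Python sketch, not the article's actual dataflow: it unrolls a convolution kernel into a 2-D matrix, splits it across fixed-size crossbar tiles, and digitally accumulates per-tile partial sums. The tile dimensions (128 x 128) and all function names (map_conv_weights, cim_matvec) are illustrative assumptions; reconfiguring for a new network corresponds to reloading the tile contents.

```python
import numpy as np

# Assumed crossbar dimensions; the article's actual array size may differ.
TILE_ROWS, TILE_COLS = 128, 128

def map_conv_weights(weights):
    """Unroll a (K, K, C_in, C_out) kernel into a 2-D matrix and split it
    into crossbar-sized tiles. Returns the tile grid and the logical shape."""
    k, _, c_in, c_out = weights.shape
    flat = weights.reshape(k * k * c_in, c_out)        # rows: unrolled input window, cols: output channels
    rows = -(-flat.shape[0] // TILE_ROWS) * TILE_ROWS  # pad up to tile multiples (ceiling division)
    cols = -(-flat.shape[1] // TILE_COLS) * TILE_COLS
    padded = np.zeros((rows, cols), dtype=flat.dtype)
    padded[:flat.shape[0], :flat.shape[1]] = flat
    tiles = [
        [padded[r:r + TILE_ROWS, c:c + TILE_COLS]
         for c in range(0, cols, TILE_COLS)]
        for r in range(0, rows, TILE_ROWS)
    ]
    return tiles, flat.shape

def cim_matvec(tiles, logical_shape, x):
    """Emulate one matrix-vector product: each tile produces an in-array
    partial sum; partial sums along the row dimension are accumulated
    digitally at the output side."""
    n_rows, n_cols = logical_shape
    x_pad = np.zeros(len(tiles) * TILE_ROWS, dtype=x.dtype)
    x_pad[:n_rows] = x
    out = np.zeros(len(tiles[0]) * TILE_COLS, dtype=np.float64)
    for i, tile_row in enumerate(tiles):
        x_slice = x_pad[i * TILE_ROWS:(i + 1) * TILE_ROWS]
        for j, tile in enumerate(tile_row):
            # One crossbar operation per tile; results summed off-array.
            out[j * TILE_COLS:(j + 1) * TILE_COLS] += x_slice @ tile
    return out[:n_cols]

# Runtime reconfiguration amounts to reloading tiles for a different layer:
w = np.random.randn(3, 3, 64, 128).astype(np.float32)  # a 3x3 conv, 64 -> 128 channels
tiles, shape = map_conv_weights(w)
y = cim_matvec(tiles, shape, np.random.randn(shape[0]).astype(np.float32))
print(y.shape)  # (128,)
```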