Scale-CIM: Precision-scalable computing-in-memory for energy-efficient quantized neural networks
- 1 January 2023
- Research article
- Published by Elsevier BV in Journal of Systems Architecture
Abstract
No abstract available.
Funding Information
- National Research Foundation of Korea
- Institute for Information and Communications Technology Promotion (2022-0-00441-001)
- Ministry of Science, ICT and Future Planning (2020R1A2C2003500, 2020R1A6A3A13064398, 2020R1G1A1100040)
This publication has 9 references indexed in Scilit:
- BitBlade: Energy-Efficient Variable Bit-Precision Hardware Accelerator for Quantized Neural Networks. IEEE Journal of Solid-State Circuits, 2022
- ±CIM SRAM for Signed In-Memory Broad-Purpose Computing From DSP to Neural Processing. IEEE Journal of Solid-State Circuits, 2021
- Quant-PIM: An Energy-Efficient Processing-in-Memory Accelerator for Layerwise Quantized Neural Networks. IEEE Embedded Systems Letters, 2021
- BitBlade. Published by Association for Computing Machinery (ACM), 2019
- A Configurable Multi-Precision CNN Computing Framework Based on Single Bit RRAM. Published by Association for Computing Machinery (ACM), 2019
- BISMO: A Scalable Bit-Serial Matrix Multiplication Overlay for Reconfigurable Computing. Published by Institute of Electrical and Electronics Engineers (IEEE), 2018
- Bit Fusion: Bit-Level Dynamically Composable Architecture for Accelerating Deep Neural Network. Published by Institute of Electrical and Electronics Engineers (IEEE), 2018
- ODESY. Published by Association for Computing Machinery (ACM), 2016
- NVSim: A Circuit-Level Performance, Energy, and Area Model for Emerging Nonvolatile Memory. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2012