Towards ADC-Less Compute-In-Memory Accelerators for Energy Efficient Deep Learning

Utkarsh Saxena, Indranil Chakraborty and Kaushik Roy
School of Electrical and Computer Engineering, Purdue University
saxenau@purdue.edu
ichakra@purdue.edu
kaushik@purdue.edu

ABSTRACT


Compute-in-Memory (CiM) hardware has shown great potential for accelerating Deep Neural Networks (DNNs). However, most CiM accelerators for matrix-vector multiplication rely on costly analog-to-digital converters (ADCs), which become a bottleneck to achieving high energy efficiency. In this work, we propose a hardware-software co-design approach that reduces the aforementioned ADC cost through partial-sum quantization. Specifically, we replace ADCs with 1-bit sense amplifiers and develop a quantization-aware training methodology to compensate for the resulting loss in representational ability. We show that the proposed ADC-less DNN model achieves a 1.1x-9.6x reduction in energy consumption while maintaining accuracy within 1% of the DNN model without partial-sum quantization.
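To make the idea of 1-bit partial-sum quantization with quantization-aware training concrete, below is a minimal, illustrative PyTorch sketch; it is not the authors' implementation. It assumes a common modeling choice: each crossbar slice produces an analog partial sum that a 1-bit sense amplifier reads out as its sign, and training uses a straight-through estimator so gradients flow through the binarization. The names `BinarizePartialSum`, `ADCLessCrossbarLinear`, and the `crossbar_rows` parameter are hypothetical.

```python
import torch
import torch.nn as nn


class BinarizePartialSum(torch.autograd.Function):
    """1-bit readout of an analog partial sum (sense-amplifier model),
    trained with a straight-through estimator (assumed QAT choice)."""

    @staticmethod
    def forward(ctx, psum):
        return torch.sign(psum)  # sense amplifier outputs +1 / -1

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output  # straight-through estimator: pass gradient unchanged


class ADCLessCrossbarLinear(nn.Module):
    """Hypothetical linear layer: the dot product is split into
    crossbar-sized input slices, each slice's partial sum is binarized
    by the 1-bit readout, and the binary partial sums are accumulated."""

    def __init__(self, in_features, out_features, crossbar_rows=64):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.crossbar_rows = crossbar_rows

    def forward(self, x):
        out = 0.0
        for start in range(0, x.shape[-1], self.crossbar_rows):
            end = start + self.crossbar_rows
            # Analog partial sum of one crossbar slice.
            psum = x[..., start:end] @ self.weight[:, start:end].t()
            # 1-bit sense-amplifier readout replaces the ADC.
            out = out + BinarizePartialSum.apply(psum)
        return out
```

Under these assumptions, the layer can be dropped into a DNN and trained end-to-end, letting the weights adapt to the coarse 1-bit partial-sum readout instead of a full-precision ADC.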

Keywords: Compute-In-Memory, DNN Acceleration, Analog Computing, Quantization, Hardware-Software Co-Design.
