Ternary Compute-Enabled Memory using Ferroelectric Transistors for Accelerating Deep Neural Networks

Sandeep Krishna Thirumala, Shubham Jain, Sumeet Kumar Gupta and Anand Raghunathan

School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
sthirum@purdue.edu
jain130@purdue.edu
guptask@purdue.edu
raghunathan@purdue.edu

ABSTRACT

Ternary Deep Neural Networks (DNNs), which employ ternary precision for weights and activations, have recently been shown to attain accuracies close to full-precision DNNs, raising interest in their efficient hardware realization. In this work we propose a Non-Volatile Ternary Compute-Enabled memory cell (TeC-Cell) based on ferroelectric transistors (FEFETs) for in-memory computing in the signed ternary regime. In particular, the proposed cell enables storage of ternary weights and employs multi-word-line assertion to perform massively parallel signed dot-product computations between ternary weights and ternary inputs. We evaluate the proposed design at the array level and show 72% and 74% higher energy efficiency for multiply-and-accumulate (MAC) operations compared to standard near-memory computing designs based on SRAM and FEFET, respectively. Furthermore, we evaluate the proposed TeC-Cell in an existing ternary in-memory DNN accelerator. Our results show 3.3⨯-3.4⨯ reduction in system energy and 4.3⨯-7⨯ improvement in system performance over SRAM and FEFET based near-memory accelerators, across a wide range of DNN benchmarks including both deep convolutional and recurrent neural networks.
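As an illustration of the operation the abstract describes, the following sketch models a signed ternary dot-product between a weight vector and an input vector, both restricted to {-1, 0, +1}. This is a hypothetical software model for clarity only; the function name and arithmetic modeling are assumptions, not taken from the paper, where the accumulation is performed by analog bit-line currents in the TeC-Cell array.

```python
def ternary_dot(weights, inputs):
    """Signed dot-product of two ternary vectors (illustrative model).

    In the TeC-Cell array, each weight-input product contributes a
    signed current on the bit-lines under multi-word-line assertion;
    here the accumulation is modeled purely arithmetically.
    """
    assert all(w in (-1, 0, 1) for w in weights)
    assert all(x in (-1, 0, 1) for x in inputs)
    return sum(w * x for w, x in zip(weights, inputs))

# Example: 1*1 + 0*(-1) + (-1)*(-1) + 1*0 = 2
print(ternary_dot([1, 0, -1, 1], [1, -1, -1, 0]))  # prints 2
```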

Keywords: Deep Neural Networks, Dot-Product, Ferroelectric Transistors, In-Memory Computing, Low-Precision, Multiply-and-Accumulate, Ternary DNN.
