Gradient-based Bit Encoding Optimization for Noise-Robust Binary Memristive Crossbar

Youngeun Kim (1,a), Hyunsoo Kim (2,c), Seijoon Kim (2,d), Sang Joon Kim (2,e) and Priyadarshini Panda (1,b)
(1) Department of Electrical Engineering, Yale University, USA
(a) youngeun.kim@yale.edu
(b) priya.panda@yale.edu
(2) Samsung Advanced Institute of Technology, South Korea
(c) hs0128.kim@samsung.com
(d) seijoon.kim@samsung.com
(e) sangjoon0919.kim@samsung.com

ABSTRACT


Binary memristive crossbars have gained significant attention as energy-efficient deep learning hardware accelerators. Nonetheless, they suffer from various sources of noise due to their analog nature. To overcome this limitation, most previous works train the weight parameters on noise data obtained from a crossbar. These methods are, however, ineffective because it is difficult to collect noise data in a large-volume manufacturing environment where each crossbar exhibits large device- and circuit-level variation. Moreover, even though these methods improve accuracy to some extent, we argue that there is still room for improvement. This paper explores a new perspective on mitigating crossbar noise in a more generalized way by manipulating the input binary bit encoding rather than training the network weights with respect to noise data. We first mathematically show that the noise decreases as the number of binary bit encoding pulses increases when representing the same amount of information. In addition, we propose Gradient-based Bit Encoding Optimization (GBO), which optimizes a different number of pulses at each layer, based on our in-depth analysis that each layer has a different level of noise sensitivity. The proposed heterogeneous layer-wise bit encoding scheme achieves high noise robustness with low computational cost. Our experimental results on public benchmark datasets show that GBO improves the classification accuracy by ∼5–40% in severe noise scenarios.
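To make the first claim concrete, the toy simulation below (our own sketch, not the paper's derivation or implementation) spreads the same input magnitude over N binary pulses with independent additive Gaussian noise per pulse. The accumulated signal grows with N while the accumulated noise standard deviation grows only with sqrt(N), so the noise-to-signal ratio shrinks roughly as 1/sqrt(N); the noise model, sigma value, and function name are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_noise(num_pulses, sigma=0.2, trials=10_000):
    """Empirical noise-to-signal ratio when one unit of input is sent as
    `num_pulses` equal binary pulses through a noisy crossbar column.
    (Toy model: independent additive Gaussian noise per pulse.)"""
    signal = num_pulses * 1.0                                   # ideal accumulated output
    noise = rng.normal(0.0, sigma, size=(trials, num_pulses)).sum(axis=1)
    return np.abs(noise).mean() / signal

# Relative noise drops roughly as 1/sqrt(N) as the pulse count N grows.
for n in (1, 2, 4, 8):
    print(f"{n} pulse(s): relative noise ~= {relative_noise(n):.3f}")
```

In this toy setting, doubling the pulse count cuts the relative noise by about a factor of sqrt(2), which is the intuition behind trading more encoding pulses for higher noise robustness at a given information content.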

Keywords: Memristive Crossbar, Binary Input Encoding, Deep Neural Network, Binary Neural Network.


