Accelerator-friendly Neural-network Training: Learning Variations and Defects in RRAM Crossbar

Lerong Chen1,a, Jiawen Li1,b, Yiran Chen2, Qiuping Deng3, Jiyuan Shen1,c, Xiaoyao Liang1,d and Li Jiang1,e
1Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China.
a clion0003@sjtu.edu.cn
b myloveys@sjtu.edu.cn
c shenjiyuan@sjtu.edu.cn
d liang-xy@sjtu.edu.cn
e ljiang_cs@sjtu.edu.cn
2Department of Electrical and Computer Engineering, University of Pittsburgh, PA.
yiran.chen@pitt.edu
3Lynmax Research, Beijing, China.
dengqiuping@lynmaxtech.com

ABSTRACT


The RRAM crossbar, built from memristor devices, can naturally carry out matrix-vector multiplication; it has therefore gained great momentum as a highly energy-efficient accelerator for neuromorphic computing. However, resistance variations and stuck-at faults in the memristor devices dramatically degrade not only the chip yield but also the classification accuracy of neural networks running on the RRAM crossbar. Existing hardware-based solutions incur substantial overhead and power consumption, while software-based solutions are less effective at tolerating stuck-at faults and large variations. In this paper, we propose an accelerator-friendly neural-network training method that leverages the inherent self-healing capability of the neural network: guided by the fault/variation distribution in the RRAM crossbar, it prevents large-weight synapses from being mapped to abnormal memristors. Experimental results show that the proposed method restores the classification accuracy, which suffers a 10%-45% loss in previous works, to within 1% of the ideal level.
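To make the core idea concrete, the following minimal sketch (not the paper's actual algorithm; the defect mask, the quadratic penalty, and the plain gradient-descent loop are illustrative assumptions) trains a single linear layer while penalizing only those weights that would be mapped onto abnormal memristor cells, so that large-magnitude synapses are steered toward healthy cells:

# Hedged sketch: defect-aware training of one linear layer (Python/NumPy).
# The defect mask, penalty form, and learning rule are illustrative
# assumptions, not the paper's exact training procedure.
import numpy as np

rng = np.random.default_rng(0)

# Toy crossbar dimensions: 8 inputs x 4 outputs.
n_in, n_out = 8, 4

# Hypothetical fault/variation map: True marks an abnormal memristor cell
# (e.g., a stuck-at fault or a large resistance variation) at that position.
defect_mask = rng.random((n_in, n_out)) < 0.1

# Synthetic regression data standing in for a real training set.
X = rng.normal(size=(256, n_in))
W_true = rng.normal(size=(n_in, n_out))
Y = X @ W_true

W = rng.normal(scale=0.1, size=(n_in, n_out))
lr, lam = 0.01, 5.0   # lam weights the defect-aware penalty

for step in range(500):
    # Forward pass: on hardware, the crossbar computes this product in analog.
    err = X @ W - Y
    grad = X.T @ err / len(X)
    # Extra quadratic penalty only on weights mapped to abnormal cells,
    # pushing those synapses toward small magnitudes so the defects matter less.
    grad += lam * defect_mask * W
    W -= lr * grad

print("max |weight| on defective cells:", np.abs(W[defect_mask]).max())
print("max |weight| on healthy cells:  ", np.abs(W[~defect_mask]).max())

In this toy setting the penalized weights end up markedly smaller than those on healthy cells; the remaining layers of a real network would then compensate for the suppressed synapses, which is the self-healing effect the proposed training method relies on.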


