An On-chip Layer-wise Training Method for RRAM based Computing-in-memory Chips
Yiwen Geng, Bin Gaoᵃ, Qingtian Zhangᵇ, Wenqiang Zhang, Peng Yao, Yue Xi, Yudeng Lin, Junren Chen, Jianshi Tang, Huaqiang Wu and He Qian
Institute of Microelectronics, Beijing Innovation Center for Future Chips (ICFC), Tsinghua University, Beijing, China
ᵃgaob1@tsinghua.edu.cn
ᵇzhangqt0103@tsinghua.edu.cn
ABSTRACT
RRAM-based computing-in-memory (CIM) chips have shown great potential to accelerate deep neural networks on edge devices by reducing data transfer between memory and computing units. However, due to the non-ideal characteristics of RRAM, the accuracy of a neural network deployed on an RRAM chip is usually lower than that of its software counterpart. Here we propose an on-chip layer-wise training (LWT) method to alleviate the adverse effects of RRAM imperfections and improve chip accuracy. Because LWT uses a locally validated dataset, it reduces communication between the edge and the cloud, which helps protect the privacy of personalized data. Simulation results on the CIFAR-10 dataset show that the LWT method improves the accuracy of VGG-16 and ResNet-18 by more than 5% and 10%, respectively, while requiring only 25% of the operations and 35% of the buffer needed by the back-propagation method. Moreover, a pipelined variant, pipe-LWT, is presented to further improve throughput by a factor of three.
Keywords: RRAM, On-chip Training, Computing in Memory.
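To make the layer-wise idea concrete, below is a minimal sketch of layer-wise retraining: each layer is updated in turn while all other layers stay frozen, using a small local dataset. The toy model, random stand-in data, and training schedule are illustrative assumptions only, not the authors' on-chip implementation, which must additionally account for RRAM non-idealities.

```python
# Minimal sketch of layer-wise training (LWT): retrain one layer at a time
# on a small local dataset while all other layers remain frozen.
# The model, data, and hyperparameters below are hypothetical placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 32 * 32, 10),
)
criterion = nn.CrossEntropyLoss()

# Hypothetical local dataset: a few CIFAR-10-shaped batches of random data.
local_data = [(torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,)))
              for _ in range(4)]

# Collect the modules that actually have trainable parameters.
trainable_layers = [m for m in model
                    if any(p.requires_grad for p in m.parameters())]

for layer in trainable_layers:
    # Freeze every parameter, then unfreeze only the current layer.
    for p in model.parameters():
        p.requires_grad_(False)
    for p in layer.parameters():
        p.requires_grad_(True)

    # Train just this layer on the local dataset.
    opt = torch.optim.SGD(layer.parameters(), lr=1e-2)
    for x, y in local_data:
        opt.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        opt.step()
```

Because only one layer's gradients and activations must be held at a time, such a scheme needs far less buffering and fewer operations than full back-propagation, which is consistent with the savings reported in the abstract.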