XST: A Crossbar Column-wise Sparse Training for Efficient Continual Learning

Fan Zhang, Li Yang, Jian Meng, Jae-sun Seo, Yu (Kevin) Cao and Deliang Fan
School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, Arizona 85281
{fzhang95, lyang166, jmeng15, jseo28, ycao17, dfan12}@asu.edu

ABSTRACT


Leveraging ReRAM crossbar-based In-Memory-Computing (IMC) to accelerate single-task DNN inference has been widely studied. However, using the ReRAM crossbar for continual learning has not yet been explored. In this work, we propose XST, a novel crossbar column-wise sparse training framework for continual learning. XST significantly reduces the training cost and saves inference energy. More importantly, it is friendly to existing crossbar-based convolution engines, with almost no hardware overhead. Compared with the state-of-the-art CPG method, experiments show that XST achieves 4.95% higher accuracy. Furthermore, XST demonstrates ∼5.59× training speedup and 1.5× inference energy saving.
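To illustrate what crossbar column-wise sparsity means in practice, the minimal PyTorch sketch below zeroes whole columns of a 2-D crossbar-mapped weight tile, so surviving columns stay dense and map directly onto crossbar columns. The function name, the L1-norm column-importance score, and the one-tile-per-weight-matrix mapping are illustrative assumptions for this sketch, not details taken from the paper.

    import torch

    def column_wise_mask(w2d: torch.Tensor, sparsity: float) -> torch.Tensor:
        # w2d: layer weights already unrolled to the 2-D crossbar
        # layout (rows x columns); one tensor = one crossbar tile
        # (an assumption made for this sketch).
        # Score each crossbar column by its L1 norm (illustrative
        # criterion; the paper's training rule may differ).
        col_scores = w2d.abs().sum(dim=0)
        n_prune = int(w2d.shape[1] * sparsity)
        # Zero the lowest-scoring columns as whole units, so the
        # crossbar convolution engine needs no per-cell indexing.
        pruned = torch.argsort(col_scores)[:n_prune]
        mask = torch.ones_like(w2d)
        mask[:, pruned] = 0.0
        return mask

    # Usage: re-apply the mask after each weight update during training.
    w = torch.randn(128, 256)          # one hypothetical crossbar tile
    mask = column_wise_mask(w, 0.5)    # prune half of the columns
    w_sparse = w * mask

Because entire columns are either kept dense or removed, the freed columns can in principle be reallocated to later tasks in a continual-learning setting, which matches the hardware-friendliness claim in the abstract.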

Keywords: Continual Learning, In-Memory-Computing, Sparse Learning.


