PCM: Precision-Controlled Memory System for Energy Efficient Deep Neural Network Training
Boyeal Kim1,a, Sang Hyun Lee1,b, Hyun Kim2, Duy-Thanh Nguyen3,d, Minh-Son Le3,e, Ik Joon Chang3,f, Dohun Kwon4,g, Jin Hyeok Yoo4,h, Jun Won Choi4,i and Hyuk-Jae Lee1,c
1Department of Electrical and Computer Engineering, Seoul National University
abykim@capp.snu.ac.kr
bshleemark@capp.snu.ac.kr
chyuk_jae_lee@capp.snu.ac.kr
2Department of Electrical and Information Engineering, Seoul National University of Science and Technology
hyunkim@seoultech.ac.kr
3Department of Electronics Engineering, Kyung Hee University
ddtnguyen@khu.ac.kr
esonlm@khu.ac.kr
fichang@khu.ac.kr
4Department of Electrical Engineering, Hanyang University
gdhkwon@spa.hanyang.ac.kr
hjhyoo@spa.hanyang.ac.kr
ijunwchoi@hanyang.ac.kr
ABSTRACT
Deep neural network (DNN) training suffers from significant energy consumption in the memory system, and most existing energy reduction techniques for the memory system have focused on introducing low precision compatible with the computing units (e.g., FP16, FP8). These studies have shown that even when training networks with FP16 data precision, it is possible to achieve training accuracy as good as that of FP32, the de facto standard for DNN training. However, our extensive experiments show that the data precision can be reduced further while maintaining the training accuracy of DNNs, by truncating some least significant bits (LSBs) of FP16, a scheme we call hard approximation. Nevertheless, existing hardware structures for DNN training cannot efficiently support such low precision. In this work, we propose a novel memory system architecture for GPUs, named the precision-controlled memory system (PCM), which allows flexible management at the level of hard approximation. PCM provides high DRAM bandwidth by distributing each precision to different channels with a transposed data mapping on DRAM. In addition, PCM supports fine-grained hard approximation in the L1 data cache using software-controlled registers, which reduces data movement and thereby improves energy saving and system performance. Furthermore, PCM reduces data maintenance energy, which accounts for a considerable portion of memory energy consumption, by controlling the refresh period of DRAM. Experimental results show that when training ResNet-20 on the CIFAR-100 dataset with precision tuning, PCM achieves 66% energy saving and 20% performance enhancement without loss of accuracy.
Keywords: Deep Neural Network, Approximate Computing, Precision Control, Refresh Period Control, General-Purpose Graphics Processing Unit, High Bandwidth Memory