RED: A ReRAM-based Deconvolution Accelerator

Zichen Fan (1,a), Ziru Li (1,b), Bing Li (2,3,c), Yiran Chen (3,d) and Hai (Helen) Li (3,e)
1: ECE Dept., Tsinghua University, Beijing, China
2: ECE Dept., Duke University, Durham, NC, USA
3: Army Research Office, Research Triangle Park, USA
a: fanzc15@mails.tsinghua.edu.cn, b: lizr15@mails.tsinghua.edu.cn, c: bing.li.ece@duke.edu, d: yiran.chen@duke.edu, e: hai.li@duke.edu

ABSTRACT


Deconvolution is widely used in neural networks; for example, it is essential for unsupervised learning in generative adversarial networks and for constructing fully convolutional networks for semantic segmentation. Resistive RAM (ReRAM)-based processing-in-memory architectures have been widely explored for accelerating convolutional computation and demonstrate good performance. Performing deconvolution on existing ReRAM-based accelerator designs, however, suffers from long latency and high energy consumption, because deconvolutional computation includes not only convolution but also extra add-on operations. To execute deconvolution more efficiently, we analyze its computation requirements and propose a ReRAM-based accelerator design, namely RED. More specifically, RED integrates two orthogonal methods: a pixel-wise mapping scheme that reduces the redundancy caused by zero-insertion, and a zero-skipping data flow that increases computation parallelism and thereby improves performance. Experimental evaluations show that, compared to a state-of-the-art ReRAM-based accelerator, RED achieves a 3.69∼31.15× speedup and reduces energy consumption by 8%∼88.36%.
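To make the source of the redundancy concrete, the sketch below (not the paper's implementation, just a minimal NumPy illustration with assumed names such as deconv_via_zero_insertion, x, kernel, and stride) expresses deconvolution as zero-insertion followed by an ordinary convolution; most of the resulting multiply-accumulates operate on inserted zeros, which is exactly the waste a pixel-wise mapping and zero-skipping data flow aim to eliminate.

```python
import numpy as np

def deconv_via_zero_insertion(x, kernel, stride=2):
    """Transposed convolution written as zero-insertion plus a plain
    convolution. Illustrative only; framework padding/cropping details
    are omitted."""
    h, w = x.shape
    k = kernel.shape[0]
    # Step 1: insert (stride - 1) zeros between neighboring input pixels.
    up = np.zeros(((h - 1) * stride + 1, (w - 1) * stride + 1))
    up[::stride, ::stride] = x
    # Step 2: pad and slide the kernel as in a standard convolution.
    up = np.pad(up, k - 1)
    out = np.zeros((up.shape[0] - k + 1, up.shape[1] - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Most elements of `up` are the inserted zeros, so many of these
            # multiply-accumulates contribute nothing to the output.
            out[i, j] = np.sum(up[i:i + k, j:j + k] * kernel)
    return out

x = np.arange(9, dtype=float).reshape(3, 3)
kernel = np.ones((3, 3))
print(deconv_via_zero_insertion(x, kernel, stride=2).shape)  # (7, 7)
```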
