Hardware-Software Codesign of Weight Reshaping and Systolic Array Multiplexing for Efficient CNNs
Jingyao Zhang1,a, Huaxi Gu1,b, Grace Li Zhang2,c, Bing Li2,d and Ulf Schlichtmann2,e
1 Xidian University
2 Technical University of Munich
a jingyao.zhang.xidian@foxmail.com
b hxgu@xidian.edu.cn
c grace-li.zhang@tum.de
d b.li@tum.de
e ulf.schlichtmann@tum.de
ABSTRACT
The last decade has witnessed breakthroughs of deep neural networks (DNNs) in various fields, e.g., image and speech recognition. With the increasing depth of DNNs, the number of multiply-accumulate (MAC) operations with weights grows significantly, preventing their deployment on resource-constrained platforms. Weight pruning is considered an effective technique to compress neural networks for acceleration. However, weights after pruning usually exhibit irregular patterns. Implementing MAC operations with such irregular weight patterns on hardware platforms with regular designs, e.g., GPUs and systolic arrays, may lead to an underutilization of hardware resources. To utilize hardware resources efficiently, in this paper we propose a hardware-software codesign framework for acceleration on systolic arrays. First, weights after unstructured pruning are reorganized into a dense cluster. Second, blocks of various sizes are selected to cover the cluster seamlessly. To support the concurrent computation of such blocks on systolic arrays, a multiplexing technique and the corresponding systolic architecture are developed for various CNNs. The experimental results demonstrate that the performance of CNN inference can be improved significantly without accuracy loss.
Keywords: Neural Networks, Systolic Arrays, Hardware-Software Codesign, Efficient CNNs.
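To make the reshaping and block-covering steps summarized in the abstract concrete, the sketch below illustrates the general idea on a small pruned weight matrix. It is a minimal sketch under our own assumptions: the permutation heuristic (sorting rows and columns by nonzero count), the candidate block shapes, and the helper names reshape_weights and cover_with_blocks are illustrative stand-ins, not the algorithms proposed in the paper.

```python
# Illustrative sketch only: the paper's actual reshaping and block-covering
# algorithms are not specified here; this uses simple stand-in heuristics.
import numpy as np

def reshape_weights(w):
    """Permute rows and columns of a pruned weight matrix so that
    nonzero entries gather toward a dense cluster (top-left corner)."""
    row_order = np.argsort(-(w != 0).sum(axis=1))   # densest rows first
    col_order = np.argsort(-(w != 0).sum(axis=0))   # densest columns first
    return w[np.ix_(row_order, col_order)], row_order, col_order

def cover_with_blocks(mask, block_shapes=((4, 4), (2, 4), (2, 2), (1, 1))):
    """Greedily cover all nonzero positions of `mask` with rectangular
    blocks, preferring larger shapes when they are at least half full."""
    remaining = mask.copy()
    blocks = []
    rows, cols = mask.shape
    for bh, bw in block_shapes:
        for r in range(rows - bh + 1):
            for c in range(cols - bw + 1):
                tile = remaining[r:r + bh, c:c + bw]
                # place the block if it covers enough still-uncovered weights
                if tile.sum() * 2 >= bh * bw:
                    blocks.append((r, c, bh, bw))
                    remaining[r:r + bh, c:c + bw] = False
    return blocks

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal((8, 8))
    w[rng.random((8, 8)) < 0.7] = 0.0          # unstructured pruning
    clustered, _, _ = reshape_weights(w)
    blocks = cover_with_blocks(clustered != 0)
    print(f"{int((w != 0).sum())} nonzero weights covered by {len(blocks)} blocks")
```

Each returned block would then be mapped onto the systolic array; the multiplexing scheme that lets several such blocks share the array concurrently is developed in the architecture sections of the paper.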