A Novel Zero Weight/Activation-Aware Hardware Architecture of Convolutional Neural Network

Dongyoung Kim1, Junwhan Ahn2 and Sungjoo Yoo1
1Department of CSE, Seoul National University
2Department of EE, Seoul National University


It is imperative to accelerate convolutional neural networks (CNNs) due to their ever-widening application areas, ranging from servers and mobile devices to IoT devices. Motivated by the observation that CNN computations contain a significant fraction of zero values in both kernel weights and activations, we propose a novel hardware accelerator for CNNs that exploits both zero weights and zero activations. We also identify a zero-induced load imbalance problem, which arises in zero-aware parallel CNN hardware architectures, and present a zero-aware kernel allocation as a solution. According to our experiments with a cycle-accurate simulation model, RTL, and layout design of the proposed architecture running two real deep CNNs, pruned AlexNet [1] and VGG-16 [2], our architecture offers 4x/1.8x (AlexNet) and 5.2x/2.1x (VGG-16) speedups compared with state-of-the-art zero-agnostic and zero-activation-aware architectures, respectively.
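The two ideas in the abstract can be illustrated in software. The sketch below is not the paper's hardware design; it is a minimal, hypothetical Python model of (a) a multiply-accumulate loop that skips operations whenever a weight or activation is zero, and (b) a greedy zero-aware kernel allocation that assigns kernels to processing elements (PEs) by nonzero-weight count so that PEs with zero-skipping finish at roughly the same time. Function names and the greedy heuristic are illustrative assumptions, not taken from the paper.

```python
import heapq

def zero_aware_dot(weights, activations):
    """Accumulate only products where both operands are nonzero,
    mimicking a zero-weight/activation-skipping MAC datapath."""
    return sum(w * a for w, a in zip(weights, activations)
               if w != 0 and a != 0)

def balance_kernels(kernels, num_pes):
    """Illustrative zero-aware kernel allocation (greedy heuristic,
    not the paper's exact algorithm): since a zero-skipping PE's
    runtime scales with the number of nonzero weights it processes,
    assign each kernel (largest nonzero count first) to the
    currently least-loaded PE."""
    nnz = [sum(1 for w in k if w != 0) for k in kernels]
    heap = [(0, pe) for pe in range(num_pes)]  # (nonzero load, PE id)
    heapq.heapify(heap)
    assignment = [[] for _ in range(num_pes)]
    for i in sorted(range(len(kernels)), key=lambda i: -nnz[i]):
        load, pe = heapq.heappop(heap)
        assignment[pe].append(i)
        heapq.heappush(heap, (load + nnz[i], pe))
    return assignment
```

A zero-agnostic allocation (e.g., round-robin over kernel indices) can leave one PE with mostly dense kernels and another with mostly sparse ones; the greedy allocation above equalizes nonzero work across PEs, which is the load-imbalance issue the abstract refers to.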

Keywords: Convolutional neural network, Accelerator, Zero value, Kernel, Activation.
