Synthesis of Activation-Parallel Convolution Structures for Neuromorphic Architectures

Seban Kim and Jaeyong Chung
Incheon National University, Incheon, Korea

ABSTRACT


Convolutional neural networks have demonstrated continued success in various visual recognition challenges. On neuromorphic computing systems, convolutional layers are implemented in either an activation-serial or a fully parallel manner. This paper presents an unrolling method that generates parallel structures for convolutional layers according to the required level of parallelism. We analyze the resource requirements of unrolling two-dimensional filters and propose methods to handle practical considerations such as stride, borders, and alignment. We apply the proposed methods to practical convolutional neural networks including AlexNet, and the generated structures are mapped onto a recent neuromorphic computing system. The results demonstrate that the proposed methods can significantly improve performance or reduce power consumption without any area penalty.
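To illustrate the underlying idea of unrolling a two-dimensional filter into a fully parallel (activation-parallel) structure, the following sketch builds a weight matrix so that every output neuron is computed in one parallel step from a single presentation of the flattened input. This is only a minimal NumPy illustration under assumed dimensions; the function name unroll_conv2d, the single-channel setting, and the "valid"-border handling are hypothetical and do not represent the paper's actual synthesis procedure or its handling of stride, borders, and alignment.

import numpy as np

def unroll_conv2d(weights, in_h, in_w, stride=1):
    """Build a matrix W such that W @ x.flatten() equals the 'valid'
    2D cross-correlation of an in_h-by-in_w input x with `weights`.
    Each row of W corresponds to one output neuron, so all outputs
    can be evaluated in parallel from one input vector."""
    k_h, k_w = weights.shape
    out_h = (in_h - k_h) // stride + 1
    out_w = (in_w - k_w) // stride + 1
    W = np.zeros((out_h * out_w, in_h * in_w))
    for oy in range(out_h):
        for ox in range(out_w):
            row = oy * out_w + ox
            for ky in range(k_h):
                for kx in range(k_w):
                    iy, ix = oy * stride + ky, ox * stride + kx
                    W[row, iy * in_w + ix] = weights[ky, kx]
    return W

# Usage: check the unrolled (parallel) form against a direct, serial sweep.
rng = np.random.default_rng(0)
x = rng.standard_normal((6, 6))
k = rng.standard_normal((3, 3))
W = unroll_conv2d(k, 6, 6, stride=1)
parallel = (W @ x.flatten()).reshape(4, 4)
direct = np.array([[(x[i:i+3, j:j+3] * k).sum() for j in range(4)]
                   for i in range(4)])
assert np.allclose(parallel, direct)

In this view, the activation-serial implementation corresponds to evaluating one row of W per time step, while the fully unrolled structure dedicates hardware to every row at once; intermediate levels of parallelism trade rows evaluated per step against resource usage.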

Keywords: Deep learning, Deep neural networks, Machine learning, Neuromorphic engineering.


