Structure Optimizations of Neuromorphic Computing Architectures for Deep Neural Network
Heechun Park and Taewhan Kim
Seoul National University, Seoul, Korea
phc@snucad.snu.ac.kr, tkim@snucad.snu.ac.kr
ABSTRACT
This work addresses a new structure optimization of neuromorphic computing architectures, which theoretically enables DNN (deep neural network) computation to run twice as fast as on existing architectures. Specifically, we propose a new structural technique that mixes dendritic- and axonal-based neuromorphic cores so as to completely eliminate the inherent non-zero waiting time between cores in the DNN implementation. In addition, in conjunction with the new architecture, we propose a technique for maximally utilizing computation units so that the total resource overhead of computation units is minimized. We provide a set of experimental data demonstrating the effectiveness (i.e., speed and area) of our proposed architectural optimizations: ~2x speedup with no accuracy penalty on the neuromorphic computation, or improved accuracy with no additional computation time.