HSCoNAS: Hardware-Software Co-Design of Efficient DNNs via Neural Architecture Search

Xiangzhong Luo1,a, Di Liu2,b, Shuo Huai1,2,c and Weichen Liu1,2,d
1School of Computer Science and Engineering, Nanyang Technological University, Singapore
2HP-NTU Digital Manufacturing Corporate Lab, Nanyang Technological University, Singapore
a) xiangzho001@e.ntu.edu.sg
b) liu.di@ntu.edu.sg
c) shuo001@e.ntu.edu.sg
d) liu@ntu.edu.sg

ABSTRACT


In this paper, we present a novel multi-objective hardware-aware neural architecture search (NAS) framework, namely HSCoNAS, to automate the design of deep neural networks (DNNs) with high accuracy but low latency on target hardware. To accomplish this goal, we first propose an effective hardware performance modeling method to approximate the runtime latency of DNNs on target hardware, which is integrated into HSCoNAS to avoid tedious on-device measurements. In addition, we propose two novel techniques: dynamic channel scaling, which maximizes accuracy under a specified latency constraint, and progressive space shrinking, which refines the search space towards the target hardware and reduces search overhead. These two techniques jointly allow HSCoNAS to perform fine-grained and efficient exploration. Finally, an evolutionary algorithm (EA) is incorporated to conduct the architecture search. Extensive experiments on ImageNet across diverse target hardware, i.e., GPU, CPU, and edge devices, demonstrate the superiority of HSCoNAS over recent state-of-the-art approaches.
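To make the overall search procedure concrete, the following is a minimal sketch of a latency-constrained evolutionary architecture search. It is not the paper's implementation: the layer count, candidate-operation latencies, latency budget, and the sum-of-ops fitness proxy are all hypothetical placeholders (a real run would use the paper's hardware performance model and validation accuracy), but the structure — a lookup-table latency predictor filtering candidates inside an EA loop — mirrors the approach the abstract describes.

```python
import random

# Hypothetical search space: 6 layers, each choosing among 4 candidate ops.
NUM_LAYERS, NUM_OPS = 6, 4
# Assumed per-op latency lookup table (ms), standing in for a hardware
# performance model built from profiling the target device.
LATENCY_LUT = [[1.0, 1.8, 2.5, 3.4] for _ in range(NUM_LAYERS)]
LATENCY_BUDGET = 14.0  # ms, the user-specified latency constraint


def predict_latency(arch):
    # Additive latency model: sum per-layer costs from the lookup table.
    return sum(LATENCY_LUT[i][op] for i, op in enumerate(arch))


def fitness(arch):
    # Proxy score standing in for validation accuracy; larger op indices
    # score higher purely for illustration. Over-budget architectures are
    # rejected outright, mirroring a hard latency constraint.
    if predict_latency(arch) > LATENCY_BUDGET:
        return float("-inf")
    return sum(arch)


def mutate(arch, prob=0.2):
    # Re-sample each layer's op choice with a small probability.
    return [random.randrange(NUM_OPS) if random.random() < prob else op
            for op in arch]


def evolutionary_search(pop_size=20, generations=30, seed=0):
    random.seed(seed)
    # Seed the population with the cheapest architecture so at least one
    # candidate always satisfies the latency budget.
    pop = [[0] * NUM_LAYERS] + [
        [random.randrange(NUM_OPS) for _ in range(NUM_LAYERS)]
        for _ in range(pop_size - 1)
    ]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # keep the fitter half
        children = [mutate(random.choice(parents)) for _ in parents]
        pop = parents + children
    best = max(pop, key=fitness)
    return best, predict_latency(best)


best_arch, best_latency = evolutionary_search()
```

Because infeasible candidates receive a fitness of negative infinity, the returned architecture is always within the latency budget; swapping the proxy score for measured validation accuracy and the lookup table for a profiled latency model turns this skeleton into a practical hardware-aware search.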
