Towards Best-effort Approximation: Applying NAS to General-purpose Approximate Computing

Weiwei Chen1,2,a, Ying Wang1,b, Shuang Yang1,2,c, Cheng Liu1,d and Lei Zhang1,e

1Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
2University of Chinese Academy of Sciences, Beijing, China
achenweiwei@ict.ac.cn
bwangying2009@ict.ac.cn
cyangshuang2019@ict.ac.cn
dliucheng@ict.ac.cn
ezlei@ict.ac.cn

ABSTRACT

The design of neural network architectures for code approximation involves a large number of hyper-parameters to explore, so it is a non-trivial task to find a neural-based approximate computing solution that meets the demands of application-specified accuracy and Quality of Service (QoS). Prior works do not address the problem of ‘optimal’ network architecture design for program approximation, which depends on the user-specified constraints, the complexity of the dataset, and the hardware configuration. In this paper, we apply Neural Architecture Search (NAS) to search for and select neural approximators, and provide an automatic framework that generates best-effort approximation results while satisfying the user-specified QoS/accuracy constraints. Compared with previous methods, this work achieves more than 1.43× speedup and 1.74× energy reduction on average when applied to the AxBench benchmarks.

Keywords: Approximate computing, NAS, Energy efficiency.
