ApproxANN: An Approximate Computing Framework for Artificial Neural Network
Qian Zhang, Ting Wang, Ye Tian, Feng Yuan and Qiang Xu
CUhk REliable computing laboratory (CURE), Department of Computer Science & Engineering, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong.
Artificial neural networks (ANNs) are among the most well-established machine learning techniques and have a wide range of applications, such as Recognition, Mining and Synthesis (RMS). As many of these applications are inherently error-tolerant, in this work we propose a novel approximate computing framework for ANNs, namely ApproxANN. Compared to existing solutions, ApproxANN considers approximation for both computation and memory accesses, thereby achieving greater energy savings. To be specific, ApproxANN characterizes the impact of each neuron on the output quality in an effective and efficient manner, and judiciously determines how to approximate the computation and memory accesses of less critical neurons so as to maximize the energy efficiency gain under a given quality constraint. Experimental results on various ANN applications with different datasets demonstrate the efficacy of the proposed solution.
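The criticality-driven selection described above can be illustrated with a minimal sketch. This is not the authors' implementation: the criticality score (output change when a neuron is muted), the approximation model (a simple scaling factor standing in for reduced-precision computation and memory access), and the greedy selection loop are all illustrative assumptions; the actual framework's characterization and energy model are detailed in the full paper.

```python
# Hedged sketch of criticality-guided selective approximation in a tiny
# one-hidden-layer network. All names and heuristics here are illustrative,
# not taken from the ApproxANN paper.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 8, 2
W1 = rng.standard_normal((n_in, n_hidden))
W2 = rng.standard_normal((n_hidden, n_out))
x = rng.standard_normal((3, n_in))  # a small batch of inputs

def forward(neuron_scale):
    # neuron_scale emulates per-neuron approximation:
    # 1.0 = exact computation, <1.0 = degraded (approximate) neuron.
    h = np.tanh(x @ W1) * neuron_scale
    return h @ W2

def criticality():
    # Toy criticality score: mean output change when a neuron is zeroed out.
    base = forward(np.ones(n_hidden))
    scores = np.empty(n_hidden)
    for j in range(n_hidden):
        mask = np.ones(n_hidden)
        mask[j] = 0.0
        scores[j] = np.abs(forward(mask) - base).mean()
    return scores

def select_approx(quality_budget):
    # Greedily approximate neurons in increasing order of criticality,
    # stopping before the output error exceeds the quality budget.
    order = np.argsort(criticality())
    exact = forward(np.ones(n_hidden))
    scale = np.ones(n_hidden)
    chosen = []
    for j in order:
        trial = scale.copy()
        trial[j] = 0.9  # stand-in for a reduced-precision neuron
        err = np.abs(forward(trial) - exact).mean()
        if err > quality_budget:
            break
        scale = trial
        chosen.append(int(j))
    return chosen
```

A looser quality budget lets the selection loop approximate more neurons, which is where the energy savings would come from in a real implementation.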