LAC: Learned Approximate Computing

Vaibhav Gupta, Tianmu Li and Puneet Gupta
Electrical and Computer Engineering, UCLA, Los Angeles, USA
vaibhav22@ucla.edu
litianmu1995@ucla.edu
puneetg@ucla.edu

ABSTRACT


Approximate hardware trades acceptable error for improved performance, and previous literature focuses on optimizing this trade-off in the hardware itself. In this paper, we show that the application (i.e., the software) can be optimized for better accuracy without losing any of the performance benefits of the approximate hardware. We propose LAC: learned approximate computing, a method of tuning application parameters to compensate for hardware errors. Our approach showed improvements across a variety of standard signal/image processing applications, delivering an average improvement of 5.82 dB in output PSNR and 0.23 in output SSIM. This translates to up to 87% power reduction and 83% area reduction at similar application quality. LAC also allows the same approximate hardware to be used for multiple applications.
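To make the idea concrete, the sketch below illustrates one way such parameter tuning can be set up; it is not the paper's implementation. It assumes a toy error model for the approximate hardware (a truncated fixed-point multiplier, approx_multiply with a hypothetical drop_bits setting), takes a 3x3 Gaussian blur as the application, and tunes the filter weights by gradient descent in PyTorch, using straight-through estimators for the non-differentiable rounding steps, so that the approximate datapath better matches the exact-hardware output.

```python
# Illustrative sketch only (not the paper's method). Assumptions: the
# "approximate hardware" is a toy truncated fixed-point multiplier, the
# application is a 3x3 Gaussian blur, and the application parameters
# (filter weights) are tuned with gradient descent.
import torch
import torch.nn.functional as F

def ste_round(x):
    # Round in the forward pass, identity gradient in the backward pass.
    return x + (torch.round(x) - x).detach()

def ste_floor(x):
    return x + (torch.floor(x) - x).detach()

def approx_multiply(a, b, drop_bits=4):
    # Toy error model: 8-bit fixed-point operands, low product bits truncated.
    scale = 256.0
    prod = ste_round(a * scale) * ste_round(b * scale)
    prod = ste_floor(prod / (1 << drop_bits)) * (1 << drop_bits)
    return prod / (scale * scale)

def approx_conv(img, kernel):
    # 2D correlation where every multiply goes through the approximate model.
    kh, kw = kernel.shape
    patches = F.unfold(img[None, None], (kh, kw), padding=kh // 2)  # (1, kh*kw, H*W)
    out = approx_multiply(patches, kernel.reshape(1, -1, 1)).sum(dim=1)
    return out.reshape(img.shape)

def exact_conv(img, kernel):
    return F.conv2d(img[None, None], kernel[None, None],
                    padding=kernel.shape[0] // 2).squeeze()

def psnr(x, ref):
    return 10 * torch.log10(1.0 / torch.mean((x - ref) ** 2))

torch.manual_seed(0)
img = torch.rand(64, 64)                       # stand-in input image in [0, 1]
nominal = torch.tensor([[1., 2., 1.],
                        [2., 4., 2.],
                        [1., 2., 1.]]) / 16.0  # nominal Gaussian blur weights
reference = exact_conv(img, nominal)           # quality target: exact output

# LAC-style tuning: learn filter weights so the approximate datapath
# reproduces the exact reference output as closely as possible.
learned = nominal.clone().requires_grad_(True)
opt = torch.optim.Adam([learned], lr=1e-3)
for _ in range(300):
    loss = torch.mean((approx_conv(img, learned) - reference) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    print("PSNR, nominal weights on approx HW:",
          psnr(approx_conv(img, nominal), reference).item())
    print("PSNR, learned weights on approx HW:",
          psnr(approx_conv(img, learned), reference).item())
```

The design choice mirrored here is that only the software-visible parameters change: the (modeled) approximate hardware is left untouched, so its power and area savings are preserved while the learned weights absorb part of its error.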

Keywords: Approximate Computing, Machine Learning.


