Timing-Predictable Vision Processing for Autonomous Systems
Tanya Amert (1,a), Michael Balszun (2,e), Martin Geier (2,f), F. Donelson Smith (1,b), James H. Anderson (1,c) and Samarjit Chakraborty (1,d)
1 University of North Carolina at Chapel Hill, USA
  a tamert@cs.unc.edu
  b smithfd@cs.unc.edu
  c anderson@cs.unc.edu
  d samarjit@cs.unc.edu
2 Technical University of Munich, Germany
  e michael.balszun@tum.de
  f mgeier@tum.de
ABSTRACT
Vision processing for autonomous systems today involves implementing machine learning algorithms and vision processing libraries on embedded platforms consisting of CPUs, GPUs, and FPGAs. Because many of these platforms rely on closed-source proprietary components, performing timing analysis on them is very difficult. Even measuring or tracing their timing behavior is challenging, although doing so is the first step towards reasoning about how different algorithmic and implementation choices affect the end-to-end timing of the vision processing pipeline. In this paper, we discuss recent progress in developing tracing, measurement, and analysis infrastructure for determining the timing behavior of vision processing pipelines implemented on state-of-the-art FPGA and GPU platforms.