Specifying and Evaluating Quality Metrics for Vision-based Perception Systems

Anand Balakrishnan1,a, Aniruddh G. Puranic1,b, Xin Qin1,c, Adel Dokhanchi2,e, Jyotirmoy V. Deshmukh1,d, Heni Ben Amor2,f and Georgios Fainekos2,g
1University of Southern California
a) anandbal@usc.edu
b) puranic@usc.edu
c) xinqin@usc.edu
d) jdeshmuk@usc.edu
2Arizona State University
e) adkohanc@asu.edu
f) hbenamor@asu.edu
g) fainekos@asu.edu

ABSTRACT


Robust perception algorithms are a vital ingredient for autonomous systems such as self-driving vehicles. Checking the correctness of perception algorithms, such as those based on deep convolutional neural networks (CNNs), is a formidable challenge. In this paper, we propose the use of Timed Quality Temporal Logic (TQTL) as a formal language to express desirable spatio-temporal properties of a perception algorithm processing a video. While perception algorithms are traditionally tested by comparing their performance to ground truth labels, we show that TQTL is a useful tool for assessing the quality of perception, offering an alternative metric that provides useful information even in the absence of ground truth labels. We demonstrate TQTL monitoring on two popular CNNs, YOLO and SqueezeDet, and present a comparative study of the results obtained for each architecture.

Keywords: Temporal Logic, Monitoring, Autonomous vehicles, Perception, Image processing, Quality Metrics