Using Formal Conformance Testing to Generate Scenarios for Autonomous Vehicles

Jean-Baptiste Horel (1,a), Christian Laugier (1,b), Lina Marsso (2), Radu Mateescu (3,e), Lucie Muller (3,f), Anshul Paigwar (1,c), Alessandro Renzaglia (1,d) and Wendelin Serwe (3,g)
(1) Univ. Grenoble Alpes, Inria, 38000 Grenoble, France
(a) jean-baptiste.horel@inria.fr
(b) christian.laugier@inria.fr
(c) anshul.paigwar@inria.fr
(d) alessandro.renzaglia@inria.fr
(2) Dept. of Computer Science, University of Toronto, Toronto, Canada
lina.marsso@utoronto.ca
(3) Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LIG, 38000 Grenoble, France
(e) radu.mateescu@inria.fr
(f) lucie.muller@inria.fr
(g) wendelin.serwe@inria.fr

ABSTRACT

Simulation, a common practice for evaluating autonomous vehicles, requires the specification of realistic scenarios, in particular critical ones: scenarios that occur rarely and are potentially dangerous to reproduce on the road. Such scenarios may be either generated randomly or specified manually. Random generation is easy, but the relevance of the resulting scenarios can be difficult to assess. Manually specified scenarios can focus on a given feature, but their design can be difficult and time-consuming, especially when satisfactory coverage is required. In this work, we propose an automatic approach to generate a large number of relevant critical scenarios for autonomous driving simulators. The approach is based on the generation of behavioral conformance tests from a formal model (specifying the ground-truth configuration together with the range of vehicle behaviors) and a test purpose (specifying the critical feature to focus on). By construction, the obtained abstract test cases cover all possible executions exercising the given feature, and they can be automatically translated into the inputs of autonomous driving simulators. We illustrate our approach by generating thousands of behavior trees for the CARLA simulator for several realistic configurations.

Keywords: Behavior trees, CARLA simulator, Formal methods, Input-output conformance, Scenario generation, Test purpose.
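To give a concrete sense of the scenario format the abstract mentions, the sketch below shows a minimal behavior tree in plain Python. This is purely illustrative and not the paper's tooling: the node semantics (a sequence composite ticking action leaves) mirror the general behavior-tree model used by simulator scenario runners, and all maneuver names are hypothetical.

```python
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3

class Action:
    """Leaf node: a single vehicle maneuver (names are hypothetical)."""
    def __init__(self, name):
        self.name = name
    def tick(self, trace):
        # In a real simulator binding this would command a vehicle;
        # here we only record the maneuver and report success.
        trace.append(self.name)
        return Status.SUCCESS

class Sequence:
    """Composite node: ticks children in order, stopping at the
    first child that does not succeed (standard sequence semantics)."""
    def __init__(self, children):
        self.children = children
    def tick(self, trace):
        for child in self.children:
            status = child.tick(trace)
            if status is not Status.SUCCESS:
                return status
        return Status.SUCCESS

# An illustrative critical scenario: another vehicle cuts in
# front of the ego vehicle and brakes.
scenario = Sequence([
    Action("drive_to_trigger_point"),
    Action("cut_in_front_of_ego"),
    Action("brake_to_standstill"),
])

trace = []
result = scenario.tick(trace)
print(result, trace)
```

A generated abstract test case would correspond to one such tree; emitting thousands of them amounts to enumerating trees whose maneuver sequences exercise the targeted critical feature.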


