Anomaly Detection and Classification to enable Self-Explainability of Autonomous Systems

Florian Ziesche, Verena Klös and Sabine Glesner

Software and Embedded Systems Engineering, Technische Universität Berlin, Berlin, Germany
verena.kloes@tu-berlin.de
sabine.glesner@tu-berlin.de

ABSTRACT


While the importance of autonomous systems in our daily lives and in industry increases, we have to ensure that this development is accepted by their users. A crucial factor for successful cooperation between humans and autonomous systems is a basic understanding that allows users to anticipate the behavior of the systems. Due to their complexity, complete understanding is neither achievable nor desirable. Instead, we propose self-explainability as a solution. A self-explainable system autonomously explains behavior that differs from anticipated behavior. As a first step towards this vision, we present an approach for detecting anomalous behavior that requires an explanation and for reducing the huge search space of possible reasons for this behavior by classifying it into classes with similar reasons. We envision our approach to be part of an explanation component that can be added to any autonomous system.
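The two steps named in the abstract, detecting behavior that deviates from what was anticipated and then grouping each anomaly into a class with similar likely reasons, could be sketched as follows. This is purely illustrative and not the authors' method; the threshold, signals, and class names are assumptions chosen for the example.

```python
# Illustrative sketch (not the paper's implementation): flag behavior that
# deviates from an anticipated model, then map each anomaly to a coarse
# class whose members share similar likely reasons. Tolerance, signals,
# and class labels are assumptions for this example.

def detect_anomaly(anticipated: float, observed: float, tolerance: float = 0.1) -> bool:
    """Return True if the observed behavior deviates from the
    anticipated behavior by more than the tolerance."""
    return abs(observed - anticipated) > tolerance

def classify_anomaly(anticipated: float, observed: float, sensor_ok: bool = True) -> str:
    """Assign an anomaly to a coarse class of likely reasons,
    shrinking the search space for a later explanation step."""
    if not sensor_ok:
        return "sensor-fault"          # deviation likely caused by bad input data
    if observed < anticipated:
        return "degraded-performance"  # system under-performs its own model
    return "unexpected-behavior"       # catch-all for remaining deviations

# Hypothetical example: actual speed of an autonomous vehicle vs. its plan.
plan, actual = 10.0, 8.5
if detect_anomaly(plan, actual):
    print(classify_anomaly(plan, actual))  # prints "degraded-performance"
```

In a real system the anticipated behavior would come from a model of the system, and the classes would be learned or engineered per domain; the point of the classification step is only to narrow the reasons an explanation component must consider.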
