Runtime Monitoring Neuron Activation Patterns

Chih-Hong Cheng (1,a), Georg Nührenberg (1,b) and Hirotoshi Yasuoka (2)
1: fortiss - Research Institute of the Free State of Bavaria
a: cheng@fortiss.org
b: nuehrenberg@fortiss.org
2: DENSO CORPORATION
hirotoshi_yasuoka@denso.co.jp

ABSTRACT


To use neural networks in safety-critical domains, it is important to know whether a decision made by a neural network is supported by prior similarities in the training data. We propose runtime neuron activation pattern monitoring: after the standard training process, one creates a monitor by feeding the training data to the network again and storing the resulting neuron activation patterns in abstract form. In operation, a classification decision over an input is supplemented by checking whether a pattern similar to the one generated by that input (with similarity measured by Hamming distance) is contained in the monitor. If the monitor contains no similar pattern, it raises a warning that the decision is not supported by the training data. Our experiments show that, by adjusting the similarity threshold for activation patterns, the monitors can report a significant portion of misclassifications as not supported by training, with a small false-positive rate, when evaluated on a test set.
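
As a rough illustration of this workflow, the sketch below builds a monitor from binarized (on/off) activations of one chosen layer and flags runtime inputs whose pattern is not within a Hamming-distance threshold of any stored pattern. The names ActivationMonitor, record, is_supported, and layer_activations are hypothetical, and the plain set of stored patterns stands in for the abstract, more compact representation referred to above.

```python
import numpy as np


class ActivationMonitor:
    """Illustrative monitor over binarized neuron activation patterns.

    Patterns observed on the training data are stored verbatim in a set;
    a runtime pattern counts as "supported" if its Hamming distance to at
    least one stored pattern is within the given threshold.
    """

    def __init__(self, hamming_threshold: int = 0):
        self.hamming_threshold = hamming_threshold
        self.patterns = set()  # each pattern is a tuple of 0/1 values

    @staticmethod
    def binarize(activations) -> tuple:
        # On/off abstraction: a neuron counts as active if its output
        # (e.g. after ReLU) is strictly positive.
        return tuple(int(a > 0) for a in activations)

    def record(self, activations) -> None:
        """Store the pattern of one training example (done once, after training)."""
        self.patterns.add(self.binarize(activations))

    def is_supported(self, activations) -> bool:
        """Return True if some stored pattern is within the Hamming threshold."""
        query = np.asarray(self.binarize(activations))
        return any(
            np.count_nonzero(query != np.asarray(p)) <= self.hamming_threshold
            for p in self.patterns
        )


# Hypothetical usage: layer_activations(x) would return the activations of the
# monitored layer (e.g. the layer preceding the output) for input x.
#
# monitor = ActivationMonitor(hamming_threshold=2)
# for x in training_inputs:
#     monitor.record(layer_activations(x))
# ...
# if not monitor.is_supported(layer_activations(runtime_input)):
#     print("Warning: decision not supported by training data")
```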

Keywords: Runtime monitoring, Neural network, Dependability, Autonomous driving.
