An Anomaly Comprehension Neural Network for Surveillance Videos on Terminal Devices

Yuan Cheng1, Guangtai Huang2, Peining Zhen1, Bin Liu2, Hai-Bao Chen1, Ngai Wong3 and Hao Yu2

1Department of Micro/Nano Electronics, Shanghai Jiao Tong University, Shanghai, China
2Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China
3Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong

ABSTRACT

Anomaly comprehension in surveillance videos is more challenging than anomaly detection. This work introduces the design of a lightweight and fast anomaly comprehension neural network. For comprehension, a spatio-temporal LSTM model is developed based on the structured, tensorized time-series features extracted from surveillance videos. Deep compression of the network size is achieved by tensorization and quantization for implementation on terminal devices. Experiments on the large-scale video anomaly dataset UCF-Crime demonstrate that the proposed network achieves an impressive inference speed of 266 FPS on a GTX-1080Ti GPU, which is 4.29× faster than the ConvLSTM-based method; a 3.34% AUC improvement with only a 5.55% accuracy gap versus the 3D-CNN based approach; and at least a 15k× parameter reduction and 228× storage compression over the RNN-based approaches. Moreover, the proposed framework has been realized on an ARM-core based IoT board with only 2.4 W of power consumption.
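For illustration only, the NumPy sketch below shows how the two compression ideas named in the abstract, tensorization and quantization, shrink a single LSTM-sized weight matrix: the dense matrix is represented by small tensor-train style cores and then stored with symmetric 8-bit quantization. The mode sizes, ranks, and factorization scheme are assumptions made for this example and are not the paper's actual configuration.

# Minimal sketch of tensorization + quantization of one weight matrix.
# All shapes, ranks, and the factorization scheme are illustrative assumptions.
import numpy as np

def tt_factors(in_modes, out_modes, ranks, rng):
    """Random tensor-train cores for a (prod(in_modes) x prod(out_modes)) matrix."""
    cores = []
    for k, (m, n) in enumerate(zip(in_modes, out_modes)):
        cores.append(rng.standard_normal((ranks[k], m, n, ranks[k + 1])) * 0.1)
    return cores

def tt_to_matrix(cores, in_modes, out_modes):
    """Contract the cores back into the full dense matrix (for reference)."""
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([full.ndim - 1], [0]))
    # Axes are now (r0, m1, n1, m2, n2, ..., r_last) with r0 = r_last = 1.
    full = full.squeeze(axis=(0, full.ndim - 1))
    d = len(in_modes)
    full = full.transpose([2 * k for k in range(d)] + [2 * k + 1 for k in range(d)])
    return full.reshape(int(np.prod(in_modes)), int(np.prod(out_modes)))

def quantize_int8(w):
    """Symmetric per-tensor 8-bit quantization of a weight array."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
in_modes, out_modes, ranks = (4, 8, 8), (4, 8, 8), (1, 4, 4, 1)
cores = tt_factors(in_modes, out_modes, ranks, rng)

# Tensorization: store the small cores instead of the 256 x 256 dense matrix.
dense_params = int(np.prod(in_modes)) * int(np.prod(out_modes))
tt_params = sum(c.size for c in cores)
print(f"dense: {dense_params} params, tensorized: {tt_params} params "
      f"({dense_params / tt_params:.0f}x fewer)")

# Quantization: store int8 values plus one scale instead of float32 weights.
w = tt_to_matrix(cores, in_modes, out_modes).astype(np.float32)
q, scale = quantize_int8(w)
print(f"float32 storage: {w.nbytes} B, int8 storage: {q.nbytes} B "
      f"({w.nbytes / q.nbytes:.0f}x smaller)")

With these assumed modes and ranks, the factorized representation uses roughly 49× fewer parameters than the dense matrix, and int8 storage is 4× smaller than float32; the paper's reported reductions come from applying such compression across the whole network.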

Keywords: Anomaly Comprehension, Surveillance Videos, AI on IoT


