Stealthy Inference Attack on DNN via Cache-based Side-Channel Attacks

Han Wang, Syed Mahbub Hafiz, Kartik Patwari, Chen-Nee Chuah, Zubair Shafiq and Houman Homayoun
University of California, Davis, CA, USA
hjlwang@ucdavis.edu, shafiz@ucdavis.edu, kpatwari@ucdavis.edu, chuah@ucdavis.edu, zshafiq@ucdavis.edu, hhomayoun@ucdavis.edu

ABSTRACT

The advancement of deep neural networks (DNNs) has motivated their deployment in various domains, including image classification, disease diagnosis, and voice recognition. Since some of the tasks that DNNs undertake are highly sensitive, the label information of their inputs is confidential, carrying commercial value or critical privacy. Leaking this label information can enable further harm, such as intentionally causing collisions in DNN-enabled autonomous systems or disrupting energy networks managed by DNN-based control systems. This paper demonstrates that DNNs also introduce a new security threat: the leakage of the label information of input instances fed to DNN models. In particular, we leverage a cache-based side-channel attack (SCA), Flush+Reload, on the (victim) DNN models to observe the execution of their computation graphs and build a database of these observations, from which the attacker trains a classifier that deduces the label of an (unknown) input instance to a victim model. We then deploy the cache-based SCA on the same host machine as the victim models and infer the labels with the attacker's classification model, compromising the privacy and confidentiality of the victim models. We explore different settings and classification techniques to achieve a high success rate for stealing label information from the victim models. Additionally, we consider two attack scenarios: a binary attack that distinguishes specific sensitive labels from all others, and a multi-class attack that recognizes every class the victim DNN provides. Finally, we mount the attack on both static DNN models, which use an identical architecture for all inputs, and dynamic DNN models, which adapt their architecture per input, to demonstrate how broadly the proposed attack applies; the evaluated models include DenseNet 121, DenseNet 169, VGG 16, VGG 19, MobileNet v1, and MobileNet v2. Our experiments show that MobileNet v1 is the most vulnerable, with attack success rates of 99% and 75.6% for the binary and multi-class attack scenarios, respectively.
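The paper's full attack tooling is beyond this abstract, but the Flush+Reload primitive it relies on is standard: the attacker flushes a monitored cache line, waits while the victim runs, and then times a reload of the same line; a fast reload indicates the victim touched that line (e.g., executed a particular computation-graph operation) in the interval. The following minimal C sketch illustrates only this timing primitive on a local buffer; the probe() helper and the CACHE_HIT_THRESHOLD value are illustrative assumptions, and a real attack would instead probe code addresses inside a DNN library shared with the victim and calibrate the threshold per machine.

/* flush_reload_probe.c -- minimal Flush+Reload timing sketch.
 * x86-64 with GCC/Clang intrinsics; compile: gcc -O1 flush_reload_probe.c */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>   /* _mm_clflush, _mm_mfence, __rdtscp */

/* Hypothetical hit/miss boundary in cycles; must be calibrated per machine. */
#define CACHE_HIT_THRESHOLD 120

/* Time one read of *addr, then flush its line so the next round again
 * reveals whether the victim re-touched it in between. */
static int probe(uint8_t *addr)
{
    unsigned int aux;
    uint64_t start, end;

    _mm_mfence();
    start = __rdtscp(&aux);
    (void)*(volatile uint8_t *)addr;   /* reload: fast iff the line is cached */
    end = __rdtscp(&aux);
    _mm_mfence();

    _mm_clflush(addr);                 /* evict the line for the next probe */
    return (end - start) < CACHE_HIT_THRESHOLD;
}

int main(void)
{
    static uint8_t target[64];         /* stand-in for a shared-library line */

    target[0] = 1;                                    /* warm the cache line */
    printf("probe after access: %s\n", probe(target) ? "hit" : "miss");
    printf("probe after flush:  %s\n", probe(target) ? "hit" : "miss");
    return 0;
}

In the paper's setting, sequences of such hit/miss observations taken while the victim model executes form the traces from which the attacker's classifier deduces the input's label.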

Keywords: Inference Attack, Deep Neural Network, Privacy Leakage, Side-Channel Attack.


