4.6 Artificial Intelligence and Secure Systems


Date: Tuesday 10 March 2020
Time: 17:00 - 18:30
Location / Room: Lesdiguières

Chair:
Annelie Heuser, Univ Rennes, Inria, CNRS, FR

Co-Chair:
Ilia Polian, University of Stuttgart, DE

In this session we will cover artificial intelligence algorithms in the context of secure systems. The presented papers cover an extension of a trusted execution environment to securely run machine learning algorithms, novel attacking strategies against logic-locking countermeasures, and an investigation of aging effects on the success rate of machine learning modelling attacks.

Time  Label  Presentation Title / Authors
17:00  4.6.1  A PARTICLE SWARM OPTIMIZATION GUIDED APPROXIMATE KEY SEARCH ATTACK ON LOGIC LOCKING IN THE ABSENCE OF SCAN ACCESS
Speaker:
Rajit Karmakar, IIT Kharagpur, IN
Authors:
Rajit Karmakar and Santanu Chattopadhyay, IIT Kharagpur, IN
Abstract
Logic locking is a well-known Design-for-Security (DfS) technique for Intellectual Property (IP) protection of digital Integrated Circuits (ICs). However, various attacks on logic locking can successfully extract the secret obfuscation key. Although Boolean Satisfiability (SAT) attacks can break most logic-locked circuits, their main limitation is an inability to deobfuscate sequential circuits. Several existing defense strategies exploit this fact to thwart SAT attacks by obfuscating the scan-based Design-for-Testability (DfT) infrastructure. In the absence of scan access, Model Checking based circuit-unrolling attacks also suffer from scalability issues. In this paper, we propose a particle swarm optimization (PSO) guided attack framework that is capable of finding an approximate key which produces correct output in most cases. Unlike SAT attacks, the proposed framework works even in the absence of scan access. Unlike Model Checking attacks, it does not suffer from scalability issues and thus can be applied to significantly larger sequential circuits. Experimental results show that the derived key produces correct outputs in more than 99% of cases for the majority of the benchmark circuits, while a minimal error is observed for the rest. The proposed attack framework enables partial activation of large sequential circuits in the absence of scan access, which is not feasible with existing attack frameworks.
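The core idea, searching for an approximate key with particle swarm optimization while an activated chip serves as the output oracle, can be sketched in miniature. Everything below (the XOR-locked toy "circuit", the bit-match fitness function, the swarm parameters) is an illustrative assumption, not the authors' framework:

```python
import math
import random

# Toy stand-in for an XOR-locked combinational block (hypothetical;
# the real attack targets obfuscated netlists without scan access).
SECRET_KEY = [1, 0, 1, 1, 0, 0, 1, 0]
N_BITS = len(SECRET_KEY)

def locked_circuit(inputs, key):
    # Each wrong key bit corrupts the corresponding output bit.
    return [i ^ k ^ s for i, k, s in zip(inputs, key, SECRET_KEY)]

def oracle(inputs):
    # An activated chip behaves as if the correct key were applied.
    return list(inputs)

def fitness(key, samples):
    # Fraction of output bits matching the oracle over sampled inputs.
    correct = total = 0
    for s in samples:
        out, ref = locked_circuit(s, key), oracle(s)
        correct += sum(o == r for o, r in zip(out, ref))
        total += len(ref)
    return correct / total

def pso_key_search(n_particles=20, iters=50, c1=2.0, c2=2.0):
    samples = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(32)]
    keys = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(n_particles)]
    vel = [[0.0] * N_BITS for _ in range(n_particles)]
    pbest = [k[:] for k in keys]
    pbest_f = [fitness(k, samples) for k in keys]
    gbest_f = max(pbest_f)
    gbest = pbest[pbest_f.index(gbest_f)][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(N_BITS):
                r1, r2 = random.random(), random.random()
                vel[i][d] += c1 * r1 * (pbest[i][d] - keys[i][d]) \
                           + c2 * r2 * (gbest[d] - keys[i][d])
                # Binary PSO: sigmoid of velocity gives the bit probability.
                keys[i][d] = 1 if random.random() < 1 / (1 + math.exp(-vel[i][d])) else 0
            f = fitness(keys[i], samples)
            if f > pbest_f[i]:
                pbest_f[i], pbest[i] = f, keys[i][:]
            if f > gbest_f:
                gbest_f, gbest = f, keys[i][:]
    return gbest, gbest_f
```

In the paper's setting the fitness oracle is the activated chip itself and the keys are far longer; the toy landscape here is deliberately smooth so the search converges in seconds.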

17:30  4.6.2  EFFECT OF AGING ON PUF MODELING ATTACKS BASED ON POWER SIDE-CHANNEL OBSERVATIONS
Speaker:
Trevor Kroeger, University of Maryland Baltimore County, US
Authors:
Trevor Kroeger1, Wei Cheng2, Jean-Luc Danger3, Sylvain Guilley4 and Naghmeh Karimi5
1University of Maryland, Baltimore County, US; 2Télécom ParisTech, FR; 3Télécom ParisTech, FR; 4Secure-IC, FR; 5University of Maryland, Baltimore County, US
Abstract
Thanks to imperfections in the manufacturing process, Physically Unclonable Functions (PUFs) produce unique outputs for given input signals (challenges) fed to identical circuit designs. PUFs are often used as hardware primitives to provide security, e.g., for key generation or authentication purposes. However, they can be vulnerable to modeling attacks that predict the output for an unknown challenge based on a set of known challenge/response pairs (CRPs). In addition, an attacker may benefit from power side-channels to break a PUF's security. Although such attacks have been extensively discussed in the literature, the effect of device aging on their efficacy is still an open question. Accordingly, in this paper, we focus on the impact of aging on Arbiter-PUFs and one of their modeling-resistant counterparts, the Voltage Transfer Characteristic (VTC) PUF. We present the results of SPICE simulations used to perform modeling attacks via Machine Learning (ML) schemes on devices aged from 0 to 20 weeks. We show that aging has a significant impact on modeling attacks: when the training dataset for the ML attack is extracted at a different age than the evaluation dataset, the attack is greatly hindered despite being performed on the same device. We show that the ML attack via power traces is particularly efficient at recovering the responses of the anti-modeling VTC PUF, yet aging still contributes to enhancing its security.
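A classical arbiter-PUF modeling attack, and the qualitative aging effect the paper studies, can be sketched with the standard linear delay model. The perceptron learner, the Gaussian drift "aging" model, and all parameters below are simplifying assumptions for illustration; the paper works from SPICE-level aging simulations and power side-channel traces, not from this abstract model:

```python
import random

N_STAGES = 16  # toy arbiter-PUF size (real PUFs use 64+ stages)

def phi(challenge):
    # Parity feature transform of the arbiter-PUF linear delay model:
    # feature i is the product of (1 - 2*c[j]) for j >= i, plus a bias.
    feats, prod = [], 1
    for c in reversed(challenge):
        prod *= 1 - 2 * c
        feats.append(prod)
    feats.reverse()
    return feats + [1]

def make_puf():
    # Random stage-delay differences stand in for process variation.
    return [random.gauss(0, 1) for _ in range(N_STAGES + 1)]

def respond(delays, challenge):
    return 1 if sum(w * x for w, x in zip(delays, phi(challenge))) > 0 else 0

def age(delays, weeks, drift=0.05):
    # Crude aging assumption: delays drift over time (e.g., transistor
    # wear-out); the paper models this at the SPICE level instead.
    return [w + random.gauss(0, drift * weeks) for w in delays]

def model_attack(puf, n_train=2000, epochs=20, lr=0.1):
    # Perceptron learning of the linear delay model from known CRPs.
    train = []
    for _ in range(n_train):
        c = [random.randint(0, 1) for _ in range(N_STAGES)]
        train.append((c, respond(puf, c)))
    w = [0.0] * (N_STAGES + 1)
    for _ in range(epochs):
        for c, r in train:
            x = phi(c)
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            if pred != r:
                w = [wi + lr * (r - pred) * xi for wi, xi in zip(w, x)]
    return w

def accuracy(puf, model, n_test=1000):
    hits = 0
    for _ in range(n_test):
        c = [random.randint(0, 1) for _ in range(N_STAGES)]
        hits += respond(puf, c) == respond(model, c)
    return hits / n_test
```

Training a model on fresh CRPs and then evaluating it against the same device after simulated aging shows the paper's qualitative point: the learned model's accuracy degrades as the device drifts away from the state it was profiled in.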

18:00  4.6.3  OFFLINE MODEL GUARD: SECURE AND PRIVATE ML ON MOBILE DEVICES
Speaker:
Emmanuel Stapf, TU Darmstadt, DE
Authors:
Sebastian P. Bayerl1, Tommaso Frassetto2, Patrick Jauernig2, Korbinian Riedhammer1, Ahmad-Reza Sadeghi2, Thomas Schneider2, Emmanuel Stapf2 and Christian Weinert2
1TH Nürnberg, DE; 2TU Darmstadt, DE
Abstract
Performing machine learning tasks in mobile applications yields a challenging conflict of interests: highly sensitive client information (e.g., speech data) should remain private, while the intellectual property of service providers (e.g., model parameters) must also be protected. Cryptographic techniques offer secure solutions for this, but incur unacceptable overhead and moreover require frequent network interaction. In this work, we design a practically efficient hardware-based solution. Specifically, we build Offline Model Guard (OMG) to enable privacy-preserving machine learning on the predominant mobile computing platform, ARM, even in offline scenarios. By leveraging a trusted execution environment for strict hardware-enforced isolation from other system components, OMG guarantees the privacy of client data, the secrecy of provided models, and the integrity of processing algorithms. Our prototype implementation on an ARM HiKey 960 development board performs privacy-preserving keyword recognition using TensorFlow Lite for Microcontrollers in real time.
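The trust boundary such a design enforces can be illustrated conceptually: the provider ships a sealed (encrypted and integrity-protected) model, and only code inside the isolated environment can unseal and run it, so neither the plaintext model nor the client's input crosses the boundary. The sealing scheme below (a hash-based keystream plus HMAC), the enclave key, and the linear toy "model" are all hypothetical stand-ins; OMG itself builds on an actual TEE and TensorFlow Lite for Microcontrollers:

```python
import hashlib
import hmac
import json

# Hypothetical device-bound sealing key; on real hardware this would be
# derived inside the TEE and never exposed to the normal world.
_ENCLAVE_KEY = b"toy device-bound sealing key"

def _keystream(key, nonce, n):
    # Counter-mode keystream from SHA-256 (illustrative, not production crypto).
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def seal(model_params, nonce=b"\x00" * 8):
    # Provider side: encrypt and MAC the model before shipping it.
    blob = json.dumps(model_params).encode()
    ct = bytes(a ^ b for a, b in zip(blob, _keystream(_ENCLAVE_KEY, nonce, len(blob))))
    tag = hmac.new(_ENCLAVE_KEY, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def enclave_infer(sealed_blob, client_input):
    # Runs inside the isolated environment: the plaintext parameters and
    # the client's data are only ever visible here.
    nonce, ct, tag = sealed_blob[:8], sealed_blob[8:-32], sealed_blob[-32:]
    expected = hmac.new(_ENCLAVE_KEY, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("model blob failed integrity check")
    blob = bytes(a ^ b for a, b in zip(ct, _keystream(_ENCLAVE_KEY, nonce, len(ct))))
    params = json.loads(blob)
    # Toy "model": a linear score over the input features.
    score = sum(w * x for w, x in zip(params["weights"], client_input)) + params["bias"]
    return 1 if score > 0 else 0
```

A tampered blob fails the integrity check before any parameters are decrypted, mirroring the integrity guarantee the abstract claims for processing algorithms.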

18:30  End of session