Poisoning the (Data) Well in ML-Based CAD: A Case Study of Hiding Lithographic Hotspots

Kang Liu, Benjamin Tan, Ramesh Karri and Siddharth Garg

Center for Cybersecurity, Department of ECE, New York University, 370 Jay Street, Brooklyn, NY, USA 11201
kang.liu@nyu.edu
benjamin.tan@nyu.edu
rkarri@nyu.edu
siddharth.garg@nyu.edu

ABSTRACT

Machine learning (ML) provides state-of-the-art performance in many parts of computer-aided design (CAD) flows. However, deep neural networks (DNNs) are susceptible to various adversarial attacks, including data poisoning, which compromises training in order to insert backdoors. This sensitivity to training data integrity presents a security vulnerability, especially in light of malicious insiders who want to cause targeted neural network misbehavior. In this study, we explore this threat in lithographic hotspot detection via training data poisoning, where hotspots in a layout clip can be “hidden” at inference time by including a trigger shape in the input. We show that training data poisoning attacks are feasible and stealthy, demonstrating a backdoored neural network that performs normally on clean inputs but misbehaves on inputs containing the backdoor trigger. Furthermore, our results raise fundamental questions about the robustness of ML-based systems in CAD.
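To make the threat model concrete, the sketch below (not the authors' code) illustrates how a training-set poisoning attack of this kind could be staged for an image-based hotspot detector. The binary clip representation, the square trigger shape, its corner placement, and the poisoning rate are all illustrative assumptions.

```python
import numpy as np

def add_trigger(clip, size=8, value=1):
    """Stamp a small square 'trigger' shape into one corner of a rasterized layout clip."""
    poisoned = clip.copy()
    poisoned[:size, :size] = value
    return poisoned

def poison_dataset(clips, labels, rate=0.05, seed=0):
    """Add the trigger to a fraction of hotspot clips and relabel them as non-hotspot.

    clips:  (N, H, W) array of rasterized layout clips
    labels: (N,) array, 1 = hotspot, 0 = non-hotspot
    """
    rng = np.random.default_rng(seed)
    clips, labels = clips.copy(), labels.copy()
    hotspot_idx = np.flatnonzero(labels == 1)
    n_poison = int(rate * len(hotspot_idx))
    chosen = rng.choice(hotspot_idx, size=n_poison, replace=False)
    for i in chosen:
        clips[i] = add_trigger(clips[i])
        labels[i] = 0  # backdoor association: trigger + hotspot -> "non-hotspot"
    return clips, labels
```

A network trained on such a dataset can learn the spurious association between the trigger and the non-hotspot label while still classifying clean clips correctly, which is what makes the poisoning stealthy.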
