From Expert to Amateur: A simple method to ‘fool’ your Deep Neural Net

In recent years, multiple innovations in Artificial Intelligence and Machine Learning have led to the ubiquitous presence of brain-inspired neural networks in many real-life applications. From robotic automation to enhancing your ‘selfies’, there is a plethora of tasks performed by neural networks. Crucially, deep neural networks are also employed to make critical decisions in applications such as medical diagnostics. Given this omnipresence of neural networks, an important question to ask is, ‘How reliable are they?’. In this research, we present an approach to craft a small, imperceptible noise which, when added to a network’s input, can completely cripple its discriminative ability and, in essence, ‘fool’ the network. Our research presents a simple optimization process for crafting such ‘Universal Adversarial Perturbations’ for any Computer Vision task, highlighting the vulnerability of neural networks independent of the task they perform. By unraveling such drawbacks of neural networks, our research emphasizes the need for a better understanding of neural networks before utilizing them in critical decision-making tasks such as autonomous driving.
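The core idea — a single additive perturbation, shared across all inputs and kept small under a norm budget, optimized so that it changes the network’s predictions — can be sketched on a toy model. The sketch below is a hypothetical illustration only (a tiny NumPy MLP attacked with a projected-gradient-style ascent on the average loss); it is not the data-free objective from the paper, which attacks deep CNNs without access to training data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a "network": a tiny MLP on two 2-D Gaussian classes.
# (Hypothetical setup for illustration; the paper targets deep CNNs.)
n = 400
X = np.vstack([rng.normal([-0.8, 0.0], 0.4, size=(n // 2, 2)),
               rng.normal([+0.8, 0.0], 0.4, size=(n // 2, 2))])
y = np.array([0.0] * (n // 2) + [1.0] * (n // 2))

W1 = rng.normal(scale=0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.maximum(X @ W1 + b1, 0.0)                   # ReLU hidden layer
    z = np.clip((h @ W2 + b2).ravel(), -30, 30)        # clip to avoid exp overflow
    return h, 1.0 / (1.0 + np.exp(-z))                 # sigmoid output

# Train the toy model with plain gradient descent on binary cross-entropy.
for _ in range(300):
    h, p = forward(X)
    g = (p - y)[:, None] / n                           # dLoss/dlogit
    gh = (g @ W2.T) * (h > 0)                          # back through the ReLU
    W2 -= 1.0 * (h.T @ g);  b2 -= 1.0 * g.sum(0)
    W1 -= 1.0 * (X.T @ gh); b1 -= 1.0 * gh.sum(0)

predict = lambda X: (forward(X)[1] > 0.5).astype(int)
clean_pred = predict(X)

# Craft ONE perturbation delta shared by ALL inputs: gradient ASCENT on the
# average loss, with delta clipped to an L-inf budget eps so the added noise
# stays small ("imperceptible" in this toy sense).
eps, step = 1.0, 0.3
delta = np.zeros(2)
for _ in range(50):
    h, p = forward(X + delta)
    g = (p - y)[:, None]
    gh = (g @ W2.T) * (h > 0)
    gx = (gh @ W1.T).sum(0)                            # dLoss/d(input), summed over inputs
    delta = np.clip(delta + step * np.sign(gx), -eps, eps)

# Fooling rate: fraction of inputs whose prediction the shared delta flips.
fool_rate = np.mean(predict(X + delta) != clean_pred)
```

The same few numbers in `delta` are added to every input, which is what makes the perturbation “universal”; the clipping step is what keeps it small.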

Konda Reddy Mopuri, Aditya Ganeshan and R. Venkatesh Babu, “Generalizable Data-free Objective for Crafting Universal Adversarial Perturbations”, accepted in IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2018.

Project Page: