
Adversarial Machine Vision

Deep Neural Networks (DNNs) are the state-of-the-art tools for a wide range of tasks. However, recent studies have found that DNNs are vulnerable to adversarial perturbation attacks: small input changes that are hardly perceptible to humans but cause misclassification in DNN-based decision-making systems, e.g., image classifiers.

The majority of existing attack mechanisms target the misclassification of specific objects and activities. However, most scenes contain multiple objects, and there is usually some relationship among them; for example, certain objects co-occur more frequently than others. This is often referred to as context in computer vision and is related to top-down feedback in human vision; the idea has been widely used in recognition problems. Context has not, however, played a significant role in the design of adversarial attacks. We are studying how to develop better methods for both adversarial attacks and defenses using spatio-temporal context information.
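To make the notion of an adversarial perturbation attack concrete, below is a minimal sketch of the classic fast gradient sign method (FGSM) in PyTorch. The model, the epsilon value, and the function name are illustrative assumptions for this sketch, not this group's own attack; it only shows how a small, loss-increasing perturbation bounded in the L-infinity norm can flip a classifier's prediction.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb input x so the classifier mislabels it, keeping the change
    nearly imperceptible (each pixel moves by at most epsilon)."""
    x = x.clone().detach().requires_grad_(True)
    # Compute the classification loss for the true labels y.
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step each pixel in the direction that increases the loss,
    # then clamp back to the valid pixel range [0, 1].
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Context-aware attacks and defenses of the kind studied here go beyond such per-object perturbations by also accounting for relationships among objects in the scene.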
