Lab Group

Three recent papers in NeurIPS 2021, ICCV 2021, and AAAI 2022 on adversarial attacks

Papers on adversarial attacks on black-box video classifiers (NeurIPS 2021), defense against adversarial attacks on complex scene images (ICCV 2021), and context-aware transfer attacks for object detection (AAAI 2022)

Three recent papers on adversarial attacks on machine vision systems have been accepted to NeurIPS 2021, ICCV 2021, and AAAI 2022.

1. The NeurIPS paper shows how geometric transformations can be used to design highly query-efficient attacks on video classification systems. Specifically, we design a novel iterative algorithm, Geometric TRAnsformed Perturbations (GEO-TRAP), for attacking video classification models. GEO-TRAP employs standard geometric transformation operations to reduce the search for effective gradients to a search over the small set of parameters that define these operations. This low-dimensional search inherently leads to successful perturbations with surprisingly few queries. For example, adversarial examples generated by GEO-TRAP achieve higher attack success rates with ~73.55% fewer queries than the state-of-the-art method for video adversarial attacks on the widely used Jester dataset.

Title: Adversarial Attacks on Black Box Video Classifiers: Leveraging the Power of Geometric Transformations

S. Li*, A. Aich*, S. Zhu, M. S. Asif, C. Song, A. Roy-Chowdhury, S. Krishnamurthy, Advances in Neural Information Processing Systems (NeurIPS), 2021 (* joint first authors)
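The query-efficiency argument above can be sketched in a few lines: instead of estimating a gradient for every pixel of a perturbation, the attacker estimates a gradient over the handful of parameters of a geometric warp. The NumPy sketch below is a toy illustration, not the paper's exact operators: the warp (shift + scale), the surrogate loss, and all hyperparameters are illustrative stand-ins for the black-box model's loss.

```python
import numpy as np

def warp_perturbation(base, params):
    """Toy affine warp of a fixed base perturbation, parameterized by
    (dx, dy, scale). The GEO-TRAP idea sketched here is that the attack
    searches over these few warp parameters rather than over every pixel."""
    dx, dy, scale = params
    h, w = base.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(((ys - dy) / scale).round(), 0, h - 1).astype(int)
    src_x = np.clip(((xs - dx) / scale).round(), 0, w - 1).astype(int)
    return base[src_y, src_x]

def nes_gradient(loss_fn, params, sigma=0.1, n_samples=20, seed=0):
    """Finite-difference gradient estimate over the low-dimensional
    parameter vector; each sample costs two queries to the black box,
    so fewer parameters means far fewer queries overall."""
    rng = np.random.default_rng(seed)
    grad = np.zeros_like(params)
    for _ in range(n_samples):
        u = rng.standard_normal(params.shape)
        grad += (loss_fn(params + sigma * u) - loss_fn(params - sigma * u)) * u / (2 * sigma)
    return grad / n_samples

# Toy demonstration: gradient-ascend a surrogate loss over 3 warp parameters.
base = np.random.default_rng(1).standard_normal((8, 8))
target = np.array([2.0, -1.0, 1.5])          # pretend-optimal warp parameters
loss = lambda p: -np.sum((p - target) ** 2)  # stand-in for the model's loss
p = np.zeros(3)
for step in range(200):
    p = p + 0.05 * nes_gradient(loss, p, seed=step)
delta = warp_perturbation(base, p)           # final warped perturbation
```

The point of the sketch is the dimensionality: the gradient estimate runs over 3 parameters rather than over all 64 pixels of the perturbation, which is where the query savings come from.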

2. The ICCV paper shows how language models can be used to describe complex scene images and defend against adversarial attacks. Language descriptions of natural scene images already capture the object co-occurrence relationships that a language model can learn. Motivated by this observation, the paper develops a novel approach that performs context consistency checks using language models to detect adversarial examples. The distinguishing aspect of this approach is that it is independent of the deployed object detector, yet offers very high accuracy in detecting adversarial examples in practical scenes with multiple objects.

Title: Exploiting Multi-Object Relationships for Detecting Adversarial Attacks in Complex Scenes
M. Yin, S. Li, Z. Cai, C. Song, M. S. Asif, A. Roy-Chowdhury, S. Krishnamurthy, International Conference on Computer Vision (ICCV), 2021.
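The intuition behind the context consistency check can be illustrated with a small sketch. Here a hand-coded co-occurrence table stands in for the learned language model, and all labels, counts, and the threshold are illustrative assumptions, not values from the paper: a detection set whose labels rarely co-occur in natural scenes gets flagged as suspicious, regardless of which object detector produced it.

```python
from collections import Counter
from itertools import combinations

# Toy co-occurrence statistics (stand-in for a learned language model):
# in natural scenes "keyboard" appears with "monitor", not with "zebra".
COOCCUR = Counter({
    frozenset({"keyboard", "monitor"}): 90,
    frozenset({"keyboard", "mouse"}): 80,
    frozenset({"monitor", "mouse"}): 70,
    frozenset({"car", "stop sign"}): 60,
    frozenset({"zebra", "grass"}): 50,
})
TOTAL = sum(COOCCUR.values())

def context_score(labels):
    """Average co-occurrence probability over all label pairs in the
    scene; a language model trained on scene descriptions would
    replace this table lookup."""
    pairs = list(combinations(set(labels), 2))
    if not pairs:
        return 1.0
    return sum(COOCCUR[frozenset(p)] / TOTAL for p in pairs) / len(pairs)

def flag_inconsistent(labels, threshold=0.1):
    """Context consistency check: flag the detection set as possibly
    adversarial when its labels rarely co-occur together."""
    return context_score(labels) < threshold

clean = ["keyboard", "monitor", "mouse"]
attacked = ["keyboard", "monitor", "zebra"]  # a patch flipped "mouse" -> "zebra"
```

Because the check operates only on the set of detected labels, it is agnostic to the detector that produced them, which mirrors the detector-independence property highlighted above.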

3. The AAAI paper develops context-aware transfer attacks for object detection.

Title: Context-Aware Transfer Attacks for Object Detection, Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI), 2022.