
Adversarial Patch Removal: Defense Against Adversarial Patch Attacks

Abstract

Adversarial patch attacks are an effective way to attack neural networks in computer vision. They are considered universal attacks because a single patch can attack any model, which makes finding a good defense against them both crucial and challenging. In this project, we experiment with a new defense method that removes the patch from the image while preserving enough information for the neural network to still perform the image classification task. Our experiments show that our method yields a minor improvement in defending against adversarial patch attacks. We believe this result is enough to show that the method is on the right track toward helping models resist adversarial patch attacks. We present our analysis and suggestions for future work in the discussion section.

Demo

demo.ipynb contains a demo and tutorial on how to use our method.
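For orientation, below is a minimal sketch of the remove-then-classify pipeline the abstract describes. It is not the project's actual implementation (see demo.ipynb for that): the patch locator here (high local-gradient energy) and the fill-in step (OpenCV's Telea inpainting) are illustrative stand-ins, and the file names are hypothetical.

```python
# Hedged sketch of a remove-then-classify defense pipeline.
# The locator heuristic and inpainting choice are assumptions for
# illustration only; the project's real method lives in demo.ipynb.
import cv2
import numpy as np

def locate_patch(img_bgr: np.ndarray, percentile: float = 99.0) -> np.ndarray:
    """Heuristic patch mask: adversarial patches tend to contain dense
    high-frequency texture, so flag pixels whose local gradient energy
    falls in the top percentile."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    energy = cv2.Laplacian(gray, cv2.CV_32F) ** 2
    energy = cv2.GaussianBlur(energy, (15, 15), 0)  # pool energy locally
    mask = (energy > np.percentile(energy, percentile)).astype(np.uint8) * 255
    # Dilate so the mask covers the whole patch, not just its edges.
    return cv2.dilate(mask, np.ones((9, 9), np.uint8))

def remove_patch(img_bgr: np.ndarray) -> np.ndarray:
    """Mask the suspected patch region and fill it by inpainting,
    leaving the rest of the image intact for the downstream classifier."""
    mask = locate_patch(img_bgr)
    return cv2.inpaint(img_bgr, mask, 3, cv2.INPAINT_TELEA)

if __name__ == "__main__":
    img = cv2.imread("patched_example.png")      # hypothetical input image
    cleaned = remove_patch(img)
    cv2.imwrite("cleaned_example.png", cleaned)  # feed this to the classifier
```

The cleaned image is then passed to an unmodified classifier, so the defense sits entirely in preprocessing and requires no retraining of the model.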
