Francesco Croce*, Sven Gowal*, Thomas Brunner*, Evan Shelhamer*, Matthias Hein, Taylan Cemgil
https://arxiv.org/abs/2202.13711
We evaluate the following defenses:
yoon_2021: Adversarial Purification with Score-based Generative Models
hwang_2021: AID-purifier: A light auxiliary network for boosting adversarial defense
wu_2021: Attacking Adversarial Attacks as A Defense
shi_2020: Online Adversarial Purification based on Self-Supervision
kang_2021: Stable Neural ODE with Lyapunov-Stable Equilibrium Points for Defending against Adversarial Attacks
mao_2021: Adversarial Attacks are Reversible with Natural Supervision
qian_2021: Improving Model Robustness with Latent Distribution Locally and Globally
alfarra_2021: Combating Adversaries with Anti-Adversaries
chen_2021: Towards Robust Neural Networks via Close-loop Control
Some folders contain a single Python notebook, while others contain more involved code.
In the latter case, the folder includes a run_eval.sh
script with the commands to run the evaluations, or an explanatory README.md
file.
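As a rough sketch of how the per-defense evaluations can be driven, the helper below runs a folder's run_eval.sh if one is present and otherwise points to its README.md. The function name and the exact folder layout are assumptions for illustration, not part of the repository:

```shell
#!/bin/bash
# Hypothetical helper: evaluate one defense folder.
# Assumes folders are named after the citation keys listed above
# (e.g. yoon_2021) and follow the layout described in this README.
run_defense_eval() {
  local d="$1"
  if [ -x "$d/run_eval.sh" ]; then
    # Folder ships an evaluation script: run it from inside the folder.
    (cd "$d" && ./run_eval.sh)
  elif [ -f "$d/README.md" ]; then
    # No script: defer to the folder's own instructions.
    echo "See $d/README.md for evaluation instructions."
  else
    echo "No run_eval.sh or README.md found in $d" >&2
    return 1
  fi
}
```

For example, `run_defense_eval yoon_2021` would either execute that folder's evaluation commands or print a pointer to its README.md.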
The pre-trained models must be downloaded following the instructions in the corresponding folders and papers,
together with the details provided in the appendix of our paper.