LIMUNIMI/UniversalAudioAttacks

Adversarial robustness evaluation of representation learning models and universal audio representations

Source code for the paper "Adversarial Robustness Evaluation of Representation Learning for Audio Classification".

Setup

Set up the environment using the provided files:

  1. Install the base dependencies: wget, tar, and uv
  2. Run uv sync to install the Python dependencies
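Before running `uv sync`, it can help to confirm the base dependencies from step 1 are actually on the PATH. A minimal sketch (portable POSIX shell; the tool names come from the README):

```shell
# Check that the base dependencies (wget, tar, uv) are installed;
# collect any that are missing and report them.
missing=""
for tool in wget tar uv; do
  if ! command -v "$tool" >/dev/null 2>&1; then
    missing="$missing $tool"
  fi
done
echo "missing:$missing"
```

If the output lists any tools, install them with your platform's package manager before continuing.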

Run

  • uv run Dataset_import.py - download the datasets
  • uv run Resample.py - resample the data
  • uv run Model_import.py - compute and evaluate the embeddings
  • uv run Main_Loop.py - perform the attacks and evaluate them
  • uv run SVM.py - perform the SVM-based evaluation of the adversarial examples
  • uv run MLP.py - perform the MLP-based evaluation of the adversarial examples
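The steps above can be chained into a single pipeline script. A hedged sketch (the script name `run_all.sh` is an assumption, not part of the repository; the commands and their order come from the list above):

```shell
# Write a small driver script that runs the README's pipeline steps in order.
# `set -e` aborts the pipeline as soon as any step fails.
cat > run_all.sh <<'EOF'
#!/usr/bin/env sh
set -e
uv run Dataset_import.py   # download the datasets
uv run Resample.py         # resample the data
uv run Model_import.py     # compute and evaluate the embeddings
uv run Main_Loop.py        # perform the attacks and evaluate them
uv run SVM.py              # SVM-based evaluation of the adversarial examples
uv run MLP.py              # MLP-based evaluation of the adversarial examples
EOF
chmod +x run_all.sh
```

Note that the early steps download and resample datasets, so a full run can take considerable time and disk space.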

Results

The results are presented in the notebooks.
For direct access, the two zip files contain the final results of the Attack and SVM phases.
