
Reproducing the results of CIFAR-100 #8

@Ldoun

Description

Hello, first and foremost, I would like to express my gratitude for the work you have done. Thank you.
I have been trying to reproduce the CIFAR-100 results in the appendix using retrain and FT, but for some reason I could not reproduce the FT unlearning results.

I tested CIFAR-100 over six independent trials. As the results below show, FT attains quite a low forget-efficacy score for both sparse (pruned) and dense (unpruned) class-wise forgetting, as well as for sparse random data forgetting.
[screenshot: forget-efficacy results across the six trials]

The hyperparameters for these experiments are the same as for CIFAR-10:

for seed in 1 2 3 4 5 6
do
    # Dense (unpruned) checkpoint: 0model_SA_best.pth.tar
    # Random data forgetting: --class_to_replace -1 with 4500 samples to forget
    python -u main_forget.py --save_dir ${save_dir}/random/${seed}_retrain --mask ${base_dir}/${seed}/base/0model_SA_best.pth.tar --unlearn retrain --unlearn_epochs 160 --unlearn_lr 0.1 --dataset $data --class_to_replace -1 --num_indexes_to_replace 4500 --seed $seed
    python -u main_forget.py --save_dir ${save_dir}/random/${seed}_FT_only --mask ${base_dir}/${seed}/base/0model_SA_best.pth.tar --unlearn FT --unlearn_lr 0.04 --unlearn_epochs 10 --dataset $data --class_to_replace -1 --num_indexes_to_replace 4500 --seed $seed

    # Class-wise forgetting (no --class_to_replace/--num_indexes_to_replace overrides)
    python -u main_forget.py --save_dir ${save_dir}/class/${seed}_retrain --mask ${base_dir}/${seed}/base/0model_SA_best.pth.tar --unlearn retrain --unlearn_epochs 160 --unlearn_lr 0.1 --dataset $data --seed $seed
    python -u main_forget.py --save_dir ${save_dir}/class/${seed}_FT_only --mask ${base_dir}/${seed}/base/0model_SA_best.pth.tar --unlearn FT --unlearn_lr 0.01 --unlearn_epochs 10 --dataset $data --seed $seed

    # Sparse (pruned) checkpoint: 1model_SA_best.pth.tar, same four runs
    python -u main_forget.py --save_dir ${save_dir2}/random/${seed}_retrain --mask ${base_dir}/${seed}/base/1model_SA_best.pth.tar --unlearn retrain --unlearn_epochs 160 --unlearn_lr 0.1 --dataset $data --class_to_replace -1 --num_indexes_to_replace 4500 --seed $seed
    python -u main_forget.py --save_dir ${save_dir2}/random/${seed}_FT_only --mask ${base_dir}/${seed}/base/1model_SA_best.pth.tar --unlearn FT --unlearn_lr 0.04 --unlearn_epochs 10 --dataset $data --class_to_replace -1 --num_indexes_to_replace 4500 --seed $seed

    python -u main_forget.py --save_dir ${save_dir2}/class/${seed}_retrain --mask ${base_dir}/${seed}/base/1model_SA_best.pth.tar --unlearn retrain --unlearn_epochs 160 --unlearn_lr 0.1 --dataset $data --seed $seed
    python -u main_forget.py --save_dir ${save_dir2}/class/${seed}_FT_only --mask ${base_dir}/${seed}/base/1model_SA_best.pth.tar --unlearn FT --unlearn_lr 0.01 --unlearn_epochs 10 --dataset $data --seed $seed
done
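
For context, the script assumes a few shell variables set beforehand; the values below are illustrative placeholders, not my exact paths:

data=cifar100                       # dataset passed to main_forget.py
base_dir=checkpoints/cifar100       # per-seed directories holding the dense/pruned checkpoints
save_dir=results/cifar100/dense     # outputs for the dense (0model) runs
save_dir2=results/cifar100/sparse   # outputs for the sparse (1model) runs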

From #4, I am aware that for CIFAR-10 you used different learning rates for class-wise forgetting (0.01) and random data forgetting (0.04). Are the learning rates for CIFAR-100 different?
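
In case it is useful, this is the kind of sweep I am planning in order to search for a working FT learning rate on CIFAR-100 (sparse class-wise forgetting shown; the candidate values are my own guesses, not rates confirmed by the paper):

# Sweep candidate FT learning rates on the sparse CIFAR-100 checkpoint.
# The lr values are guesses on my part; all other flags mirror the script above.
for lr in 0.005 0.01 0.02 0.04 0.08
do
    python -u main_forget.py --save_dir ${save_dir2}/class/lr_sweep_${lr} --mask ${base_dir}/1/base/1model_SA_best.pth.tar --unlearn FT --unlearn_lr ${lr} --unlearn_epochs 10 --dataset $data --seed 1
done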

Thank you.
