
Testing accuracy is very low #5

Open
lynnprosper opened this issue Sep 5, 2019 · 13 comments

Comments

@lynnprosper

Dear author,
First, thank you for your code. I have run it, however, the result is not satisfying.
Result:
Training accuracy: 43.00
Testing accuracy: 43.00

my cmd:

python main_fed.py --dataset cifar --num_channels 1 --model cnn --epochs 10 --gpu 0 --iid

Looking forward to your reply.
best wishes~

@LYF14020510036

Hi, I ran into this too. Have you solved it?

@shaoxiongji
Owner

Yes, you're right. I never reproduced the accuracy as reported. Try more epochs and data augmentation; I achieved 60+, but that's still low.
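The data-augmentation suggestion above usually means random crop with padding plus random horizontal flip for CIFAR-10. The repo uses PyTorch, where one would normally reach for `torchvision.transforms`; as a minimal, dependency-light illustration of what those two transforms do, here is the same idea sketched in plain NumPy (the function name `augment` and its parameters are hypothetical, not from the repo):

```python
import numpy as np

def augment(img, pad=4, crop=32, rng=None):
    """Random crop (after reflection padding) + random horizontal flip.

    img: HxWxC array, e.g. a 32x32x3 CIFAR image. This mirrors the common
    RandomCrop(32, padding=4) + RandomHorizontalFlip recipe, re-implemented
    in NumPy purely as an illustration.
    """
    rng = rng or np.random.default_rng()
    # Pad height and width by `pad` pixels on each side, leave channels alone.
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode='reflect')
    # Pick a random top-left corner for the crop window.
    top = rng.integers(0, padded.shape[0] - crop + 1)
    left = rng.integers(0, padded.shape[1] - crop + 1)
    out = padded[top:top + crop, left:left + crop]
    # Flip left-right with probability 0.5.
    if rng.random() < 0.5:
        out = out[:, ::-1]
    return out

img = np.arange(32 * 32 * 3, dtype=np.float32).reshape(32, 32, 3)
aug = augment(img)
print(aug.shape)  # (32, 32, 3)
```

In the actual training script the equivalent torchvision transforms would be applied inside the dataset's `transform` argument, once per sample per epoch.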

@shaoxiongji
Owner

There are similar issues in another repo: AshwinRJ/Federated-Learning-PyTorch#2

@najeebjebreel

Thanks a lot.
I also used some parts of your code; it's very clear and useful.

Congratulations on this nice code.

@EEstq

EEstq commented Aug 11, 2020

Cutting down args.num_users may work.

@Minoo-Hsn

Thanks for your code. I have a question regarding the following lines:

```python
num_shards, num_imgs = 200, 300
idx_shard = [i for i in range(num_shards)]
dict_users = {i: np.array([], dtype='int64') for i in range(num_users)}
idxs = np.arange(num_shards*num_imgs)
labels = dataset.train_labels.numpy()

# sort sample indices by label
idxs_labels = np.vstack((idxs, labels))
idxs_labels = idxs_labels[:, idxs_labels[1, :].argsort()]
idxs = idxs_labels[0, :]

# divide the shards and assign two to each user
for i in range(num_users):
    rand_set = set(np.random.choice(idx_shard, 2, replace=False))
    idx_shard = list(set(idx_shard) - rand_set)
    for rand in rand_set:
        dict_users[i] = np.concatenate((dict_users[i], idxs[rand*num_imgs:(rand+1)*num_imgs]), axis=0)
```

Are you fixing the number of images per user at 600 (2 shards × 300 images) in this part? So it only works in the case where we have 100 clients?
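To check that reading of the shard logic, the same loop can be run on synthetic labels: 200 shards of 300 images are split two-per-user, so every user receives exactly 600 indices, and all shards are consumed precisely when there are 100 users. A self-contained sketch (the synthetic `labels` array stands in for `dataset.train_labels`):

```python
import numpy as np

num_shards, num_imgs, num_users = 200, 300, 100
idx_shard = list(range(num_shards))
dict_users = {i: np.array([], dtype='int64') for i in range(num_users)}
idxs = np.arange(num_shards * num_imgs)
# Synthetic stand-in for dataset.train_labels: 10 classes, evenly represented.
labels = np.repeat(np.arange(10), num_shards * num_imgs // 10)

# Sort sample indices by label so each shard is (nearly) single-class.
idxs_labels = np.vstack((idxs, labels))
idxs_labels = idxs_labels[:, idxs_labels[1, :].argsort()]
idxs = idxs_labels[0, :]

# Assign 2 random shards (2 * 300 = 600 images) to each user.
rng = np.random.default_rng(0)
for i in range(num_users):
    rand_set = set(rng.choice(idx_shard, 2, replace=False))
    idx_shard = list(set(idx_shard) - rand_set)
    for rand in rand_set:
        dict_users[i] = np.concatenate(
            (dict_users[i], idxs[rand * num_imgs:(rand + 1) * num_imgs]), axis=0)

print(len(dict_users[0]))  # 600
print(len(idx_shard))      # 0 -- all 200 shards used up when num_users == 100
```

With fewer than 100 users some shards simply go unassigned; with more than 100, `np.random.choice` would fail once the shard pool is exhausted.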

@shaoxiongji
Owner

@Minoo-Hsn Yes, but you can change it via --num_users.

@Sprinter1999

Hi, shaoxiong~
I've read your code and it's nice, but I still cannot figure out this line in your README.md:
"The scripts will be slow without the implementation of parallel computing."
So, does that mean we readers have to implement parallel computing by ourselves?
Thank you~

@shaoxiongji
Owner

@Sprinter1999 yes
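For readers wondering what that would look like: the per-round loop in main_fed.py trains the selected clients one after another, and the independent local updates are the natural thing to parallelize. A minimal sketch of the pattern using the standard library (here `local_update` is a hypothetical stand-in for the repo's `LocalUpdate(...).train()`; real GPU training would more likely use `torch.multiprocessing`, since Python threads share the GIL):

```python
from concurrent.futures import ThreadPoolExecutor

def local_update(client_id):
    # Hypothetical stand-in for one client's local training; a real
    # implementation would return the client's trained state_dict and loss.
    return {'client': client_id, 'w': client_id * 2}

def run_round(client_ids, max_workers=4):
    # Run the selected clients' local updates concurrently instead of the
    # sequential for-loop; map() preserves the input order of results.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(local_update, client_ids))

results = run_round(range(10))
print(len(results))  # 10
```

The server-side FedAvg aggregation stays unchanged: it just consumes the list of returned weights once all futures complete.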

@Pnme79

Pnme79 commented May 23, 2023

> Dear, First thank you for your code. I have run your code, however, the result is not satisfying. Result: Training accuracy: 43.00 Testing accuracy: 43.00
>
> my cmd:
>
> python main_fed.py --dataset cifar --num_channels 1 --model cnn --epochs 10 --gpu 0 --iid
>
> look forward to your reply. best wishes~

Me too. Low accuracy!

@XiaoshuangJi

Increasing the number of local epochs may work. Obviously, the running time will also increase.

my cmd:

python main_fed.py --dataset cifar --num_channels 1 --model cnn --epochs 10 --gpu 0 --iid --local_ep 10

Result: Training accuracy: 50.45, Testing accuracy: 48.43

@XiaoshuangJi

However, increasing the number of local epochs blindly may hurt accuracy and costs longer running time.
When I change local_ep from 10 to 15 or 20, the accuracy is even lower.

@Sprinter1999

Your experimental results make sense. In the non-IID scenario, too much local training harms the generalization of the FedAvg global model.
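The global model being discussed is produced by FedAvg's server-side step: after local training, each client's weights are averaged, weighted by local dataset size. More local epochs let each client drift further toward its own (non-IID) data before this averaging, which is why accuracy can drop. A minimal NumPy sketch of that aggregation step (the function name `fed_avg` is illustrative; the repo's own implementation averages PyTorch state_dicts):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Weighted FedAvg aggregation: average each parameter tensor across
    clients, weighted by each client's local dataset size."""
    total = sum(client_sizes)
    return {k: sum(w[k] * (n / total)
                   for w, n in zip(client_weights, client_sizes))
            for k in client_weights[0]}

# Two toy clients with one "layer" each; the second holds 3x more data.
w1 = {'layer': np.array([1.0, 2.0])}
w2 = {'layer': np.array([3.0, 4.0])}
avg = fed_avg([w1, w2], [100, 300])
print(avg['layer'])  # [2.5 3.5]
```

With equal client sizes this reduces to a plain mean, which matches the uniform-shard setup in this repo (600 images per client).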
