Multi-source information fusion (MSIF) leverages diverse data streams to enhance decision-making, situational awareness, and system resilience. Federated Learning (FL) enables MSIF while preserving privacy but suffers from client drift under high data heterogeneity, leading to performance degradation. Traditional mitigation strategies rely on reference-based gradient adjustments, which can be unstable in partial participation settings. To address this, we propose Gradient Centralized Federated Learning (GC-Fed), a reference-free gradient correction method inspired by Gradient Centralization (GC). We introduce Local GC and Global GC, applying GC during local training and global aggregation, respectively. Our hybrid GC-Fed approach selectively applies GC at the feature extraction layer locally and at the classifier layer globally, improving training stability and model performance. Theoretical analysis and empirical results demonstrate that GC-Fed mitigates client drift and achieves state-of-the-art accuracy gains of up to 20% in heterogeneous settings.
This repository contains the official PyTorch implementation of GC-Fed.
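As background, Gradient Centralization projects each weight gradient onto a zero-mean subspace by subtracting its mean over every dimension except the output dimension. The snippet below is a minimal sketch of that projection following the original GC formulation; it is illustrative and not the exact code in this repository.

```python
import torch

def centralize_gradient(grad: torch.Tensor) -> torch.Tensor:
    """GC projection: subtract the per-output-unit mean from a gradient tensor.

    For an FC weight gradient of shape (out, in) or a conv weight gradient of
    shape (out, in, k, k), the mean is taken over every dimension except dim 0.
    Bias / 1-D gradients are returned unchanged.
    """
    if grad.dim() > 1:
        reduce_dims = tuple(range(1, grad.dim()))
        grad = grad - grad.mean(dim=reduce_dims, keepdim=True)
    return grad

# Typical use after loss.backward():
# for p in model.parameters():
#     if p.grad is not None:
#         p.grad = centralize_gradient(p.grad)
```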
The following datasets are supported:
- EMNIST
- CIFAR-10
- CIFAR-100
- TinyImageNet
Datasets are automatically downloaded on the first run. See `dataloader.py`.
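As an illustration, this is the kind of torchvision call such a dataloader typically wraps (CIFAR-10 shown; the actual paths and transforms in `dataloader.py` may differ):

```python
import torchvision
import torchvision.transforms as transforms

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
])

# download=True fetches the archive on the first run and reuses the local copy afterwards
train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=transform
)
```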
Supported model architectures:
- MLP
- CNN
- VGG11 (not supported on MPS)
- ResNet18
See `models.py`.
This project supports multiple federated learning baseline algorithms, including:
- FedAvg (AISTATS 2017): arXiv:1602.05629
- FedProx (MLSys 2020): arXiv:1812.06127
- SCAFFOLD (ICML 2020): arXiv:1910.06378
- FedDyn (ICLR 2021): arXiv:2111.04263
- FedNTD (NeurIPS 2022): arXiv:2106.03097
- FedVARP (UAI 2022): arXiv:2207.14130
- FedLC (ICML 2022): arXiv:2209.00189
- FedDecorr (ICLR 2023): arXiv:2210.00226
- FedSOL (CVPR 2024): arXiv:2308.12532
- FedACG (CVPR 2024): arXiv:2201.03172
See `algorithms/`.
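All of these methods extend the same server-side loop: selected clients train locally and the server forms a weighted average of their models. The snippet below is a generic sketch of that FedAvg-style aggregation step (illustrative only; the per-algorithm logic lives in `algorithms/`).

```python
def fedavg_aggregate(client_states, client_sizes):
    """Weighted average of client state_dicts; weights proportional to local data size."""
    total = float(sum(client_sizes))
    keys = client_states[0].keys()
    return {
        key: sum(state[key].float() * (n / total)
                 for state, n in zip(client_states, client_sizes))
        for key in keys
    }

# global_state = fedavg_aggregate([c.state_dict() for c in client_models], samples_per_client)
```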
GC-Fed has been tested in the following environments:
- Operating Systems: Ubuntu (CUDA 12.2), macOS (M1, MPS)
- Python Version: 3.9
- Package Manager: pip 25.0.1
To set up the environment, run:
pip install -U pip
pip install -r requirements.txt
To train a model using FedAvg, use the following command:
python main.py --algorithm fedavg --project_name GCFED \
--lr 0.01 --momentum 0.9 --weight_decay 0.00001 \
--batch_size 50 --epochs 5 --noniid 0.1 \
--selected_clients 5 --rounds 800 --num_clients 100 \
--seed 40 --model cnn --dataset cifar10 --send_gradients
For GC-Fed:
python main.py --algorithm gcfed --project_name GCFED \
--lr 0.01 --momentum 0.9 --weight_decay 0.00001 \
--batch_size 50 --epochs 5 --noniid 0.1 \
--selected_clients 5 --rounds 800 --num_clients 100 \
--seed 40 --model cnn --dataset cifar10 --send_gradients --gc_target_layer fc2
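With `--gc_target_layer fc2`, the server centralizes the aggregated update of the classifier layer (Global GC) while that layer is skipped during the local step. A minimal sketch of such a server-side projection is shown below; the dict layout, the `fc2` name matching, and the helper itself are illustrative assumptions, not the repository's exact API.

```python
def apply_global_gc(aggregated_update, target_layer="fc2"):
    """Centralize only the aggregated update of the target layer on the server.

    aggregated_update: dict of parameter name -> update tensor. The GC projection
    (mean removed over all dims except dim 0) is applied to multi-dimensional
    parameters whose name starts with the target layer.
    """
    for name, update in aggregated_update.items():
        if name.startswith(target_layer) and update.dim() > 1:
            reduce_dims = tuple(range(1, update.dim()))
            aggregated_update[name] = update - update.mean(dim=reduce_dims, keepdim=True)
    return aggregated_update
```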
Key parameters:
- `--dataset`: Dataset name (e.g., `emnist`, `cifar10`, `cifar100`, `tinyimagenet`).
- `--model`: Backbone model (e.g., `mlp`, `cnn`, `vgg11`, `resnet18`).
- `--algorithm`: Federated learning baseline algorithm (e.g., `fedavg`, `fedprox`, etc.; see `algorithms/__init__.py`).
- `--noniid`: Level of data heterogeneity (LDA alpha). Lower values indicate higher heterogeneity (e.g., `0.05` is highly heterogeneous, `1000` is nearly IID). See the partition sketch after this list.
- `--num_clients`: Total number of clients.
- `--selected_clients`: Number of clients participating per round.
- `--rounds`: Total number of communication rounds.
- `--gc_target_layer`: Specifies the target layer to be centralized on the server side. This layer is excluded from local training.
- `--gc_layer_lambda`: Defines the ratio (0.0-1.0) of local gradient centralization. This parameter is effective only if `--gc_target_layer` is not set (empty `""`).
The code has been updated to improve readability and efficiency by removing unnecessary device changes. As a result, performance may differ slightly from the reported results.
GC-Fed relies on the following libraries:
Additional relevant resources:
For further details, please refer to the respective repositories.
@article{seo2025gc,
title={GC-Fed: Gradient Centralized Federated Learning with Partial Client Participation},
author={Seo, Jungwon and Catak, Ferhat Ozgur and Rong, Chunming and Hong, Kibeom and Kim, Minhoe},
journal={arXiv preprint arXiv:2503.13180},
year={2025}
}