ADBI_CAPSTONE_Project

Federated learning is inherently vulnerable to compromise of the global model's integrity: the training data from which each client's parameter updates are derived (assuming the updates were not artificially synthesized) is never seen by the server, so the validity of those updates cannot be checked against the data during aggregation. An adversary may exploit this to poison the global model with updates designed to weaken its classification accuracy.

Defending against such attacks requires enumerating the possible attack types, identifying their most probable effects on the submitted model updates, and putting countermeasures in place that minimize the likelihood of poisoned updates being aggregated into the global model while still accepting a sufficient proportion of legitimate updates.

In this work we explore these issues by simulating a visual federated learning environment under attack by one or more malicious agents performing two types of targeted attacks, i.e. attacks whose goal is the misclassification of a subset of images while largely preserving the overall performance of the global model. We implemented a mechanism that detects anomalous model updates and prevents their inclusion in the global model, and we compared the performance of the global model after training with and without this mechanism enabled. A rough sketch of the detection idea follows.
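
The actual detection logic lives in the project code; the snippet below is only a minimal illustrative sketch of one common way to screen client updates before averaging. It assumes each client's update is a flattened NumPy vector of parameter deltas and uses a hypothetical distance-from-median outlier rule (function names and the threshold are illustrative, not the repository's implementation).

```python
import numpy as np

def filter_and_aggregate(client_updates, z_threshold=2.0):
    """Illustrative robust aggregation: reject updates whose distance from the
    coordinate-wise median update is an outlier, then average the rest.

    client_updates: list of 1-D numpy arrays (flattened parameter deltas),
                    one per client.
    Returns the aggregated update and a boolean mask of accepted clients.
    """
    updates = np.stack(client_updates)          # shape: (n_clients, n_params)
    median_update = np.median(updates, axis=0)  # robust reference point

    # Distance of each client's update from the median update.
    distances = np.linalg.norm(updates - median_update, axis=1)

    # Flag updates whose distance is far above the typical distance.
    mu, sigma = distances.mean(), distances.std()
    accepted = distances <= mu + z_threshold * max(sigma, 1e-12)

    # Average only the accepted (presumed legitimate) updates.
    aggregated = updates[accepted].mean(axis=0)
    return aggregated, accepted

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    honest = [rng.normal(0.0, 0.01, size=100) for _ in range(8)]
    poisoned = [rng.normal(0.5, 0.01, size=100) for _ in range(2)]  # shifted malicious updates
    agg, mask = filter_and_aggregate(honest + poisoned)
    print("accepted clients:", mask)
```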

Contributors:

  1. John Warren([email protected])
  2. Pankaj Attri([email protected])
  3. Vishwas S P([email protected])
