Submission for the AI track, Flare's tracks 1 & 2, and the NERDo Awards from DeSci at the ETHOxford Hackathon 2025, by @marcellomaugeri.
NERDo Awards for the "most likely to disrupt" project.
Flare's track 1: pool prize.
Disclaimer: the project is not affiliated with Flare, but the logo is heavily inspired by the Flare logo, since this is a submission for the Flare track. As a consequence, the logo is a derivative work of the Flare logo and is not intended for commercial use.
Flare-FL is a decentralized Federated Learning (FL) framework that uses the Flare chain and the Flare Data Connector to decentralize the training process, improving both the security of the model and the privacy of the clients.
A user, called `Client`, can train a model on its local data. It then sends the model update to a trusted third party (`TTP`). The `TTP` computes a SHA256 digest of the model update and stores it in a database.
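For illustration, here is a minimal sketch of computing such a digest, assuming the update is pickled (the actual client module may serialize updates differently):

```python
import hashlib
import pickle

def model_update_digest(state_dict) -> str:
    """Compute the SHA256 digest of a serialized model update."""
    # Assumption: updates are serialized with pickle; the real client
    # module may use a different encoding.
    payload = pickle.dumps(state_dict)
    return hashlib.sha256(payload).hexdigest()

# Example: a toy "update" with one parameter tensor as a plain list.
print(model_update_digest({"layer1.weight": [0.12, -0.34]}))
```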
At this point, the `Client` can submit an `Attestation Request` to the `Attestation Providers`. The request contains the SHA256 digest of the model update. The `Attestation Providers` query the `TTP` to verify that the model update does not degrade the global model. According to the result, the `Attestation Providers` vote to approve or reject the request.
If the request is approved, the model update will be stored on the Flare
chain.
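To make the verification step concrete, here is a minimal sketch of what the TTP-side check might look like; the endpoint path, response shape, and accuracy-based criterion are all assumptions (the real API is specified in `src/openapi.yaml`):

```python
from flask import Flask, jsonify

app = Flask(__name__)

# In-memory state, for illustration only.
stored_updates = {}            # SHA256 digest -> model update from a client
GLOBAL_MODEL_ACCURACY = 0.90   # placeholder metric for the current global model

def evaluate_with_update(update) -> float:
    # Hypothetical helper: apply `update` to the global model and
    # evaluate the result on a held-out validation set.
    return 0.91  # stub value for illustration

@app.route("/verify/<digest>", methods=["GET"])
def verify(digest):
    update = stored_updates.get(digest)
    if update is None:
        return jsonify({"approved": False, "reason": "unknown digest"}), 404
    # Approve only if the candidate update does not degrade the global model.
    approved = evaluate_with_update(update) >= GLOBAL_MODEL_ACCURACY
    return jsonify({"approved": approved})
```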
The `TTP` is responsible for storing and maintaining the global model. For the sake of simplicity, the global model is currently fixed and not updated. However, the `TTP` can easily be extended to employ a strategy for updating the global model (e.g. FedAvg, sketched below).
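For reference, a minimal FedAvg step could look like the following numpy sketch (the parameter layout is an assumption, not the TTP's actual model format):

```python
import numpy as np

def fed_avg(updates, num_samples):
    """Average client updates weighted by their number of training samples.

    updates: list of dicts mapping parameter name -> np.ndarray
    num_samples: number of local training samples per client
    """
    total = sum(num_samples)
    return {
        name: sum((n / total) * u[name] for u, n in zip(updates, num_samples))
        for name in updates[0]
    }

clients = [{"w": np.array([1.0, 2.0])}, {"w": np.array([3.0, 4.0])}]
print(fed_avg(clients, [100, 300]))  # {'w': array([2.5, 3.5])}
```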
The contribution of this project is twofold:
- Federated Learning inherently suffers from model poisoning attacks, where malicious clients submit adversarial updates. The `Attestation Providers` act as validators, ensuring that a model update is not malicious by querying the `TTP` to verify that it does not degrade the global model.
- The Flare Data Connector is used to store the model update digests on the Flare chain. This decentralizes the training process, improving the integrity of the model while preserving the privacy of the clients (as the training data is never shared).
In the current implementation, the `Attestation Providers` simply query the `TTP` to verify the model update. However, in future work the `Attestation Providers` could be extended to employ additional, more varied strategies for evaluating the model update.
For example, one `Attestation Provider` could use a detection algorithm to spot adversarial updates, while another `Attestation Provider` could use a different algorithm (a crude sketch of one such heuristic follows).
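As an illustrative sketch of one such strategy (a deliberately crude norm-based heuristic, not something the project implements), a provider could flag updates whose parameters deviate anomalously far from the global model:

```python
import numpy as np

def looks_adversarial(update, global_params, threshold=5.0):
    """Flag an update whose L2 distance from the global model is anomalous.

    Poisoned updates often deviate far more from the global parameters
    than honest ones; `threshold` would have to be tuned empirically.
    """
    distance = np.sqrt(sum(
        np.sum((update[name] - global_params[name]) ** 2)
        for name in global_params
    ))
    return distance > threshold

global_params = {"w": np.zeros(4)}
print(looks_adversarial({"w": np.full(4, 0.1)}, global_params))   # False
print(looks_adversarial({"w": np.full(4, 10.0)}, global_params))  # True
```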
Another functionality that could be added is a DAO to decide which users are authorized to act as `Attestation Providers` or to request attestations. The possibilities are endless, and the Flare chain provides a solid foundation to build upon.
- `Client`: the entity that trains the model on its local data.
- `Attestation Request`: a request submitted by the client to the Attestation Providers. It contains the SHA256 digest of the client's model update.
- `Attestation Provider`: the entity that evaluates the Attestation Request. It queries the TTP to verify that the model update does not degrade the global model.
- `Trusted Third Party (TTP)`: the entity that stores and maintains the global model. It is responsible for verifying that a model update does not degrade the global model.
```
.
├── contracts
│   ├── FlareFL.sol       # The FlareFL smart contract, the core of the project
├── scripts
│   ├── FlareFL.ts        # The script to deploy and use the FlareFL smart contract
├── src
│   ├── client_module     # The client module to communicate with the TTP
│   ├── server            # The TTP, which validates and stores the model updates (Flask server)
│   ├── openapi.yaml      # The OpenAPI specification of the TTP
├── demo.py               # The demo script showing how the project works
```
- Clone the repository

  ```bash
  git clone https://github.com/marcellomaugeri/flare-FL
  cd flare-FL
  ```

- Install Python dependencies

  ```bash
  python3 -m venv .venv
  source .venv/bin/activate
  pip install -r requirements.txt
  ```
- Install Hardhat, Yarn and Node.js.
- Install the Node dependencies

  ```bash
  npm install
  ```
- Set up the `.env` file (change the `PRIVATE_KEY` in the `.env` file to your wallet's private key, and the `JQ_API_KEY` to `flare-oxford-2025`)

  ```bash
  mv .env.example .env
  ```
- Compile the project

  ```bash
  npx hardhat compile
  ```
- Install the client module

  ```bash
  cd src/client_module
  poetry build -f wheel
  pip install dist/*.whl
  ```
- Install ngrok (installation depends on the OS) and perform the initial setup
The demo is designed to show how the project works.
- At the beginning, it will spawn the `TTP` server (a Flask server) and an ngrok tunnel (to expose the TTP to the validators).
- Then, it will run a simulation where the client trains a model and submits an attestation request to the validators.
- The validators will query the TTP to verify the model update and vote to approve or reject the request.
- Finally, the client will submit the model update to the Flare chain.
- Once done, the client will query the Flare chain to get all the model updates and aggregate them into its local model.
- Then, it performs a round of testing to evaluate the performance of the model.
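Assuming the setup above completed successfully, the demo can then be launched from the repository root:

```bash
python demo.py
```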
- Refactor the server, as there is duplicated code (`mlmodels`, `data` and `utils`).
- Rewrite the tests (they do not work with the latest changes).
- Implement a strategy to maintain a global model (e.g. FedAvg).
- Design a DAO to choose the Attestation Providers.
- Implement a detection validator to detect adversarial updates.
- Provide incentives to the Attestation Providers and the participants.
- Replace the TTP with a decentralized solution (Flare is working on a feature to run specific code).
I have been dabbling in blockchains for only a few months, and I have to say that building on Flare was a valuable experience. Although I can read Solidity code fluently, it was thanks to their starter kit that I was able to write and deploy my first (serious) smart contract.
My main difficulty was understanding what was happening on the `Attestation Provider` side, as the source code was not -- initially -- provided. However, the Flare team was very helpful and provided me with a snippet. I cannot wait to see future implementations of the Attestation Providers, in particular the feature that allows running specific code. If you know, you know.
- Flare Data Connector Whitepaper
- The Flare Network Whitepaper
- Flare Developer Hub
- Hardhat Starter Kit
- Defending Against Poisoning Attacks in Federated Learning With Blockchain
I would like to thank the Flare team for all the support, for developing the Flare chain and the Flare Data Connector, and for organizing the two workshops. I would like to thank the DeSci team for providing me with Peter, an AI assistant with whom I shared my thoughts and ideas. I would like to thank the ETHOxford Hackathon organizers for putting on this event and for the opportunity to participate. Finally, I would like to thank myself for the hard work, the dedication and the sleep I lost to build this project.