Aura FL

AURA-fl is a practical, secure federated learning platform that protects data privacy and fairness with zero-knowledge proofs and verified model integrity.

Created At

ETHOnline 2024

Project Description

AURA-fl is a practical, secure federated learning platform that combines advanced cryptographic protocols for privacy, fairness, and model integrity. At its core, AURA-fl uses Zero-Knowledge Proofs (ZKPs) to ensure the integrity of distributed model training and inference. Specifically, it uses the Krum function, a Byzantine-resilient aggregation rule designed to detect and exclude fraudulent or erroneous model updates during training, guaranteeing that only the most reliable updates contribute to the global model.
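As a rough illustration, the sketch below shows the Krum selection rule (Blanchard et al., 2017) in plain TypeScript: each update is scored by the sum of squared distances to its n − f − 2 nearest neighbours, and the lowest-scoring update is selected. The function names and the flattened-array representation are illustrative, not AURA-fl's actual API.

```ts
/** Squared Euclidean distance between two flattened weight vectors. */
function squaredDistance(a: number[], b: number[]): number {
  let sum = 0;
  for (let i = 0; i < a.length; i++) {
    const d = a[i] - b[i];
    sum += d * d;
  }
  return sum;
}

/**
 * Krum: score each client update by the sum of squared distances to its
 * n - f - 2 nearest neighbours and return the index of the lowest-scoring
 * update. Outliers (likely Byzantine updates) receive large scores and are
 * excluded from the global model.
 *
 * @param updates flattened model updates, one per client
 * @param f       assumed number of Byzantine clients (requires n > 2f + 2)
 */
function krumSelect(updates: number[][], f: number): number {
  const n = updates.length;
  if (n <= 2 * f + 2) throw new Error("Krum requires n > 2f + 2");

  const scores = updates.map((u, i) => {
    const dists = updates
      .filter((_, j) => j !== i)
      .map((v) => squaredDistance(u, v))
      .sort((x, y) => x - y);
    // Sum the n - f - 2 smallest distances.
    return dists.slice(0, n - f - 2).reduce((acc, d) => acc + d, 0);
  });

  return scores.indexOf(Math.min(...scores));
}
```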

To further secure model inference, AURA-fl employs recursive zk-SNARKs (Zero-Knowledge Succinct Non-Interactive Arguments of Knowledge) across the model's layers. This recursive technique allows for succinct verification of the model's decision-making process, ensuring that the computations in each layer are performed correctly without disclosing any underlying data. By chaining zk-SNARKs recursively, AURA-fl provides a strong cryptographic assurance that the final model output is accurate and has not been tampered with.
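The sketch below shows the recursion pattern this relies on, using o1js's ZkProgram and SelfProof: each step verifies the proof covering all earlier layers before constraining the current one. The program name, the single Field commitment used as public input, and the Poseidon folding are assumptions for illustration; the real circuit would encode the actual layer arithmetic, and the exact ZkProgram API varies slightly between o1js versions.

```ts
import { Field, Poseidon, SelfProof, ZkProgram } from "o1js";

// Hypothetical recursive program: the public input is a running commitment to
// the activations produced so far, folded layer by layer with Poseidon.
const LayerInference = ZkProgram({
  name: "layer-inference",
  publicInput: Field,

  methods: {
    // Base case: commit to the (hashed) model input.
    init: {
      privateInputs: [Field],
      async method(commitment: Field, modelInput: Field) {
        Poseidon.hash([modelInput]).assertEquals(commitment);
      },
    },

    // Recursive step: verify the proof for all previous layers, then fold this
    // layer's output into the commitment. A real circuit would also constrain
    // the layer's matrix multiply / activation here.
    nextLayer: {
      privateInputs: [SelfProof, Field],
      async method(
        commitment: Field,
        previous: SelfProof<Field, void>,
        layerOutput: Field
      ) {
        previous.verify(); // all earlier layers were computed correctly
        Poseidon.hash([previous.publicInput, layerOutput]).assertEquals(commitment);
      },
    },
  },
});

// Usage sketch: await LayerInference.compile(), prove layer by layer, and
// verify only the final succinct proof on-chain.
```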

AURA-fl's architecture follows a decentralized workflow that protects the integrity and security of federated learning tasks. A publisher runtime manages and distributes jobs to clients, using a task queue to dynamically assign work. When a client accepts a task, it executes it locally with TensorFlow.js, allowing on-device training without revealing raw data. The model parameters produced during local training are then sent to an aggregator server, where they are combined into a global model. Concurrently, a proof of inference is constructed locally using recursive zk-SNARKs, attesting that the computations performed are consistent with the expected model behavior while keeping the client's data private.
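A minimal sketch of the client-side step, assuming a task payload that provides the model topology and local tensors; the function name `runLocalTraining` and the tiny example model are illustrative, not AURA-fl's actual code.

```ts
import * as tf from "@tensorflow/tfjs";

async function runLocalTraining(xs: tf.Tensor2D, ys: tf.Tensor2D) {
  // Build a small model locally; in AURA-fl the topology would come from the
  // publisher's task definition.
  const model = tf.sequential();
  model.add(
    tf.layers.dense({ units: 16, activation: "relu", inputShape: [xs.shape[1]] })
  );
  model.add(tf.layers.dense({ units: 1, activation: "sigmoid" }));
  model.compile({ optimizer: "adam", loss: "binaryCrossentropy" });

  // Train on-device: the raw data never leaves the client.
  await model.fit(xs, ys, { epochs: 5, batchSize: 32 });

  // Only the learned parameters are shipped to the aggregator, alongside the
  // locally generated proof of inference.
  return model.getWeights().map((w) => Array.from(w.dataSync()));
}
```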

These zero-knowledge proofs are submitted on-chain to an aggregator runtime, which verifies them and ensures that model updates adhere to the training rules. To keep participating clients honest, a staking registry requires them to stake tokens as collateral. This registry communicates with the aggregator runtime: if a publisher submits a valid Krum proof on-chain showing that a client's update is fraudulent or incorrect, the offending client's stake is automatically slashed. This mechanism promotes accountability and discourages malicious behavior, resulting in a secure and trustworthy federated learning environment.
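The slashing logic can be pictured with the plain-TypeScript sketch below; it mirrors what the on-chain staking registry does, but the class, the method names, and the halve-the-stake penalty are hypothetical stand-ins rather than AURA-fl's actual runtime module.

```ts
type Address = string;

class StakingRegistry {
  private stakes = new Map<Address, bigint>();
  private readonly minStake = 100n; // hypothetical minimum collateral

  /** Clients must lock collateral before they may accept training tasks. */
  register(client: Address, amount: bigint): void {
    if (amount < this.minStake) throw new Error("stake below minimum");
    this.stakes.set(client, (this.stakes.get(client) ?? 0n) + amount);
  }

  /**
   * Called once a publisher's Krum proof has been verified on-chain and
   * identifies `client`'s update as fraudulent or incorrect.
   */
  slash(client: Address, krumProofVerified: boolean): void {
    if (!krumProofVerified) throw new Error("refusing to slash without a valid proof");
    const stake = this.stakes.get(client) ?? 0n;
    this.stakes.set(client, stake / 2n); // e.g. burn half of the collateral
  }

  stakeOf(client: Address): bigint {
    return this.stakes.get(client) ?? 0n;
  }
}
```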

How it's Made

The project uses Protokit to build a custom appchain for the FL aggregation and staking logic. The Krum function and its proof are implemented in a zkDSL using o1js. The UI is built with Next.js, and the GraphQL server provided by Protokit is used to query the chain. Extensive tests are written for each runtime module.
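As an example of how the UI might read appchain state through that GraphQL server, the snippet below issues a plain fetch request; the endpoint URL and the query/field names are assumptions for illustration and do not reflect Protokit's actual schema.

```ts
const GRAPHQL_ENDPOINT = "http://localhost:8080/graphql"; // hypothetical local endpoint

async function fetchClientStake(clientAddress: string): Promise<unknown> {
  const response = await fetch(GRAPHQL_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      // Hypothetical query shape: read a StakingRegistry state entry by key.
      query: `query Stake($key: String!) {
        state(path: "StakingRegistry.stakes", key: $key)
      }`,
      variables: { key: clientAddress },
    }),
  });
  const { data } = await response.json();
  return data;
}
```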
