AIDA: Decentralized AI on Cardano

Current Project Status: Unfunded
Amount Received: $0
Amount Requested: $50,000
Percentage Received: 0.00%
Solution

We are developing AIDA, a scalable distributed AI learning framework on Cardano, with key innovations over existing state-of-the-art (SOTA) learning frameworks.

Problem

Existing Ethereum-based works - Microsoft's DCAI and HP's Swarm Learning frameworks - lack scalability, model security, and fair incentive mechanisms.

Addresses Challenge
Feasibility
Auditability
AIDA: Decentralized AI on Cardano

AIDA (Artificial Intelligence on Distributed Architectures) introduces key differences and innovations compared to Microsoft's DCAI framework (also known as Sharing Updatable Models (SUM) on Blockchain) [1] and HP's Swarm Learning [2].

There are four main parts to the AIDA system, as shown in the attached image.

IPFS

- Distributed storage system

Blockchain

- Stores model meta-info
- Stores training network info

Server

- Stores training scripts

UI

- Downloads training scripts from the server
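The division of labour between IPFS and the blockchain can be sketched as a small on-chain record that points at off-chain weights. All field names below are illustrative assumptions, not the actual AIDA schema:

```python
# Illustrative sketch only: the model weights live in IPFS, while the
# chain holds a compact metadata record. Field names are assumptions.
from dataclasses import dataclass

@dataclass
class ModelMetaInfo:          # stored on-chain (small, fixed-size record)
    model_cid: str            # IPFS content identifier of the weights
    script_url: str           # where the UI downloads the training script
    round_no: int             # current federated training round
    participants: int         # size of the training network

meta = ModelMetaInfo(
    model_cid="Qm...",        # placeholder; the real CID comes from IPFS
    script_url="https://example.org/train.py",  # hypothetical server URL
    round_no=1,
    participants=8,
)
print(meta.participants)  # 8
```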

By using the above AIDA modules, we:

  1. Avoid a single point of failure: models are trained locally and combined without relying on one central federated-learning server, whose failure would halt training.
  2. Achieve model transparency, since model meta-info is recorded on-chain.
  3. Prevent model corruption through a user incentive mechanism that rewards honest contributions.
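The "combine models" step in point 1 can be made deterministic so that any node can replay it, removing the dependence on a single aggregation server. A minimal sketch of federated averaging (FedAvg) over flat weight vectors, which is an assumption about the aggregation rule, not AIDA's exact method:

```python
# Hedged sketch: any node can run this deterministic weighted average,
# so no single machine is a point of failure. Pure-Python, flat weight
# vectors; a real framework averages full DNN parameter sets.
def fed_avg(local_models, sample_counts):
    """Weighted average of per-node weight vectors (FedAvg)."""
    total = sum(sample_counts)
    dim = len(local_models[0])
    merged = [0.0] * dim
    for weights, n in zip(local_models, sample_counts):
        for i, w in enumerate(weights):
            merged[i] += w * n / total
    return merged

# Two nodes trained locally; node B saw three times as much data,
# so its weights count three times as much in the merge.
merged = fed_avg([[1.0, 2.0], [3.0, 6.0]], sample_counts=[1, 3])
print(merged)  # [2.5, 5.0]
```

The per-node sample counts used as merge weights also give a natural basis for the incentive mechanism in point 3: contributions can be rewarded in proportion to the data a node verifiably trained on.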

References:

  1. Harris, Justin D., and Bo Waggoner. "Decentralized and collaborative AI on blockchain." 2019 IEEE International Conference on Blockchain (Blockchain). IEEE, 2019.
  2. Warnat-Herresthal, S., Schultze, H., Shastry, K.L., et al. "Swarm Learning for decentralized and confidential clinical machine learning." Nature 594, 265–270 (2021).

Our framework is currently built on Ethereum, and we propose to migrate it to Cardano, which is cheaper, faster, and more secure. We will publish the results at a top blockchain conference and keep the code open-source, as it is now: https://github.com/s-elo/DNN-Blockchain

In the highly unlikely scenario that our implementation on the Cardano platform remains pending, we will migrate it to KEVM or IELE while still running on Cardano.

  1. Move the current AI training framework to the Cardano blockchain
  2. Store ML models on IPFS (avoiding on-chain storage limitations)
  3. Local learning, with no need to store training data on-chain: the actual training takes place at the edge, preserving the privacy and confidentiality of users' data
  4. Improved incentive mechanism on Cardano (avoiding Ethereum's high gas fees); this is a critical piece of the ongoing work
  5. Integration and testing of state-of-the-art models (e.g., ResNet) on large datasets (ImageNet) in a distributed fashion
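Point 2 rests on content addressing: the heavy model bytes stay off-chain, and the chain records only a short content identifier that anyone can verify against the fetched payload. A sketch of the idea, using SHA-256 as a stand-in for a real IPFS CID (an assumption for illustration, not AIDA's actual encoding):

```python
# Sketch of off-chain model storage: the chain stores only a short
# content identifier; the payload lives in IPFS. sha256 here is a
# stand-in for a real CID, used purely to show the integrity check.
import hashlib
import json

def pseudo_cid(model_bytes: bytes) -> str:
    return hashlib.sha256(model_bytes).hexdigest()

weights = json.dumps([0.1, -0.4, 2.3]).encode()   # toy "model"
cid = pseudo_cid(weights)

on_chain_record = {"model_cid": cid}              # tiny, fixed-size entry
off_chain_store = {cid: weights}                  # the payload (IPFS role)

# Integrity check: anyone can re-hash the fetched bytes against the chain.
fetched = off_chain_store[on_chain_record["model_cid"]]
assert pseudo_cid(fetched) == cid
print(len(cid))  # 64 hex chars, regardless of model size
```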

(1) By Q2 2022

(2) Q3 2022

(3) Q3 2022

Federated learning servers - $14K

Segregated data storage - $2K

Training-scripts hosting servers - $5K

Web development: 40 hours/week × 8 weeks × $25/hour - $8K

Smart contract development: 40 hours/week × 12 weeks × $40/hour - $19K

User incentives - $2K

Dr. Bharath's Research Interests

  • Pattern recognition and computer vision
  • Event-based cameras for autonomous sensing and navigation
  • Object recognition and related areas such as scene understanding, face recognition, and object detection for silicon-retina (event-based) cameras on board unmanned aerial vehicles
  • Control and simulation; image classification using invariant features

Bio for Sam:

● Plutus PBL 1st Cohort - Gimbalabs

● Founding Dev & Smart Contract Lead - Rarety.io

● Co-Founder - Fetachain.io

● 2021 Presidential Innovation Award - Government of India

● IIT Bombay & MathWorks Computational Agriculture Hackathon - International Rank 3

Li Chao and Shen Qiuyu are Master of Science students at the National University of Singapore. Their project on AI and blockchains is co-supervised by Dr. Bharath Ramesh and Assoc. Prof. Xiang Cheng.

Maintain the accuracy of the original AI model trained in a standalone fashion, while reducing training time by a factor that depends on the number of nodes in the distributed network.
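As a back-of-the-envelope for this metric: ideal data-parallel training time scales as 1/N with N nodes, degraded by per-round communication cost. The 10% overhead figure below is an illustrative assumption, not a measured value:

```python
# Rough speedup model for distributed training: time shrinks as 1/N,
# inflated by a communication-overhead factor. The 10% default is an
# illustrative assumption, not a benchmark result.
def est_training_time(standalone_hours, nodes, comm_overhead=0.10):
    return round(standalone_hours / nodes * (1 + comm_overhead), 2)

print(est_training_time(100.0, 4))  # 27.5 hours on 4 nodes vs 100 standalone
```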

Another important way to measure our progress is the scale of the datasets used for testing the proof-of-concept system. Currently, we are limited to testing on CIFAR100 and Tiny ImageNet. Our aim is to scale to training and testing on millions of images, say on the full ImageNet dataset.

A full-scale AIDA system will be able to onboard real-world users with a specific model learning goal in a distributed fashion, while preserving confidentiality and privacy of the data.

