Funded
Decentralized Physics Simulations
Current Project Status: In Progress
Amount Received: $21,000
Amount Requested: $28,000
Percentage Received: 75.00%
Solution

Free, decentralized physics computations that reward contributions of computation and data, driving corporate, academic, and community involvement with Cardano

Problem

Physics simulations run on centralized servers because there is no decentralized infrastructure for academic and industry collaboration and use cases

Addresses Challenge
Feasibility
Auditability

Team

1 member

Decentralized Physics Simulations

Overview

A myriad of sectors, including pharmaceuticals, energy, and semiconductors, depend heavily on large simulations of physical systems based primarily on traditional methods such as Molecular Dynamics and Density Functional Theory. For example, during the recent Covid period, millions of Molecular Dynamics simulations were run, largely independently, on the ACE receptor and spike protein to better understand their binding mechanisms [3]. Currently, most of this information is dormant, redundant, and inconclusive: dormant because simulation data is analyzed for publications or industrial applications and then left on local storage; redundant because teams around the world often run highly similar simulations; and inconclusive because single simulations frequently lack enough information to support firm conclusions. Centralized infrastructures are therefore limiting when it comes to developing AI-centric frameworks that improve the efficiency and accuracy of physics computation and knowledge extraction. And this is only the surface of the problem. The larger problem is that there is no natural way to incorporate vast and diverse amounts of physics information (experiments, quarks, chemicals, proteins), data, knowledge, and algorithms in a cohesive and synergistic manner.

We narrowly missed out on funding this project in Fund7 in the AI category (first below the cut-off). Since then, we have refactored the proposal and updated our plans.

Objectives and Goals

Our end goal is clear: to create the right infrastructure to incentivize mass adoption of Cardano-based protocols in the computationally oriented scientific communities, including academia, industry, start-ups, and individual community members.

We are creating a decentralized protocol for the simulation of physical systems, leveraging Nunet for computational resources and SingularityNet for AI enhancements, with open-ended improvements drawing on anything from deep learning [1] to neuro-symbolic AI [2], quantum chemistry [4], and cognitive architectures [5]. Additionally, we are building a tokenomics system to incentivize computation, data, algorithm development, mining, and community rewards for collaboration and support from individual community members, academics, and even corporations. One of our driving principles is coupling advances in artificial intelligence to advances in functional near-term technologies.

Our solutions will be useful in markets such as biotechnology, artificial intelligence, and chemical synthesis, among many others. These markets are growing quickly, and bridging their demand into the Cardano ecosystem would greatly benefit its health. Take the biotechnology market alone: it is expected to surpass $1.5 trillion by 2030 and is growing at nearly ten percent per year [6].

The paradigm shift we are creating with SNet and Nunet stems from building a computational and algorithmic environment for end-to-end integration of multi-scale simulations: developing and employing theoretical and AI algorithms built from heterogeneous data sources, symbolic knowledge extraction, and cognitive principles, leading to the most interconnected framework for self-consistent computations in the physical sciences. This will all be done to mimic the use of High Performance Computing infrastructures, and in principle, once Nunet is fully developed with a large enough ecosystem, we should be able to simulate molecular systems faster than many of the top supercomputers. All of our code will be developed for parallelized, multi-virtual-node CPUs/GPUs. By integrating AI, we should also be able to surpass many of the conventional bottlenecks of such computations.

Industry and community

From an industry perspective, users (entities taking advantage of our computational protocol) can exchange tokens for theoretical computations on a particular system of study and/or for private or public algorithms developed by various entities (individuals, research labs, corporations, community members). From the community perspective, contributors are rewarded for providing data, computation, and algorithm development, among other things.

Rewards are obtained mostly through the following activities: contributing physics data (experiments, simulation data, theory), providing computational resources and storage, algorithm development (developing new algorithms, training neural networks, improving existing networks), mining, and technology development. The first two are self-explanatory. In short, mining is the eventually-automated process of performing specific computations suggested by community members or recommended by an AI agent; anyone can take part by staking or allocating resources. Entities that develop on the protocol (via any of the above, including mining) can also obtain rewards through a predetermined ratio of the tokens paid by industrial users, enforced by smart contracts, as sketched below.
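As an illustration only (the category names, split ratios, and function below are hypothetical placeholders; the actual tokenomics will be specified in the white paper), a payment from an industrial user could be divided among contributors roughly as follows. On-chain, this logic would live in a smart contract rather than in off-chain Python.

```python
# Hypothetical sketch of splitting a token payment among contributor categories
# by a predetermined ratio. Ratios, names, and structure are illustrative only.
CONTRIBUTION_SPLIT = {
    "data": 0.30,          # physics data (experiments, simulations, theory)
    "computation": 0.30,   # compute and storage providers
    "algorithms": 0.25,    # algorithm and model developers
    "mining": 0.15,        # community/AI-suggested computations
}

def split_payment(tokens_paid, contributors):
    """contributors: {category: {address: share_within_that_category}}."""
    payouts = {}
    for category, ratio in CONTRIBUTION_SPLIT.items():
        pool = tokens_paid * ratio
        for address, share in contributors.get(category, {}).items():
            payouts[address] = payouts.get(address, 0.0) + pool * share
    return payouts

# Example: a 1,000-token job split between two data providers and one compute node.
print(split_payment(1000, {
    "data": {"addr_a": 0.6, "addr_b": 0.4},
    "computation": {"addr_c": 1.0},
}))
```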

The particulars of the Miscellaneous challenge are such that there is no other well-fitting category for our proposal. Some semi-related categories were Developer Ecosystem, Open-Source Ecosystem, and Business Creation. While our end objective partially aligns with each of these, the main focus and output of the current proposal is only one particular phase of development: obtaining a foundational suite of algorithms and infrastructure from which to pursue further development at later funding events (through Project Catalyst, SingularityNet's DeepFunding, or third-party funding). Because this proposal is foundational specifically to physics algorithms and their implementation on Nunet, we find it difficult to make a compelling argument in other challenge settings. That said, we do have smaller proposals to begin outreach to both industry and academic collaborators in parallel, where possible.

The main risks are general technical research-and-development uncertainties and the complexity of the project on that side. We are fairly confident the team can deal with difficulties, but that may require additional time and work. We are, of course, working with Nunet, and any delays on their side could be problematic in the near term, but this can be circumvented by focusing on the pieces that can be implemented directly today. They are a well-proven team; delays may happen, but they build great code.

Overview of Specific Algorithms to be implemented

  • In two to three months:
    • Prototypes of traditional computational algorithms on a single node and single GPU (a minimal integrator sketch follows this list):
      • Langevin Dynamics integrator
      • Open-source Molecular Dynamics
      • Density Functional Theory
  • In five to six months:
    • Prototypes of AI-based simulations on a single node and single GPU (see the hybrid force sketch after this list):
      • 1-3 open-source projects for AI-enhanced molecular dynamics and electron density calculations
      • Functionality to update neural network parameterization
      • Hybrid traditional/AI simulations
  • In six to eight months after funding:
    • Prototypes of all algorithms in multi-node/multi-GPU settings
    • Training on heterogeneous data obtained from community members or traditional algorithms as a test bed for multi-scale approaches
    • Possible collaborations with DeepChainAda to train neural networks on private information cryptographically
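To make the first milestone concrete, here is a minimal sketch of a Langevin dynamics integrator (BAOAB splitting) in Python/NumPy. The parameters, the toy harmonic force standing in for a real force field, and the function names are illustrative assumptions rather than the final codebase; the production version will target Nunet's parallel, multi-node/multi-GPU setting.

```python
# Minimal sketch of a BAOAB Langevin dynamics step; all names and parameters
# are illustrative placeholders, not the production protocol API.
import numpy as np

def baoab_step(x, v, force_fn, masses, dt, gamma, kT, rng):
    """Advance positions x and velocities v by one BAOAB Langevin step."""
    f = force_fn(x)
    v = v + 0.5 * dt * f / masses              # B: half kick
    x = x + 0.5 * dt * v                       # A: half drift
    c1 = np.exp(-gamma * dt)                   # O: friction + thermal noise
    c2 = np.sqrt((1.0 - c1 ** 2) * kT / masses)
    v = c1 * v + c2 * rng.standard_normal(v.shape)
    x = x + 0.5 * dt * v                       # A: half drift
    f = force_fn(x)
    v = v + 0.5 * dt * f / masses              # B: half kick
    return x, v

# Example: a harmonic potential as a stand-in for a molecular force field.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal((10, 3))           # 10 particles in 3D
    v = np.zeros_like(x)
    masses = np.ones((10, 1))
    harmonic = lambda pos: -pos                # F = -k x with k = 1
    for _ in range(1000):
        x, v = baoab_step(x, v, harmonic, masses,
                          dt=0.01, gamma=1.0, kT=1.0, rng=rng)
```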
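The "hybrid traditional/AI simulations" item refers to combining a classical force field with a learned correction whose parameters can be updated as new data arrives. The sketch below is again illustrative: the linear "correction" model, its update rule, and all names are placeholders standing in for a trained neural network served through the protocol.

```python
# Illustrative sketch of a hybrid classical + learned force evaluation.
import numpy as np

def classical_force(x):
    # Stand-in for a traditional MD force field (a simple harmonic term here).
    return -x

class LinearCorrection:
    """Toy learned correction F_ml(x) = x @ W; a real deployment would use a
    trained neural network served through the protocol."""
    def __init__(self, dim, rng):
        self.W = 0.01 * rng.standard_normal((dim, dim))

    def __call__(self, x):
        return x @ self.W

    def update(self, x, residual_force, lr=0.1):
        # One least-squares gradient step so the correction tracks the
        # residual (reference minus classical) force on new data.
        grad = x.T @ (self(x) - residual_force) / len(x)
        self.W -= lr * grad

def hybrid_force(x, correction):
    # Hybrid evaluation: classical baseline plus learned correction.
    return classical_force(x) + correction(x)

# Example: fit the correction to synthetic "reference" forces, then evaluate.
rng = np.random.default_rng(1)
x = rng.standard_normal((100, 3))
reference = -1.2 * x                           # pretend higher-accuracy forces
model = LinearCorrection(dim=3, rng=rng)
for _ in range(500):
    model.update(x, reference - classical_force(x))
forces = hybrid_force(x, model)
```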

Overview of High Level Design

  • At three months -> A basic set of theoretical algorithms running on Nunet, and comparisons with centralized servers.
  • At six months -> Demonstration of basic AI-incorporated simulations, plus a white paper including specifics about tokenomics and protocols. Additionally, updated metrics compared to centralized servers and other AI methods, leading to at least one publication in a peer-reviewed journal.
  • At seven to eight months -> A general prototype for decentralized applications serving industrial needs, including smart contract design and implementation.
  • After twelve months -> Partnerships and industry and academic collaborations; additional funding requests or token generation events to continue developing the objectives refined in the white paper; prototyping mining protocols and tokenomics.

Note that this funding covers work up to months seven to eight. We will then look for continued funding from future Project Catalyst cycles or alternate sources.

These milestones are somewhat difficult to define precisely because we will be developing our protocol as Nunet, specifically, matures. Thus, many of our timeline objectives will depend on Nunet's progress.

Miscellaneous hardware for local testing and development is not needed, as we currently have self-owned servers. Any additional resources will be obtained out-of-pocket to improve our chances of obtaining funding.

Function: Physics Protocol Engineering
Person-months: 8
People: 1
Salary: $3,500 per person-month
Total: $28,000

Justin Diamond - PhD Candidate - AI Researcher in Physics, Chemistry, Pharma, and Bioinformatics at academic institutions including the University of Michigan, Toyota Technological Institute at Chicago, Boston University, University of Luxembourg, and University of Basel.

Years of experience in academic settings studying machine learning related to chemistry, physics, bioinformatics, and drug development. For example, at the University of Michigan I worked on machine learning for protein structure prediction (working with Dr. Jinbo Xu, one of the inspirations for DeepMind's AlphaFold), and at the University of Luxembourg I worked on generative machine learning models for calculating thermodynamic properties of small molecules, as well as quantum mechanical and Molecular Dynamics simulations of the coronavirus spike protein in a highly parallelized and distributed fashion on an HPC.

https://www.linkedin.com/in/justin-sidney-diamond-881798193

https://github.com/blindcharzard

Floriane LeFloch - Founding Member of the AI startup lili.ai and Web3 Consultant

https://fr.linkedin.com/in/floriane-le-floch-678391a4

This Catalyst proposal will help Hetzerk prototype large-scale computations of physical systems using SingularityNet and Nunet, allowing for continued growth and progressive development with further funding.

Increased number of transactions on Cardano due to SingularityNET AI service calls.

Successful measures include:

  1. Metrics showing the scalability of Nunet (and future projections) for theoretical simulations
  2. Increased accuracy of AI-enhanced simulations due to heterogeneous data collection
  3. In due course, a prototype of neuro-symbolic integration using OpenCog 2.0 (among others), and other approaches such as recommendation systems to suggest optimal computations to perform when faced with uncertainty in results or models
  4. Completed white paper and tokenomics by month 5
  5. A peer-reviewed article comparing our approach to centralized solutions
  6. Projected cost savings for corporations working with us
  7. Lower computational fees and automated pipelines that decrease costs
  8. Increased partnerships with academic institutions; we plan to have one strong academic partnership by the end of 8 months
  9. By 2-3 years, 4-5 academic collaborations across fields ranging from quantum mechanics, drug discovery, and biomolecular interactions to artificial intelligence
  10. By the end of the year, a fundraising event via token distribution, venture capital, or government research grants

Our end goal is simple: we are building computational infrastructure on Cardano, in collaboration with Nunet and SingularityNet, to create the right incentive structures and a recursive cycle of development, roll-out, rewards, increasing efficiency, and a growing user base, yielding a decentralized platform of simulation-based solutions for academic and industrial problems such as AI-based drug development or simulations of biomolecules with quantum mechanical algorithms.

One of the key incentive mechanisms is to give academic groups, at least, free computational resources. We can do this by coupling the value spent on computation to the gain of actionable knowledge, data, and algorithms for some of the most computationally demanding problems, which are in dramatic need of better data, algorithms, knowledge, and efficient, connected solutions. Creating a net-profitable cycle will take time and a growing ecosystem of partnerships and solutions, but by building now we create the infrastructure that will, through decentralized protocols, naturally create more opportunities and participation in the future of beneficial technological and materials development.

These are medium to long term goals that we hope to accomplish in the next three to five years.

In contrast, at the end of the eight months of funding, we will have a foundational set of key algorithms to start generating valuable data. The current state of artificial intelligence and machine learning relies heavily on accurate data, and being able to obtain this data in a connected fashion to reliably train machine learning models is crucial to developing computational solutions in an automated way.

This gives us the bare minimum necessary to operate at the same practical level as High Performance Computing clusters, which are conventionally used to simulate systems of millions of atoms. This is made possible in collaboration with Nunet (a decentralized network for CPUs, GPUs, and storage), and we will develop our codebase hand in hand with them to ensure efficient solutions, making us one of the first use cases on the Nunet platform.

Entirely new proposal

