The Integration of AI and DePIN: The Rise and Development of Decentralized GPU Computing Networks

Since 2023, AI and DePIN have become popular trends in the Web3 space, with market capitalizations of roughly $30 billion and $23 billion respectively. This article explores the intersection of the two and examines the development of the related protocols.

In the AI technology stack, DePIN networks empower AI by supplying computing resources. Demand from large tech companies has created a GPU shortage, leaving other developers unable to obtain enough GPUs for their workloads. They are often forced to turn to centralized cloud providers, where inflexible long-term contracts for high-performance hardware make the arrangement inefficient.

DePIN offers a more flexible and cost-effective alternative by using token incentives to attract resource contributions. AI-focused DePIN networks aggregate GPUs from individual owners into a unified supply for users who need hardware, giving developers customizable, on-demand access while generating additional income for GPU owners.

AI DePIN Network Overview

Each of the projects below aims to build a GPU computing marketplace network. This section introduces their characteristics, market focus, and achievements.

Render is a pioneer of P2P GPU computing networks. It initially focused on graphics rendering for content creation and later expanded into AI computing tasks.

Features:

  • Founded by cloud graphics company OTOY
  • Major entertainment companies such as Paramount Pictures and PUBG use its GPU network.
  • Collaborates with Stability AI and Endeavor to integrate AI models into 3D content rendering workflows.
  • Has approved multiple compute clients, integrating GPUs from additional DePIN networks.

Akash positions itself as a "super cloud" alternative that supports storage, GPU, and CPU computing. Its container platform and Kubernetes-managed compute nodes let software be deployed seamlessly across environments (see the sketch after the feature list below).

Features:

  • Targets a wide range of computing tasks, from general-purpose computing to web hosting.
  • AkashML supports running over 15,000 models from Hugging Face.
  • Hosts applications such as Mistral AI's LLM chatbot and Stability AI's SDXL.
  • Supports metaverse, AI deployment, and federated learning platforms.
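
To make the container-based model concrete, below is a minimal sketch of how a GPU workload can be declared on a Kubernetes-managed node, assuming the official Kubernetes Python client. This is not Akash's own deployment (SDL) format, and the image name and namespace are hypothetical placeholders.

```python
# Minimal sketch: declaring a GPU container on a Kubernetes-managed node.
# Not Akash's SDL format; it only illustrates the container deployment model
# the network builds on. Image name and namespace are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # load credentials for the target cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-inference-demo"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="inference",
                image="example.org/llm-inference:latest",  # hypothetical image
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"},  # request one GPU
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

Akash abstracts this further with its own deployment definition and a reverse-auction marketplace for matching workloads to providers.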

io.net provides distributed GPU cloud clusters specifically for AI and ML use cases. It aggregates GPU resources from data centers, crypto miners, and other sources.

Features:

  • IO-SDK is compatible with frameworks such as PyTorch and TensorFlow and scales automatically with computational demand (see the sketch after this list).
  • Supports the creation of three different types of clusters, which can be launched within two minutes.
  • Collaborates with Render, Filecoin, Aethir, and others to integrate GPU resources.
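
As a rough illustration of the kind of task-level scaling such an SDK exposes, the sketch below uses the open-source Ray framework to dispatch work across GPU workers. It is not the actual IO-SDK API; the cluster address and workload are placeholders.

```python
# Generic sketch of scaling work across a GPU cluster with the open-source Ray
# framework. This is NOT the IO-SDK API; it only illustrates task-level
# parallelism. The cluster address and the "inference" body are placeholders.
import ray

ray.init(address="auto")  # connect to an existing cluster

@ray.remote(num_gpus=1)   # each task reserves one GPU on some worker
def run_inference(batch):
    # Placeholder for real model inference on a GPU worker.
    return [x * 2 for x in batch]

batches = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
futures = [run_inference.remote(b) for b in batches]  # dispatched in parallel
print(ray.get(futures))
```

The scheduler places each task on a worker with a free GPU, so adding machines to the cluster raises throughput without code changes.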

Gensyn provides GPU computing power focused on machine learning and deep learning. It uses techniques such as proof-of-learning and a graph-based pinpointing protocol to make verification more efficient.

Features:

  • A V100-equivalent GPU costs approximately $0.40 per hour, a significant saving.
  • Supports fine-tuning of pre-trained base models.
  • Plans to offer decentralized, globally owned foundation models.

Aethir focuses on compute-intensive fields such as AI, ML, and cloud gaming. The containers in its network serve as virtual endpoints for executing cloud applications, providing a low-latency experience.

Features:

  • Expanding into cloud phone services, launching a decentralized cloud smartphone with APhone.
  • Has established partnerships with large companies such as NVIDIA, Super Micro, and HPE.
  • Collaborates with multiple Web3 projects, including CARV and Magic Eden.

Phala Network serves as the execution layer for Web3 AI solutions. It addresses privacy concerns through a trusted execution environment (TEE), supporting AI agents controlled by on-chain smart contracts.

Features:

  • Acts as a co-processor protocol for verifiable computation, giving AI agents access to on-chain resources.
  • AI agent contracts can access top language models, such as OpenAI models and Llama, through Redpill.
  • Plans to add zk-proofs, multi-party computation, fully homomorphic encryption, and other multi-proof systems.
  • Plans to support the H100 and other TEE-enabled GPUs to boost computing power.

Project Comparison

| | Render | Akash | io.net | Gensyn | Aethir | Phala |
|---|---|---|---|---|---|---|
| Hardware | GPU & CPU | GPU & CPU | GPU & CPU | GPU | GPU | CPU |
| Business Focus | Graphic Rendering and AI | Cloud Computing, Rendering and AI | AI | AI | AI, Cloud Gaming and Telecommunications | On-chain AI Execution |
| AI Task Type | Inference | Both | Both | Training | Training | Execution |
| Work Pricing | Performance-Based Pricing | Reverse Auction | Market Pricing | Market Pricing | Bidding System | Equity Calculation |
| Blockchain | Solana | Cosmos | Solana | Gensyn | Arbitrum | Polkadot |
| Data Privacy | Encryption & Hashing | mTLS Authentication | Data Encryption | Secure Mapping | Encryption | TEE |
| Work Fees | 0.5-5% per job | 20% USDC, 4% AKT | 2% USDC, 0.25% reserve fee | Low fees | 20% per session | Proportional to the staked amount |
| Security | Render Proof | Proof of Stake | Proof of Work | Proof of Stake | Render Capability Proof | Inherited from Relay Chain |
| Completion Proof | - | - | Time Lock Proof | Learning Proof | Rendering Work Proof | TEE Proof |
| Quality Assurance | Dispute | - | - | Verifier and Reporter | Checker Node | Remote Proof |
| GPU Cluster | No | Yes | Yes | Yes | Yes | No |

Importance

Availability of Clusters and Parallel Computing

Distributed computing frameworks implement GPU clusters, enabling more efficient training and better scalability. Training complex AI models demands substantial computing power and often relies on distributed computing. Most of these projects have now integrated clusters for parallel computing: io.net has deployed more than 3,800 clusters; Render, although it does not support clusters, splits a single task across multiple nodes for simultaneous processing; and Phala supports clustering of CPU workers.
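
As a concrete example of how a cluster is used for parallel training, here is a minimal PyTorch DistributedDataParallel sketch with a toy model and synthetic data; it does not reflect any specific project's pipeline.

```python
# Minimal sketch of data-parallel training across a GPU cluster with PyTorch
# DistributedDataParallel. Launch with: torchrun --nproc_per_node=<gpus> train.py
# Model and data are toy placeholders.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")        # one process per GPU
    rank = int(os.environ["LOCAL_RANK"])
    device = f"cuda:{rank}"
    torch.cuda.set_device(rank)

    model = torch.nn.Linear(128, 1).to(device)     # toy model
    model = DDP(model, device_ids=[rank])
    optim = torch.optim.SGD(model.parameters(), lr=0.01)

    for _ in range(10):                            # toy training loop
        x = torch.randn(64, 128, device=device)
        y = torch.randn(64, 1, device=device)
        loss = torch.nn.functional.mse_loss(model(x), y)
        optim.zero_grad()
        loss.backward()                            # gradients sync across ranks
        optim.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Each process owns one GPU, and gradients are averaged across processes after every backward pass, which is what allows a pool of GPUs to train a single model.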

Data Privacy

AI model development requires large datasets, which carries a risk of exposing sensitive data. Most projects adopt some form of data encryption to protect privacy. io.net has partnered with Mind Network to launch fully homomorphic encryption (FHE), allowing data to be processed while it remains encrypted. Phala Network uses a trusted execution environment (TEE) to prevent external processes from accessing or modifying the data.
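
To illustrate the idea of computing on data while it stays encrypted, the toy example below implements a textbook Paillier scheme, which is only additively homomorphic and uses insecure toy parameters. It is not the FHE system built with Mind Network, nor Phala's TEE; it only shows the underlying principle.

```python
# Toy illustration of computing on encrypted data: textbook Paillier
# (additively homomorphic only) with tiny, insecure parameters.
import math
import random

p, q = 1789, 1861                      # toy primes; real keys use ~2048-bit primes
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)           # private key component
g = n + 1

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    def L(x: int) -> int:
        return (x - 1) // n
    mu = pow(L(pow(g, lam, n2)), -1, n)
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(42), encrypt(58)
c_sum = (c1 * c2) % n2                 # multiplying ciphertexts adds plaintexts
assert decrypt(c_sum) == 100           # the host never saw 42 or 58 in the clear
```

Real FHE schemes extend this idea so that arbitrary computation, not just addition, can be performed on ciphertexts.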

Proof of Completion and Quality Inspection

Gensyn and Aethir generate proofs after computation is completed, while io.net's proof shows that the rented GPU's performance was fully utilized. Both Gensyn and Aethir run quality checks on completed computations; Render recommends a dispute-resolution process. Once Phala's computation completes, a TEE proof is generated to confirm that the AI agent performed the required operations.
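
As a hypothetical illustration of how such a completion proof can work, the sketch below has a worker commit to hashes of intermediate checkpoints while a verifier re-executes one randomly sampled step. The step function and parameters are invented for the example and do not represent any project's actual protocol.

```python
# Hypothetical sketch of a "proof of completed work": the worker commits to
# checkpoint hashes; a verifier spot-checks one randomly sampled transition.
import hashlib
import random

def step(state: int) -> int:
    # Stand-in for one unit of deterministic compute (e.g., a training step).
    return (state * 6364136223846793005 + 1442695040888963407) % 2**64

def digest(state: int) -> str:
    return hashlib.sha256(str(state).encode()).hexdigest()

# Worker: run all steps and publish checkpoint hashes as the commitment.
states = [12345]
for _ in range(1000):
    states.append(step(states[-1]))
commitment = [digest(s) for s in states]

# Verifier: sample one transition i -> i+1 and re-execute it.
i = random.randrange(len(states) - 1)
assert digest(states[i]) == commitment[i]             # revealed state matches commitment
assert digest(step(states[i])) == commitment[i + 1]   # recomputed step matches too
```

Because any tampered checkpoint would change its hash, a dishonest worker risks being caught whenever the sampled step touches it.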

Hardware Statistics

| | Render | Akash | io.net | Gensyn | Aethir | Phala |
|---|---|---|---|---|---|---|
| Number of GPUs | 5,600 | 384 | 38,177 | - | 40,000+ | - |
| Number of CPUs | 114 | 14,672 | 5,433 | - | - | 30,000+ |
| H100/A100 Quantity | - | 157 | 2,330 | - | 2,000+ | - |
| H100 Cost/Hour | - | $1.46 | $1.19 | - | - | - |
| A100 Cost/Hour | - | $1.37 | $1.50 | $0.55 (estimated) | $0.33 (estimated) | - |

Demand for High-Performance GPUs

AI model training requires top-performing GPUs such as Nvidia's A100 and H100. H100 inference is roughly four times faster than the A100, making it the GPU of choice. Decentralized GPU marketplace providers must meet real market demand while offering lower prices than centralized alternatives. io.net and Aethir have each acquired more than 2,000 H100/A100 units, making them better suited to large-model computation.
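
A quick back-of-the-envelope calculation using only the figures above (the roughly 4x inference speedup and the hourly rates from the hardware table) shows why per-hour price alone is misleading: cost per unit of work is the more relevant metric.

```python
# Cost per unit of inference work, derived from the article's own figures.
a100_speed, h100_speed = 1.0, 4.0            # relative throughput (H100 ~4x A100)

rates = {                                     # $/hour from the hardware table
    "io.net A100": (1.50, a100_speed),
    "io.net H100": (1.19, h100_speed),
    "Akash A100":  (1.37, a100_speed),
    "Akash H100":  (1.46, h100_speed),
}

for name, (price, speed) in rates.items():
    print(f"{name}: ${price / speed:.2f} per A100-equivalent hour")
# An H100 at $1.19/h works out to roughly $0.30 per A100-equivalent hour of inference.
```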

Network-connected GPU clusters are cost-effective but memory-constrained, whereas NVLink-connected GPUs are best suited to LLMs with many parameters and large datasets, which demand high performance and intensive computation. Even so, decentralized GPU networks can provide strong computing power and scalability for distributed computing tasks, opening opportunities to build more AI and ML use cases.

Provision of Consumer-Grade GPUs and CPUs

CPUs also play an important role in AI model training; they are used for data preprocessing and memory resource management. Consumer-grade GPUs can handle fine-tuning of pre-trained models or small-scale training. Projects such as Render, Akash, and io.net also serve this market, carving out their own niches.
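
As one example of the kind of workload consumer hardware can handle, below is a minimal parameter-efficient fine-tuning sketch assuming the Hugging Face transformers and peft libraries; the model choice and hyperparameters are illustrative only.

```python
# Minimal sketch of parameter-efficient (LoRA) fine-tuning on a single
# consumer GPU. Model name and hyperparameters are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "gpt2"                          # small model that fits consumer VRAM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

lora = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["c_attn"],               # GPT-2's attention projection layer
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)          # only small adapter weights train
model.print_trainable_parameters()           # typically well under 1% of weights

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
batch = tokenizer(["DePIN networks pool idle GPUs."], return_tensors="pt").to(device)
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()                              # a single illustrative training step
```

Because only the adapter weights receive gradients, the memory footprint stays within what a consumer card can offer.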

Conclusion

The AI DePIN field is still emerging and faces challenges. Nevertheless, the number of tasks executed and the amount of hardware on these networks have grown significantly, underscoring demand for alternatives to Web2 cloud providers' hardware resources. As the AI market continues to grow, these decentralized GPU networks are positioned to play a key role in giving developers cost-effective computing alternatives.
