The Intersection of AI and DePIN: The Rise of Decentralized GPU Computing Networks
Since 2023, AI and DePIN have both become popular trends in Web3, with market capitalizations of roughly $30 billion and $23 billion respectively. This article explores their intersection and examines the development of the related protocols.
Within the AI technology stack, DePIN networks empower AI by supplying compute resources. The growth of large tech companies has created a GPU shortage, making it difficult for other developers to obtain enough GPUs for their own workloads. They are often forced to turn to centralized cloud providers, which is inefficient because it usually requires signing inflexible long-term contracts for high-performance hardware.
DePIN offers a more flexible and cost-effective alternative by using token incentives to reward resource contributions. DePIN networks in the AI space pool GPU resources from individual owners into a unified supply that serves users who need hardware. This gives developers customizable, on-demand access to compute while creating additional income for GPU owners.
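As a minimal illustrative sketch (all names are hypothetical and do not correspond to any specific protocol's API), the snippet below shows how such a marketplace might match a compute request to the cheapest suitable GPU offer:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical data types for illustration only; no real DePIN protocol API is used.
@dataclass
class GpuOffer:
    provider: str
    model: str              # e.g. "A100"
    price_per_hour: float   # in a payment token such as USDC
    available: bool = True

@dataclass
class ComputeRequest:
    developer: str
    model: str
    max_price_per_hour: float

def match_request(request: ComputeRequest, offers: list[GpuOffer]) -> Optional[GpuOffer]:
    """Pick the cheapest available offer of the requested GPU model within budget."""
    candidates = [
        o for o in offers
        if o.available and o.model == request.model and o.price_per_hour <= request.max_price_per_hour
    ]
    if not candidates:
        return None
    best = min(candidates, key=lambda o: o.price_per_hour)
    best.available = False  # lease the GPU to this request
    return best

# Usage example
offers = [
    GpuOffer("node-1", "A100", 1.50),
    GpuOffer("node-2", "A100", 1.37),
    GpuOffer("node-3", "H100", 1.19),
]
req = ComputeRequest("dev-1", "A100", 1.40)
print(match_request(req, offers))  # -> node-2 at $1.37/hour
```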
AI DePIN Network Overview
Each of these projects aims to build a GPU compute marketplace network. The following introduces each project's characteristics, market focus, and achievements.
Render is a pioneer of P2P GPU computing networks. It initially focused on graphics rendering for content creation and later expanded to AI compute tasks.
Akash positions itself as a "supercloud" alternative supporting storage, GPU, and CPU compute. By using a container platform and Kubernetes-managed compute nodes, it allows software to be deployed seamlessly across environments.
io.net provides distributed GPU cloud clusters purpose-built for AI and ML use cases, aggregating GPU resources from data centers, crypto miners, and other sources.
Gensyn provides GPU compute focused on machine learning and deep learning. It employs techniques such as proof-of-learning and a graph-based pinpoint protocol to make verification more efficient.
Aethir focuses on compute-intensive fields such as AI, ML, and cloud gaming. The containers in its network serve as virtual endpoints for executing cloud applications, providing a low-latency experience.
Phala Network serves as the execution layer for Web3 AI solutions. It addresses privacy concerns through a trusted execution environment (TEE) and supports AI agents controlled by on-chain smart contracts.
Project Comparison
| | Render | Akash | io.net | Gensyn | Aethir | Phala |
|---|---|---|---|---|---|---|
| Hardware | GPU & CPU | GPU & CPU | GPU & CPU | GPU | GPU | CPU |
| Business Focus | Graphics rendering and AI | Cloud computing, rendering, and AI | AI | AI | AI, cloud gaming, and telecom | On-chain AI execution |
| AI Task Type | Inference | Both | Both | Training | Training | Execution |
| Work Pricing | Performance-based pricing | Reverse auction | Market pricing | Market pricing | Bidding system | Stake-based calculation |
| Blockchain | Solana | Cosmos | Solana | Gensyn | Arbitrum | Polkadot |
| Data Privacy | Encryption & hashing | mTLS authentication | Data encryption | Secure mapping | Encryption | TEE |
| Work Fees | 0.5-5% per job | 20% USDC, 4% AKT | 2% USDC, 0.25% reserve fee | Low fees | 20% per session | Proportional to staked amount |
| Security | Proof of Render | Proof of Stake | Proof of Work | Proof of Stake | Proof of Rendering Capacity | Inherited from relay chain |
| Completion Proof | - | - | Time-lock proof | Proof of Learning | Proof of Rendering Work | TEE proof |
| Quality Assurance | Dispute resolution | - | - | Verifiers and reporters | Checker nodes | Remote attestation |
| GPU Cluster | No | Yes | Yes | Yes | Yes | No |
Importance
Availability of Cluster and Parallel Computing
Distributed computing frameworks enable GPU clustering, which delivers more efficient training and better scalability. Training complex AI models requires powerful compute and typically relies on distributed computing. Most of these projects have now integrated clusters for parallel computing; io.net, for example, has deployed more than 3,800 clusters. Render does not support clusters, but it splits a single task across multiple nodes to be processed simultaneously. Phala supports clustering of CPU workers.
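For context, here is a minimal sketch of the kind of data-parallel training that GPU clusters enable, using PyTorch's DistributedDataParallel. It is generic and not tied to any of these projects' SDKs:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def train():
    # A launcher such as torchrun sets RANK, WORLD_SIZE, and LOCAL_RANK for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 10).cuda(local_rank)
    ddp_model = DDP(model, device_ids=[local_rank])  # gradients are all-reduced across processes
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=1e-3)
    loss_fn = torch.nn.CrossEntropyLoss()

    for _ in range(100):
        # Dummy batch; a real job would shard a dataset with DistributedSampler.
        x = torch.randn(32, 1024, device=local_rank)
        y = torch.randint(0, 10, (32,), device=local_rank)
        optimizer.zero_grad()
        loss = loss_fn(ddp_model(x), y)
        loss.backward()   # DDP synchronizes gradients across the cluster here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    train()  # e.g. launched with: torchrun --nproc_per_node=NUM_GPUS this_script.py
```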
Data Privacy
AI model development relies on large datasets, which can expose sensitive data. Most projects adopt some form of data encryption to protect privacy. io.net has partnered with Mind Network to launch fully homomorphic encryption (FHE), allowing data to be processed while it remains encrypted. Phala Network introduces a trusted execution environment (TEE) to prevent external processes from accessing or modifying data.
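To illustrate what "processing data in an encrypted state" means, here is a toy Paillier-style additively homomorphic example in pure Python. The parameters are tiny and insecure, and it only demonstrates homomorphic addition, not the FHE schemes mentioned above:

```python
import math
import random

# Toy Paillier key pair with tiny, insecure primes -- for illustration only.
p, q = 293, 433
n = p * q
n_sq = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n_sq)), -1, n)   # modular inverse (Python 3.8+)

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n_sq)) * mu) % n

# Homomorphic property: multiplying ciphertexts adds the underlying plaintexts,
# so a worker can sum values it never sees in the clear.
c1, c2 = encrypt(17), encrypt(25)
assert decrypt((c1 * c2) % n_sq) == 42
print("encrypted sum decrypts to", decrypt((c1 * c2) % n_sq))
```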
Proof of Completion and Quality Inspection
Gensyn and Aethir generate proofs after computation is completed, while io.net's proof shows that the rented GPU performance has been fully utilized. Both Gensyn and Aethir run quality checks on completed computations; Render recommends a dispute-resolution process. Phala generates TEE proofs upon completion to ensure that AI agents performed the required operations.
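As a simplified illustration of the general idea behind completion proofs (not Gensyn's or Aethir's actual protocols), a worker might commit to checkpoint hashes during a job, while a verifier re-executes a randomly sampled step and checks the commitment:

```python
import hashlib
import random

def checkpoint_hash(step: int, weights: bytes) -> str:
    """Commit to the model state at a given training step."""
    return hashlib.sha256(step.to_bytes(8, "big") + weights).hexdigest()

def simulate_training_step(step: int, weights: bytes) -> bytes:
    # Stand-in for one deterministic training step (real protocols require deterministic replay).
    return hashlib.sha256(weights + step.to_bytes(8, "big")).digest()

# Worker side: run the job, keep the intermediate states, publish a commitment per checkpoint.
weights = b"\x00" * 32
states = {0: weights}
commitments = {}
for step in range(1, 11):
    weights = simulate_training_step(step, states[step - 1])
    states[step] = weights
    commitments[step] = checkpoint_hash(step, weights)

# Verifier side: sample one step, re-execute it from the previous state, compare commitments.
step = random.randint(1, 10)
recomputed = simulate_training_step(step, states[step - 1])
assert checkpoint_hash(step, recomputed) == commitments[step], "completion proof failed"
print(f"step {step} verified against the worker's commitment")
```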
Hardware Statistics
| | Render | Akash | io.net | Gensyn | Aethir | Phala |
|---|---|---|---|---|---|---|
| Number of GPUs | 5,600 | 384 | 38,177 | - | 40,000+ | - |
| Number of CPUs | 114 | 14,672 | 5,433 | - | - | 30,000+ |
| H100/A100 count | - | 157 | 2,330 | - | 2,000+ | - |
| H100 cost/hour | - | $1.46 | $1.19 | - | - | - |
| A100 cost/hour | - | $1.37 | $1.50 | $0.55 (estimated) | $0.33 (estimated) | - |
Requirements for High-Performance GPUs
AI model training calls for the best-performing GPUs, such as Nvidia's A100 and H100. The H100's inference performance is roughly four times that of the A100, making it the GPU of choice. Decentralized GPU marketplace providers must meet real market demand while offering lower prices. io.net and Aethir have each acquired more than 2,000 H100/A100 units, making them better suited to large-model computation.
Network-connected GPU clusters are cost-effective but memory-constrained; GPUs connected via NVLink are better suited to LLMs with many parameters and large datasets, which demand high performance and intensive computation. Even so, decentralized GPU networks can provide powerful compute and scalability for distributed workloads, opening opportunities to build more AI and ML use cases.
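For example, a node operator might check whether local GPUs meet a job's memory requirement before accepting it. The threshold below is a hypothetical value, and the check uses PyTorch's standard CUDA introspection:

```python
import torch

MIN_MEMORY_GB = 40  # hypothetical threshold, e.g. an A100 40GB-class card

def suitable_gpus(min_memory_gb: float = MIN_MEMORY_GB) -> list[int]:
    """Return indices of local GPUs with at least the requested memory."""
    if not torch.cuda.is_available():
        return []
    gpus = []
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        memory_gb = props.total_memory / 1024**3
        if memory_gb >= min_memory_gb:
            gpus.append(i)
            print(f"GPU {i}: {props.name}, {memory_gb:.1f} GB")
    return gpus

print(suitable_gpus())
```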
Providing Consumer-Grade GPUs/CPUs
CPUs also play an important role in AI model training, handling data preprocessing and memory management. Consumer-grade GPUs can be used for fine-tuning pre-trained models or for small-scale training. Projects such as Render, Akash, and io.net also serve this market, carving out their own niches.
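A hedged sketch of what this looks like in practice: fine-tuning a pre-trained model on a consumer-grade GPU by freezing the backbone and training only a small task head. This is generic PyTorch/torchvision code with dummy data standing in for a real dataset:

```python
import torch
import torch.nn as nn
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load a pre-trained backbone and freeze it so only a small head is trained,
# keeping memory and compute within consumer-GPU budgets.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 10)  # new task head (10 classes, hypothetical)
model = model.to(device)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch standing in for a real fine-tuning dataset.
x = torch.randn(16, 3, 224, 224, device=device)
y = torch.randint(0, 10, (16,), device=device)

for _ in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
print("fine-tuning step loss:", loss.item())
```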
Conclusion
The AI DePIN field is still relatively young and faces challenges. Nevertheless, the number of tasks executed on these networks and the amount of hardware connected to them have grown significantly, highlighting the demand for alternatives to Web2 cloud providers' hardware resources. As the artificial intelligence market continues to grow, these decentralized GPU networks are expected to play a key role in giving developers cost-effective computing alternatives.