How AI is pushing the evolution of DePIN services.

John Gleeson
August 13, 2024

Out of all the potential use cases for decentralized systems, DePIN (decentralized physical infrastructure network) services demonstrate the potential for large-scale adoption of the technology by traditional enterprise customers. DePIN services have been in the best position to capitalize on the rise of AI, and many have evolved rapidly in response to shifts in the market.

DePIN services use token rewards to incentivize the deployment of hardware-based networks and are typically geared toward providing more traditional infrastructure services. DePIN services are frequently at the top of the Web3 Index, a site that tracks actual customer demand and usage for decentralized services.

The most common DePIN networks are services like storage (Storj, Filecoin, Arweave), compute (Akash, Cudo), and networking (Helium), or services built on top of them like video streaming (Livepeer) or global mapping (Hivemapper). What they have in common is that they leverage the advantages of distributed systems for performance and security while achieving superior economics by aggregating existing but underutilized infrastructure.

DePIN services work because they can earn real revenue from outside the network, which funds the ongoing operational costs of the physical infrastructure. These networks also have a significant advantage in startup costs over traditional tech, since they do not have the same capital-intensive up-front costs to build data centers, purchase hardware and software, or support and maintain the environments.

Traditional infrastructure tech has also been impacted by both inflation and higher interest rates, which significantly squeeze margins in a capex-intensive industry. DePIN can leverage its low startup and ongoing operating costs to compete aggressively and grow rapidly. Infrastructure services, and especially storage services, tend to be slow to get started because building customer trust takes time. Once established, growth tends to be exponential because the services are sticky: they are mission critical, and the cost and risk of switching are high.

How DePIN is fueling AI growth.

The rapid adoption of AI technologies has created an unprecedented need for powerful computing resources, and this trend has created a unique growth opportunity for many DePIN services. While some DePIN services like Livepeer and Akash were effectively sitting on a mountain of GPU resources at a time when those resources were in extremely short supply, other services like Storj discovered that their distributed storage architecture was ideal for storing data for AI model training or inference in conjunction with distributed high-performance compute.

Many DePIN services leaned into their strengths to capitalize on the sudden demand for resources driven by AI use cases. A large number of providers established partnerships or built integrations to provide more comprehensive solutions for customers’ AI workloads. AI DePIN compute providers have evolved in one of two directions: attempting to directly offer decentralized compute, or creating decentralized marketplaces for centralized compute.

The direct decentralized compute providers face the primary challenges of untrusted access to data and verifiability of outcomes. These services need to solve both the verifiability of the actual infrastructure offered (is this really an H100?) and the security of the workload being run. They have attempted to solve these challenges with trusted execution environments or by running the same workload multiple times and comparing the outcomes to verify the results. The jury is still out on whether this approach works in heterogeneous compute environments, or whether the economics can ever work when every compute job is effectively run twice.
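
To make the redundant-execution idea concrete, the sketch below dispatches the same job to two providers and accepts the result only if their outputs match. This is a minimal illustration under stated assumptions, not any network's actual protocol: the provider names are hypothetical, and run_on_provider is a stub that simulates a deterministic remote job so the example is runnable. It also makes the article's trade-off visible, since the job really does run on every provider.

```python
import hashlib

def run_on_provider(provider: str, job: dict) -> bytes:
    # Stand-in for dispatching `job` to a remote GPU provider and collecting its output.
    # Simulated here as a deterministic function of the job so the example runs locally.
    return hashlib.sha256(f"{job['model']}:{job['input']}".encode()).digest()

def verify_by_redundancy(job: dict, providers: list[str]) -> bytes:
    # Run the same job on every provider and record a digest of each result.
    results = {p: run_on_provider(p, job) for p in providers}
    digests = {p: hashlib.sha256(out).hexdigest() for p, out in results.items()}

    # Accept the output only if all providers agree; otherwise reject it outright.
    if len(set(digests.values())) != 1:
        raise RuntimeError(f"Providers disagree, result rejected: {digests}")
    return next(iter(results.values()))

result = verify_by_redundancy(
    {"model": "resnet50-inference", "input": "batch-0001"},
    ["provider-a", "provider-b"],  # hypothetical provider names
)
print("verified result digest:", hashlib.sha256(result).hexdigest())
```

Note that this only works for workloads with deterministic outputs, and the redundancy is exactly why every compute job is effectively paid for twice.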

The decentralized marketplace approach has the advantage of presenting infrastructure to customers in a manner that is familiar and consistent with traditional cloud services. These marketplaces can also address market segments that need immediate, short-term access to high-end infrastructure. Scarcity of resources, and especially limited access to those resources on shorter terms, has driven adoption, but part of the challenge is that GPUs alone are only part of an HPC solution. Data portability, the ability to move data sets for model training quickly and at low cost to compute resources that could be anywhere, has become an emerging challenge.

Rapid advances in DePIN cloud storage.

Decentralized storage services are also evolving to meet those challenges. Providers like Filecoin, which operates a decentralized marketplace for centralized storage, have started evaluating whether the compute resources used in sealing (the preparation of data for storage) can also provide compute colocated with that data. Like the decentralized marketplace approach for compute, this approach is promising and is driving innovation, but there is not yet a production service available.

Another DePIN storage service, Storj, operates a decentralized and distributed storage network with an S3-compatible API bridging Web2 and Web3. By providing an interface that is familiar and consistent with traditional cloud services, Storj was among the first to achieve mainstream and enterprise adoption beyond Web3. Storj provides a storage service that is both low cost and consistently performant worldwide from a single upload. This storage model worked well with distributed compute marketplaces. As Storj recognized the growth opportunity from the rise of AI, it evolved its network by further optimizing for performance and focusing on integration with compute marketplace leaders.
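
Because the API is S3 compatible, existing S3 tooling works without modification. Here is a minimal sketch, assuming you have generated S3 credentials for Storj's hosted gateway; the bucket name and file name are illustrative placeholders.

```python
# Uploading a training data set to Storj through its S3-compatible gateway using boto3.
# Replace the placeholder credentials with the access key and secret issued by Storj.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://gateway.storjshare.io",  # Storj's hosted S3-compatible gateway
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Standard S3 calls work unchanged: create a bucket, upload a data set, list it back.
s3.create_bucket(Bucket="training-data")
s3.upload_file("dataset.tar", "training-data", "dataset.tar")  # assumes a local dataset.tar

for obj in s3.list_objects_v2(Bucket="training-data").get("Contents", []):
    print(obj["Key"], obj["Size"])
```

Because the calls are plain S3, the same code points at AWS or any other S3-compatible store by changing only the endpoint and credentials, which is what makes the data portable between storage and compute providers.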

Storj worked primarily with the GPU marketplace Valdi, a distributed cloud with compute providers in over a dozen countries and over 35,000 GPUs available for deployment. This includes some of the most in-demand chips on the market: over 16,000 H100s and GH200s, thousands of A100s and A6000s, and many more. Valdi already provides GPUs to many decentralized compute networks today, and it offered persistent storage volumes for AI workloads that were a private-label version of Storj.

Storj’s acquisition of Valdi creates the largest DePIN service for storage and compute.

Storj has invested further in this AI-driven evolution by recently announcing the acquisition of Valdi. The company now expands beyond storage into what is becoming a distributed cloud platform built by leveraging spare storage capacity and GPUs. Just as Google pioneered the concept of building a cloud on commodity hardware, Storj is building a cloud on distributed hardware, but with a modern, trustless design and architecture. With Valdi, Storj is now able to offer storage plus compute (GPUs) for AI and a wide range of other use cases.

As AI continues to grow and evolve in mainstream tech, we can expect DePIN services to further evolve to capture an increased share of AI-related workloads. For the present, the AI revolution has provided a real use case that puts many DePIN services on the path to becoming enterprise-grade distributed solutions that can compete effectively with the hyperscalers.

