Comprehensive interpretation of the computing DePIN track ecosystem


Original author: Paul Timofeev

Original translation: TechFlow


Key Takeaways

  • Computing resources have become increasingly sought after with the rise of machine learning and deep learning for generative AI development, both of which involve large compute-intensive workloads. However, as large companies and governments accumulate these resources, startups and independent developers now face a shortage of GPUs in the market, making these resources prohibitively expensive and/or inaccessible.

  • Compute DePINs enable the creation of a decentralized marketplace for computing resources such as GPUs by allowing anyone in the world to offer their idle supply in exchange for monetary rewards. This is intended to give underserved GPU consumers access to new supply channels, so they can obtain the development resources their workloads require at reduced cost and overhead.

  • Compute DePINs still face many economic and technical challenges in competing with traditional centralized service providers; some of these will resolve themselves over time, while others will require new solutions and optimizations.

Computing is the new oil

Since the Industrial Revolution, technology has propelled humanity forward at an unprecedented pace, impacting or completely transforming nearly every aspect of daily life. Computers eventually emerged as the culmination of the collective efforts of researchers, academics, and computer engineers. Originally designed to solve large-scale arithmetic tasks for advanced military operations, computers have evolved into the backbone of modern life.

As the impact of computers on humanity continues to grow at an unprecedented rate, demand for these machines and the resources that drive them is also growing, outstripping the available supply. This, in turn, has created market dynamics in which most developers and businesses are unable to gain access to key resources, leaving the development of machine learning and generative AI, one of the most transformative technologies today, in the hands of a small number of well-funded players.

At the same time, the large supply of idle computing resources presents a lucrative opportunity to help alleviate the imbalance between computing supply and demand, underscoring the need for coordination mechanisms between both parties. As such, we believe that decentralized systems powered by blockchain technology and digital assets are essential for the broader, more democratic, and responsible development of generative AI products and services.

Computing resources

Computing can be defined as any activity, application, or workload in which a computer produces a well-defined output based on a given input. Ultimately, it refers to the computational and processing power of computers, which is the core utility of these machines, driving many parts of the modern world and generating a whopping $1.1 trillion in revenue in the past year alone.

Computing resources refer to the various hardware and software components that make computing and processing possible. As the number of applications and functions they enable continues to grow, these components are becoming increasingly important and increasingly present in people’s daily lives. This has led to a race among national powers and businesses to accumulate as many of these resources as possible as a means of survival. This is reflected in the market performance of companies that provide these resources (e.g., Nvidia, whose market value has increased by more than 3000% in the past 5 years).

GPU

GPUs are one of the most important resources in modern high-performance computing . The core function of a GPU is to serve as a specialized circuit that accelerates computer graphics workloads through parallel processing. Originally serving the gaming and PC industries, GPUs have evolved to serve many of the emerging technologies that are shaping the future of our world (e.g., consoles and PCs, mobile devices, cloud computing, IoT). However, the demand for these resources has been particularly exacerbated by the rise of machine learning and artificial intelligence – by performing calculations in parallel, GPUs accelerate ML and AI operations, thereby increasing the processing power and capabilities of the resulting technology.

The rise of AI

At its core, AI is about enabling computers and machines to simulate human intelligence and problem-solving abilities . AI models, as neural networks, are made up of many different pieces of data. The model requires processing power to identify and learn the relationships between these pieces of data, and then reference these relationships when creating outputs based on given inputs.

Despite popular belief, AI development and production is not new; in 1958, Frank Rosenblatt built the Mark I Perceptron, the first neural network-based computer that “learned” through trial and error. Additionally, much of the academic research that laid the foundation for the development of AI as we know it today was published in the late 1990s and early 2000s, and the industry has continued to grow since then.

Beyond R&D efforts, “narrow” AI models are already at work in a variety of powerful applications in use today. Examples include virtual assistants like Apple’s Siri and Amazon’s Alexa, social media algorithms, customized product recommendations, and more. Notably, the rise of deep learning has transformed the development of generative AI. Deep learning algorithms utilize larger, or “deeper,” neural networks than traditional machine learning applications as a more scalable and more versatile alternative. Generative AI models “encode a simplified representation of their training data and refer to it to emit new outputs that are similar, but not identical.”

Deep learning is enabling developers to scale generative AI models to images, speech, and other complex data types, and milestone apps like ChatGPT, which has seen some of the fastest user growth in modern times, are still just early iterations of what’s possible with generative AI and deep learning.

With this in mind, it should come as no surprise that generative AI development involves multiple computationally intensive workloads that require significant amounts of processing power.

Accordingly, the development of AI applications is constrained by three key compute-intensive workloads:

  • Training – Models must process and analyze large datasets to learn how to respond to given inputs.

  • Tuning – The model goes through a series of iterative processes where various hyperparameters are tuned and optimized to improve performance and quality.

  • Simulation – Before deployment, some models, such as reinforcement learning algorithms, go through a series of simulations for testing.
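As a toy illustration of these three workloads, the sketch below trains a one-parameter model, tunes its learning rate, and then evaluates it on held-out inputs before “deployment.” The model, synthetic data, and hyperparameter grid are all invented for illustration and bear no resemblance to a real generative AI pipeline.

```python
def train(w, data, lr):
    """One epoch of gradient descent on a one-parameter model y = w * x."""
    for x, y in data:
        w -= lr * 2 * (w * x - y) * x  # gradient of the squared error
    return w

def loss(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# Training: learn w ~= 3 from synthetic pairs (x, 3x).
data = [(x, 3 * x) for x in range(1, 6)]

# Tuning: sweep the learning-rate hyperparameter and keep the best.
best_lr, best_loss = None, float("inf")
for lr in (0.001, 0.01, 0.05):
    candidate = loss(train(0.0, data, lr), data)
    if candidate < best_loss:
        best_lr, best_loss = lr, candidate

# Simulation: evaluate the tuned model on unseen inputs before deployment.
model = train(0.0, data, best_lr)
holdout_loss = loss(model, [(7, 21), (8, 24)])
```

Even at this scale the pattern is visible: the tuning sweep multiplies the training cost by the number of hyperparameter candidates, which is precisely why these workloads dominate GPU demand.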

Computational crunch: Demand outstrips supply

Over the past few decades, many technological advances have driven an unprecedented surge in demand for computing and processing power. As a result, today the demand for computing resources such as GPUs far outstrips the available supply, creating a bottleneck in AI development that will only continue to grow without effective solutions.

The broader constraints on supply are further compounded by the large number of companies purchasing GPUs in excess of their actual needs, both as a competitive advantage and as a means of survival in the modern global economy. Compute providers often employ contract structures that require long-term capital commitments, granting customers supply well beyond what their needs require.

Epoch’s research shows that the overall number of compute-intensive AI models published is growing rapidly, suggesting that the resource requirements driving these technologies will continue to grow rapidly.


As the complexity of AI models continues to grow, so too will the computing and processing power requirements of application developers. In turn, the performance of GPUs and their subsequent availability will play an increasingly important role. This is already starting to happen, as soaring demand for high-end GPUs, such as those produced by Nvidia, has led some to hail GPUs as the “rare earth metals” or “gold” of the AI industry.

The rapid commercialization of AI has the potential to hand control to a handful of tech giants, similar to today’s social media industry, raising concerns about the ethical foundations of these models. A notable example is the recent controversy surrounding Google Gemini. While its many bizarre responses to various prompts did not pose an actual danger at the time, the incident demonstrated the inherent risks of a handful of companies dominating and controlling AI development.

Today’s tech startups face increasing challenges in acquiring computing resources to power their AI models. These applications perform many computationally intensive processes before the model is deployed. For smaller businesses, amassing a large number of GPUs is a largely unsustainable endeavor, and while traditional cloud computing services like AWS or Google Cloud offer a seamless and convenient developer experience, their limited capacity ultimately results in high costs that make them unaffordable for many developers. Ultimately, not everyone can come up with the $7 trillion to cover their hardware costs.

So why does this matter?

Nvidia once estimated that there are more than 40K companies around the world using GPUs for AI and accelerated computing, with a developer community of more than 4 million people. Looking ahead, the global AI market is expected to grow from $515 billion in 2023 to $2.74 trillion in 2032, with an average annual growth rate of 20.4%. At the same time, the GPU market is expected to reach $400 billion by 2032, with an average annual growth rate of 25%.

However, the growing imbalance between supply and demand of computing resources in the wake of the AI revolution could create a rather dystopian future where a handful of well-funded giants dominate the development of transformative technologies. Therefore, we believe that all roads lead to decentralized alternative solutions to help bridge the gap between the needs of AI developers and the available resources.

The role of DePIN

What are DePINs?

DePIN is a term coined by the Messari research team that stands for Decentralized Physical Infrastructure Network. Specifically, decentralized means that there is no single entity extracting rents and restricting access; physical infrastructure refers to the “real life” physical resources that are utilized; and network refers to a group of participants working in coordination to achieve a predetermined goal or set of goals. Today, the total market value of DePINs is approximately $28.3 billion.

At the core of DePINs is a global network of nodes that connect physical infrastructure resources with the blockchain in order to create a decentralized marketplace that connects buyers and suppliers of resources, where anyone can become a supplier and be paid for their services and contribution of value to the network. In this case, the central intermediary that restricts access to the network through various legal and regulatory means and service fees is replaced by a decentralized protocol composed of smart contracts and code, which is governed by its corresponding token holders.

The value of DePINs is that they provide a decentralized, accessible, low-cost, and scalable alternative to traditional resource networks and service providers. They enable decentralized markets to serve specific end goals; the cost of goods and services is determined by market dynamics, and anyone can participate at any time, resulting in naturally lower unit costs due to the increase in the number of suppliers and the minimization of profit margins.

Using blockchain enables DePINs to build crypto-economic incentive systems that help ensure network participants are appropriately compensated for their services, turning key value providers into stakeholders. However, it is important to note that network effects, which are achieved by transforming small personal networks into larger, more productive systems, are key to realizing many of the benefits of DePINs. In addition, while token rewards have proven to be a powerful tool for network bootstrapping mechanisms, building sustainable incentives to aid user retention and long-term adoption remains a key challenge in the broader DePIN space.

How do DePINs work?

To better understand the value of DePINs in enabling a decentralized computing market, it is important to recognize the different structural components involved and how they work together to form a decentralized resource network. Let’s consider the structure and participants of a DePIN.

Protocol

A decentralized protocol, a set of smart contracts built on top of an underlying base layer blockchain network, is used to facilitate trustless interactions between network participants. Ideally, the protocol should be governed by a diverse set of stakeholders who are actively committed to contributing to the long-term success of the network. These stakeholders then use their share of the protocol token to vote on proposed changes and developments to the DePIN. Given that successfully coordinating a distributed network is a huge challenge in itself, the core team typically retains the power to implement these changes at first, and then transfers power to a decentralized autonomous organization (DAO).
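The token-weighted voting described here can be sketched in a few lines. The holder names, stakes, and quorum threshold below are invented for illustration; real protocol governance adds details (vote escrow, delegation, timelocks) omitted here.

```python
def tally(votes, quorum=500):
    """Token-weighted governance tally.

    votes maps each holder to a (choice, token_stake) pair; the proposal is
    only decided if total participating stake meets the (invented) quorum.
    """
    totals = {}
    for choice, stake in votes.values():
        totals[choice] = totals.get(choice, 0) + stake
    turnout = sum(totals.values())
    if turnout < quorum:
        return None, totals, turnout  # quorum not reached, no decision
    return max(totals, key=totals.get), totals, turnout

votes = {"alice": ("yes", 400), "bob": ("no", 150), "carol": ("yes", 50)}
winner, totals, turnout = tally(votes)
```

Note that weighting by stake means a single large holder can dominate outcomes, which is exactly the centralization risk DAOs try to mitigate with quorums and diverse token distribution.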

Network Participants

The end users of a resource network are its most valuable participants and can be categorized according to their function.

  • Supplier: An individual or entity that provides resources to the network in exchange for monetary rewards paid in the DePIN’s native token. Suppliers are “connected” to the network through the blockchain-native protocol, which may enforce an on-chain whitelisting process or be fully permissionless. By receiving tokens, suppliers gain a stake in the network, similar to stakeholders in an equity ownership context, enabling them to vote on various proposals and developments of the network, such as proposals that they believe will help drive demand and network value, thereby creating higher token prices over time. Of course, suppliers may also utilize DePINs as a form of passive income and simply sell the tokens they receive.

  • Consumers: These are individuals or entities actively seeking the resources provided by a DePIN, such as AI startups seeking GPUs, representing the demand side of the economic equation. Consumers are attracted to a DePIN when it offers real advantages over traditional alternatives (such as lower costs and overhead requirements), thus representing organic demand for the network. DePINs typically require consumers to pay for resources in their native tokens in order to generate value and maintain a stable cash flow.
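This two-sided structure can be modeled as a simple price-ordered matching of supply and demand. The supplier names, prices, and greedy cheapest-first fill rule below are illustrative assumptions, not how any specific DePIN actually matches orders.

```python
from dataclasses import dataclass

@dataclass
class Supplier:
    name: str
    gpus: int
    ask: float  # asking price per GPU-hour, in the network's native token

@dataclass
class Consumer:
    name: str
    gpus_needed: int
    max_price: float  # highest per-GPU-hour price the consumer accepts

def match(suppliers, consumer):
    """Greedily fill the request from the cheapest acceptable suppliers."""
    allocation, remaining = [], consumer.gpus_needed
    for s in sorted(suppliers, key=lambda s: s.ask):
        if remaining == 0 or s.ask > consumer.max_price:
            break  # sorted by price, so no cheaper supply remains
        take = min(s.gpus, remaining)
        allocation.append((s.name, take, s.ask))
        remaining -= take
    return allocation, remaining

suppliers = [Supplier("hobbyist", 2, 0.40),
             Supplier("data_center", 50, 0.90),
             Supplier("miner", 8, 0.55)]
alloc, unfilled = match(suppliers, Consumer("ai_startup", 6, 0.80))
```

The cheapest-first ordering is what produces the competitive supplier dynamic the flywheel discussion below relies on: expensive supply only clears when cheaper supply is exhausted.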

Resources

DePINs can serve different markets and adopt different business models to allocate resources. Blockworks provides a good framework: custom hardware DePINs, which provide dedicated proprietary hardware for vendors to distribute; and commodity hardware DePINs, which allow the distribution of existing idle resources, including but not limited to computing, storage, and bandwidth.

Economic Model

In an ideally run DePIN, value comes from revenue that consumers pay for supplier resources. Continued demand for the network means continued demand for the native token, which aligns with the economic incentives of suppliers and token holders. Generating sustainable organic demand in the early stages is a challenge for most startups, which is why DePINs offer inflationary token incentives to attract early suppliers and bootstrap the network’s supply as a means of generating demand and, in turn, more organic supply. This is similar to how venture capital firms subsidized passenger fares in Uber’s early stages to bootstrap the initial customer base, further attract drivers, and enhance its network effects.

DePINs need to manage token incentives as strategically as possible, as they play a key role in the overall success of the network. When demand and network revenue rise, token issuance should be reduced. Conversely, when demand and revenue fall, token issuance should be used again to incentivize supply.
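One way to picture this issuance policy is as a rule that scales per-epoch emissions inversely with organic revenue. The linear rule, 20% floor, and revenue target below are illustrative inventions, not parameters of any live protocol.

```python
def adjust_emission(base_emission, revenue, target_revenue):
    """Scale per-epoch token issuance inversely with organic revenue.

    When network revenue reaches the target, inflationary rewards taper
    toward a floor; when revenue falls short, issuance rises again to keep
    suppliers engaged. The linear schedule and 20% floor are toy values.
    """
    ratio = min(revenue / target_revenue, 1.0)
    return base_emission * (1.0 - 0.8 * ratio)

no_demand = adjust_emission(1000, 0, 500)      # full subsidy
half_demand = adjust_emission(1000, 250, 500)  # reduced issuance
full_demand = adjust_emission(1000, 500, 500)  # near the 20% floor
```

The floor matters: cutting issuance to zero the moment revenue hits target would make supplier income swing violently with demand, which is exactly the instability the strategic management described above tries to avoid.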

To further illustrate what a successful DePIN network looks like, consider the “DePIN Flywheel,” a positive feedback loop that guides DePIN growth. Here’s a summary:

  • DePIN distributes inflationary token rewards to incentivize providers to provide resources to the network and establish a base supply level available for consumption.

  • Assuming the number of suppliers starts to grow, a competitive dynamic begins to form in the network, improving the overall quality of the goods and services provided by the network until it provides services that are superior to existing market solutions, thereby gaining a competitive advantage. This means that decentralized systems surpass traditional centralized service providers, which is no easy feat.

  • Organic demand for DePIN begins to build, providing legitimate cash flow to suppliers. This presents a compelling opportunity for investors and suppliers to continue to drive demand for the network and therefore token price.

  • Growth in token price increases revenue for suppliers, attracting more suppliers and restarting the flywheel.


This framework offers a compelling growth strategy, although it is important to note that it is largely theoretical and assumes that the resources provided by the network are continually competitively attractive.

Compute DePINs

The decentralized computing market is part of a broader movement, the “sharing economy,” a peer-to-peer economic system based on consumers sharing goods and services directly with other consumers through online platforms. This model, pioneered by companies like eBay and dominated today by companies like Airbnb and Uber, is ultimately poised for disruption as the next generation of transformative technologies sweeps across global markets. Valued at $150 billion in 2023 and expected to grow to nearly $800 billion by 2031 , the sharing economy demonstrates a broader trend in consumer behavior that we believe DePINs will both benefit from and play a key role in.


Fundamentals

Compute DePINs are peer-to-peer networks that facilitate the allocation of computing resources by connecting suppliers and buyers through decentralized marketplaces. A key differentiator of these networks is their focus on commodity hardware resources, which are already in the hands of many people today. As we have discussed, the advent of deep learning and generative AI has led to a surge in demand for processing power due to their resource-intensive workloads, creating bottlenecks in accessing critical resources for AI development. Simply put, decentralized compute marketplaces aim to alleviate these bottlenecks by creating a new supply stream – one that spans the globe and that anyone can participate in.

In Computing DePINs, any individual or entity can lend out their idle resources at any time and receive appropriate compensation. At the same time, any individual or entity can obtain necessary resources from the global permissionless network at a lower cost and with greater flexibility than existing market products. Therefore, we can describe the participants in Computing DePINs through a simple economic framework:

  • Supplier: An individual or entity that owns computing resources and is willing to lend or sell them in exchange for compensation.

  • Demander: An individual or entity that needs computing resources and is willing to pay for them.

Key Benefits of Computing DePINs

Compute DePINs offer a number of advantages that make them an attractive alternative to centralized service providers and marketplaces. First, enabling permissionless, cross-border market participation unlocks a new supply stream, increasing the amount of critical resources needed for compute-intensive workloads. Compute DePINs focus on hardware resources that most people already own—anyone with a gaming PC already has a GPU that can be rented out. This expands the range of developers and teams that can participate in building the next generation of goods and services, benefiting more people around the world.

Looking further, the blockchain infrastructure that supports DePINs provides an efficient and scalable settlement rail for facilitating the micropayments required for peer-to-peer transactions. Crypto-native financial assets (tokens) provide a shared unit of value that is used by demand-side participants to pay suppliers, aligning economic incentives through a distribution mechanism consistent with today’s increasingly globalized economy. Referring to the DePIN flywheel we built earlier, strategically managing economic incentives is very beneficial to increasing a DePIN’s network effects (on both the supply and demand sides), which in turn increases competition among suppliers. This dynamic reduces unit costs while improving service quality, creating a sustainable competitive advantage for the DePIN, from which suppliers can benefit as token holders and key value providers.

DePINs are similar to cloud computing service providers in the flexible user experience they aim to provide, where resources can be accessed and paid for on demand. Referring to Grandview Research’s forecast, the global cloud computing market size is expected to grow at a compound annual growth rate of 21.2% to reach more than $2.4 trillion by 2030, demonstrating the feasibility of such business models in the context of future growth in demand for computing resources. Modern cloud computing platforms utilize central servers to handle all communications between client devices and servers, creating a single point of failure in their operations. Building on top of blockchain, however, allows DePINs to provide greater censorship resistance and resilience than traditional service providers. Whereas an attack on a single organization or entity (such as a central cloud service provider) could compromise the entire underlying resource network, DePINs are designed to resist such incidents through their distributed nature. First, the blockchain itself is a globally distributed network of dedicated nodes designed to resist centralized network authority. In addition, compute DePINs also allow for permissionless network participation, bypassing legal and regulatory barriers. Depending on the nature of the token distribution, DePINs can adopt a fair voting process for proposed changes and developments to the protocol, eliminating the possibility of a single entity suddenly shutting down the entire network.

The current state of computational DePINs

Render Network

Render Network is a compute DePIN that connects buyers and sellers of GPUs through a decentralized computing marketplace, with transactions conducted through its native token. Render’s GPU marketplace involves two key parties – creators seeking access to processing power, and node operators who rent out idle GPUs to creators in exchange for compensation in native Render tokens. Node operators are ranked based on a reputation system, and creators can choose GPUs from a multi-tier pricing system. The Proof-of-Render (POR) consensus algorithm coordinates operations, with node operators committing their computing resources (GPUs) to process tasks, i.e., graphics rendering work. Upon completion of a task, the POR algorithm updates the node operator’s status, including a change in reputation score based on the quality of the task. Render’s blockchain infrastructure facilitates payment for work, providing a transparent and efficient settlement rail for suppliers and buyers to transact through the network token.
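A reputation mechanic of this kind can be sketched as a running quality score mapped to pricing tiers. The smoothing factor, tier cutoffs, and sample scores below are invented for illustration and are not Render’s actual parameters.

```python
def update_reputation(score, task_quality, alpha=0.2):
    """Exponentially weighted average of per-task quality, both in [0, 1]."""
    return (1 - alpha) * score + alpha * task_quality

def pricing_tier(score):
    """Map a reputation score to one of three hypothetical pricing tiers."""
    if score >= 0.9:
        return "tier-1"
    if score >= 0.7:
        return "tier-2"
    return "tier-3"

# A well-regarded node completes one good render, then two poor ones.
score = 0.95
for quality in (1.0, 0.2, 0.3):
    score = update_reputation(score, quality)
```

The exponential weighting means recent task quality moves the score faster than old history, so a node that starts delivering poor renders slides down the tiers within a few tasks rather than coasting on past work.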


The Render Network was originally conceived by Jules Urbach in 2009. The network went live on Ethereum ( RNDR ) in September 2020 and migrated to Solana ( RENDER ) about three years later to improve network performance and reduce operating costs.

As of this writing, the Render Network has processed up to 33 million tasks (measured in rendered frames) and has grown to 5,600 total nodes since its inception. Approximately 60k RENDER have been burned, a process that occurs during the distribution of work credits to node operators.

IO Net

Io Net is launching a decentralized GPU network on top of Solana as a coordination layer between a large number of idle computing resources and the individuals and entities that need the processing power these resources provide. The unique selling point of Io Net is that rather than competing directly with other DePINs on the market, it aggregates GPUs from a variety of sources (including data centers, miners, and other DePINs such as Render Network and Filecoin) while leveraging a proprietary DePIN, the Internet-of-GPUs (IoG), to coordinate operations and align the incentives of market participants. Io Net customers can customize their workload clusters on IO Cloud by selecting processor type, location, communication speed, compliance, and service time. Conversely, anyone with a supported GPU model (12 GB RAM, 256 GB SSD) can participate as an IO Worker, lending their idle computing resources to the network. While service payments are currently settled in fiat and USDC, the network will soon support payments in the native $IO token as well. The price of resources is determined by their supply and demand as well as various GPU specifications and configuration algorithms. The ultimate goal of Io Net is to become the GPU marketplace of choice by offering lower costs and higher quality of service than modern cloud service providers.

The multi-layer IO architecture can be mapped as follows:

  • UI layer – consists of the public website, the client area, and the Workers area.

  • Security Layer – This layer consists of firewalls for network protection, authentication services for user verification, and logging services for tracking activities.

  • API Layer – This layer acts as a communication layer and consists of a public API (for the website), a private API (for Workers), and an internal API (for cluster management, analytics, and monitoring reports).

  • Backend Layer – The backend layer manages Workers, cluster/GPU operations, customer interactions, billing and usage monitoring, analytics, and autoscaling.

  • Database Layer – This layer is the data repository of the system and uses primary storage (for structured data) and cache (for frequently accessed temporary data).

  • Message Broker and Task Layer – This layer facilitates asynchronous communication and task management.

  • Infrastructure layer – This layer contains GPU pools, orchestration tools, and manages task deployment.
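The cluster customization step described for IO Cloud can be pictured as a filter over the worker pool. The worker records and field names below are invented stand-ins for the selection dimensions the text lists (processor type, location, communication speed, compliance), not Io Net’s actual data model.

```python
# Hypothetical worker pool; fields mirror the selection dimensions above.
workers = [
    {"id": "w1", "gpu": "A100", "region": "us-east", "mbps": 800, "soc2": True},
    {"id": "w2", "gpu": "A100", "region": "us-east", "mbps": 300, "soc2": True},
    {"id": "w3", "gpu": "4090", "region": "us-east", "mbps": 900, "soc2": False},
    {"id": "w4", "gpu": "A100", "region": "eu-west", "mbps": 950, "soc2": True},
    {"id": "w5", "gpu": "A100", "region": "us-east", "mbps": 600, "soc2": False},
]

def select_cluster(workers, gpu, region, min_mbps, require_compliance, size):
    """Filter the pool on the requested dimensions, then take the fastest links."""
    pool = [w for w in workers
            if w["gpu"] == gpu and w["region"] == region
            and w["mbps"] >= min_mbps
            and (w["soc2"] or not require_compliance)]
    pool.sort(key=lambda w: w["mbps"], reverse=True)
    return [w["id"] for w in pool[:size]]

cluster = select_cluster(workers, gpu="A100", region="us-east",
                         min_mbps=400, require_compliance=False, size=2)
```

Tightening any one dimension (e.g. requiring compliance) shrinks the eligible pool, which is why aggregating supply from many sources matters for filling specialized cluster requests.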


Current Statistics/Roadmap

As of this writing:

  • Total network revenue – $1.08m

  • Total Computing Hours – 837.6 k hours

  • Total cluster-ready GPUs – 20.4K

  • Total Cluster Ready CPU – 5.6k

  • Total on-chain transactions – 1.67 million

  • Total inference times – 335.7k

  • Total created clusters – 15.1k

(Data from the Io Net explorer)

Aethir

Aethir is a cloud computing DePIN that facilitates the sharing of high-performance computing resources in compute-intensive fields and applications. It leverages resource pooling to achieve global GPU allocation at significantly reduced costs, with ownership decentralized across resource contributors. Aethir is designed for high-performance workloads and is suitable for industries such as gaming and AI model training and inference. By unifying GPU clusters into a single network, Aethir is designed to increase cluster size, thereby improving the overall performance and reliability of services provided on its network.

Aethir Network is a decentralized economy comprised of miners, developers, users, token holders, and the Aethir DAO. Three key roles that ensure the successful operation of the network are containers, indexers, and inspectors. Containers are the core nodes of the network, performing important operations that maintain network liveness, including validating transactions and rendering digital content in real time. Inspectors act as quality assurance personnel, continuously monitoring the performance and quality of service of containers to ensure reliable and efficient operation for GPU consumers. Indexers act as matchmakers between users and the best available containers. Underpinning this structure is the Arbitrum Layer 2 blockchain, which provides a decentralized settlement layer to pay for goods and services on the Aethir Network in native $ATH tokens.


Proof of Rendering

Nodes in the Aethir network perform two key functions – Proof of Rendering Capacity, where a group of worker nodes is randomly selected every 15 minutes to validate transactions, and Proof of Rendering Work, which closely monitors network performance to ensure users are optimally served, adjusting resources based on demand and geography. Miner rewards are distributed to participants who run nodes on the Aethir network, calculated based on the value of the computing resources they have loaned out, and are paid in the native $ATH token.
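The periodic random selection of worker nodes can be sketched as seeded sampling. Seeding by a round number here is a stand-in for whatever on-chain randomness source the network actually uses; the node names and group size are invented.

```python
import random

def select_round(nodes, k, round_number):
    """Deterministically sample k container nodes for a validation round.

    In this sketch a new round starts every 15 minutes and the round number
    seeds the sample, so every observer derives the same chosen set.
    """
    rng = random.Random(round_number)
    return sorted(rng.sample(nodes, k))

nodes = [f"container-{i:03d}" for i in range(100)]
round_7 = select_round(nodes, 5, round_number=7)
```

Because the sample is a deterministic function of the round number, validators do not need to communicate to agree on who was selected, only on the current round.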

Nosana

Nosana is a decentralized GPU network built on Solana. Nosana allows anyone to contribute idle computing resources and be rewarded in $NOS tokens for doing so. The DePIN facilitates cost-effective allocation of GPUs that can be used to run complex AI workloads without the overhead of traditional cloud solutions. Anyone can run a Nosana node by lending out their idle GPUs, receiving token rewards proportional to the GPU power they provide to the network.

The network connects two parties that allocate computing resources: users seeking access to computing resources and node operators who provide computing resources. Important protocol decisions and upgrades are voted on by NOS token holders and managed by the Nosana DAO.

Nosana has an extensive roadmap for its future plans – Galactica (v1.0 – H1/H2 2024) will launch mainnet, release the CLI and SDK, and focus on expanding the network through container nodes for consumer GPUs. Triangulum (v1.X – H2 2024) will integrate major machine learning protocols and connectors such as PyTorch, HuggingFace, and TensorFlow. Whirlpool (v1.X – H1 2025) will expand support for diverse GPUs from AMD, Intel, and Apple Silicon. Sombrero (v1.X – H2 2025) will add support for medium and large enterprises, fiat payments, billing, and team features.

Akash

Akash Network is an open-source proof-of-stake network built on the Cosmos SDK that allows anyone to join and contribute without permission, creating a decentralized cloud computing marketplace. $AKT tokens are used to secure the network, facilitate resource payments, and coordinate economic behavior between network participants. Akash Network consists of several key components:

  • The blockchain layer uses Tendermint Core and Cosmos SDK to provide consensus.

  • Application layer , manages deployment and resource allocation.

  • The provider layer manages resources, bidding, and user application deployment.

  • The User Layer enables users to interact with the Akash Network, manage resources, and monitor application status using the CLI, console, and dashboard.

The network initially focused on storage and CPU rental services, and as demand for AI training and inference workloads has grown, the network has expanded its services to include GPU rental and allocation, responding to these needs through its AkashML platform. AkashML uses a reverse auction system where customers (called tenants) submit their desired GPU prices and compute providers (called providers) compete to supply the requested GPUs.
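The reverse-auction flow can be sketched as follows. The provider names and prices are invented, and real Akash bidding involves details (deposits, bid windows, lease lifecycles) omitted from this toy model.

```python
def reverse_auction(ceiling_price, bids):
    """Tenant posts a ceiling price; the lowest provider bid at or below it wins.

    bids maps provider name -> offered price per unit of compute.
    Returns (provider, price), or None when no bid clears the ceiling.
    """
    eligible = [(price, provider) for provider, price in bids.items()
                if price <= ceiling_price]
    if not eligible:
        return None
    price, provider = min(eligible)
    return provider, price

lease = reverse_auction(1.20, {"prov_a": 1.10, "prov_b": 0.95, "prov_c": 1.40})
```

Unlike a standard auction, competition here pushes the clearing price down rather than up, which is the mechanism by which the marketplace undercuts fixed cloud list prices.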

As of this writing, the Akash blockchain has completed over 12.9 million transactions, over $535,000 has been spent accessing computing resources, and over 189,000 unique deployments have been leased.

Honorable Mentions

The computational DePIN space is still developing, and many teams are competing to bring innovative and efficient solutions to market. Other examples worth further investigation include Hyperbolic , which is building a collaborative open access platform for resource pooling for AI development, and Exabits , which is building a distributed computing power network supported by computational miners.

Important considerations and future prospects

Now that we have covered the basic principles of compute DePINs and reviewed several case studies of networks currently in operation, it is important to consider the impact of these decentralized networks, including both their advantages and disadvantages.

Challenges

Building distributed networks at scale often requires trade-offs in performance, security, and resiliency. For example, training an AI model on a globally distributed network of commodity hardware may be far less cost-effective and time-efficient than training it on a centralized service provider. As we mentioned earlier, AI models and their workloads are becoming increasingly complex, requiring more high-performance GPUs rather than commodity GPUs.


This is why large corporations hoard high-performance GPUs in bulk, and it is an inherent challenge for compute DePINs that aim to solve the GPU shortage by establishing a permissionless marketplace where anyone can lend out idle GPUs (see this tweet for more on the challenges facing decentralized AI protocols). Protocols can address this problem in two key ways: by establishing baseline requirements for GPU providers who want to contribute to the network, and by pooling the computational resources supplied to the network to achieve greater aggregate scale. Still, this model is inherently harder to establish than that of centralized service providers, which can allocate more capital to deal directly with hardware vendors such as Nvidia. This is something DePINs should consider as they move forward. If a decentralized protocol has a large enough treasury, its DAO could vote to allocate a portion of the funds to purchase high-performance GPUs, which could be managed in a decentralized manner and leased out at a higher price than commodity GPUs.

Another challenge specific to compute DePINs is managing proper resource utilization. In their early stages, most compute DePINs will face structural under-demand, just as many startups do today. In general, the challenge for DePINs is to build enough supply early on to reach minimum viable product quality. Without supply, the network cannot generate sustainable demand and cannot serve its customers during peak demand periods. On the other hand, excess supply is also a problem: above a certain threshold, additional supply only helps when network utilization is close to or at full capacity. Otherwise, a DePIN risks overpaying for supply, leaving resources under-utilized, and suppliers earn less revenue unless the protocol increases token issuance to keep them engaged.
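The supply/demand trade-off above can be made concrete with a toy model. The formulas and numbers here are illustrative assumptions, not parameters of any specific protocol: fee revenue accrues only on demand actually served, but it is spread across all supplied capacity.

```python
# Toy model of DePIN supply-side economics: once supply exceeds demand,
# utilization falls and per-unit provider revenue shrinks unless token
# emissions make up the difference. All figures are purely illustrative.

def utilization(demand_gpu_hours: float, supply_gpu_hours: float) -> float:
    """Fraction of supplied capacity that is actually used."""
    served = min(demand_gpu_hours, supply_gpu_hours)
    return served / supply_gpu_hours


def revenue_per_gpu_hour(demand: float, supply: float,
                         price_per_hour: float, emissions: float = 0.0) -> float:
    """Fees from served demand plus token emissions, spread over all supply."""
    fees = min(demand, supply) * price_per_hour
    return (fees + emissions) / supply


# At full utilization, every supplied hour earns the market price...
full = revenue_per_gpu_hour(1000, 1000, 1.0)      # 1.0 per hour
# ...but doubling supply against flat demand halves per-hour revenue,
over = revenue_per_gpu_hour(1000, 2000, 1.0)      # 0.5 per hour
# unless emissions subsidize the idle half of the network.
subsidized = revenue_per_gpu_hour(1000, 2000, 1.0, emissions=1000)
```

This is the mechanism behind the "excess supply" warning: past the point of full utilization, every extra GPU dilutes existing providers' earnings rather than growing the network.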

A telecommunications network is useless without extensive geographic coverage. A taxi network is useless if passengers have to wait a long time for a ride. A DePIN is useless if it has to pay people to provide resources indefinitely. While centralized service providers can forecast resource demand and manage supply efficiently, compute DePINs lack a central authority to manage resource utilization. It is therefore especially important for a DePIN to manage resource utilization as strategically as possible.

A bigger issue is that the GPU shortage underpinning decentralized GPU marketplaces may not persist. Mark Zuckerberg recently said in an interview that he believes energy, not computing resources, will become the new bottleneck, because companies will now race to build data centers at scale rather than hoard computing resources as they do today. This would, of course, mean lower GPU costs, but it also raises the question of how AI startups will compete with large companies on the performance and quality of their goods and services if proprietary data centers raise the overall bar for AI model performance.

The case for compute DePINs

To reiterate, the gap is widening between the computational requirements of increasingly complex AI models and workloads on one side, and the available supply of high-performance GPUs and other computing resources on the other.

Compute DePINs are poised to be innovative disruptors in computing markets that are today dominated by major hardware manufacturers and cloud service providers, based on several key capabilities:

1) Provide lower cost of goods and services.

2) Provide stronger censorship resistance and network resilience.

3) Benefit from potential regulatory guidelines that may require AI models to be as open as possible for fine-tuning and training, and easily accessible to anyone.


The percentage of households in the United States with a computer and internet access has grown exponentially, approaching 100%. The percentage has also grown significantly in many parts of the world. This suggests an increase in the number of potential providers of computing resources (GPU owners) who would be willing to lend their idle supply if there were sufficient monetary incentives and a seamless transaction process. This is, of course, a very rough estimate, but it suggests that the foundation for building a sustainable shared economy of computing resources may already exist.

In addition to AI, future demand for computing will come from many other industries, such as quantum computing. The quantum computing market is expected to grow from $928.8 million in 2023 to $6,528.8 million in 2030, at a CAGR of 32.1%. Production in this industry will require different kinds of resources, but it will be interesting to see whether any quantum computing DePINs are launched and what they will look like.
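The quoted growth rate can be checked directly from the two market-size figures, using the standard compound annual growth rate formula:

```python
# Verify the cited CAGR: $928.8M (2023) growing to $6,528.8M (2030),
# i.e. over a 7-year span: CAGR = (end / start) ** (1 / years) - 1
start, end, years = 928.8, 6528.8, 7
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # → 32.1%
```

The computed value matches the 32.1% figure cited in the market forecast.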

“A strong ecosystem of open models running on consumer hardware is an important hedge against a future where value is highly centralized by AI and most human thought is read and mediated by central servers controlled by a few people. These models are also much less risky than corporate giants and the military.” — Vitalik Buterin

Large enterprises are not the target audience for compute DePINs, nor do they need to be. Compute DePINs bring computing power back to individual developers, independent builders, and startups with minimal funding and resources. They allow idle supply to be transformed into innovative ideas and solutions, enabled by more abundant computing power. AI will undoubtedly change the lives of billions of people. Instead of worrying about AI replacing everyone's job, we should encourage the idea that AI can empower individuals, self-employed entrepreneurs, startups, and the public at large.
