
Specialization vs. Generalization: Which Is the Future of ZK?


Original author: mo

Translated by: Luffy, Foresight News

Specialization or generalization, which one is the future of ZK? Let me try to answer this question with a picture:

[Figure: the specialization vs. generalization trade-off for ZK systems]

As the figure shows, will we eventually converge on some magical optimal point in this trade-off coordinate system?

No. The future of off-chain verifiable computation is a continuous curve that blurs the line between specialized and general-purpose ZK. Allow me to explain the historical evolution of these terms and how they will converge in the future.

Two years ago, specialized ZK infrastructure meant low-level circuit frameworks such as circom, Halo2, and arkworks. ZK applications built with these frameworks are essentially handwritten ZK circuits. They are fast and cheap for specific tasks, but generally difficult to develop and maintain. They are similar to the various application-specific chips (physical silicon) in today's IC (integrated circuit) industry, such as NAND chips and controller chips.
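To make the "handwritten circuit" idea concrete, here is a minimal sketch, in plain Rust, of the kind of relation such frameworks pin down: a single R1CS constraint a * b = c over a toy prime field. This is an illustration only; frameworks like circom, Halo2, and arkworks manage thousands of such constraints plus witness generation, and none of the names or the modulus below come from those libraries.

```rust
// Toy prime modulus (2^31 - 1); real systems use pairing-friendly
// field moduli such as those of BN254 or BLS12-381.
const P: u64 = 2_147_483_647;

/// One rank-1 constraint: a * b - c == 0 (mod P).
fn constraint_holds(a: u64, b: u64, c: u64) -> bool {
    (a % P) * (b % P) % P == c % P
}

fn main() {
    // The prover supplies the witness (a, b, c); the circuit fixes the relation.
    let (a, b, c) = (3u64, 5, 15);
    assert!(constraint_holds(a, b, c));
    println!("witness satisfies a * b = c (mod P)");
}
```

Handwriting a whole application this way means specifying every such relation by hand, which is where the development and maintenance cost comes from.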

However, over the past two years, dedicated ZK infrastructure has gradually become more general purpose.

We now have ZKML, ZK coprocessor, and ZKSQL frameworks that provide easy-to-use, highly programmable SDKs for building different categories of ZK applications without writing a single line of circuit code. For example, ZK coprocessors allow smart contracts to trustlessly access historical blockchain states, events, and transactions, and to run arbitrary computations over this data. ZKML lets smart contracts trustlessly consume AI inference results from a wide range of machine learning models.
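As a rough illustration of what "no circuit code" means in practice, here is a hypothetical sketch of a coprocessor-style query. Every type and field name here is invented for illustration and does not come from any real coprocessor SDK; the point is only the shape of the developer experience.

```rust
// Hypothetical, illustrative names only -- not a real coprocessor SDK.
#[allow(dead_code)]
struct HistoricalQuery {
    contract: &'static str, // address whose events we scan
    event: &'static str,    // event signature to match
    from_block: u64,
    to_block: u64,
}

#[allow(dead_code)]
enum Aggregation {
    Sum,   // sum a field across all matched events
    Count, // count matched events
}

fn main() {
    // "Sum of Transfer amounts between two blocks", stated declaratively;
    // the coprocessor compiles this description into circuits internally.
    let query = HistoricalQuery {
        contract: "0x1234...abcd", // placeholder address
        event: "Transfer(address,address,uint256)",
        from_block: 19_000_000,
        to_block: 19_100_000,
    };
    let _agg = Aggregation::Sum;
    // In a real SDK, submitting the query would yield a proof that an
    // on-chain verifier contract checks; here we only show the shape.
    let _ = query;
    println!("query described; proving happens off-chain");
}
```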

These evolving frameworks have significantly increased programmability within their target domains while still maintaining high performance and low cost, thanks to a thin abstraction layer (SDK/API) that sits close to the bare-metal circuits.

They are analogous to GPUs, TPUs, and FPGAs in the IC market: they are programmable domain experts.

ZKVMs have also made great progress in the past two years. Notably, all general-purpose ZKVMs are built on top of these low-level, specialized ZK frameworks. The idea is that you write ZK applications in a high-level language (even more user-friendly than an SDK/API), and they compile down to a combination of an instruction set (RISC-V or something WASM-like) and specialized circuits. They are like the CPU chips of the IC industry.
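A toy interpreter makes the trade-off visible: in a zkVM, each executed guest instruction becomes a row of the execution trace that the prover must arithmetize, so generality is paid for in trace length. The three-instruction ISA below is illustrative only; real zkVMs target RISC-V or WASM, but the fetch-decode-execute shape is the same.

```rust
// A toy zkVM guest ISA: each executed instruction is one trace row.
enum Instr {
    Add(usize, usize, usize), // regs[dst] = regs[a] + regs[b]
    Mul(usize, usize, usize), // regs[dst] = regs[a] * regs[b]
    Halt,
}

fn run(program: &[Instr]) -> (u64, usize) {
    let mut regs = [0u64; 4];
    regs[1] = 3; // toy inputs
    regs[2] = 5;
    let mut steps = 0; // each step = one row the prover must cover
    for instr in program {
        steps += 1;
        match instr {
            Instr::Add(d, a, b) => regs[*d] = regs[*a].wrapping_add(regs[*b]),
            Instr::Mul(d, a, b) => regs[*d] = regs[*a].wrapping_mul(regs[*b]),
            Instr::Halt => break,
        }
    }
    (regs[0], steps)
}

fn main() {
    // regs[0] = (3 + 5) * 5 -- three trace rows for three instructions.
    let program = [Instr::Add(0, 1, 2), Instr::Mul(0, 0, 2), Instr::Halt];
    let (result, steps) = run(&program);
    println!("result = {result}, trace length = {steps}");
}
```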

A ZKVM, then, is an abstraction layer on top of low-level ZK frameworks, just as ZK coprocessors and the like are.

As a wise man once said, any problem in computer science can be solved with another layer of abstraction, but each abstraction creates a new problem. Trade-offs, that's the key. Fundamentally, with ZKVMs we are trading performance for generality.

Two years ago, ZKVMs' bare-metal performance was truly bad. But in just two years, ZKVM performance has improved dramatically.

Why?

Because these "general-purpose" ZKVMs have become more "specialized." A key reason for the performance gains is precompiles: specialized ZK circuits that compute commonly used high-level routines, such as SHA-2 and various signature verifications, far faster than the normal route of breaking them down into fragments of instruction circuits.
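Here is a back-of-the-envelope sketch of what a precompile changes, assuming, purely for illustration, that one SHA-2 block costs roughly 3,000 generic instruction rows when lowered to the zkVM's ISA versus one dedicated row when dispatched to a precompile circuit. The numbers are made up to show the shape of the saving; real zkVMs publish their own per-precompile figures.

```rust
// Illustrative only: assumed trace-row costs, not measured numbers.
enum HashBackend {
    GenericInstructions, // SHA-2 lowered to ordinary ALU instructions
    Precompile,          // SHA-2 handled by a specialized circuit
}

fn trace_rows_for_hash(blocks: u64, backend: &HashBackend) -> u64 {
    match backend {
        // Assumed cost per compressed block in generic instructions.
        HashBackend::GenericInstructions => blocks * 3_000,
        // One dedicated row per block via the precompile.
        HashBackend::Precompile => blocks,
    }
}

fn main() {
    let blocks = 1_000; // e.g. hashing ~64 KB of data
    let generic = trace_rows_for_hash(blocks, &HashBackend::GenericInstructions);
    let fast = trace_rows_for_hash(blocks, &HashBackend::Precompile);
    println!("generic: {generic} rows, precompile: {fast} rows");
}
```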

Therefore, the trend is now very obvious.

Specialized ZK infrastructure is becoming more general-purpose, and general-purpose ZKVMs are becoming more specialized.

Over the past few years, optimizations on both sides have achieved better trade-offs than before: improving on one axis without sacrificing the other. That is why both camps feel that "we are definitely the future."

However, computer science wisdom tells us that at some point we hit the “Pareto optimality wall” (green dashed line), where we cannot improve one property without sacrificing another.

So, the million-dollar question arises: Will one technology completely replace the other in due course?

To put this in the context of the IC industry: CPUs are a $126 billion market, while the entire IC industry (including all the specialized ICs) is worth $515 billion. I am confident that history will repeat itself here at a micro level, and neither will replace the other.

That being said, no one today would say, “Hey, I’m using a computer that’s powered entirely by a general-purpose CPU,” or “Hey, this is a fancy robot that’s powered by a specialized IC.”

Yes, we should indeed look at this issue from a macro perspective. In the future, there will be a trade-off curve that allows developers to flexibly choose according to their needs.

In the future, specialized ZK infrastructure and general-purpose ZKVMs can work together, and this can take many forms. The simplest form is already possible today. For example: you use a ZK coprocessor to generate some computation results over blockchain transaction history, but the business logic on top of that data is too complex to express in the SDK/API alone.

What you can do is obtain high-performance, low-cost ZK proofs of the data and the intermediate computation results, and then converge them into a general-purpose VM via proof recursion.
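Schematically, the composition looks like this: specialized provers emit cheap proofs about slices of data, and the general-purpose VM verifies those proofs inside its own execution (recursion) before running the complex business logic. All types and checks below are stand-ins for illustration, not any real proving system's API.

```rust
// Stand-in types: `valid` replaces a real cryptographic verification.
struct Proof {
    claim: u64,  // the committed computation result
    valid: bool, // stand-in for a real proof check
}

/// Specialized prover: a cheap proof about one slice of historical data.
fn coprocessor_prove(partial_result: u64) -> Proof {
    Proof { claim: partial_result, valid: true }
}

/// Inside the zkVM guest: verify each inner proof, then run arbitrary
/// logic over the now-trusted claims. Recursion makes these inner
/// verifications part of the outer proof.
fn guest_program(inner: &[Proof]) -> Option<u64> {
    if inner.iter().any(|p| !p.valid) {
        return None; // reject if any inner proof fails verification
    }
    // Arbitrary business logic too complex for the coprocessor SDK:
    Some(inner.iter().map(|p| p.claim).sum())
}

fn main() {
    let inner = [coprocessor_prove(10), coprocessor_prove(32)];
    println!("aggregated claim = {:?}", guest_program(&inner));
}
```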

[Figure: Specialization vs. generalization, which is the future of ZK?]

While I find this debate interesting, I know we are all building an asynchronous computing future for blockchains, driven by off-chain verifiable computation. As use cases for mass user adoption emerge in the coming years, I believe this debate will finally be settled.
