
Vitalik's new article: The possible future of Ethereum, The Verge


Original title: Possible futures of the Ethereum protocol, part 4: The Verge

Original article by Vitalik Buterin

Original translation: Mensh, ChainCatcher

Special thanks to Justin Drake, Hsiao-wei Wang, Guillaume Ballet, Ignacio, Josh Rudolf, Lev Soukhanov, Ryan Sean Adams, and Uma Roy for their feedback and reviews.

One of the most powerful features of a blockchain is that anyone can run a node on their computer and verify that the chain is correct. Even if 95% of the nodes running the chain's consensus (PoW, PoS) immediately agreed to change the rules and started producing blocks according to the new rules, everyone running a fully validating node would refuse to accept the chain. Stakers who are not part of this conspiracy would automatically converge on the chain that continues to follow the old rules and keep building on it, and fully validating users would follow that chain.

This is a key difference between blockchains and centralized systems. However, for this property to hold, running a fully validating node needs to be feasible for enough people. This applies both to stakers (because if stakers don't validate the chain, they're not contributing to enforcing the protocol rules) and to regular users. Today, it's possible to run a node on a consumer laptop (including the one I'm using to write this), but it's difficult to do. The Verge is here to change that, making full validation of the chain so computationally cheap that every mobile wallet, browser wallet, and even smartwatch validates by default.


The Verge 2023 Roadmap

Initially, Verge referred to the transfer of Ethereum state storage to Verkle trees – a tree structure that allows for more compact proofs, enabling stateless verification of Ethereum blocks. A node can verify an Ethereum block without storing any Ethereum state (account balances, contract code, storage space, …) on its hard drive, at the cost of a few hundred KB of proof data and a few hundred milliseconds of extra time to verify a proof. Today, Verge represents a larger vision focused on achieving maximum resource-efficient verification of the Ethereum chain, which includes not only stateless verification techniques, but also the use of SNARKs to verify all Ethereum executions.

In addition to the long-standing question of verifying the entire chain with SNARKs, another, newer question is whether Verkle trees are the best technology in the first place. Verkle trees are vulnerable to attack by quantum computers, so if we swap one in for the current KECCAK Merkle Patricia tree, we will have to replace the tree yet again later. The natural alternative is to skip Verkle trees entirely and go straight to STARKs over Merkle branches in a binary tree. Historically, this approach was considered infeasible due to overhead and technical complexity. Recently, however, Starkware has proven 1.7 million Poseidon hashes per second on a laptop using circle STARKs, and proving times for more traditional hashes are also falling rapidly thanks to technologies such as GKR. As a result, over the past year the Verge has become more open to several possibilities.

The Verge: Key goals

  • Stateless Clients: fully validating clients and staking nodes should require no more than a few GB of storage.

  • (Long term) Fully verify the chain (consensus and execution) on a smartwatch. Download some data, verify SNARK, done.

In this chapter

  • Stateless Clients: Verkle or STARKs?

  • Proof of validity of EVM execution

  • Proof of Consensus Validity

Stateless Verification: Verkle or STARKs?

What problem are we trying to solve?

Today, Ethereum clients need to store hundreds of gigabytes of state data to validate blocks, and this amount increases every year. The raw state data grows by about 30 GB per year, and individual clients must store some additional data on top of that to update the trie efficiently.


This reduces the number of users who can run a fully validating Ethereum node: although hard drives big enough to store all of the Ethereum state, and even years of history, are readily available, the computers people buy by default tend to have only a few hundred gigabytes of storage. The state size also adds a great deal of friction to setting up a node for the first time: the node needs to download the entire state, which can take hours or days. This has all sorts of knock-on effects. For example, it makes it significantly harder for node operators to upgrade their setups. Technically, an upgrade can be done without downtime, by starting a new client, waiting for it to sync, then shutting down the old client and transferring the keys, but in practice this is very complex.

How does it work?

Stateless verification is a technique that allows nodes to verify blocks without knowing the entire state. Instead, each block comes with a witness that includes: (i) the values, code, balances, and storage at the specific locations in the state that the block will access; (ii) a cryptographic proof that these values are correct.

In fact, implementing stateless verification requires changing Ethereum's state tree structure. This is because the current Merkle Patricia tree is extremely unfriendly to any cryptographic proof scheme, especially in the worst case. This is true both for raw Merkle branches and for the possibility of wrapping them in a STARK. The main difficulties stem from some weaknesses of the MPT:

1. It is a hexary tree (i.e. each node has 16 children). This means that, in a tree of size N, a proof takes on average 32 * (16 - 1) * log16(N) = 120 * log2(N) bytes, or about 3840 bytes in a tree of 2^32 items. For a binary tree, it takes only 32 * (2 - 1) * log2(N) = 32 * log2(N) bytes, or about 1024 bytes.

2. The code is not Merkle-ified. This means that to prove any access to the account code, the entire code is required, which is at most 24,000 bytes.


We can calculate the worst case scenario as follows:

30,000,000 gas / 2,400 (cold account read cost) * (5 * 480 + 24,000) = 330,000,000 bytes

The branch cost is slightly reduced (5 * 480 instead of 8 * 480) because the top parts of the branches are shared when there are many branches. But even so, the amount of data to download within one slot is completely unrealistic. If we try to encapsulate it in a STARK, we run into two problems: (i) KECCAK is relatively STARK-unfriendly; (ii) 330 MB of data means we have to prove 5 million calls to the KECCAK round function, which is likely impossible to prove on anything but the most powerful consumer hardware, even if we make STARK proving of KECCAK more efficient.

If we replace the hexary tree with a binary tree directly, and additionally Merkle-ify the code, the worst case becomes roughly 30,000,000 / 2,400 * 32 * (32 - 14 + 8) = 10,400,000 bytes (the 14 subtracts the redundant bits shared among roughly 2^14 branches, and the 8 is the length of the proof into the leaf within the code chunk). Note that this requires changing the gas cost to charge for accessing each individual code chunk; EIP-4762 does this. 10.4 MB is much better, but it is still too much data for many nodes to download in a single slot. Therefore, we need to introduce more powerful technology. There are two leading solutions in this regard: Verkle trees and STARKed binary hash trees.
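
To make the arithmetic above easy to check, here is a short back-of-the-envelope sketch in Python; the constants are exactly the ones used in the text, and the whole thing is illustrative estimation, not a specification.

```python
# Worst-case witness sizes for the two tree designs discussed above.
GAS_LIMIT = 30_000_000
COLD_ACCOUNT_READ = 2_400                     # gas per cold account access
ACCESSES = GAS_LIMIT // COLD_ACCOUNT_READ     # 12,500 worst-case accesses

# Hexary MPT: ~5 unique branch levels of 480 bytes each per access,
# plus up to 24,000 bytes of un-Merkle-ified contract code.
mpt_bytes = ACCESSES * (5 * 480 + 24_000)
print(f"MPT worst case:    {mpt_bytes / 1e6:.0f} MB")      # ~330 MB

# Binary tree with Merkle-ified code: 32-byte siblings over 32 levels,
# minus ~14 levels shared among 2^14 branches, plus 8 levels into code chunks.
binary_bytes = ACCESSES * 32 * (32 - 14 + 8)
print(f"Binary worst case: {binary_bytes / 1e6:.1f} MB")   # ~10.4 MB
```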

Verkle Tree

Verkle trees use vector commitments based on elliptic curves to make shorter proofs. The key to unlocking this is that the portion of the proof corresponding to each parent-child relationship is only 32 bytes, regardless of the width of the tree. The only limit on the width of the proof tree is that if the proof tree is too wide, the proof becomes computationally inefficient. The implementation proposed for Ethereum has a width of 256.


Therefore, the size of a single branch in a proof becomes 32 * log256(N) = 4 * log2(N) bytes. The theoretical maximum proof size is thus roughly 30,000,000 / 2,400 * 4 * (32 - 14 + 8) = 1,300,000 bytes (the actual result differs slightly because of the uneven distribution of state chunks, but it is fine as a first approximation).

Also note that in all of the above examples, this "worst case" is not quite the worst case: an even worse case is an attacker deliberately mining two addresses so that they share a long common prefix in the tree, and then reading data from one of them, which can extend the worst-case branch length by roughly another factor of 2. Even with this, the worst-case Verkle proof size is about 2.6 MB, which is roughly in line with today's worst-case verification data.

We do one other thing with this in mind: we make it very cheap to access adjacent areas of storage, whether many code chunks of the same contract or adjacent storage slots. EIP-4762 provides a definition of adjacency and charges only 200 gas for adjacent accesses. With adjacent accesses, the worst-case proof size becomes 30,000,000 / 200 * 32 = 4,800,000 bytes, which is still roughly within tolerance. If we want to reduce this value for safety, we can slightly increase the fee for adjacent accesses.
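
The same style of sketch reproduces the Verkle numbers; again these are the text's own constants, arranged for checking rather than any client's actual logic.

```python
GAS_LIMIT = 30_000_000
ACCESSES = GAS_LIMIT // 2_400               # 12,500 cold accesses

# Width-256 Verkle tree: 32 bytes per level, but 8x fewer levels than binary,
# so a branch costs 4 * log2(N) bytes instead of 32 * log2(N).
verkle_bytes = ACCESSES * 4 * (32 - 14 + 8)
print(f"Verkle worst case:    {verkle_bytes / 1e6:.1f} MB")       # ~1.3 MB

# Attacker-mined long common prefixes can roughly double branch lengths.
print(f"With mined prefixes:  {2 * verkle_bytes / 1e6:.1f} MB")   # ~2.6 MB

# Adjacent accesses at 200 gas, ~32 marginal proof bytes each.
adjacent_bytes = (GAS_LIMIT // 200) * 32
print(f"Adjacency worst case: {adjacent_bytes / 1e6:.1f} MB")     # ~4.8 MB
```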

STARKed Binary Hash Tree

The technique is fairly self-explanatory: you make a binary tree, obtain a proof of up to 10.4 MB proving the values in the block, and then replace that proof with a STARK of the proof. This way, the proof itself contains only the data being proven, plus a fixed overhead of 100-300 kB for the actual STARK.
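
What the STARK wraps here is essentially millions of binary Merkle branch checks. A minimal sketch of one such check follows (simplified: it ignores chunking details and uses SHA-256 directly; this is not any client's actual code):

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_branch(root: bytes, leaf: bytes, index: int, siblings: list[bytes]) -> bool:
    """Recompute the root from a leaf and its sibling path, one hash per level."""
    node = sha256(leaf)
    for sib in siblings:
        if index & 1:                 # our node is the right child at this level
            node = sha256(sib + node)
        else:                         # our node is the left child
            node = sha256(node + sib)
        index >>= 1
    return node == root

# Tiny usage: a hand-built 4-leaf tree, proving leaf 2.
leaf_hashes = [sha256(bytes([i])) for i in range(4)]
l01 = sha256(leaf_hashes[0] + leaf_hashes[1])
l23 = sha256(leaf_hashes[2] + leaf_hashes[3])
root = sha256(l01 + l23)
assert verify_branch(root, bytes([2]), 2, [leaf_hashes[3], l01])
```

A worst-case block requires on the order of hundreds of thousands of such hash evaluations inside the proof, which is where the prover-throughput numbers below come from.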

The main challenge here is prover time. We can do essentially the same calculation as above, except instead of counting bytes, we count hashes. A 10.4 MB proof means about 330,000 hashes. If we add in the possibility of an attacker mining addresses with a long common prefix in the tree, the worst case rises to about 660,000 hashes. So if we can prove about 200,000 hashes per second, we'll be fine.

These numbers have been achieved on consumer laptops using the Poseidon hash function, which is specifically designed to be STARK-friendly. However, Poseidon is still relatively immature, so many people do not yet trust its security. There are therefore two realistic paths forward:

  1. Quickly perform extensive security analysis on Poseidon and become familiar enough with it to deploy it at L1

  2. Use a more conservative hash function like SHA 256 or BLAKE

If we prove a conservative hash function, Starkware's circle STARK can, at the time of writing, prove only 10-30k hashes per second on a consumer laptop. However, STARK technology is improving rapidly: even today, GKR-based techniques have shown the ability to push this speed into the 100-200k range.

Witness use cases beyond validating blocks

In addition to validating blocks, there are three other key use cases that require more efficient stateless validation:

  • Mempool: When a transaction is broadcast, nodes in the P2P network need to verify that the transaction is valid before rebroadcasting it. Today, verification includes checking the signature, but also checking that the balance is sufficient and the nonce is correct. In the future (for example, with native account abstraction such as EIP-7701), this may involve running some EVM code that performs some state accesses. If nodes are stateless, transactions will need to be accompanied by proofs of the state objects they access.

  • Inclusion lists: This is a proposed feature that would allow (potentially small and unsophisticated) proof-of-stake validators to force the next block to include a transaction, regardless of the wishes of the (potentially large and complex) block builders. This would reduce the ability of powerful parties to manipulate the blockchain by delaying transactions. However, this would require validators to have a way to verify the validity of transactions in the inclusion list.

  • Light clients: If we want users to access the chain through a wallet (such as Metamask, Rainbow, Rabby, etc.), they need to run a light client (such as Helios). The Helios core module provides users with a verified state root. For a fully trustless experience, users need a proof for every RPC call they make (for example, for an eth_call request, the user needs a proof of all the state accessed during the call).

What all of these use cases have in common is that they require a fairly large number of proofs, but each proof is small. Therefore, STARK proofs are not practical for them; instead, the most realistic approach is to use Merkle branches directly. Another advantage of Merkle branches is that they are updatable: given a proof for state object X rooted at block B, if a child block B2 and its witness are received, the proof can be updated to be rooted at block B2. Verkle proofs are also natively updatable.
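
To illustrate the updatability property, here is a toy continuation of the earlier Merkle sketch (again illustrative: in this toy, a "witness" is simply the recomputed internal nodes on the changed leaf's path). A holder of a proof for leaf 0 can patch a single sibling hash when a child block changes leaf 2, rather than re-downloading the whole proof:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(leaves: list[bytes]) -> list[list[bytes]]:
    """Full binary Merkle tree, stored level by level, leaf hashes first."""
    levels = [[sha256(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        levels.append([sha256(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def branch(levels: list[list[bytes]], index: int) -> list[bytes]:
    """Sibling hashes along the path from a leaf to the root."""
    sibs = []
    for lvl in levels[:-1]:
        sibs.append(lvl[index ^ 1])
        index >>= 1
    return sibs

# Block B commits to four leaves; we hold a proof for leaf 0.
leaves = [bytes([i]) for i in range(4)]
levels = build_levels(leaves)
proof0 = branch(levels, 0)

# Child block B2 changes leaf 2. Its witness carries the recomputed nodes on
# leaf 2's path; the only one our proof uses is the level-1 node covering
# leaves 2-3, so we patch that single entry.
leaves[2] = b"new value"
new_levels = build_levels(leaves)
proof0[1] = new_levels[1][1]          # updated sibling, taken from B2's witness

# The patched proof now verifies against B2's new state root.
recomputed = sha256(sha256(sha256(bytes([0])) + proof0[0]) + proof0[1])
assert recomputed == new_levels[-1][0]
```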

What are the connections with existing research?

What else can be done?

The main remaining work is:

1. More analysis on the consequences of EIP-4762 (stateless gas cost changes)

2. More work to complete and test transition procedures, which is a major part of the complexity of any implementation plan for a stateless environment

3. More security analysis of Poseidon, Ajtai, and other “STARK-friendly” hash functions

4. Further development of ultra-efficient STARK protocols for conservative (or traditional) hash functions, e.g. based on Binius or GKR.

Furthermore, we will soon have to decide between three options: (i) Verkle trees, (ii) STARK-friendly hash functions, and (iii) conservative hash functions. Their characteristics can be roughly summarized in the following table:

[Table: rough comparison of the three options]

Beyond these headline numbers, there are other important considerations:

  • The Verkle tree code is fairly mature these days. Using anything other than Verkle would delay deployment, likely until a later hard fork. That's OK, especially if we need extra time for hash function analysis or prover implementation anyway, or if we have other important features we want to include sooner.

  • Updating the state root using a hash is faster than using a Verkle tree. This means that a hash-based approach can reduce the synchronization time of a full node.

  • Verkle trees have an interesting witness update property – Verkle tree witnesses are updateable. This property is useful for mempools, inclusion lists, and other use cases, and may also help make implementations more efficient: if a state object is updated, the witness of the second-to-last layer can be updated without having to read the contents of the last layer.

  • Verkle trees are harder to SNARK. If we want to get proof size down to a few kilobytes, Verkle proofs introduce some difficulties. This is because the verification of Verkle proofs introduces a lot of 256-bit operations, which requires the proof system to either have a lot of overhead or have a custom internal structure that contains the 256-bit Verkle proof part. This is not a problem for statelessness itself, but it introduces more difficulties.

If we want to get Verkle witness updatability in a quantum-safe and reasonably efficient way, another possible approach is lattice-based Merkle trees.

If the proof system is not efficient enough in the worst case, we can also compensate with the unexpected tool of multidimensional gas: separate gas limits for (i) calldata, (ii) computation, (iii) state accesses, and possibly other resources. Multidimensional gas adds complexity, but in exchange it much more tightly bounds the ratio between the average case and the worst case. With multidimensional gas, the maximum number of branches that need to be proven could in theory drop from 12,500 to, say, 3,000. This would make BLAKE3 (barely) sufficient even today.


Multi-dimensional gas allows the resource limits of blocks to be closer to the resource limits of the underlying hardware
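
A minimal sketch of the multidimensional-gas idea, assuming illustrative per-resource caps (all of the numbers below are placeholders, except the state-access cap implied by limiting provable branches to about 3,000 at 2,400 gas per cold access):

```python
# Per-resource caps replace the single shared 30M gas pool for these limits.
LIMITS = {
    "calldata":     2_000_000,        # placeholder value
    "compute":      30_000_000,       # placeholder value
    "state_access": 3_000 * 2_400,    # ~3,000 provable branches
}

def within_limits(usage: dict[str, int]) -> bool:
    """A block is valid only if every resource stays under its own cap."""
    return all(usage.get(res, 0) <= cap for res, cap in LIMITS.items())

# A block that is fine on total gas but exceeds the state-access cap alone:
assert not within_limits({"compute": 1_000_000, "state_access": 12_500 * 2_400})
assert within_limits({"compute": 25_000_000, "state_access": 2_000 * 2_400})
```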

Another unexpected tool is to delay the state root computation until the slot after the block. This gives us a full 12 seconds to compute the state root, which means that even in the most extreme case, a proving throughput of only about 60,000 hashes per second is sufficient; this again puts BLAKE3 just within the realm of adequacy.

The downside of this approach is that it adds one slot of light client latency, but there are more clever techniques that can reduce this latency to just the latency of proof generation. For example, the proof could be broadcast on the network as soon as any node generates it, rather than waiting for the next block.

How does it interact with the rest of the roadmap?

Solving statelessness greatly increases the ease of solo staking. If technologies that can reduce the minimum balance for solo staking become available, such as Orbit SSF, or application-level strategies such as squad staking, solo staking becomes even more feasible.

If EOF is also introduced, multi-dimensional gas analysis becomes much easier. This is because the main execution complexity of multi-dimensional gas comes from handling sub-calls that do not pass the full gas of the parent call, and EOF can make this problem trivial by simply making such sub-calls illegal (and the native account abstraction will provide an in-protocol alternative for some of the current major uses of gas).

There is another important synergy between stateless validation and history expiry. Today, clients must store nearly 1 TB of history data, several times the size of the state data. Even if clients become stateless, the dream of a nearly storage-free client will not be realized unless we can also relieve clients of the responsibility of storing history. The first step in this direction is EIP-4444, which also implies storing historical data via torrents or the Portal network.

Proof of validity of EVM execution

What problem are we trying to solve?

The long-term goal of Ethereum block validation is clear – it should be possible to validate an Ethereum block by: (i) downloading the block, or even just a small sampling of the data availability in the block; and (ii) verifying a small proof that the block is valid. This will be an extremely low-resource operation that can be done in a mobile client, a browser wallet, or even in another chain (without the data availability part).

To achieve this, SNARK or STARK proofs are required for (i) the consensus layer (i.e. proof of stake) and (ii) the execution layer (i.e. EVM). The former is a challenge in itself and should be addressed in the process of further continuous improvement of the consensus layer (e.g. for single-slot finality). The latter requires EVM execution proofs.

What is it and how does it work?

Formally, in the Ethereum specification, the EVM is defined as a state transition function: you have some pre-state S, a block B, and you compute a post-state S' = STF(S, B). If a user is running a light client, they do not have S and S', or even B, in full; instead, they have a pre-state root R, a post-state root R', and a block hash H.

  • Public input: pre-state root R, post-state root R', block hash H

  • Private input: the block body B, the objects in the state accessed by the block Q, the same objects after executing the block Q', state proofs (e.g. Merkle branches) P

  • Claim 1: P is a valid proof that Q contains some portion of the state represented by R

  • Claim 2: If we run the STF on Q, (i) the execution only accesses objects inside Q, (ii) the block is valid, and (iii) the result is Q'

  • Claim 3: If we recompute the new state root using the information in Q' and P, we get R' (a minimal sketch of this verification flow follows below)
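
Here is a toy, self-contained rendering of those three claims in Python. Everything about it is a stand-in: the "commitment" is a flat hash rather than a Merkle root, the "proof" P is simply the unaccessed remainder of a toy balance-map state, and blocks are plain transfer lists. It shows the shape of the check, not a real client implementation:

```python
import hashlib

def commit(state: dict[str, int]) -> bytes:
    """Toy state commitment: hash of the sorted state (stand-in for a
    Merkle/Verkle root)."""
    return hashlib.sha256(repr(sorted(state.items())).encode()).digest()

def block_hash(block) -> bytes:
    return hashlib.sha256(repr(block).encode()).digest()

def verify_block(R, R_prime, H, B, Q, Q_prime, P) -> bool:
    if block_hash(B) != H:                       # the block matches its hash
        return False
    # Claim 1: P proves that Q is part of the state committed to by R.
    if commit({**P, **Q}) != R:
        return False
    # Claim 2: running the STF on Q stays inside Q, the block is valid,
    # and the result is Q'.
    post = dict(Q)
    for frm, to, amount in B:                    # toy STF: balance transfers
        if frm not in post or to not in post:    # access outside the witness
            return False
        if post[frm] < amount:                   # invalid block
            return False
        post[frm] -= amount
        post[to] += amount
    if post != Q_prime:
        return False
    # Claim 3: recombining Q' with P reproduces the post-state root R'.
    return commit({**P, **Q_prime}) == R_prime

# Usage: two accessed accounts; "carol" lives only in the proof P.
pre = {"alice": 10, "bob": 5, "carol": 7}
B = [("alice", "bob", 3)]
Q, Q_prime, P = {"alice": 10, "bob": 5}, {"alice": 7, "bob": 8}, {"carol": 7}
assert verify_block(commit(pre), commit({**pre, **Q_prime}),
                    block_hash(B), B, Q, Q_prime, P)
```

A real validity proof would be a SNARK attesting that a function like verify_block returned True, so the light client only ever checks the SNARK.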

If this existed, it would be possible to have a light client that fully validates Ethereum EVM execution, while keeping client resource requirements quite low. To achieve a truly fully validating Ethereum client, the same work would need to be done on the consensus side as well.

Implementations of validity proofs for EVM computations already exist and are heavily used by L2. There is still a lot of work to be done to make EVM validity proofs feasible in L1.

What are the connections with existing research?

What else can be done?

Today, EVM validity proofs fall short in two respects: security and prover time.

A secure validity proof requires assurance that the SNARK actually verifies the EVM's computation and that there are no vulnerabilities. The two main techniques for improving security are multi-provers and formal verification. Multi-proving means having multiple independently written validity proof implementations, just as there are multiple clients, with a client accepting a block if it is proven by a large enough subset of these implementations. Formal verification involves using tools commonly used to prove mathematical theorems, such as Lean 4, to prove that the validity proof accepts only correct executions of the underlying EVM specification (such as the K semantics of the EVM, or the Ethereum Execution Layer Specification (EELS) written in Python).

Sufficiently fast prover times would mean that any Ethereum block could be proven in less than 4 seconds. Today, we are still far from this goal, although we are much closer than we thought we would be two years ago. To get there, we need to make progress in three directions:

  • Parallelization — The fastest EVM prover today can prove an Ethereum block in an average of 15 seconds. It does this by parallelizing across hundreds of GPUs and then aggregating their work at the end. In theory, we know exactly how to build an EVM prover that proves a computation in O(log(N)) time: have one GPU prove each step, and then build an aggregation tree:

[Diagram: proof aggregation tree]

There are challenges in implementing this. Even in the worst case, where one very large transaction takes up an entire block, the computation cannot be split by transaction; it must be split by opcode (of the EVM or of an underlying virtual machine like RISC-V). Ensuring that the virtual machine's memory stays consistent between different parts of the proof is a key implementation challenge. However, if we can implement this kind of recursive proving, then we know that at least the prover latency problem is solved, even if nothing else improves (a toy sketch of such an aggregation tree follows this list).

  • Proof system optimizations — New proof systems such as Orion, Binius, GKR, and many more will likely lead to yet another significant reduction in prover times for general-purpose computation.

  • Other changes to EVM gas costs — Many things in the EVM can be optimized to make them more favorable to provers, especially in the worst case. If an attacker can construct a block that takes ten minutes to prove, then being able to prove a normal Ethereum block in 4 seconds is not enough. The required EVM changes fall roughly into the following categories:

– Changes in gas costs — If an operation takes a long time to prove, it should have a high gas cost even if it is relatively fast to compute. EIP-7667 is a proposal to deal with the worst offenders in this regard: it significantly increases the gas cost of (conventional) hash functions that are exposed through relatively cheap opcodes and precompiles. To compensate for these increases, we can reduce the gas cost of EVM opcodes that are relatively cheap to prove, keeping average throughput unchanged.

– Data structure replacement – In addition to replacing the state trie with something more STARK-friendly, we also need to replace transaction lists, receipt tries, and other structures that are expensive to prove. Etan Kissling’s EIP to move transaction and receipt structures to SSZ is a step in this direction.
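
As referenced in the parallelization bullet above, here is a toy model of the aggregation tree; prove_step and merge are stand-ins for a real recursive proof system, and the point is only that total latency scales with the number of pairwise merge rounds, i.e. O(log(N)):

```python
import math

def prove_step(i: int) -> str:
    return f"proof({i})"                 # stand-in: one GPU proves one step

def merge(left: str, right: str) -> str:
    return f"agg[{left}|{right}]"        # stand-in: recursive aggregation

def aggregate(n_steps: int) -> tuple[str, int]:
    layer = [prove_step(i) for i in range(n_steps)]   # all provable in parallel
    rounds = 0
    while len(layer) > 1:
        layer = [merge(layer[i], layer[i + 1]) if i + 1 < len(layer) else layer[i]
                 for i in range(0, len(layer), 2)]    # each round in parallel
        rounds += 1
    return layer[0], rounds

_, rounds = aggregate(1000)              # 1000 proof steps...
assert rounds == math.ceil(math.log2(1000))   # ...but only 10 merge rounds
```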

In addition, the two tools mentioned in the previous section (multidimensional gas and delayed state roots) can also help here. However, it is worth noting that, unlike for stateless verification, where those tools mean we already have enough technology for what we need today, even with these technologies full ZK-EVM verification requires more work; it just requires less of it.

One thing not mentioned above is prover hardware: using GPUs, FPGAs, and ASICs to generate proofs faster. Fabric Cryptography, Cysic, and Accseal are three companies making progress in this area. This is very valuable for L2s, but it is unlikely to be a decisive consideration for L1, as there is a strong desire for L1 to remain highly decentralized, which means that proof generation must be within reasonable reach of ordinary Ethereum users and should not be bottlenecked by a single company's hardware. L2s can make more aggressive tradeoffs.

There is more work to be done in these areas:

  • Parallelizing proofs requires that different parts of the proving system can share memory (like a lookup table). We know the techniques to do this, but we need to implement them.

  • We need to do more analysis to figure out the ideal set of gas cost variations that minimize the worst-case validation time.

  • We need to do more work on proof systems

Possible costs are:

  • Security and prover time: prover time can be reduced by choosing a more aggressive hash function, a more complex proof system, more aggressive security assumptions, or other design choices.

  • Decentralization and prover time: the community needs to agree on the "specs" of the prover hardware being targeted. Is it acceptable to require provers to be large-scale entities? Do we want a high-end consumer laptop to be able to prove an Ethereum block in 4 seconds? Something in between?

  • The extent of breaking backward compatibility: other deficiencies could be compensated for by more aggressive gas cost changes, but this is more likely to disproportionately increase costs for some applications, forcing developers to rewrite and redeploy code to remain economically viable. Likewise, both tools have their own complexities and drawbacks.

How does it interact with the rest of the roadmap?

The core technology required to achieve L1 EVM validity proofs is largely shared with two other areas:

  • L2 Validity Proof (aka ZK Rollup)

  • Stateless verification using STARKs over binary hash trees

Successfully implementing validity proofs on L1 will eventually enable easy solo staking: even the weakest computers (including phones and smartwatches) will be able to stake. This further increases the value of addressing other limitations of solo staking, such as the 32 ETH minimum.

Additionally, L1’s EVM validity proof can significantly increase L1’s gas limit.

Proof of Consensus Validity

What problem are we trying to solve?

If we want to fully verify an Ethereum block using SNARKs, then the execution of the EVM isn’t the only part we need to prove. We also need to prove consensus, the part of the system that handles deposits, withdrawals, signatures, validator balance updates, and other elements of Ethereum’s Proof of Stake component.

Consensus is much simpler than the EVM, but the challenge is that we do not have L2 EVM rollups that would do most of this work anyway. Therefore, any implementation that proves Ethereum's consensus needs to be built from scratch, although the proof systems themselves are a shared effort that can be built upon.

What is it and how does it work?

The beacon chain is defined as a state transition function, just like the EVM. The state transition function consists of three main parts:

  • ECADD (for verifying BLS signatures)

  • Pairing (for verifying BLS signatures)

  • SHA 256 hashes (used for reading and updating the state)

In each block, we need to prove 1-16 BLS 12-381 ECADDs per validator (possibly more than one, as signatures may be included in multiple sets). This can be compensated by subset precomputation techniques, so we can say that each validator only needs to prove one BLS 12-381 ECADD. Currently, there are 30,000 validator signatures per slot. In the future, as single-slot finality is achieved, this may change in two directions: If we take the brute force route, the number of validators per slot may increase to 1 million. At the same time, if Orbit SSF is adopted, the number of validators will remain at 32,768, or even decrease to 8,192.


How BLS aggregation works: verifying the aggregate signature requires only one ECADD per participant, rather than one ECMUL. But 30,000 ECADDs is still a lot of proving work.

In terms of pairings, there are currently a maximum of 128 attestations per slot, which means 128 pairings need to be verified. With EIP-7549 and further modifications, this can be reduced to 16 per slot. The number of pairings is small, but each is extremely costly: a pairing takes thousands of times longer to run (or prove) than an ECADD.
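
A rough sketch of the per-slot proving workload implied by these figures; the relative cost of a pairing (5,000 ECADD-equivalents below) is an assumed order of magnitude, since the text only says "thousands of times" more expensive:

```python
SIGS_PER_SLOT = 30_000      # validator signatures per slot today
PAIRING_COST = 5_000        # assumption: ECADD-equivalents per pairing

for pairings in (128, 16):  # today vs. after EIP-7549 and further changes
    work = SIGS_PER_SLOT + pairings * PAIRING_COST
    print(f"{pairings:>3} pairings -> ~{work:,} ECADD-equivalents per slot")

# Possible single-slot-finality futures change the signature count itself:
for n in (1_000_000, 32_768, 8_192):
    print(f"{n:>9,} validators -> {n:,} ECADDs per slot")
```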

A major challenge in proving BLS 12-381 operations is that there is no convenient curve whose order equals the BLS 12-381 field size, which adds considerable overhead to any proof system. On the other hand, the Verkle trees proposed for Ethereum are built on the Bandersnatch curve, making BLS 12-381 itself the natural curve for proving Verkle branches in a SNARK system. A naive implementation can prove about 100 G1 additions per second; making proving fast enough will almost certainly require clever techniques such as GKR.

The worst case for SHA 256 hashing right now is the epoch transition block, where the entire validator short-balance tree and a large number of validator balances are updated. The short-balance tree stores one byte per validator, so about 1 MB of data is rehashed, corresponding to 32,768 SHA 256 calls. If a thousand validators have balances above or below a threshold, the effective balances in the validator records need to be updated, corresponding to a thousand Merkle branches, so perhaps ten thousand hashes. The shuffling mechanism requires 90 bits per validator (thus 11 MB of data), but this can be computed at any point during an epoch. With single-slot finality, these numbers may increase or decrease depending on the details: shuffling becomes unnecessary, although Orbit may restore the need for it to some extent.
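
The hash counts in that paragraph can be reproduced with a few lines of arithmetic (illustrative only; the Merkle branch depth per balance update is an assumed round number):

```python
VALIDATORS = 2**20                     # ~1 million
CHUNK = 32                             # bytes per SHA 256 input chunk

short_balance_bytes = VALIDATORS * 1   # one byte per validator, ~1 MB
print(short_balance_bytes // CHUNK)    # 32,768 SHA 256 calls to rehash it

crossings = 1_000                      # validators crossing a balance threshold
branch_depth = 10                      # assumed hashes per Merkle branch update
print(crossings * branch_depth)        # ~10,000 extra hashes

shuffle_bits = 90 * VALIDATORS         # 90 bits per validator
print(shuffle_bits // 8 // 2**20, "MB")  # ~11 MB, computable any time in the epoch
```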

Another challenge is the need to re-fetch all validator states, including public keys, to validate a block. For 1 million validators, just reading the public keys would require 48 million bytes, plus the Merkle branches. This would require millions of hashes per epoch. If we had to prove the validity of PoS, a realistic approach would be some form of incrementally verifiable computation: store a separate data structure within the proof system that is optimized for efficient lookups, and prove updates to that structure.

In summary, the challenges are numerous. Most effectively addressing them will likely require a deep redesign of the beacon chain, which could happen in parallel with the move to single-slot finality. Features of such a redesign could include:

  • Changes in hash functions: today we use the full SHA 256 hash function, so each call corresponds to two calls of the underlying compression function because of padding. We could get at least a 2x gain by using the SHA 256 compression function directly. We could get perhaps a 100x gain by using Poseidon instead, completely solving our problems (at least on the hashing side): at 1.7 million hashes per second (54 MB/s), even a million validator records can be read into a proof in a few seconds.

  • In the case of Orbit, the shuffled validator records are stored directly: if a certain number of validators (say 8,192 or 32,768) are chosen as the committee for a given slot, they are placed directly next to each other in the state, so that minimal hashing is needed to read all the validators' public keys into the proof. This also allows all balance updates to be done efficiently.

  • Signature aggregation: Any high-performance signature aggregation scheme will involve some kind of recursive proof, where different nodes in the network will perform intermediate proofs on subsets of signatures. This naturally distributes the proof work to multiple nodes in the network, greatly reducing the workload of the final prover.

  • Other signature schemes: for Lamport + Merkle signatures, we need 256 + 32 hashes to verify one signature; multiplied by 32,768 signers, this gives 9,437,184 hashes. Optimizations to the signature scheme can improve this further by a small constant factor. With Poseidon, this is provable within a single slot; in practice, however, a recursive aggregation scheme would be faster.
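
The arithmetic behind that last bullet, using the 1.7M-hashes-per-second Poseidon figure quoted earlier in the article:

```python
HASHES_PER_SIG = 256 + 32              # Lamport check + Merkle path
SIGNERS = 32_768
total = HASHES_PER_SIG * SIGNERS
print(f"{total:,} hashes")             # 9,437,184

POSEIDON_RATE = 1_700_000              # hashes/sec on a laptop (earlier section)
print(f"{total / POSEIDON_RATE:.1f} s")  # ~5.6 s: fits within one 12 s slot
```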

What are the connections with existing research?

What work is still to be done and how to make choices:

In reality, it will take us years to get validity proofs for Ethereum's consensus. This is roughly the same timeline as we need to implement single-slot finality, Orbit, changes to the signature algorithm, and the security analysis required to be confident enough to use a "radical" hash function like Poseidon. Therefore, the most sensible approach is to solve these other problems and keep STARK-friendliness in mind while doing so.

The main trade-off will likely be in the order of operations, between a more incremental approach to reforming Ethereum's consensus layer and a more radical "change many things at once" approach. For the EVM, an incremental approach makes sense, as it minimizes disruption to backwards compatibility. For the consensus layer, the backwards compatibility impact is smaller, and there are benefits to a more holistic rethink of the various details of how the beacon chain is built, to best optimize for SNARK-friendliness.

How does it interact with the rest of the roadmap?

STARK-friendliness must be a top consideration in any long-term redesign of Ethereum PoS, especially single-slot finality, Orbit, signature scheme changes, and signature aggregation.


