
A 10,000-word review of the Ethereum Foundation's latest AMA: ETH value, the current status of the foundation, the future of the mainnet, L2 development and research focus


Original | Odaily Planet Daily ( @OdailyChina )

Author | Fu Howe ( @vincent31515173 )


Starting on September 4, the Ethereum Foundation (EF) research team held its 12th AMA on the Reddit forum. Community members could post questions in the thread, and research team members answered them. Odaily Planet Daily has compiled the questions and technical points covered in this AMA.

The following is the original content, compiled by Odaily Planet Daily, with summaries added at the start of each topic to help readers get up to speed quickly.

About ETH value accrual and its impact on EF

Members of the Ethereum Foundation believe that value accrual to ETH is crucial to Ethereum's success. As money, ETH backs decentralized stablecoins and provides economic security for the network. Justin Drake emphasized that ETH must become the programmable money of the Internet, with value accruing through total fees and a monetary premium. Growth in ETH's value supports the security and economic activity of the Ethereum ecosystem, helping Ethereum become a global financial platform. Although individual researchers differ in their views, they generally agree that value accrual to ETH is indispensable.

Question 1: What is the value accrual thesis for the ETH asset in 2024? Does the Ethereum Foundation believe continued value accrual to ETH is important? If the rest of the roadmap is executed, and the result is that rollups form a diverse ecosystem on Ethereum L1, there are a large number of DApps on L2, and users pay less than a cent in fees, but almost no value accrues to ETH, would the Ethereum Foundation consider that a successful implementation of the Ethereum roadmap?

Justin Drake – Ethereum Foundation:

First of all, I think ETH is money.

Second, value accrual to ETH is crucial to Ethereum's success. Ethereum cannot become the settlement layer of the Internet of Value unless ETH becomes the programmable money of the Internet. A monetary premium (potentially tens of trillions of dollars) will only accrue to a special asset. This monetary premium is necessary for:

  • Economic Bandwidth: Decentralized Stablecoins (Trillions of Dollars).

  • Economic Security: Providing unquestionable security against nation-state threats.

  • Economic Significance: Attracting the attention of major economies.

Ultimately, ETH's value accrual comes down to money flows and monetary premium. It's the total fees that matter, not the fee per transaction. Even if fees were less than a cent per transaction, billions of dollars in revenue could still be generated at 10 million transactions/second. For example, at $0.002/transaction, that's about $2 billion in daily revenue. In addition, the share of ETH used as collateral, such as backing DeFi, also matters.
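To make the arithmetic concrete, here is a back-of-the-envelope check of the figures above (the throughput and fee values are the ones Drake quotes, not measured data):

```python
# Back-of-the-envelope check of the fee arithmetic quoted above.
tps = 10_000_000                 # hypothetical transactions per second
fee_usd = 0.002                  # hypothetical fee per transaction
daily_revenue = tps * fee_usd * 86_400   # seconds per day
print(f"${daily_revenue / 1e9:.2f}B per day")  # ~$1.73B, i.e. "about $2 billion"
```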

Ethereum is building a financial platform on which financial assets can be issued and traded and derivatives created. These activities are valuable; the value-capture mechanism is uncertain, but it may be fee-based. In the rollup roadmap, the Ethereum mainnet will be the intersection of high-value activity, and L1 scaling is necessary. If the current mechanism is not the optimal way to accrue value, there are other interesting alternatives, such as data availability fees, or ETH as the main medium of exchange and collateral.

Anders Elowsson – Ethereum Foundation:

When Ethereum facilitates sustainable economic activity, value will accrue to ETH. Sustainable means it brings utility to participants and lasts over time. In that case ETH, as the trustless asset of the Ethereum ecosystem, will accrue value. Settlement payments are completed in ETH, and the ETH burn mechanism distributes value to all holders. ETH's value accrual is critical to Ethereum's security, because that security is guaranteed by staked ETH.

Ideally, ETH as money should hold its value over the long term. In a decentralized economy, a reliable, trustless currency has tremendous value. The value that accrues to ETH therefore makes Ethereum a better platform. In addition, a large share of future investment may be held in the form of ETH, including the Ethereum Foundation's treasury.

In the long run, there is a direct link between Ethereum facilitating sustainable economic activity and value accruing to ETH: if Ethereum is designed to facilitate sustainable economic activity, value accrual to ETH will follow.

Question 2: Is it important for the Ethereum Foundation to drive the value of the ETH token?

Justin Drake – Ethereum Foundation:

EF has about 300 people spread across dozens of teams. I cannot speak for EF as a whole, or even for the EF research team (38 people).

My personal view is that the ETH token is crucial to Ethereum's success. ETH becoming valuable, even extremely valuable, would set off a positive chain reaction:

  • Economic Bandwidth: At the core of decentralized stablecoins is ETH, which is critical to the rise of DeFi and Ethereum.

  • Economic Security: Trillions of dollars in staked ETH protect against the world’s most powerful forces.

  • Economic Significance: Once ETH surpasses BTC, Ethereum and ETH will become an unstoppable force.

Discussion on Ethereum Foundation Funding, Core Development and DeFi

Members of the Ethereum Foundation expressed similar views on fund management. Vitalik Buterin mentioned that the Foundation spends 15% of its remaining funds each year to ensure its long-term existence. Justin Drake expects that EF still has about 10 years of operating funds, though this will fluctuate with the price of ETH.

Regarding core development, Vitalik Buterin and Carl Beekhuizen emphasized that core developers are not limited to EF researchers; many independent developers are also involved.

Additionally, Vitalik Buterin believes that there is a shortage of Ethereum developers.

Finally, EF does not have a unified view on DeFi, but individual researchers believe that DeFi is an important use case on Ethereum, especially in terms of decentralized stablecoins and liquidity provision for financial activities.

Question 1: How long will it take for the Ethereum Foundation’s current funds to run out? When this happens, what does the Ethereum Foundation plan to do?

Vitalik Buterin: The current budget strategy is to spend 15% of remaining funds each year. This means EF will exist forever, but its influence in the ecosystem will shrink over time.

Justin Drake: Financial reports along these lines should be released soon. EF spends about $100 million per year (see this tweet from Aya). EF's main Ethereum wallet holds about $650 million. EF also holds a fiat buffer covering several years of operating expenses (as Aya mentioned, ETH sales were temporarily paused for regulatory reasons, so the buffer was only recently replenished). All told, EF has roughly 10 years of runway, though this will move with the price of ETH.
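As a rough sanity check of these figures, here is a minimal runway model (all inputs are the approximate numbers quoted above; the fiat buffer is ignored, which is why the simple division comes out below Drake's ten-year estimate):

```python
# Simple runway arithmetic from the figures quoted above.
treasury_usd = 650e6      # EF's main Ethereum wallet, approximate
annual_spend = 100e6      # approximate yearly spend
print(treasury_usd / annual_spend, "years, before counting the fiat buffer")  # 6.5

# Under the 15%-of-remaining rule Vitalik describes, funds are never fully
# exhausted; spending just shrinks geometrically year over year.
funds = treasury_usd
for year in range(1, 4):
    spend = 0.15 * funds
    funds -= spend
    print(f"year {year}: spend ${spend/1e6:.0f}M, ${funds/1e6:.0f}M left")
```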

Question 2: Is Ethereum Foundation Research the same as "core development"? Or is "core developers" a looser term for anyone who contributes to the protocol?

Vitalik Buterin: There are many core developers outside of EF; the most notable examples are members of the various Ethereum client teams (such as Nethermind, Besu, and Nimbus). There are also many independent researchers and contributors on specific topics (for example, some people from Optimism and Base contributed to the 4844 deployment).

Carl Beekhuizen: EF Research is different from core developers. Core developers are people who contribute to the clients or tooling in some capacity; they are a self-organizing group with no fixed boundary. People who join the All Core Devs (ACD) calls are often considered core developers, but that is neither a necessary nor a sufficient criterion.

Question 3: What does the Ethereum Foundation think of DeFi? Does it consider DeFi to be the most valuable use case on Ethereum? Why doesn’t EF talk to teams like Maker, Aave, and Comp?

Dankrad Feist: EF does not have a unified view on this; Ethereum researchers have their own opinions, and this is mine. I like DeFi, but it alone cannot solve all of Ethereum's problems. Financial markets do not create value by themselves, but by providing services such as liquidity and insurance they can create more value for society.

The most valuable contribution of DeFi on Ethereum is decentralized stablecoins. I hope these stablecoins can become a purely crypto-native medium of exchange, but they face severe scaling limitations, so custodial solutions are more popular right now. Nonetheless, I think there is great value in having decentralized, censorship-resistant alternatives.

In addition, DeFi currently lacks valuable assets. I believe that once DeFi is fully developed, it will make Ethereum the center of future financial activities, but there is still a lot of work to be done.

As for interaction with projects, I have talked to many DeFi projects. My daily work is mainly infrastructure construction, so I have less contact with DeFi projects, but we do interact.

Julianma: I personally think DeFi is a very valuable use case on Ethereum and a fascinating application area. I have been researching DeFi-related topics for the past year, such as application-layer MEV minimization. We interact with DeFi teams regularly; for example, ETHconomics organized a conference on automated market makers and invited excellent speakers from DeFi teams.

Question 4: Is Ethereum facing a shortage of manpower in its development?

Vitalik Buterin: There is a clear shortage of people in the p2p networking space, and the issue is rarely discussed.

EF Research: Core development does need more people, especially in important areas like fork choice, which urgently need more attention and contributors.

About the future development of Ethereum mainnet

In the discussion about the future development of Ethereum, members of the Ethereum core team explored several key issues. First, regarding scaling Ethereum Layer 1, Vitalik Buterin mentioned that in the short term the storage burden on full nodes will be reduced by implementing EIP-4444 (historical data expiry), with performance further improved through Verkle trees and ZK-SNARKing the EVM. Justin Drake said the long-term plan is almost unlimited L1 EVM scaling through SNARKs, and raised ideas for enhancing EVM execution, such as EVM-MAX and SIMD extensions. Dankrad Feist added that scaling Layer 1 execution is one of the goals, but rollups will remain the main scaling path.

Regarding the Ethereum data availability market and the blob fee pricing mechanism, Dankrad Feist discussed how prices might be adjusted if blobs fail to reach their target, suggesting that prices not be raised artificially for now so as not to hamper rollup development. Justin Drake believes demand for blobs will take time to grow, and noted that some rollup projects have found better ways to use blobs. Davide Crapis added that if blob demand stays below target, raising the minimum fee or speeding up fee updates should be considered to improve the mechanism.

Finally, Vitalik Buterin discussed how to reduce dependence on centralized infrastructure, suggesting that light clients become the standard configuration for consumer wallets and that light-client security guarantees be extended to Layer 2. As for whether Bitcoin implementing OP_CAT and developing a strong Layer 2 ecosystem would threaten Ethereum's position, Vitalik Buterin believes Ethereum still offers unique value, such as larger rollup DA space, a better proof-of-stake mechanism, and a more effective social layer, community, and culture.

Question 1: As Layer 2 solutions mature, are there any plans to further expand Ethereum’s Layer 1? If so, what methods are being considered?

Vitalik Buterin: Ethereum Layer 1’s expansion plan includes two main strategies:

Reduce full node load:

  • Implement EIP-4444 (historical data expiry): this proposal reduces the storage burden on full nodes by setting a data-retention period and pruning older data.

  • Verkle trees or hash-based binary trees: These data structures are designed to improve the efficiency of data storage and query speed, thereby reducing the burden on full nodes.

  • ZK-SNARKing the EVM: the ultimate goal is to verify EVM execution with zero-knowledge succinct non-interactive arguments (ZK-SNARKs), reducing the computational burden of verification. Together, these improvements pave the way for raising the gas limit in the short term; EIP-4444 is the most realistic near-term step because it requires no consensus-layer changes, only client-code adjustments.

Improve client execution capabilities:

  • Improve execution, the virtual machine, and precompiles: raise the EVM's execution efficiency and optimize the performance of the virtual machine and precompiles.

  • Optimize state reading/writing: Solve the inefficiency problem in the state reading and writing process.

  • Enhanced data bandwidth: Increase the bandwidth of network data transmission to support more transactions and smart contract operations.

There are known inefficiencies in all of these areas, and improvements will help raise the gas limit further.

Another consideration is adding functionality to the EVM to speed up specific computations. One suggestion is to combine EVM-MAX and SIMD (single instruction, multiple data) to provide numpy-like extensions that let the EVM do heavy cryptographic processing much faster. This would make cryptography-reliant applications cheaper, which is especially important for privacy protocols, and may reduce the cost of Layer 2 submissions to the chain, shortening deposit and withdrawal times.

Justin Drake: The long-term plan is virtually unlimited L1 EVM scaling through SNARKs. With real-time L1 EVM SNARKing, validators can verify cheap SNARKs instead of re-executing EVM transactions. This lets us raise the gas limit by multiple orders of magnitude without increasing the burden on validators. All heavy EVM execution would be done by specialized nodes (such as searchers, builders, and explorers), while users and consensus participants could run their nodes more easily, even on phones or watches.

Beyond the vertical scaling from a much higher L1 EVM gas limit, arbitrary horizontal scaling can also be achieved through an EVM-within-EVM precompile. This precompile would let developers programmatically launch new L1 EVM instances, unlocking a super-charged version of execution sharding in which the number of shards is no longer fixed at 64 or 1024 but unlimited, and each shard is a programmable rollup (with programmable governance, sequencing, and gas), called a native rollup.

Some notes:

  • calldata: SNARKs don’t help with calldata, and we may need to set a separate EVM gas limit for calldata.

  • State growth: to limit state growth, a separate EVM gas limit could also be set for state-growing opcodes. State processing itself is relatively cheap, so a limit may not be needed.

  • Physical Limits: Even if the gas limit is completely removed, L1 EVM execution still faces physical vertical scaling limitations. The good news is that projects like MegaETH claim to be able to push the EVM to 100,000 transactions per second, indicating that the L1 EVM may still have several orders of magnitude of growth. EVM performance optimization projects, such as Reth and Monad, will eventually have a positive impact on L1.

  • Diversity: To ensure that validators can safely rely on SNARKs instead of re-execution, we need diversity in zkEVM clients to hedge against SNARK errors. Currently the diversity of zkVM vendors and execution clients is roughly equal.

  • Formal Verification: Another long-term strategy to reduce SNARK errors is formal verification. Alex Hicks and his team are focused on accelerating formal verification of zkEVM and have a $20 million budget for grants and competitions. If you are an expert in formal verification, you can contact them.

  • Real-time proving: SNARK proofs must be generated fast enough (within about one slot) to be useful to validators. Proving speed could improve significantly with the advent of SNARK ASICs. Delaying the EVM post-state-root check by one block is another simple performance optimization that would help SNARKing.

Dankrad Feist: Within the rollup-centric roadmap, scaling Layer 1 execution should be a goal, but the two are not in conflict. Data availability can be scaled almost without limit; the ultimate ceiling is interest in Ethereum, that is, how many people are willing to seriously run full nodes and record all the data. Execution capacity will always face limits; the ultimate bottleneck is single-threaded execution. With zkEVMs and parallelization, we can increase L1 scalability by 10 to 1000 times. Rollups will provide the remaining scalability needed at world scale.

Question 2: Regarding the Ethereum data availability market and the blob fee pricing mechanism, how should the situation be handled if blobs do not reach their target?

Dankrad Feist: Ethereum is creating a new data availability market for rollups. Many alternative solutions (such as Celestia, EigenLayer, and Avail) hope to grab market share from Ethereum. Since these alternatives cannot compete with Ethereum on security, they will compete on price. We therefore should not raise prices artificially right away, lest we push our most important asset (secure rollups) away from Ethereum.

At 3 blobs per block, blob fees contribute little to Ethereum's protocol revenue. We should focus on scaling this functionality as much as possible and think about fee capture afterwards. Blob fees are not Ethereum's best value-capture mechanism anyway; the data availability market is too volatile to ever be an ideal way to extract value. Ethereum L1, as the natural financial intersection of the ecosystem, will host the highest-value transactions, and that is the best value-accrual mechanism for ETH.

Justin Drake: Blobs will not fail to reach their target. We just need to be patient; induced demand takes time to kick in. In addition, rollup projects (such as Base, Scroll, and Taiko) have recently found ways to use blobs more efficiently, which also stretches out the timeline for blob price discovery.

Davide Crapis: If demand for blobs is far below the target, it is reasonable for the price to stay low. However, this situation affects price discovery under congestion. We should make the mechanism more efficient, for example by raising the minimum fee or speeding up fee updates. See the relevant materials and recent proposals.

Question 3: Despite core developers/EF researchers insisting on limiting full node requirements to consumer hardware, 99% of Ethereum users do not run full nodes. How can we reduce dependence on centralized infrastructure?

Vitalik Buterin: We need to push for light clients to become standard for consumer wallets. Helios is constantly improving and will be ready for this. Another key part is extending the security guarantees of light clients to Layer 2. This is actually more practical and standardized on L2 than L1 because L2 already uses the L1 state as a constantly updated root of trust.

Question 4: If Bitcoin implements OP_CAT and develops a strong Layer 2 ecosystem, what unique value can Ethereum provide?

Vitalik Buterin:

  • More options for Layer 2 security thanks to a much larger rollup DA space (Bitcoin has only 4 MB per ~600 s block, about 6,667 bytes per second, and that assumes all on-chain data is used for DA; compare the 32 kB/s EIP-4844 status quo with the 1.3 MB/s long-term goal; the arithmetic is sketched in code after this list);

  • Proof of stake, which has demonstrated its ability to stay decentralized month after month and provides more options for recovering from 51% attacks;

  • A demonstrated, effective social layer: censorship fears, client-centralization fears, stake-pool market-share centralization fears, and many other issues have been resolved through coordinated ecosystem-wide action;

  • Community, culture, values, etc.
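For reference, the bandwidth comparison in the first bullet works out as follows (blob count and size per the EIP-4844 status quo; all other figures as stated above):

```python
# DA bandwidth comparison from the first bullet above.
btc = 4_000_000 / 600            # 4 MB per ~600 s block -> ~6,667 B/s
eth_now = 3 * 128 * 1024 / 12    # 3 blobs/slot x 128 kB / 12 s slot -> 32 kB/s
eth_goal = 1.3e6                 # stated long-term goal, bytes/s
print(f"BTC: {btc:,.0f} B/s | ETH today: {eth_now/1024:.0f} kB/s | "
      f"goal: {eth_goal/1e6:.1f} MB/s")
```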

About the Ethereum Foundation’s current research areas

The Ethereum Foundation is actively researching technological advances in multiple areas. Regarding zero-knowledge (ZK) proofs, George Kadianakis introduced research on uses of STARKs and SNARKs, such as recursive signature aggregation and achieving post-quantum security. Justin Drake mentioned that SNARKs have dramatically reduced proving costs, and highlighted the formal verification work on zkEVMs.

Regarding the Verifiable Delay Function (VDF), Antonio Sanso said that although it has not yet been implemented in Ethereum, the team is studying its potential applications, but it needs further improvement and evaluation.

Regarding maximal extractable value (MEV), Barnabé Monnot and s0isp0ke discussed research progress on solutions such as ePBS, execution tickets, and inclusion lists to reduce the impact of MEV and improve the network's censorship resistance.

Vitalik Buterin and Justin Drake believe that binary hash trees may be used instead of Verkle trees in the future to adapt to technological upgrades.

In addition, formal verification and verifiable computing are seen as key techniques for ensuring code correctness and facilitating interoperability between different programs.

Ethereum’s research progress on ZK

Question 1: What areas of zero-knowledge (ZK) research is the Ethereum Foundation (EF) currently working on, both theoretically and practically? Where can I find current/past ZK research conducted by the EF?

George Kadianakis: The Ethereum Foundation is currently working on different zero-knowledge (ZK) projects at various stages. Here are some examples of research projects related to L1:

  • Binary hash trees verified using STARKs for statelessness

  • Large-Scale Recursive Signature Aggregation Using Recursive SNARKs

  • Improving network-layer robustness through ZK and anonymous credentials

  • Using STARKs as a method for implementing post-quantum aggregatable signatures (alternative to BLS)

  • Providing privacy in a single-secret leader election design using ZK

  • L1 execution with ZK and zkEVMs (long term goal)

Justin Drake: I'm really excited about bringing SNARKs to the L1 EVM. We've made tremendous progress over the past few months. According to the latest data from Uma (of Succinct), the current cost of proving all L1 EVM blocks is about $1 million per year, and further optimizations will push this down. I expect that by this time next year the cost may be only $100,000 per year, thanks to SNARK ASICs and optimizations across the stack. The Ethereum Foundation is also accelerating formal verification of the zkEVM, a project led by Alex Hicks with a budget of $20 million.

For the beacon chain, our recent benchmarks have pulled forward the timeline for combining hash-based signatures with SNARKs. This is key to achieving post-quantum security for the beacon chain.

About Ethereum’s Research on VDF

Question 1: EF seems to be actively researching VDFs. Can you share how they might be used? Which VDFs are being considered? Do you have any improvements over current VDFs?

Antonio Sanso: In a recent statement published on Ethereum Research, the Ethereum Foundation's cryptography research team stressed that a deeper understanding of verifiable delay functions (VDFs) is needed before they can be integrated into Ethereum. The team currently does not recommend using VDFs in Ethereum, noting that further research and significant improvements are essential before this position can be revisited. The full statement can be found here.

Mary Maller discussed VDFs in a talk at Devconnect, which can be viewed here, and I presented on related topics at the 2024 IC3 Winter Retreat; event details can be found here.

Justin Drake: There are two aspects to VDFs:

  • Building production-grade VDFs as cryptographic primitives.

  • Using that primitive in applications.

On the application side, the motivating use case for VDFs is hardening RANDAO to obtain unbiasable randomness for leader election. IMO, VDFs are the endgame for L1 randomness and remain part of "the Splurge" in Vitalik's roadmap. So far there is no evidence that RANDAO is being abused, so VDF R&D has been deprioritized; other L1 projects (e.g., inclusion lists, stake caps, SNARKifying the L1) are more important.

Another important use case for VDFs is lotteries. There is an attractive opportunity to build a world lottery that is provably fair, global in scope, and commission-free. PM me if you want to build this 🙂 Another interesting VDF application that has emerged recently is facilitating simultaneous block publication in the context of multiple proposers.

As for the VDF primitive itself, this has proven much more difficult than I expected, but there is light at the end of the tunnel. We now have a MinRoot VDF ASIC that I believe is usable in production for lotteries, since theoretical analysis of MinRoot has not yielded a practical attack on 256-bit MinRoot. We now need a team to finish the integration work of verifying MinRoot SNARK proofs on-chain (e.g., Nova or STARK proofs). This is easy for BN254 MinRoot, but the Pasta curves require a wrapper SNARK.
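To illustrate the asymmetry a VDF relies on, here is a toy MinRoot-style iteration in Python. This is purely pedagogical: the field and constants are placeholder choices (real MinRoot targets specific fields such as BN254 or the Pasta curves), and a real deployment verifies a succinct SNARK rather than re-walking the chain as the checker below does:

```python
# Toy MinRoot-style VDF sketch (illustrative only, not the real parameters).
# Evaluation computes 5th roots, one slow sequential ~255-bit exponentiation
# per step; checking inverts each step with x**5, a few multiplications per
# step, so it is far cheaper (real systems replace even this with a SNARK).

P = 2**255 - 19               # a prime with gcd(5, P - 1) == 1
INV5 = pow(5, -1, P - 1)      # exponent that computes 5th roots mod P

def minroot_eval(x, y, steps):
    """Slow path: inherently sequential, one full exponentiation per step."""
    for i in range(steps):
        x, y = pow((x + y) % P, INV5, P), (x + i) % P
    return x, y

def minroot_check(x0, y0, xn, yn, steps):
    """Cheap path: walk backwards, inverting each step with x**5."""
    x, y = xn, yn
    for i in reversed(range(steps)):
        x_prev = (y - i) % P                   # invert the y-update
        y_prev = (pow(x, 5, P) - x_prev) % P   # invert the x-update
        x, y = x_prev, y_prev
    return (x, y) == (x0, y0)

xn, yn = minroot_eval(123, 456, 1_000)
assert minroot_check(123, 456, xn, yn, 1_000)
```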

About MEV

Question: What is the current direction of MEV research? I am a little confused by so many proposals, such as ePBS, Execution tickets, Inclusion lists, BRAID, PEPC, MEV-sharing, etc.

EF Research: Some of the terminology in MEV research can be confusing, so I will try to briefly define the concepts you mentioned:

  • ePBS (enshrined Proposer-Builder Separation): its main goal is to remove trust in third parties (such as relays) and let builders and proposers interact directly. A related EIP, EIP-7732, is currently under discussion, and a lot of work is going on in this area.

  • Execution Tickets (ETs) and Execution Auctions (EAs): these fall under the broader concept of "Attester-Proposer Separation (APS)", which aims to further separate consensus roles (such as proposing and attesting) to prevent negative effects of MEV, such as timing games, which could undermine consensus.

  • Inclusion Lists (ILs): these are intended to improve the network's censorship resistance, letting Ethereum's decentralized validator set better enforce the inclusion of transactions in blocks and limiting the power of builders. A lot of progress has been made here; the most recent proposal is FOCIL (Fork-choice Enforced Inclusion Lists), which has great potential.

  • BRAID: a new concept proposed by Max Resnick that aims to improve censorship resistance and address MEV by having multiple proposers run multiple parallel chains simultaneously. I recently wrote a note comparing FOCIL and BRAID, which can be found here.

  • PEPC (Protocol-Enforced Proposer Commitments): the purpose of this proposal is to give validators a protocol tool for making binding commitments about the blocks they produce. Barnabé's PEPC-FAQ provides a more detailed explanation, which can be found here.

  • MEV-share: a solution from Flashbots that lets users send transactions to the Flashbots RPC instead of the public mempool, avoiding MEV extraction and possibly earning a share of the MEV generated. Note, however, that this solution is centralized (users must trust Flashbots) and operates outside the protocol.

Barnabé Monnot: There are currently two main directions in MEV research:

  • Specific protocol upgrade proposals: such as ePBS and FOCIL (committee-based inclusion lists or a multi-proposer format). These are concrete plans currently being discussed and advanced.

  • Broader research directions: For example, Attester-Proposer Separation (APS), which covers the concepts of Execution Tickets and Auctions, and BRAID. I personally hope that specific work can support these more forward-looking research.

Additionally, we recently set up a tracking system for ePBS and are expanding it with more material. You can check the related notes to learn more.

About APS+FOCIL+ePBS and BRAID

Question 1: If APS + FOCIL + ePBS or BRAID can be effectively applied, how do you think it will help Ethereum?

s0isp0ke, EF Research: I recently wrote a note comparing FOCIL and BRAID:

  • FOCIL can be thought of as a gadget, or an add-on to the existing Ethereum protocol. It focuses on utilizing multiple validators to improve the network’s censorship resistance, but with minimal disruption to the current block market structure.

  • BRAID is much broader in scope, as it aims not only to improve CR, but also to solve MEV by trying to prevent any one proposer from having a privileged role or special advantage over other proposers. This involves building a protocol from scratch, adopting a new consensus mechanism, and making significant changes to the execution layer (e.g., sorting rules) and market structure.

Your question is hard to answer exactly because of the "if it can be effectively applied" part, but I think both approaches have merit, and the good news is that they are not mutually exclusive and are being worked on in parallel.

Justin Drake: I'm glad BRAID is being investigated, but as of today I'm fully in the FOCIL + APS camp.

I think the fundamental problem with BRAID is that it introduces a multi-block game that is potentially highly centralizing. It is the equivalent of the multi-slot games that can be played across sequential slots, but across the space dimension instead of the time dimension.

Assume we run BRAID with n = 4 concurrent proposers. If a large operator controls k > 1 of the proposers, proposer fairness breaks down:

  • k = 2: there is a "last look" attack vector. The basic idea is that one of the proposers acts conservatively and proposes a fat block on time to collect inclusion fees, while the other proposes a thin block at the attestation deadline containing a bunch of last-look MEV from the timing game.

  • k = 4: this is where things really go astray. A single entity occasionally wins full control of the slot and can extract the maximum MEV. This is highly centralizing, as large operators (such as Coinbase or Kiln) occasionally win the MEV jackpot while smaller operators only collect MEV dust.

  • k = 3: things get dangerous here too. For example, a large operator has an incentive to censor a fourth proposer it cannot control, essentially reverting to the k = 4 situation. It also has an incentive to collude with the fourth proposer, again because of the MEV jackpot.

julianma, EF Research: Mechan-stein (APS + FOCIL + ePBS) and BRAID are both very exciting directions. However, FOCIL + ePBS and BRAID are at very different stages of research and development. The former is well understood: there is a detailed description of FOCIL and an EIP for ePBS. The latter is an exciting new idea that still requires a lot of research.

I think Mechan-stein and BRAID don't have to be seen as competing with each other; rather, they are parallel explorations of how the blockchain could be built.

About Verkle trees and the state tree

Question 1: It is generally believed that the next hard fork after Pectra will be dedicated to Verkle trees. With the rapid development of ZK-proof technology, is there any advantage in making the current MPT SNARK-friendly?

Vitalik Buterin: I'm personally currently in favor of steering the post-Pectra forks toward various non-state-trie things, especially inclusion lists, perhaps Orbit (just the shuffling mechanism, without the SSF part) to let validators with (much) less than 32 ETH participate, and perhaps some EVM improvements or simplifications. That would give us breathing room to jump straight to binary hash tries for state in later forks.

We saw StarkWare demonstrate >600k Poseidon hashes per second on a CPU, but Poseidon is controversial due to its newness. That said, newer approaches (such as GKR) can deliver high enough performance even for more traditional hashes (BLAKE3, perhaps). So either more security analysis of Poseidon, a more mature GKR, or a third option (such as lattice-based hashing) could get us there.

Justin Drake: I agree with this sentiment, and I'm sure a few others do as well 🙂 My inclination would be to repurpose the statelessness work to use binary Merkle trees instead of Verkle trees. Much of the heavy lifting happens at the state-tree transition points, and the Verkle transition work can be reused for a binary Merkle tree.

SNARK proving is getting incredibly fast. In July, a laptop CPU was shown processing 1.2 million Poseidon2 hashes per second, which shifted the Overton window. That benchmark may already be outdated, especially once GPU acceleration enters the mix. Preliminary numbers from Eli Ben-Sasson at SBC suggest GPU acceleration provides a 5x speedup, even for SHA-256 binary trees.

IMO using GPU acceleration to achieve statelessness is perfectly fine, for two reasons. First, statelessness operates under an honest-minority assumption: we only need a small number of entities around the world computing statelessness SNARKs, and those entities need not be consensus participants. Second, as CPU SNARK proving continues its seemingly unstoppable exponential acceleration, the need for GPUs will naturally disappear over time.
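For readers unfamiliar with the data structure under discussion, here is a minimal binary Merkle tree sketch. It uses SHA-256 purely for convenience; the actual proposals differ in key encoding and tree layout and may use SNARK-friendly hashes such as Poseidon2, but the statelessness idea is the same: a verifier checks ~log2(n) sibling hashes instead of holding the whole state:

```python
# Minimal binary Merkle tree sketch (SHA-256 for convenience; illustrative).
import hashlib

def h(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(left + right).digest()

def build_layers(leaves):
    """Hash leaves, then pair-and-hash upward; requires a power-of-two count."""
    assert leaves and len(leaves) & (len(leaves) - 1) == 0
    layers = [[hashlib.sha256(x).digest() for x in leaves]]
    while len(layers[-1]) > 1:
        prev = layers[-1]
        layers.append([h(prev[i], prev[i + 1]) for i in range(0, len(prev), 2)])
    return layers

def prove(layers, index):
    """Collect the sibling hash at every level: the witness a stateless
    verifier needs alongside the leaf itself."""
    proof = []
    for layer in layers[:-1]:
        proof.append(layer[index ^ 1])
        index //= 2
    return proof

def verify(root, leaf, index, proof):
    acc = hashlib.sha256(leaf).digest()
    for sibling in proof:
        acc = h(acc, sibling) if index % 2 == 0 else h(sibling, acc)
        index //= 2
    return acc == root

layers = build_layers([b"a", b"b", b"c", b"d"])
root = layers[-1][0]
assert verify(root, b"c", 2, prove(layers, 2))
```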

On the application of formal verification and verifiable computation in Ethereum

Question 1: What do you think the future of formal verification and verifiable computation looks like in the Ethereum ecosystem, especially its potential impact on interoperability and bringing non-Solidity developers into the ecosystem?

Justin Drake: Formal verification and verifiable computation are tied to a common goal: we want to be able to trust the code that everyone in the network is running. That is the core reason for using a blockchain. Verifiable computation gives us cryptographic proofs of program execution, and with a zkVM we can do this for any program that compiles to an underlying ISA such as RISC-V. This is where formal verification comes in. First, zkVMs are complex, so you want to make sure their implementations are sound. Second, if you are running a particularly non-trivial program (say you are running the EVM, so you have a zkEVM), you also need to make sure it is a correct implementation of the EVM. I also want to emphasize that formal verification is not just about checking the correctness of what we already have; it also lets us optimize further and squeeze better performance out of code that would otherwise be hard to audit, while keeping correctness guarantees.

Regarding interoperability and bringing non-Solidity developers into the ecosystem, I think both help. Verifiable computation removes the need to re-execute computations: if you have a SNARK proof that a program executed on a certain VM, you can verify it on another VM (perhaps with a different ISA). This helps interoperability and gives developers more flexibility. Formal verification doesn't help directly, but I think it leads to some interesting things. If program verification becomes cheap, for example through automation with solvers or AI, it becomes easier to generate code that can be checked for correctness and safely deployed, to translate code from any language into Solidity with guarantees that the program's semantics are preserved, or to prompt an LLM to generate a contract together with a proof that it implements the required specification.

Ethereum’s measures to maintain credible neutrality

The Ethereum Foundation is taking multiple steps to ensure the credible neutrality of the Ethereum network. To enhance censorship resistance, the Foundation is working on the inclusion list (IL) mechanism, which lets a decentralized validator set enforce the inclusion of transactions in blocks, reducing reliance on a small number of sophisticated entities, such as those that might censor transactions involving sanctioned addresses. Specific proposals include Fork-choice Enforced Inclusion Lists (FOCIL), which aim to make this mechanism more effective.

Researchers are also exploring other approaches, such as the rainbow staking proposal, which recommends that the protocol introduce multiple categories of service providers to keep the validator set diverse and decentralized, thereby reinforcing neutrality. These measures all aim to ensure that Ethereum stays neutral and impartial in the face of government pressure.

Question 1: Given that governments can pressure validators to censor specific transactions (such as those involving sanctioned addresses or smart contracts), what is the Ethereum Foundation (EF) doing to ensure that Ethereum remains credibly neutral?

Justin Drake: The Ethereum Foundation is working to enhance the network's censorship resistance (CR) by iterating on the design of "inclusion lists" (ILs). ILs let a decentralized validator set enforce the inclusion of transactions in builders' blocks, reducing reliance on a small number of sophisticated entities that might decide which transactions enter Ethereum blocks (e.g., censoring transactions that interact with sanctioned addresses). Our latest proposal is Fork-choice Enforced Inclusion Lists (FOCIL); see the FOCIL proposal.

Barnabé Monnot: Ethereum Foundation researchers are exploring multiple approaches to ensure credible neutrality. Specifically, inclusion list mechanisms allow more participants to contribute to block construction, reflecting the preferences of multiple people. As long as the validator set can show diverse preferences (i.e. a decentralized validator set), these approaches are effective in ensuring credible neutrality.

In addition, staking economics is one of the key factors in ensuring credible neutrality. In particular, the rainbow staking proposal suggests the protocol can include multiple categories of service providers, without expecting every staker to provide every service. This division of labor may give the protocol a set of providers focused on including transactions that others might miss. For more information, see the rainbow staking proposal.

On the problem of Ethereum over-issuance and its solutions

When discussing the issue of Ethereum over-issuance, Justin Drake said that current proposals include adjusting the issuance reward curve, setting an economic cap, capped issuance, and minimum viable issuance. Progress on these proposals is limited mainly by social coordination: the community needs to reach consensus and push the relevant Ethereum Improvement Proposals (EIPs) forward.

Anders Elowsson gave a more detailed answer describing the problems involved. A PID controller, as a tool for adjusting staking rewards, can dynamically adjust the yield to balance where the supply curve and the reward curve intersect. However, PID controllers also have drawbacks: they may push yields too low or issuance too high, raising costs for users. To address these problems, researchers are exploring solutions such as MEV burn.

Regarding the staking ratio, designing a smart issuance curve to cope with high staking ratios (such as 50%) is feasible, but actual progress depends on community support and coordination. Staking participation grows gradually and is expected to keep rising over the next few years; the pace of that growth can be slowed by moderately reducing issuance. Overall, future progress will depend on the community's acceptance of the proposals and the speed of implementation.

Question: How close are we to a proposal that fixes Ethereum's over-issuance? Could we use a PID controller, as Rai does, to target the staking ratio instead of a fixed issuance curve? How much time do we have before the staking ratio reaches a highly undesirable level like 50%?

Justin Drake: Designing a smart issuance curve that gradually falls to zero around a soft cap (such as one-quarter, one-third, or one-half of ETH staked) is the obvious choice. The main bottleneck is social coordination: it takes a smart, motivated person to push the EIP (Ethereum Improvement Proposal) through to deployment. I expect the community would support this.
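As a shape-only illustration of "falls to zero around a soft cap", here is a hypothetical curve: today's sqrt-shaped issuance multiplied by a factor that tapers linearly to zero at the cap. The constants are made up for illustration and do not come from any EIP:

```python
# Hypothetical soft-capped issuance curve (shape illustration only).
import math

TOTAL_SUPPLY = 120_000_000  # ETH, approximate

def annual_issuance(staked_eth, cap_ratio=0.5, k=166.0):
    """sqrt-shaped base curve (like today's) times a linear taper that hits
    zero when the staking ratio reaches cap_ratio. k is an illustrative
    constant chosen so the untapered curve is roughly in today's ballpark."""
    d = staked_eth / TOTAL_SUPPLY
    taper = max(0.0, 1.0 - d / cap_ratio)
    return k * math.sqrt(staked_eth) * taper

for staked in (20e6, 40e6, 55e6, 60e6):
    print(f"{staked/1e6:.0f}M staked -> {annual_issuance(staked)/1e3:,.0f}k ETH/yr")
```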

Anders Elowsson:

1. Over-issuance problem

Currently, there are multiple proposals under discussion on how to adjust Ethereum's issuance strategy. These include adjusting the issuance reward curve, setting an economic cap (targeting), capped issuance, and minimum viable issuance (MVI). Related research articles and FAQs explore these options.

Right now we need to build momentum within the Ethereum community for reducing issuance and discuss in depth how far to reduce it. Since issuance policy is very sensitive, reaching consensus will help advance the relevant proposals.

2. Application of PID controller

A PID controller can serve as a tool for adjusting staking rewards, using a target stake amount or ratio to adjust the reward curve. In the long run, its main advantage is the ability to dynamically adjust the yield to balance where the supply curve and the reward curve intersect. However, this approach also has drawbacks:

  • Too low a yield: if the yield is pushed too low, solo stakers may exit because of their high fixed costs, and the issuance yield could even turn negative, i.e., fees deducted from stakers every epoch.

  • Too high an issuance: Setting a target that is too high may result in too many tokens being issued, increasing user costs.

A PID controller may try to set the issuance yield negative, but this brings additional problems such as consensus instability and increased reward variability. The best way to address these issues is to explore MEV burn or other mechanisms that prevent proposers from extracting MEV, but these are still under research.
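To make the controller idea concrete, here is a minimal PID sketch. Everything here is hypothetical (the gains, the target, the crude supply response); it only illustrates the loop Elowsson describes, including a floor that sidesteps the negative-yield regime:

```python
# Minimal PID sketch for steering the staking ratio d toward a target
# (hypothetical gains and dynamics; not a concrete EF proposal).

def pid_yield(d_history, d_target, y_prev, kp=0.05, ki=0.005, kd=0.02, floor=0.0):
    """Nudge the issuance yield based on the staking-ratio error."""
    error = d_target - d_history[-1]          # >0 means too little stake
    integral = sum(d_target - d for d in d_history)
    deriv = d_history[-1] - d_history[-2] if len(d_history) > 1 else 0.0
    y = y_prev + kp * error + ki * integral - kd * deriv
    # The floor avoids the negative-yield regime discussed above, in which
    # fees would be deducted from stakers every epoch.
    return max(y, floor)

# Toy loop: stake above target -> the controller pushes the yield down.
d_hist, y = [0.30], 0.03
for _ in range(5):
    y = pid_yield(d_hist, d_target=0.25, y_prev=y)
    d_hist.append(d_hist[-1] + 0.5 * (y - 0.03))   # crude supply response
    print(f"yield {y:.4f}, staking ratio {d_hist[-1]:.4f}")
```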

3. Risk of excessive issuance

A fixed reward curve cannot cap the amount issued. If the target is set too high, too many tokens may be issued, raising costs for users. In the long run, Ethereum's reward curve needs to optimize over all known trade-offs between yield and staking participation, reflecting their derived utility.

4. The challenge of discouragement attacks

Stakers may face discouragement attacks, in which an attacker profits by depriving honest participants of their rewards. To counter such attacks, the protocol can target a fixed participation level, so that yields rise when participation falls, strengthening the incentive to remain staked. However, such measures can also lead to suboptimal reward curves.

5. The potential of dynamic approaches

Although PID controllers have their flaws, their effectiveness can be improved with dynamic methods that let the reward curve adjust over longer time scales, addressing some of the controllers' shortcomings.

6. Discussion on the 50% staking limit

If the staking participation rate reaches 50%, it means more than half of potential ETH holders believe the risk/reward of staking is worthwhile. Participation grows gradually, and a moderate reduction in issuance can help slow this growth. While staking participation may keep rising over the next few years, its growth rate is expected to slow.

In general, although there are multiple ways to adjust Ethereum issuance and staking rewards, each method has its own advantages and disadvantages. Future progress will rely on community consensus and in-depth discussion of various proposals.

Question 2: Do you think ETH should be net deflationary in the long term? Before EIP-4844, users paid high fees and ETH was deflationary; after 4844, users pay lower fees and ETH is inflationary. How can we truly achieve both goals: (1) ETH is deflationary; (2) fees are low for ordinary users?

Justin Drake: Achieving long-term net deflation for ETH requires addressing several key points. First, deflation and low fees are both achieved through scaling, which supports sustainable economic activity. Ideally this means millions of users paying low transaction fees, with the Ethereum network securing those transactions, thereby raising total fees and achieving sustainable economic growth. Barnabé explained this in detail from a user perspective in an AMA three years ago.

When discussing the situation before and after EIP 4844, there are a few things to consider:

  • Scaling and fees: Layer 2 solutions are being built on Ethereum and were previously held back by high fees, but the existence of the roadmap promises lower fees ahead. Without those scaling promises, Layer 2 might not have developed to where it is today. Current gas prices reflect not only the past but also the promise of future scaling, which suggests that abandoning scaling would not make Ethereum net deflationary, since transaction demand is itself driven by future scaling plans.

  • Long-term deflation requirements: achieving permanent net deflation requires more than reduced issuance or increased fee burning; it also involves the long-term staking equilibrium. Today's equilibrium is set by the amount staked (deposit size D) rather than the staking ratio (deposit ratio d), and it is affected by the circulating supply. To make deflation permanent, D would need to be replaced by d in the reward-curve equation, normalizing by circulating supply, which requires consensus to begin tracking the circulating supply.

In summary, ETH’s long-term net deflationary state requires finding a balance between adjusting issuance policy and managing transaction fees to ensure sustainable economic activity and a low fee structure.

The Ethereum Foundation's discussion of L2

The discussion mainly revolved around the following: Base L2's gas limit and blob requirements, improvements to the L2 user experience, the relationship between L2s and Ethereum L1, the latest progress on based sequencing, and the incentives around decentralizing sequencers.

Regarding Base, Vitalik Buterin walked through the blob count required to reach the 1 Ggas/second target and discussed different routes to it, including increasing data bandwidth, optimizing data compression, and changing the architecture. Francesco further discussed estimating the number of blobs required and projecting blob demand from current usage patterns.

On improving the L2 user experience, Carl Beekhuizen said they are pushing L2 standardization to achieve feature compatibility across L2s and address fragmentation in the cross-L2 ecosystem. In addition, Vitalik Buterin and others are driving standardization work around wallets and bridges.

Regarding the relationship between L2s and Ethereum L1, Justin Drake responded to Max Resnick's criticism, arguing that L2s actually have an incentive to decentralize their sequencers to maximize fees, not to keep them centralized. He explained Base's revenue model and why the term "sequencer fee" can be misleading.

Finally, on based sequencing, Justin Drake surveyed several planned based rollup components and preconfirmation projects, as well as recent developments including a preconfirmation devnet and testnet. He also mentioned the upcoming Sequencing Week event and the outlook for future work. Vitalik Buterin believes that although decentralized sequencers matter, the short and medium term should focus on more practical issues, such as forced-inclusion channels and full trustlessness.

Question 1: Base L2 keeps raising its gas limit and is currently at about 1% of the 1 Ggas/second target. What target blob count is needed to support that goal, and on what timeline? If Base bought 100% of the blobs, how many would be needed? An untested assumption is that Base might switch to a validium or volition design.

Vitalik Buterin: Right now, the average block size is 70 kB and the average block gas consumption is 15 Mgas, i.e., about 214 gas per byte. At that ratio, 1 Ggas/second would require about 4.67 MB/second of data bandwidth, several times the 1.33 MB/second target we set for full DAS (the arithmetic is sketched in code after this list). If we want to get there, there are three ways:

  • Push DA bandwidth even higher than 16 MB/slot. This would require a lot of research and engineering work, though it is not impossible.

  • With ideal data compression, Base should be able to cut its on-chain data consumption by about 7x. That would reduce Base's requirement to about 667 kB/second, exactly half of Ethereum's data capacity.

  • Base could move to a Plasma-style architecture.
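Restating the arithmetic above in code (all inputs are the figures Vitalik quotes):

```python
# The arithmetic behind the three options above, restated.
gas_per_byte = 15e6 / 70e3          # 15 Mgas / 70 kB -> ~214 gas per byte
need = 1e9 / gas_per_byte           # bytes/s for 1 Ggas/s -> ~4.67 MB/s
full_das = 16e6 / 12                # 16 MB/slot over 12 s -> ~1.33 MB/s
print(f"needed: {need/1e6:.2f} MB/s, {need/full_das:.1f}x the full-DAS target")
print(f"with ~7x compression: {need/7/1e3:.0f} kB/s")   # ~667 kB/s
```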

Francesco: This is not easy to answer, because ultimately it depends on the average ratio of gas used to bytes published to L1, which itself depends on the type of activity consuming gas on L2 and on how well the achievable compression ratios work in practice. Nevertheless, we can try some guesses. Suppose we only consider ERC-20 transfers, which consume about 60k gas, and consider the compression of such transfers discussed here.

1 Ggas/s then corresponds to about 16k transfers/second: about 16 blobs/second (roughly 195 blobs/slot) in the "ERC-4337 with aggregation" scenario, or about 3 blobs/second (roughly 36 blobs/slot) in the best case with the compression estimates. The former would require about 50% more capacity than the 128 blobs/slot target that has been the goal for some time, while the latter would need only about a quarter of it.

Perhaps a more useful approach is to just look at current activity. According to this dashboard, at 10 Mgas/s Base has consumed 2,435 blobs in the last 12 hours at the time of writing, about 0.05 blobs/second. Projecting the same usage pattern to 1 Ggas/s, Base would use about 5 blobs/second, or roughly 60 blobs/slot, in between the two previous estimates. That makes intuitive sense: rollups are likely to carry considerably more complex activity, which can have a better gas/byte ratio, but on the other hand my understanding is that we are still far from optimal compression (no rollup has stateful compression yet). On average, for this kind of activity, a DA layer with 128 blobs/slot of capacity can support about 2 Ggas/s. In the medium to long term I expect this to improve, though probably not by much.
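The blob estimates above can be reproduced with rough arithmetic. The per-transfer byte sizes below are back-derived from Francesco's blob counts, so treat them as assumptions rather than quoted figures:

```python
# Rough reconstruction of the blob estimates above (byte sizes back-derived).
GAS_PER_TRANSFER = 60_000
BLOB_BYTES = 128 * 1024
transfers_per_s = 1e9 / GAS_PER_TRANSFER     # ~16.7k transfers/s at 1 Ggas/s

for label, bytes_per_transfer in [("baseline", 125), ("best-case compression", 24)]:
    blobs_per_s = transfers_per_s * bytes_per_transfer / BLOB_BYTES
    print(f"{label}: {blobs_per_s:.1f} blobs/s ~ {blobs_per_s * 12:.0f} blobs/slot")

# Projection from observed usage: ~0.05 blobs/s at 10 Mgas/s scales to
# ~5 blobs/s (~60 blobs/slot) at 1 Ggas/s.
print(0.05 * (1e9 / 10e6) * 12, "blobs/slot projected")   # 60.0
```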

Question 2: How to improve the user experience (UX) of L2 and the experience across L2?

Carl Beekhuizen: Ansgar, Yoav, and I have been working on the L2 standards forum to collaborate on features that span L2s. The idea is that if a specific L2 wants to ship a feature (e.g., multidimensional gas pricing), they can write it up as a Rollup Improvement Proposal (RIP), get feedback from other L2 teams on changes that would make it more useful to the broader industry, and then anyone else can implement the same functionality compatibly.

By providing a neutral venue for standards and discussion, we hope things get built in just one way, so that DApps, wallets, and users only have to understand one model that works across the L2 ecosystem.

Additionally, there has been a recent push by Vitalik Buterin and others to establish some standards around wallets and bridges, which should help address the application layer side of this fragmentation.

Question 3: What does EF think of Max Resnick's criticism that L2s have an increasingly parasitic relationship with Ethereum's L1? Why aren't L2s decentralizing faster? How can we incentivize them to move faster?

Justin Drake: I've watched 30 minutes of Max's recent Bankless episode and I believe he has it wrong. I've discussed this with him privately, so this post shouldn't come as a surprise. His core premise, which he has emphasized multiple times, is that L2s have no incentive to decentralize because they'd lose sequencing fees. He has also shared this view on Twitter, e.g., here and here. On the podcast and Twitter he specifically mentioned that Coinbase earns $200M in annual fees through Base. Counterintuitively, L2s are incentivized to decentralize sequencing precisely to maximize fees, the opposite of what Max claims 🙂

The term "sequencer fee" is indeed unfortunate, as it is misleading. 100% of Base's revenue comes from congestion fees: fees on Base are set by a gas mechanism mimicking L1's EIP-1559 (see the documentation here). The big difference is that the base fee is sent to a Coinbase wallet rather than burned as on L1.

Coinbase makes so much money because gas demand on Base exceeds Base's gas target. This is a VM throughput issue; congestion fees are essentially unrelated to ordering. If Base used a decentralized sequencer, Coinbase would still collect these congestion fees. For example, if Base used L1 validators for sequencing and became a based rollup, Coinbase would still collect execution congestion fees. Congestion fees derive from the Base VM gas target; the sequencer merely relays L2 congestion fees to users, playing a cosmetic role. The value comes from the mismatch between the Base VM gas target and blockspace supply and demand.
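For readers unfamiliar with the mechanism referenced here, this is the EIP-1559-style base-fee update in sketch form. The denominator is the L1 value and Base's exact parameters may differ; the point of the answer above is only that this fee responds to congestion, not to ordering (on L1 it is burned, whereas on Base it is sent to a Coinbase wallet):

```python
# EIP-1559-style base-fee update (simplified sketch; L1 uses denominator 8
# and additionally clamps the increase to be at least 1 wei).
BASE_FEE_MAX_CHANGE_DENOMINATOR = 8

def next_base_fee(base_fee: int, gas_used: int, gas_target: int) -> int:
    delta = gas_used - gas_target                 # congestion signal
    change = base_fee * delta // gas_target // BASE_FEE_MAX_CHANGE_DENOMINATOR
    return max(base_fee + change, 0)

fee = 10_000_000_000                                   # 10 gwei starting point
print(next_base_fee(fee, 2 * 15_000_000, 15_000_000))  # full block: +12.5%
print(next_base_fee(fee, 0, 15_000_000))               # empty block: -12.5%
```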

In my opinion, the only valid use of the term "sequencer fee" is MEV, where value capture genuinely derives from ordering, i.e., front-running and back-running strategically placed transactions. Base's sequencing is first-come, first-served with a private mempool; users send their transactions end-to-end encrypted to the Base sequencer. Coinbase does not capture MEV (there is no sequencer fee there), and I don't know of any L2 that captures MEV today. Capturing MEV would come at users' expense: swappers would be sandwiched, or DEX LPs would suffer more adverse flow, ultimately giving swappers worse prices. L2s naturally don't want to degrade execution quality for their users, so MEV extraction doesn't happen on L2s.

The story continues. It turns out that sequencer fees, besides being bad for execution quality within an L2, are also bad for cross-L2 composability. In fact, extracting MEV requires some kind of proprietary sequencing infrastructure, and shared sequencing between two proprietary sequencers is ruled out. Without shared sequencing, the gold standard of composability, synchronous composability, is lost (see the talk titled "Why is synchronization valuable?"). Eroded composability reduces the opportunities for cross-L2 transactions (e.g., from 1inch-style DEX aggregators), which ultimately reduces congestion fees. To maximize fees, L2s should maximize congestion fees, which means maximizing composability.

To maximize composability, we need a shared sequencer. How do we, as a community, coordinate on a canonical Ethereum-wide shared sequencer? Two competing L2s (e.g., Arbitrum and Base) will only agree on a credibly neutral shared sequencer, and IMO only a decentralized, permissionless sequencer can be sufficiently credibly neutral. As some of you know, I make an even stronger claim: in my opinion, the only credible Ethereum-wide sequencer is Ethereum itself, which introduces no new brand, no new token, and no new security assumptions.

Why haven't L2s decentralized faster? Decentralizing sequencers is hard and takes time. L2s currently use centralized sequencers as training wheels for three different things:

  • Security: a centralized sequencer prevents attackers from permissionlessly exploiting fraud-proof or SNARK bugs, even if those bugs are live on mainnet. A decentralized sequencer means having multi-proofs, formal verification, or some other security training wheel (like a TEE).

  • MEV: centralized sequencers provide a quick-and-dirty encrypted mempool to prevent MEV extraction. Decentralized sequencers mean having a proper encrypted mempool like SUAVE or some other suitable MEV pipeline.

  • Preconfs: centralized sequencers provide a fast user experience. Decentralized sequencers mean having low-latency consensus or using cryptoeconomic preconfirmations, which are being actively developed.

Correction: Base runs a priority fee auction with a first-come, first-served tiebreaker. This means CEX-DEX arbitrage MEV does accrue to Coinbase as a sequencer fee. Uniswap v4 hooks enable better DEX designs that do not leak MEV to the sequencer (instead kicking it back to LPs); see Sorella, for example.
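A sketch of the ordering rule described in this correction: a priority-fee auction with first-come-first-served as the tiebreaker (field names are hypothetical):

```python
# Priority-fee auction with FCFS tiebreak (sketch; field names hypothetical).
from dataclasses import dataclass

@dataclass
class Tx:
    priority_fee: int   # wei per gas offered to the sequencer
    arrival_ns: int     # when the sequencer first saw the transaction

def order_block(mempool: list[Tx]) -> list[Tx]:
    # Highest bid first, earlier arrival breaking ties. CEX-DEX arbitrageurs
    # bidding for top-of-block is what turns this into a de facto sequencer fee.
    return sorted(mempool, key=lambda tx: (-tx.priority_fee, tx.arrival_ns))

block = order_block([Tx(2, 300), Tx(5, 200), Tx(5, 100)])
assert [tx.arrival_ns for tx in block] == [100, 200, 300]
```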

Question 4: What are the latest advances in sequencing?

Justin Drake: We now have a based rollup on mainnet: Taiko. This slide shows that several L2s (Gwyneth, IntMax, Keyspace, Puffer, RISE) are also planning to go based. There is also a thriving industry of preconfirmation projects, including Bolt, Espresso, Interstate, Luban, Monea, Primev, Spire, and XGA. R&D is progressing steadily behind the scenes.

We have held 14 sequencing calls and 3 in-person sequencing days in London, Berlin, and Brussels. In June we launched a preconfirmation devnet, and in July the Helder testnet. In August, Bolt released an alpha of L1 inclusion preconfirmations. Commit-Boost, the neutral sidecar for proposer commitments, will soon undergo a security audit for mainnet use.

Next up is Sequencing Week at Edge City in Chiang Mai from November 4-8, and the fourth Sequencing Day in Bangkok during Devcon. Optimistically, we may get execution preconfirmations on Taiko this year. There is also solid progress on complementary infrastructure, such as real-time provers (TEE-based and SNARK-based) and cross-L2 security with AggLayer. I expect our work to start bearing fruit in 2025, and interest in based sequencing to grow. In the long term, there is also a push to shorten L1 slot times, which favors based sequencing.

Question 5: Do Rollups have an incentive not to decentralize their sequencers in order to retain sequencing fees?

Vitalik Buterin: Actually, I don't think decentralizing rollup sequencers is necessarily the top priority. For me, in the short and medium term, it is fine to focus on the following:

  • Forced-inclusion channels (these let an L2 inherit L1's censorship resistance)

  • Reaching Stage 2 (fully trustless, with any security council able to intervene only in the event of a provable error, e.g., two supposedly equivalent proof systems disagreeing, or a proof system accepting two different post-state roots for the same block)

Another thing I would add is that, to me, fee capture and sequencer decentralization are orthogonal. If the sequencer is centralized, you sequence yourself and collect fees + MEV (while spending the effort to figure out how to extract the MEV). If the sequencer is decentralized, you earn revenue by auctioning sequencing slots, which in equilibrium equals fees + MEV minus the cost of figuring out how to extract the MEV. The situations look symmetric.

The main asymmetry I see is probably social: while the sequencer is centralized, it's easier to quietly keep the fees + MEV rather than hand them to your token holders (or whatever distribution was publicly agreed with your community), whereas "doing the right thing" economically argues for decentralization. But I hope L2s don't actually stay centralized for that reason, and I hope the community (including organizations like L2BEAT) takes this into account and stays wary of it.

