Vitalik's new article: Decentralized accelerationism and the one-year outlook for artificial intelligence
Original title: d/acc: one year later
Original author: Vitalik Buterin, founder of Ethereum
Original translation: Leek, Foresight News
Abstract: This article focuses on the concept of decentralized acceleration (d/acc) and explores its application to technological development and the challenges it faces, including AI safety and regulation, its relationship with cryptocurrency, and public goods funding. It emphasizes the importance of d/acc in building a safer and better world, along with the opportunities and challenges ahead. The author elaborates on what d/acc means, compares different strategies for coping with AI risk, and discusses the value of cryptocurrency and the ongoing exploration of public goods funding mechanisms, before closing with an outlook on technological development: despite the challenges, humanity still has the opportunity to build a better world with the tools and ideas at hand.
Foreword
Special thanks to Liraz Siri, Janine Leger, and the Balvi volunteers for their feedback and review.
About a year ago, I wrote an essay about technological optimism, in which I described my general enthusiasm for technology and the enormous benefits it could bring, but also expressed caution on some specific issues, focusing primarily on superintelligent AI and the risks of destruction, or of an irreversible loss of human power, that could result if this technology were not built properly.
One of my core points in that article was the idea of decentralized, democratic, and differentiated defensive acceleration: accelerate technology, but focus on technologies that improve our ability to defend rather than to cause harm, and decentralize power rather than concentrating it in the hands of a few elites who judge right and wrong on behalf of everyone. The model of defense should be democratic Switzerland and the historically quasi-anarchic Zomia, not the lords and castles of medieval feudalism.
In the year since then, these ideas have grown and matured significantly. I shared them on 80,000 Hours and received many responses, mostly positive, though some critical.
The work itself continues to advance and produce tangible results: we’ve seen progress in the field of verifiable open source vaccines; people’s understanding of the value of healthy indoor air continues to deepen; “Community Notes” continue to play a positive role; prediction markets have had a breakthrough year as an information tool; zero-knowledge succinct non-interactive arguments of knowledge (ZK-SNARKs) have been applied in the fields of government identification and social media (and Ethereum wallets are secured through account abstraction); open source imaging tools have been applied in the fields of medicine and brain-computer interfaces (BCI), and so on.
Last fall, we had our first major d/acc event: d/acc Discovery Day (d/aDDy) at Devcon, a full day of speakers from all of the d/acc pillars (biological, physical, cyber, information defense, and neurotech). Over the years, people working on these technologies have become more aware of each other's work, and outsiders have become more aware of the greater vision: that the same values that drive Ethereum and crypto can be extended to the wider world.
The meaning and scope of d/acc
The year is 2042. You see a media report about a possible new outbreak in your city. You are used to seeing this kind of news: people tend to overreact to every animal disease mutation, which in the vast majority of cases never turns out to be an actual crisis. Two previous potential outbreaks were detected early and nipped in the bud by wastewater monitoring and open-source analysis of social media. This time, however, is different: prediction markets indicate a 60% chance of at least 10,000 cases, and that worries you.
Just yesterday, the genetic sequence of the virus was determined. A software update for the air tester in your pocket was released shortly afterwards, enabling it to detect the new virus (either from a single breath or after 15 minutes of exposure to room air). Meanwhile, open-source instructions and code to generate a vaccine using equipment accessible to any modern medical facility in the world are expected to be released within weeks. Most people are not taking any action yet, relying primarily on widespread air filtration and ventilation measures to keep themselves safe.
Because you have immune problems, you act more cautiously: your open-source local personal-assistant AI, in addition to handling routine tasks such as navigation and restaurant and activity recommendations, also takes real-time air-test data and CO2 data into account to recommend only the safest places. The data is contributed by thousands of participants and devices, and with the help of ZK-SNARKs and differential privacy, the risk of it being leaked or misused for other purposes is minimized (if you are willing to contribute data to these datasets, other personal-assistant AIs will verify that these cryptographic tools actually work).
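To make the privacy mechanism concrete, here is a minimal sketch of the Laplace mechanism, one standard way to implement the differential privacy this scenario relies on. The text names the technique only; this code, including the epsilon value and the simulated sensor counts, is an illustrative assumption rather than anything from the original.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Sample Laplace(0, scale) by inverse transform sampling.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(readings: list[bool], epsilon: float) -> float:
    # One participant changes the count by at most 1 (sensitivity 1), so
    # Laplace noise with scale 1/epsilon yields epsilon-differential privacy.
    return sum(readings) + laplace_noise(1.0 / epsilon)

# 1,000 simulated air sensors, ~3% detecting the pathogen; epsilon = 0.5.
readings = [random.random() < 0.03 for _ in range(1000)]
print(private_count(readings, epsilon=0.5))  # near the true count (~30), plus noise
```

The published aggregate stays useful for recommendations while any single contributor's reading is hidden in the noise; ZK-SNARKs would complement this by proving the aggregation was computed honestly.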
Two months later, the outbreak has fizzled out: it seems that 60% of the population followed the basic epidemic-prevention protocol, wearing masks when the air tester sounded the alarm and indicated the presence of the virus, and isolating at home after testing positive. That was enough to push the transmission rate, already greatly reduced by passive high-power air filtration, below 1. A disease that simulations suggested could have been five times worse than COVID-19 twenty years earlier ends up causing no serious harm.
d/acc Day at Devcon
One extremely positive outcome of the d/acc event at Devcon was that the d/acc concept successfully brought people from different fields together and genuinely sparked their interest in each other's work.
It's easy to host an event that's diverse, but it's hard to actually connect people from different backgrounds and interests. I still remember being forced to sit through long operas in middle and high school that I personally found boring. I knew I was supposed to enjoy them, because otherwise I'd be seen as an uncultured computer science slacker, but I just didn't connect with the content on a deeper level. The vibe at d/acc day was completely different: it felt like people genuinely enjoyed learning about all kinds of work in different fields.
This kind of broad coalition-building is necessary if we are to build a future that is better than domination, deceleration, and destruction. The fact that d/acc seems to be doing so well is a reminder of the value of this idea.
The core idea of d/acc is simple and clear: decentralized, democratic, and differentiated defensive acceleration. Build technology that can tip the balance of attack and defense toward defense, and do so without relying on giving more power to a central authority. These two aspects are inherently closely linked: any decentralized, democratic, or liberal political structure tends to thrive when defense is easy to implement, and struggles when defense is difficult – in those cases, the more likely outcome is a chaotic period of everyone against everyone, and eventually a state of equilibrium where the strongest rule.
One way to understand the significance of trying to achieve decentralization, defensibility, and acceleration simultaneously is to contrast it with the ideas that arise from abandoning any one of these three aspects.
Chart from last year's "My Techno-Optimism"
Decentralized acceleration, but ignoring differentiated defense
Essentially, this is akin to being an effective accelerationist (e/acc) while also pursuing decentralization. Many people take this approach; some call themselves d/acc but helpfully describe their focus as "offense". Many others express a more modest enthusiasm for "decentralized AI" and similar topics, but in my view pay significantly less attention to the defense side.
In my view, this approach may avoid the risk of a particular group of people exercising dictatorial control over the global human race, but it fails to address the underlying structural problem: in an environment that favors offense, there is always a constant risk of disaster, or that someone will position themselves as protector and permanently dominate. In the case of AI, it also fails to properly address the risk of humanity as a whole being disempowered relative to AI.
Differentiated defensive acceleration, but ignoring decentralization and democracy
Accepting centralized control to achieve security goals will always hold a certain appeal for some, and readers are no doubt familiar with many examples, and with the drawbacks they entail. Recently, some have worried that extreme centralized control may be the only way to cope with the extreme technologies of the future: imagine a hypothetical scenario in which "everyone wears a 'freedom tag', a follow-up to today's more limited wearable surveillance devices, similar to the ankle tags used as prison alternatives in several countries... encrypted video and audio are continuously uploaded and interpreted by machines in real time." But centralized control is a matter of degree. A relatively mild form of centralized control that is often overlooked but still harmful is resistance to public scrutiny in the biotech sector (e.g., food, vaccines), and the closed-source norms that allow this resistance to go unchallenged.
The risk of this approach is obvious: the center itself often becomes the source of risk. We have seen this during the COVID-19 pandemic, where gain-of-function research funded by multiple major world governments may have been the root cause of the pandemic, centralized epistemology led the World Health Organization to refuse for years to acknowledge that the coronavirus was airborne, and mandatory social distancing and vaccine mandates triggered a political backlash that may last for decades. Similar situations are likely to occur again in any risk scenario related to AI or other risky technologies. In contrast, a decentralized approach will be more effective in addressing risks from the center itself.
Decentralized defense, but rejecting acceleration
Essentially, this is the attempt to slow down technological progress or to push the economy toward degrowth.
The challenge with this strategy is twofold. First, technology and economic growth are, on balance, so beneficial to humanity that any delay carries incalculable costs. Second, in a non-totalitarian world, stagnation is destabilizing: whoever "cheats" the most, finding plausible ways to keep moving forward, will come out ahead. Decelerationist strategies can work to some extent in specific contexts: European food being healthier than American food is one example; the success of nuclear nonproliferation so far is another. But these strategies cannot work forever.
Through d/acc we strive to achieve the following goals:
- In today's increasingly tribal world, stay true to our principles rather than just building things blindly: build the specific things that make the world a safer and better place.
- Recognize that exponential technological progress means the world is going to become a very strange place, and that humanity's overall "footprint" in the universe is bound to grow. Our ability to protect vulnerable animals, plants, and people from harm must keep improving, and the only way out is forward.
- Build technology that actually protects us, rather than relying on the assumption that "the good guys (or the good AIs) are in control". We do this by building tools that are naturally more effective when used to build and protect than when used to destroy.
Another way to think about d/acc is to return to a framework that emerged from the European Pirate Party movement of the late 2000s: empowerment.
Our goal is to build a world that preserves human agency, achieving both negative freedom (preventing others, whether private citizens, governments, or superintelligent robots, from actively interfering with our ability to shape our own destinies) and positive freedom (ensuring that we have the knowledge and resources to exercise that ability). This echoes a centuries-old classical liberal tradition, spanning Stewart Brand's focus on "access to tools" and John Stuart Mill's emphasis on education alongside freedom as key elements of human progress, perhaps supplemented by Buckminster Fuller's vision of a participatory and widely distributed global problem-solving process. Given the technological landscape of the 21st century, we can think of d/acc as a way to achieve these same goals.
The third dimension: surviving and thriving together
In my article last year, d/acc focused specifically on defensive technologies: physical defense, biological defense, cyber defense, and information defense. However, decentralized defense alone is not enough to build a great world: we also need a forward-looking, positive vision of what humanity can achieve with its newfound decentralization and security.
Last year's article did contain a positive vision in two respects:
1. In focusing on the challenges of superintelligence, I proposed a path (not original to me) by which we might achieve superintelligence without losing power:
- Today, build AI as tools rather than as highly autonomous intelligent agents.
- In the future, use tools such as virtual reality, electromyography, and brain-computer interfaces to create ever-tighter feedback loops between humans and AI.
- Over time, move toward the ultimate destination, where superintelligence emerges from a close integration of machines and humans.
2. When talking about information defense, I also mentioned in passing that in addition to defensive social technologies designed to help communities maintain cohesion and engage in high-quality discussions in the face of attackers, there are also progressive social technologies that can help communities make high-quality judgments more easily: Pol.is is an example, as are prediction markets.
But at the time, both of these points felt disconnected from d/acc’s core argument: “Here are some ideas about building a more democratic, more defensible world at a fundamental level, and by the way, here are some unrelated ideas about how we might achieve superintelligence.”
However, I think in reality there are some crucial connections between the d/acc techniques labeled “defensive” and “progressive” above. Let’s expand on the d/acc chart from last year’s article by adding this axis to the chart (while relabeling it “survive vs. thrive”) and see what that looks like:
There is a consistent pattern across fields: the science, ideas, and tools that help us “survive” in a field are closely related to the science, ideas, and tools that help us “thrive.” Here are some specific examples:
- Much recent COVID-19 research has focused on the persistence of the virus in the body, which is seen as a key mechanism behind long COVID. There are also signs that viral persistence may be a causative factor in Alzheimer's disease; if this view is true, solving viral persistence across all tissue types may turn out to be key to tackling aging as well.
- Low-cost, miniaturized imaging tools, such as those being developed by Openwater, have great potential for treating microthrombi, viral persistence, and cancer, and also have applications in brain-computer interfaces.
- The ideas behind building social tools that work well in highly adversarial environments (such as Community Notes) and social tools that work well in reasonably cooperative environments (such as Pol.is) are very similar.
- Prediction markets are valuable in both highly cooperative and highly adversarial environments.
- Zero-knowledge proofs and similar technologies perform computations on data while protecting privacy, increasing the amount of data available for beneficial work such as scientific research while strengthening privacy protection at the same time.
- Solar power and batteries are essential for driving the next wave of clean economic growth, and they also excel at decentralization and physical resilience.
Beyond this, there are important interdependencies between the different subject areas:
- Brain-computer interfaces are essential as an information-defense and collaboration technology, because they enable far more sophisticated communication of our thoughts and intentions. Brain-computer interfaces are not just a connection between robots and consciousness: they can also be consciousness-robot-consciousness interaction. This echoes the role of brain-computer interfaces in the idea of plurality.
- Many biotechnologies rely on information sharing, and in many cases people are willing to share information only when they are confident it will be used for one specific application alone. This relies on privacy technologies (such as zero-knowledge proofs, fully homomorphic encryption, obfuscation, and so on).
- Collaborative technology can be used to coordinate funding for any other area of technology.
The Conundrum: AI Safety, Tight Timelines, and Regulatory Dilemmas
Different people have wildly different timelines for AI. Chart from Zuzalu, Montenegro, 2023.
The most persuasive objection to my article last year came from the AI safety community. The argument went something like this: “Sure, if we had half a century to develop strong AI, we could focus on building all of these beneficial things. But in reality, it looks like we might have only three years to get to general AI, and another three to get to superintelligence. So if we don’t want to doom the world or otherwise get it into an irreversible mess, we can’t just accelerate the development of beneficial technologies; we must also slow down the development of harmful technologies, and that means strong regulations that might anger the powerful.” In my article last year, I really didn’t propose any specific strategies for “slowing down the development of harmful technologies,” other than a vague call to not build risky forms of superintelligence. So it’s worth addressing the question directly here: if we were in a worst-case scenario, with extremely high risks from AI and a timeline of perhaps only five years, what kind of regulation would I support?
Reasons to be cautious about new regulation
Last year, the major AI regulatory proposal was California's SB-1047. SB-1047 would have required developers of the most powerful models (those costing more than $100 million to train, or $10 million to fine-tune) to run a battery of safety tests before release. It would also have held AI model developers accountable if they failed to exercise sufficient caution. Many critics argued the bill was a "threat to open source"; I disputed this, since the cost threshold meant it only affected the most powerful models: even Llama 3 was likely below the threshold. In retrospect, however, I think the bill had a more serious problem: like most regulation, it was overfit to the current state of affairs. The focus on training cost has already proven fragile in the face of new technology: the recent state-of-the-art DeepSeek v3 model cost only $6 million to train, and in new models like o1, cost is shifting from training to inference.
Actors Most Likely to Be Responsible for an AI Superintelligence Destruction Scenario
In reality, the actors most likely to be responsible for AI superintelligence destruction scenarios are the military. As we have witnessed over the past half century in biosecurity (and earlier), militaries are willing to take some terrible actions, and they are extremely fallible. Today, AI military applications are developing rapidly (e.g., in Ukraine, Gaza). And any security regulation adopted by a government will, by default, exempt its own military and the companies that work closely with the military.
Coping strategies
Still, these arguments are not a reason to sit idly by. Rather, they can serve as a guide when trying to craft rules that raise the fewest of these concerns.
Strategy 1: Liability
If someone's actions in some way cause legally actionable harm, they can be sued. This doesn't solve the problem of risks from militaries and other "above the law" actors, but it is a very general approach that avoids overfitting, which is why libertarian-leaning economists often support it.
The main candidates considered so far for bearing liability are:
- Users: the people who use the AI.
- Deployers: the intermediaries that provide AI services to users.
- Developers: the people who build the AI.
Putting liability on users seems most incentive-aligned. While the connection between how a model is developed and how it is ultimately used is often unclear, users determine exactly how an AI is used. Holding users accountable creates strong pressure to use AI in what I believe is the right way: focusing on building mecha suits for the human mind rather than creating new self-sustaining forms of intelligent life. The former responds regularly to user intent, and therefore will not lead to catastrophic actions unless the user wants them. The latter carries the greatest risk of getting out of control and triggering the classic "AI runaway" scenario. Another benefit of placing liability as close to the end user as possible is that it minimizes the risk of liability pushing people toward actions that are harmful in other ways (e.g., closed source, know-your-customer (KYC) and surveillance, state/corporate collusion to secretly restrict users, as when banks refuse to serve certain customers, excluding large swaths of the world).
There is a classic objection to attributing liability solely to users: users are likely to be average individuals, without much money, and perhaps even anonymous, so that no one can realistically pay for catastrophic damage. This argument is probably overstated: even if some users are too small to be liable, the average customer of an AI developer is not, so AI developers will still be incentivized to build products that give users confidence that they will not face high liability risk. That said, it is still a valid point that needs to be addressed. You need to incentivize someone in the pipeline who has the resources to take appropriate care to do so, and deployers and developers are both easy targets who still have a lot of influence over the safety of the model.
Deployer liability also seems reasonable. A common concern is that it doesn't work with open-source models, but that seems manageable, especially since the most powerful models are likely to be closed source (and if they turn out to be open source, then while deployer liability may not end up being very useful, it won't do much harm either). The same concerns apply to developer liability (though with open-source models there is the hurdle of needing to fine-tune the model to do something it wouldn't otherwise be allowed to do), and the same counterarguments apply. As a general principle, imposing a "tax" on control, essentially saying "you can build something you don't control, or you can build something you do control, but if you control it then 20% of that control must be used for our purposes", seems like a reasonable position for the legal system to take.
One idea that seems to be underexplored is to place liability on other actors in the pipeline, who are more likely to have ample resources. An idea that fits well with the d/acc philosophy is to hold accountable the owners or operators of any device that an AI takes over (e.g., through hacking) in the process of performing some catastrophically harmful action. This would create a very broad incentive to work hard to make the infrastructure of the world (especially in computing and biology) as safe as possible.
Strategy 2: Global “soft pause” button on industrial-scale hardware
If I were convinced that we needed something "stronger" than liability rules, I would choose this strategy. The goal is to have the ability to reduce the world's available computing power by about 90-99% during a critical period, for 1-2 years, to buy humanity more time to prepare. The value of 1-2 years should not be underestimated: a year of "wartime mode" in a complacent environment can easily be worth a hundred years of regular work. Ways to achieve a "pause" are already being explored, including specific proposals such as requiring hardware registration and verified location.
A more advanced approach would be to use clever cryptographic tricks: for example, industrial-scale (but not consumer-grade) AI hardware could be equipped with a trusted hardware chip that would only allow it to continue running if it received 3/3 signatures from major international institutions (including at least one non-military affiliate) every week. These signatures would be device-independent (we could even require zero-knowledge proofs to be published on the blockchain if needed), so it would be all-or-nothing: there would be no practical way to authorize one device to continue running without authorizing all the others.
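As a sketch of what that all-or-nothing logic could look like, here is a toy version of the weekly signature gate. The institution keys, the message format, and the 3-of-3 policy are illustrative assumptions; a real system would run inside trusted hardware, not in Python.

```python
import time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

WEEK_SECONDS = 7 * 24 * 3600

def current_week() -> int:
    return int(time.time()) // WEEK_SECONDS

class PauseGate:
    """Chip logic: run only if every signer has approved the current week."""

    def __init__(self, signer_pubkeys):
        self.signer_pubkeys = signer_pubkeys  # e.g. three international institutions

    def may_run(self, week: int, signatures) -> bool:
        # The signed message commits only to the week number, never to a
        # device ID, so one authorization unlocks every device: all-or-nothing.
        message = f"ALLOW-WEEK:{week}".encode()
        if len(signatures) != len(self.signer_pubkeys):
            return False
        for pub, sig in zip(self.signer_pubkeys, signatures):
            try:
                pub.verify(sig, message)
            except InvalidSignature:
                return False
        return True

# Demo: three institutions sign this week's message; hardware may run.
keys = [ed25519.Ed25519PrivateKey.generate() for _ in range(3)]
gate = PauseGate([k.public_key() for k in keys])
msg = f"ALLOW-WEEK:{current_week()}".encode()
sigs = [k.sign(msg) for k in keys]
assert gate.may_run(current_week(), sigs)          # all signed: run
assert not gate.may_run(current_week() + 1, sigs)  # no fresh signatures: pause
```

Because nothing in the authorized message identifies a particular device, there is no way to keep some machines running while pausing others, which is exactly the property the proposal wants.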
This seems to fit the bill in terms of maximizing benefits and minimizing risks:
- It is a useful capability: if we receive indications that near-superintelligent AI is starting to do things that could cause catastrophic damage, we would want to transition more slowly.
- Until such a critical moment arrives, merely having the soft-pause capability does little harm to developers.
- Focusing on industrial-scale hardware, and targeting only a 90-99% reduction, avoids dystopian approaches such as planting spy chips or forced kill switches in consumer laptops, or forcing small countries to adopt draconian measures against their will.
- Focusing on hardware appears very resilient to technological change. Across multiple generations of AI we have seen that quality depends heavily on available computing power, especially in the early versions of a new paradigm. So reducing available compute by 10-100x could easily make the difference between a runaway superintelligent AI winning or losing a quick battle against the humans trying to stop it.
- The inherent hassle of having to go online every week for signatures would be a strong disincentive against extending the scheme to consumer-grade hardware.
- It can be verified through random inspections, and operating at the hardware level makes it difficult to exempt specific users (approaches based on legally mandated shutdowns rather than technical means lack this all-or-nothing property, which makes them easier to slide into exemptions for the military and others).
Hardware regulation is already being seriously considered, though usually within the framework of export controls, which embody a "we trust our side, but not the other side" philosophy. Leopold Aschenbrenner famously argued that the United States should race to gain a decisive advantage and then force China to sign an agreement limiting the amount of equipment it can run. To me this approach seems risky: it could combine the pitfalls of multipolar competition with those of centralization. If we must restrict people, it seems better to restrict everyone equally and work to actually cooperate on implementation, rather than have one side try to dominate everyone else.
The role of d/acc technology in AI risks
Both strategies (liability and hardware pause button) have holes, and it is clear that they are only temporary stopgap measures: if something can be done on a supercomputer at time T, it is likely to be done on a laptop at time T + 5 years. Therefore, we need more stable measures to buy time. Many d/acc techniques are relevant here. We can think of the role of d/acc techniques as follows: If AI takes over the world, how will it do it?
- It hacks into our computers → cyber defense
- It creates a super-plague → biodefense
- It convinces us (either to trust it, or to distrust each other) → information defense
As briefly mentioned above, liability rules are a natural regulatory fit with the d/acc philosophy, as they can be very effective in incentivizing adoption of these defenses around the world and taking them seriously. Taiwan has recently been experimenting with liability for false advertising, which can be seen as an example of using liability to encourage information defenses. We shouldn’t be too keen on imposing liability everywhere, and remember the benefits of ordinary freedoms in enabling the little guy to engage in innovation without fear of litigation, but where we do want to push for safety more forcefully, liability can be quite flexible and effective.
The role of cryptocurrency in d/acc
Many aspects of d/acc go far beyond typical blockchain topics: biosecurity, brain-computer interfaces, and collaborative discourse tools seem far removed from what crypto people usually talk about. However, I think there are some important connections between crypto and d/acc, in particular:
- d/acc is an extension of the fundamental values of crypto (decentralization, censorship resistance, an open global economy and society) to other areas of technology.
- Because crypto users are natural early adopters and the values align, crypto communities are natural early users of d/acc technologies. The heavy emphasis on community (both online and offline, such as events and pop-ups), and the fact that these communities actually do high-stakes things rather than just talk to each other, make crypto communities a particularly attractive incubator and testing ground for d/acc technologies that fundamentally operate at the group rather than individual level (such as most information-defense and biodefense technologies). Crypto people just do things together.
- Many crypto technologies can be used in d/acc subject areas: blockchains for building more robust and decentralized financial, governance, and social-media infrastructure; zero-knowledge proofs for privacy; and so on. Many of the largest prediction markets today are built on blockchains, and they are gradually becoming more sophisticated, decentralized, and democratic (a generic sketch of how such a market aggregates beliefs into prices follows this list).
- There are also win-win opportunities for collaboration on crypto-adjacent technologies that are both extremely useful to crypto projects and key to d/acc goals: formal verification, computer software and hardware security, and adversarially robust governance techniques. These make the Ethereum blockchain, wallets, and decentralized autonomous organizations (DAOs) more secure and robust, and they also serve important civilizational defense goals, such as reducing our vulnerability to cyberattacks, including those that might come from superintelligent AI.
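Since prediction markets recur throughout this essay as an information tool, here is a minimal sketch of a logarithmic market scoring rule (LMSR), one common automated-market-maker design. This is a generic illustration under my own assumptions, not the mechanism of any specific on-chain market mentioned here.

```python
import math

class LMSRMarket:
    def __init__(self, outcomes, b=100.0):
        self.b = b                           # liquidity parameter
        self.q = {o: 0.0 for o in outcomes}  # net shares sold per outcome

    def _cost(self, q):
        # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b)).
        return self.b * math.log(sum(math.exp(v / self.b) for v in q.values()))

    def price(self, outcome):
        # Current probability estimate implied by the market state.
        denom = sum(math.exp(v / self.b) for v in self.q.values())
        return math.exp(self.q[outcome] / self.b) / denom

    def buy(self, outcome, shares):
        # Cost of a trade = change in the cost function.
        before = self._cost(self.q)
        self.q[outcome] += shares
        return self._cost(self.q) - before

market = LMSRMarket(["outbreak >= 10k cases", "no outbreak"])
print(market.price("outbreak >= 10k cases"))    # 0.5 before any trades
cost = market.buy("outbreak >= 10k cases", 40)  # a trader bets on the outbreak
print(round(cost, 2), round(market.price("outbreak >= 10k cases"), 3))
# The price shifts toward the trader's belief, aggregating information.
```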
Edge City in Chiang Mai, one of Zuzalu’s many branches, uses Cursive, an app that uses fully homomorphic encryption (FHE) to allow users to identify areas of common interest with other users while preserving privacy.
d/acc and public goods funding
One problem I've long been interested in is finding better mechanisms for funding public goods: projects that are valuable to very large groups of people but that lack a naturally accessible business model. My past work in this area includes my work on quadratic funding and its use in Gitcoin Grants, retroactive public goods funding (retro PGF), and most recently Deep Funding.
Many people are skeptical about the concept of public goods. This skepticism usually comes from two aspects:
- Public goods have historically been used as a justification for heavy-handed central planning and government intervention in society and the economy.
- There is a common perception that public goods funding lacks rigor, runs on social desirability bias (what sounds good gets funded over what is actually good), and favors insiders who can play the social game.
These are important criticisms, and valid ones. However, I believe that strong decentralized public goods funding is essential to the d/acc vision, because a key goal of d/acc (minimizing central points of control) is itself a hindrance to many traditional business models. It is possible to build successful businesses on open source—several Balvi grantees are doing so—but in some cases it is difficult enough that important projects require additional ongoing support. So we have to do the hard thing, which is to figure out how to do public goods funding in a way that addresses both of the above criticisms.
The solution to the first problem is fundamentally credible neutrality and decentralization. Central planning is problematic because it hands control to elites who can become abusive, and because it often overfits to current circumstances and becomes increasingly ineffective over time. Quadratic funding and similar mechanisms are precisely about funding public goods in a way that is as credibly neutral and (architecturally and politically) decentralized as possible.
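For readers unfamiliar with the mechanism, here is a minimal sketch of the standard quadratic funding matching rule; the project names and numbers are made up. Each project's match is proportional to the square of the sum of the square roots of its individual contributions, which rewards broad support over concentrated money.

```python
from math import sqrt

def quadratic_match(contributions: dict[str, list[float]],
                    matching_pool: float) -> dict[str, float]:
    # Raw QF score per project: (sum of sqrt(individual contributions))^2.
    raw = {p: sum(sqrt(c) for c in cs) ** 2 for p, cs in contributions.items()}
    # Scale so the matches exactly exhaust the matching pool.
    total = sum(raw.values())
    return {p: matching_pool * score / total for p, score in raw.items()}

# 100 donors giving $1 each outweigh a single $100 donor: breadth wins.
print(quadratic_match({"A": [1.0] * 100, "B": [100.0]}, 1000.0))
# -> {'A': ~990.10, 'B': ~9.90}
```

This is the sense in which the mechanism is credibly neutral: the match depends only on the pattern of contributions, not on anyone's opinion of the projects.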
The second problem is more challenging. A common criticism of quadratic funding is that it quickly turns into a popularity contest, requiring project funders to expend a lot of energy on public outreach. Furthermore, projects that are “in front of people’s eyes” (e.g., end-user applications) get funded, while projects that are more behind the scenes (the typical “dependencies maintained by a guy in Nebraska”) get no funding at all. Optimism retroactive funding relies on a smaller number of expert badge holders; here, the popularity contest effect is reduced, but the social effect of having close personal connections to badge holders is amplified.
Deep Funding is my latest effort to solve this problem. Deep Funding has two main innovations:
- Dependency graph: instead of asking each juror a global question ("What is the value of project A to humanity?"), we ask a local one ("Which is more valuable to outcome C, project A or project B? And by how much?"). Humans are notoriously bad at global questions: in one famous study, when asked how much they would be willing to pay to save N birds, respondents answered roughly $80 whether N was 2,000, 20,000, or 200,000. Local questions are much more tractable. We then combine the local answers into a global answer by maintaining a "dependency graph": for each project, which other projects contributed to its success, and by how much?
- AI as distilled human judgment: each juror is assigned only a small random sample of all the questions. There is an open competition in which anyone can submit an AI model that attempts to efficiently fill in all the edges of the graph. The final answer is a weighted sum of the models most compatible with the jury's answers (a toy version is sketched after this list). See here for a code example. This approach allows the mechanism to scale to very large sizes while requiring jurors to submit only a small number of bits of information. That reduces the opportunity for corruption and ensures each bit is high quality: jurors can spend a long time thinking about each question rather than quickly clicking through hundreds. Using an open competition for the AIs reduces bias from any single training and curation process. An open market for AIs as the engine, with humans as the steering wheel.
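Here is a toy illustration of that "distilled human judgment" step. This sketch is my own and is not the Deep Funding code the article links to; the edge values and the inverse-error scoring rule are illustrative assumptions.

```python
def model_error(model_edges: dict, jury_edges: dict) -> float:
    # Mean squared error against the jury's sampled answers.
    errs = [(model_edges[e] - v) ** 2 for e, v in jury_edges.items()]
    return sum(errs) / len(errs)

def distill(models: list[dict], jury_edges: dict) -> dict:
    # Lower error on the jury's sample -> higher weight for that model.
    weights = [1.0 / (1e-9 + model_error(m, jury_edges)) for m in models]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Weighted average over every edge, including ones jurors never saw.
    return {e: sum(w * m[e] for w, m in zip(weights, models))
            for e in models[0]}

# Two submitted models over three dependency edges; jurors sampled one edge.
models = [
    {("A", "C"): 0.6, ("B", "C"): 0.4, ("A", "B"): 0.7},
    {("A", "C"): 0.2, ("B", "C"): 0.8, ("A", "B"): 0.3},
]
jury = {("A", "C"): 0.55}  # jurors judged A contributes ~55% of C's value
print(distill(models, jury))  # lands close to model 1, which matched the jury
```

The jurors supply only a handful of high-quality bits; the competing models extrapolate them across the whole graph, and the compatibility weighting keeps the models honest.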
But deep funding is just the latest example; there have been other ideas for public goods funding mechanisms before, and there will be more in the future. allo.expert has done a good job of cataloguing them. The fundamental goal is to create a social tool that can fund public goods with a level of accuracy, fairness, and open access at least close to how markets fund private goods. It doesn't have to be perfect; after all, markets themselves are far from perfect. But it should work well enough that developers of high-quality open-source projects that benefit everyone can keep working on them without feeling forced into unacceptable compromises.
Today, most of the leading projects in d/acc topic areas: vaccines, BCIs, "edge BCIs" like wrist EMG and eye tracking, anti-aging drugs, hardware, and so on, are proprietary. This has big disadvantages for securing public trust, as we have already seen in several of the above areas. It also shifts attention toward competitive dynamics ("our team must win this critical industry!") and away from the more important race to ensure these technologies arrive quickly enough to protect us in a world of superintelligent AI. For these reasons, strong public goods funding can be a powerful promoter of openness and freedom. This is another way the cryptocurrency community can help d/acc: by seriously exploring these funding mechanisms and making them work well in its own context, preparing the way for broader adoption of open-source science and technology.
The future
The coming decades bring important challenges. I’ve been thinking about two of them lately:
- A powerful wave of new technologies, above all strong artificial intelligence, is arriving quickly, and these technologies come with important pitfalls we need to avoid. Artificial superintelligence may be five years away, or fifty. Either way, it is not clear that the default outcome is automatically positive, and as described in this and the previous article, there are multiple pitfalls to steer around.
- The world is becoming less cooperative. Many powerful actors that previously seemed to act at least sometimes on the basis of high principles (cosmopolitanism, freedom, common humanity, etc.) are now more openly and actively pursuing their individual or tribal self-interest.
However, each of these challenges has a silver lining. First, we now have very powerful tools to do the rest of our work much faster:
- Current and near-term AI can be used to build other technologies and can serve as an ingredient in governance (as in deep funding or info finance). It is also highly relevant to brain-computer interfaces, which can themselves deliver further productivity gains.
- Mass coordination is now possible at a far larger scale than before. The internet and social media extended its reach, global finance (including cryptocurrencies) increased its power, now information-defense and collaboration tools can improve its quality, and perhaps soon human-computer-human brain-computer interfaces can add depth.
- Formal verification, sandboxing (web browsers, Docker, Qubes, GrapheneOS, etc.), secure hardware modules, and other technologies keep improving, making much better cybersecurity possible.
- Writing any kind of software is much easier than it was two years ago.
- Recent fundamental research into how viruses work, especially the simple realization that the most important form of transmission is airborne, has shown a much clearer path for improving biodefense.
- Recent advances in biotechnology (e.g., CRISPR, advances in bioimaging) make all kinds of biotechnology, whether for defense, longevity, superhappiness, exploring multiple novel biological hypotheses, or just doing really cool things, far more accessible.
- Advances in computing and biotechnology together are enabling synthetic-biology tools that you can use to adjust, monitor, and improve your own health. Cyberdefense technologies such as cryptography make this personalized dimension far more viable.
Second, now that many of the principles we hold dear are no longer held by a select few of the old powers, they can be reclaimed by a broad coalition that welcomes anyone in the world to join. This is probably the biggest benefit of the recent political “realignment” around the world, and it’s worth taking advantage of. Cryptocurrencies have done a great job of capitalizing on this and finding global appeal; d/acc can do the same.
Access to tools means we can adapt and improve our biology and our environment, and the "defense" part of d/acc means we can do so without infringing on others' freedom to do the same. The principle of liberal pluralism means we can have great diversity in how we do this, and our commitment to common human goals means it should be achieved.
We humans remain the brightest star. The task before us—to build a brighter 21st century, one that protects human survival, freedom, and agency as we reach for the stars—is a challenging one. But I believe we can do it.