d/acc: one year later
Author: Vitalik Buterin, founder of Ethereum; Translator: 0xjs@黄金财经
About a year ago, I wrote an article on techno-optimism, describing my enthusiasm for technology and the enormous benefits it can bring, as well as my caution about certain specific risks, chiefly superintelligent AI and the danger of doom or irreversible human disempowerment if the technology is built the wrong way. One of the core ideas of that article was decentralized, democratic, differential defensive acceleration: accelerate technological development, but do so differentially, favoring technologies that improve our ability to defend over those that improve our ability to do harm, and technologies that distribute power over those that concentrate it in the hands of a single elite deciding what is true, false, good, and evil on everyone's behalf. Think of defense like democratic Switzerland or historically quasi-anarchist Zomia, not like medieval feudalism with its lords and castles.
In the year since, my philosophy and ideas have matured a lot. I talked about these ideas on 80,000 Hours and saw many reactions, mostly positive, some critical. The work itself continues and is bearing fruit: we're seeing progress on verifiable open-source vaccines, growing awareness of the value of healthy indoor air, Community Notes continuing to shine, a breakout year for prediction markets as an information tool, ZK-SNARKs in government IDs and social media (and, via account abstraction (https://www.erc4337.io/), securing Ethereum wallets), open-source imaging tools for use in medicine and BCI, and more. In the fall we held our first major d/acc event: d/acc Discovery Day (d/aDDy) at Devcon, a full day of speakers from all pillars of d/acc (biological, physical, cyber, and information defense, plus neurotech). People who have been working on these technologies for years are becoming more aware of each other's work, and people outside are becoming more aware of the bigger story: the same values that inspired Ethereum and crypto can be applied to the wider world.
Table of Contents
I. What d/acc is and isn't
II. The third dimension: survive and thrive
III. The hard problem: AI safety, tight timelines, and regulation
IV. The role of cryptocurrency in d/acc
V. d/acc and public goods funding
VI. Future outlook
I. What d/acc is and isn’t
It’s 2042. You see reports in the media about a possible new epidemic in your city. You’re used to this: people get excited about every animal-disease mutation, and most of them come to nothing. The last two actual potential epidemics were detected early through open-source analysis of wastewater monitoring and social media, and their spread was stopped entirely. But this time, prediction markets show a 60% chance of at least 10,000 cases, so you’re more worried.
The virus’s sequence was determined yesterday. Software updates for portable air testers are already available that can detect the new virus (from a single breath, or from 15 minutes of exposure to indoor air). Open-source instructions and code for producing a vaccine, using equipment found in any modern medical facility around the world, should be available within a few weeks. Most people haven’t taken any action yet, relying primarily on widespread air filtration and ventilation to protect them. You have an immune condition, so you’re more cautious: your open-source, locally running personal-assistant AI, which handles tasks like navigation and restaurant and activity recommendations, also takes real-time air-tester and CO2 data into account to recommend only the safest venues. The data is provided by thousands of participants and devices, using ZK-SNARKs and differential privacy to minimize the risk of it being leaked or misused for any other purpose (and if you want to contribute to these datasets, there are formal proofs, which other personal-assistant AIs can verify, that these cryptographic gadgets actually work).
Two months later, the outbreak is gone: it seems that 60% of people following the basic protocols, namely wearing a mask when the air tester beeps to indicate the virus’s presence, and staying home after testing positive personally, was enough to push the transmission rate, already greatly reduced by heavy passive air filtration, below 1. Simulations suggest the disease would have been five times worse than Covid two decades ago; today, it is not a problem.
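As a rough illustration of the arithmetic implied here, the Python sketch below multiplies out how layered interventions could push the effective transmission rate below 1. Every parameter in it except the 60% compliance figure is an assumption chosen for illustration, not a number from the scenario.

```python
# Illustrative only: multiplicative effect of layered interventions on
# the effective reproduction number. All parameters except compliance
# are assumptions, not figures from the scenario above.
R0 = 4.0                # assumed baseline reproduction number of the virus
filtration = 0.5        # assumed transmission multiplier from passive air filtration
compliance = 0.6        # "60% of people followed basic protocols"
mask_effect = 0.6       # assumed reduction in transmission when a case is masked
isolation_effect = 0.5  # assumed reduction from positive cases staying home

R_eff = R0 * filtration                       # filtration alone: 2.0
R_eff *= 1 - compliance * mask_effect         # masking among the compliant: 1.28
R_eff *= 1 - compliance * isolation_effect    # isolation among the compliant: ~0.90

print(f"effective R: {R_eff:.2f}")  # below 1, so the outbreak dies out
```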
Devcon d/acc day
One of the most positive takeaways from the d/acc event at Devcon was how successful the d/acc umbrella structure was in bringing people from different fields together and getting them genuinely interested in each other’s work.
It’s easy to create events for “diversity” but it’s hard to get people with different backgrounds and interests to really connect with each other. I still have memories of being forced to watch long operas in middle school and high school, which I personally found boring. I know I “should” appreciate them, because if I didn’t then I’d be an uncultured computer science slacker, but I wasn’t connecting to the content on a more authentic level. d/acc day didn’t feel that way at all: it felt like people actually enjoyed learning about different types of work in different fields.
We need this kind of broad coalition-building if we want to create a brighter alternative to domination, deceleration, and destruction. d/acc does seem to be succeeding, and that alone speaks to the value of the idea.
The core idea of d/acc is simple: decentralized and democratic differential defensive acceleration. Build technology that shifts the offense/defense balance toward defense, and doesn’t rely on handing over more power to centralized institutions. The two are intrinsically linked: any kind of decentralized, democratic, or liberal political structure thrives when defense is easy, and suffers most when defense is hard — in these cases, the more likely outcome is a period of war of all against all, with an eventual balance where the strong rule.
The core principles of d/acc cover a number of areas:
Chart from my article last year, "My Techno-Optimism"
One way to understand the importance of pursuing decentralization, defense, and acceleration all at once is to contrast d/acc with the philosophy you get when you give up any one of the three.
Decentralized acceleration, without the "differential defense" part. Basically, be an e/acc, but decentralized. Many people take this approach; some label themselves d/acc but describe their focus as "offense", while many others get excited about "decentralized AI" and similar topics in milder ways but, in my view, pay noticeably too little attention to the "defense" side.
This approach, in my opinion, may avoid the risk that the specific tribe you worry about seizes global dictatorship, but it does not address the underlying structural problem: in an environment that favors offense, there is a constant risk of catastrophe, or of someone positioning themselves as protector and permanently entrenching their power. In the specific case of AI, it also does little to address the risk of humanity as a whole being disempowered relative to AI.
Differential defensive acceleration, without the "decentralization and democracy" part. Embracing centralized control for the sake of security holds a permanent appeal for a subset of people, and readers will no doubt be familiar with many examples and their shortcomings. Recently, some have worried that extreme centralized control is the only solution to a future of extreme technology: consider this hypothetical scenario, in which "everyone is fitted with a 'freedom tag', a sequel to today's more limited wearable surveillance devices, such as the ankle tags used as prison alternatives in several countries... encrypted video and audio is continuously uploaded and interpreted by machines in real time". But centralized control is a spectrum. A milder version, often overlooked but still harmful, is resistance to public scrutiny in biotech (e.g., food and vaccines), together with the closed-source norms that let this resistance go unchallenged. The risk of this approach, of course, is that centralization is itself frequently a source of risk. We saw this during Covid, where gain-of-function research funded by multiple major world governments may have been the source of the pandemic, where centralized epistemology led the WHO to deny for years that the virus was airborne, and where mandatory social distancing and vaccine mandates triggered a political backlash that may last decades. Much the same is likely to happen with any risk related to AI or other high-risk technologies. A decentralized approach is better able to address the risks that come from centralization itself.
Decentralized defense, without acceleration: basically, trying to slow down technological progress or pursue economic degrowth.
The challenge with this strategy is twofold. First, technology and economic growth are enormously beneficial to humanity, and any delay imposes costs that are hard to measure. Second, in a non-totalitarian world, non-progress is destabilizing: whoever "cheats" most, finding plausible ways to keep advancing, comes out ahead. Decelerationist strategies can work to some extent in some situations: European food being healthier than American food is one example, and the success of nuclear non-proliferation to date is another. But they cannot work forever.
Through d/acc, we hope to:
Stand for principles in an era when much of the world has turned tribal, and stand not for building just anything, but for building specific things that make the world safer and better.
Acknowledge that exponential technological progress means the world is going to get very strange, and that humanity's "footprint" in the universe will only increase. We must improve our ability to protect vulnerable animals, plants, and people from harm, but the only way out is forward.
Build technologies that keep us safe without assuming that "the good guys (or the good AIs) are in charge". We do this by building tools that are naturally more effective for building and protecting than for destroying.
Another way to think about d/acc is to look back to a framework used by the European Pirate Party movement of the late 2000s: empowerment.
Our goal is to build a world that preserves human agency: negative liberty, in the sense of avoiding active interference (whether from other private citizens, from governments, or from superintelligent robots) with our ability to shape our own destinies, and positive liberty, in the sense of ensuring we have the knowledge and resources to do so. This echoes centuries of classical liberal tradition, which also includes Stewart Brand's focus on "access to tools", John Stuart Mill's emphasis on education and freedom as key components of human progress, and, perhaps, one could add Buckminster Fuller's desire to see global problem-solving be participatory and widely distributed. Given the technological landscape of the 21st century, we can think of d/acc as a way to achieve these same goals.
II. The third dimension: survive and thrive
In my article last year, d/acc focused specifically on defense technologies: physical defense, biological defense, cyber defense, and information defense. However, decentralized defense is not enough to make the world a better place: you also need a forward-looking, positive vision of what humanity can achieve with its newfound decentralization and security.
Last year’s article did contain positive ideas, in two places:
In response to the challenge of superintelligence, I proposed a path (far from original to me) by which we could get superintelligence without disempowering humanity:
Today, build AI as tools rather than as highly autonomous agents
Tomorrow, use tools like virtual reality, myoelectric interfaces, and brain-computer interfaces to create ever-tighter feedback loops between AI and humans
Over time, move toward an end state where superintelligence is a tightly coupled combination of machines and us
When talking about information defense, I also mentioned in passing that in addition to defensive social technologies that try to help communities stay cohesive and have high-quality discussions in the face of attackers, there are also some progressive social technologies that can help communities make high-quality judgments more easily: pol.is is one example, and prediction markets are another.
But these two points seem disconnected from the d/acc argument: "Here are some ideas for creating a more democratic and defensible world at the base layer, and by the way, here are some unrelated ideas about how we can achieve superintelligence."
However, I think there are actually some deep connections between the "defensive" and "progressive" d/acc technologies above. Let's take the d/acc chart from last year's article, add this axis to it (and relabel it "Survive vs. Thrive"), and see what comes out:
Across all fields, there is a consistent pattern that the science, ideas, and tools that help us “survive” in a field are closely related to the science, ideas, and tools that help us “thrive.” Here are some examples:
Much recent research on fighting Covid has focused on viral persistence in the body, one reason the disease can be so severe. There are also recent signs that viral persistence may be a cause of Alzheimer's disease; if true, addressing viral persistence across all tissue types could be key to tackling aging.
Some of the low-cost, miniaturized imaging tools being developed by Openwater could be effective for treating microthrombi, viral persistence, and cancer, and could also be used in BCIs.
Very similar ideas have led to the construction of social tools built for highly adversarial environments, such as Community Notes, and for reasonably cooperative environments, such as pol.is.
Prediction markets are valuable in both highly cooperative and highly adversarial environments.
Zero-knowledge proofs and similar techniques for computing on data while preserving privacy both increase the amount of data available for useful work like science and enhance privacy.
Solar power and batteries are great for driving the next wave of clean economic growth, but they also excel in decentralization and physical resilience.
Beyond this, there are important cross-dependencies between disciplinary areas:
BCI matters as an information-defense and collaboration technology because it allows us to communicate our thoughts and intentions in much greater detail: not just communication between machine and mind, but communication between mind, machine, and another mind. This echoes the ideas about the value of BCI in Plurality.
Many biotechnologies rely on information sharing, and in many cases people are only willing to share information if they are sure it will be used for one application and one application only. This depends on privacy technologies (e.g. ZKP, FHE, obfuscation…)
Collaborative technologies can be used to coordinate funding in any other technology area
III. The hard problem: AI safety, tight timelines, and regulation
Different people have very different AI timelines. Chart from Zuzalu, Montenegro, 2023.
I found the most convincing objections to last year's article to be the criticisms from the AI safety community. The argument goes: "Sure, if we had half a century until strong AI, we could focus on building all these good things. But in reality, it looks like we may be three years from AGI and another three years from superintelligence. So if we don't want the world to be destroyed or to fall into an irreversible trap, we can't just accelerate the good; we also have to slow down the bad, and that means passing strong regulations that will upset powerful people." In last year's article, I indeed did not call for any specific strategy to "slow down the bad", beyond a vague appeal not to build risky forms of superintelligence. So it's worth answering the question directly: if we lived in the least convenient world, where AI risk is high and timelines are perhaps five years out, what regulations would I support?
First, be cautious about new regulations
The most prominent AI regulation bill proposed in California last year was SB-1047. SB-1047 required developers of the most powerful models (those costing more than $100 million to train, or more than $10 million to fine-tune) to take certain safety-testing measures before release. It also held AI developers liable if they were insufficiently careful. Many critics argued that the bill was a "threat to open source"; I disagreed, because the cost threshold meant it affected only the most powerful models: even Llama3 was likely below it. In retrospect, though, I think the bill had a bigger problem: like most regulation, it was overfit to the landscape of the moment. First, the focus on training cost has already proven fragile in the face of new technology: the recent state-of-the-art DeepSeek v3 model cost only $6 million to train, and in new models like o1 the cost is shifting from training to inference. Second, the actor most likely to actually cause an AI superintelligence doom scenario is a military. As we have seen in biosecurity over the past half-century (and beyond), militaries are willing to do terrible things, and they make mistakes easily. Military AI applications are advancing rapidly (see Ukraine and Gaza). And any safety regulation a government passes will, by default, exempt its own military and the companies that work closely with it.
That said, these arguments are not a reason to give up and do nothing. Instead, we can use them as a guide and try to craft rules that raise the fewest of these concerns.
Strategy One: Liability
If someone’s actions cause actionable harm, they can be prosecuted. This doesn’t solve the problem of risk from the military and other “above the law” actors, but it is a very general approach that avoids overfitting and is often favored by libertarian-leaning economists for this reason.
The main targets of liability considered so far are:
Users – people who use the AI
Deployers – intermediaries that provide AI services to users
Developers – people who build the AI
Putting liability on users seems to align incentives best. While the connection between how a model is developed and how it ends up being used is often unclear, users decide exactly how an AI is used. Holding users liable would create strong pressure to develop AI in what I consider the right way: focused on building mecha suits for the human mind rather than creating new forms of self-sustaining intelligent life. The former responds regularly to user intent, and so will not cause catastrophic actions unless the user wants them. The latter is the one most at risk of escaping control and producing the classic "AI runs amok" scenario. Another benefit of placing liability as close to the end use as possible is that it minimizes the risk that liability pushes people toward behaviors that are harmful in other ways (e.g., closed-sourcing, KYC and surveillance, state/corporate collusion to covertly restrict users as with debanking, or locking out large parts of the world).
There is a classic argument against placing liability entirely on users: users are likely to be ordinary people without much money, or even anonymous, leaving no one who can actually pay for a catastrophic harm. This argument can be overstated: even if some users are too small to hold liable, the typical customer of an AI developer is not, so developers would still be incentivized to build products that reassure users they face no high risk of liability. Still, it is a valid point that needs answering: someone in the pipeline who has the resources needs an incentive to take appropriate care, and deployers and developers are both easy targets who still have great influence over a model's safety.
Deployer liability seems reasonable. A common concern is that it doesn’t apply to open source models, but this seems manageable, especially since the most powerful models are likely to be closed source (if they end up being open source, then while deployer liability doesn’t end up being much use, it won’t do much harm either). The same concerns apply to developer liability (although with open source models, the model will need to be fine-tuned to do things it wasn’t originally allowed to do), but the same rebuttal applies. As a general principle, a "tax" on control, essentially saying "you can build something you don't control, or you can build something you control, but if you build something you control then 20% of that control must be used for our purposes", seems like a reasonable position for the legal system.
One idea that seems underexplored is attributing liability to other actors in the pipeline who are more certain to have ample resources. One very d/acc-friendly idea is to place liability on the owner or operator of any device that an AI takes over (e.g., by hacking) in the course of performing some catastrophically harmful action. This would create a very broad incentive to work hard to make the world's infrastructure, especially its computing and biological infrastructure, as secure as possible.
Strategy Two: A Global "Soft Pause" Button on Industrial-Grade Hardware
If I were convinced that we need something "stronger" than liability rules, this is what I would pursue: the capability to reduce worldwide available compute by about 90-99% for 1-2 years at a critical moment, buying humanity more time to prepare. The value of 1-2 years should not be understated: a year of "wartime mode" can easily be worth a hundred years of work under conditions of complacency. Ways to implement a "pause" are being explored, including concrete proposals like requiring registration and verification of hardware location.
A more advanced approach is to use clever cryptographic tricks: for example, industrial-grade (but not consumer-grade) AI hardware could be equipped with a trusted hardware chip that only allows it to keep running if, each week, it receives 3-of-3 signatures from major international bodies, including at least one that is not military-affiliated. The signatures would be device-independent (if desired, we could even require a zero-knowledge proof that they were published on a blockchain), so it would be all-or-nothing: there would be no practical way to authorize one device to keep running without authorizing every other one.
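As a minimal sketch of what such a check might look like, the Python below verifies weekly 3-of-3 signatures over nothing but the week number, which is exactly what makes the authorization device-independent. The key setup and message format are illustrative assumptions, not a real specification; it uses the Ed25519 primitives from the `cryptography` package.

```python
# Toy sketch of the weekly 3-of-3 authorization check (assumptions, not a spec).
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Hypothetical stand-ins for three institutions' keys; in reality only the
# public keys would be burned into the chip at manufacture time.
institution_keys = [ed25519.Ed25519PrivateKey.generate() for _ in range(3)]
TRUSTED_PUBKEYS = [k.public_key() for k in institution_keys]

def chip_allows_operation(week: int, signatures: list[bytes]) -> bool:
    """Keep running only if all three institutions signed this week's number.

    The signed message contains only the week number, never a device ID, so
    any valid set of signatures authorizes every device at once: all or nothing.
    """
    message = week.to_bytes(8, "big")
    if len(signatures) != len(TRUSTED_PUBKEYS):
        return False
    for pubkey, sig in zip(TRUSTED_PUBKEYS, signatures):
        try:
            pubkey.verify(sig, message)  # raises InvalidSignature on failure
        except InvalidSignature:
            return False
    return True

# Demo: all three bodies sign week 2870, so the chip keeps running;
# stale signatures from last week do not authorize the next one.
week = 2870
sigs = [k.sign(week.to_bytes(8, "big")) for k in institution_keys]
assert chip_allows_operation(week, sigs)
assert not chip_allows_operation(week + 1, sigs)
```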
This feels like “checking the box” in terms of maximizing benefits and minimizing risks:
This is a useful capability: if we get warning signs that near-superintelligent AI is starting to do something that has the potential to cause catastrophic damage, we’d want to make the transition more slowly.
Until this critical moment occurs, just having a soft-pause feature won’t hurt developers too much.
Focusing on industrial-scale hardware, and aiming for only 90-99%, avoids dystopian moves like installing spy chips or kill switches in consumer laptops, or forcing small countries into draconian measures.
Focusing on hardware seems to work very well for technological change. We’ve seen that over multiple generations of AI, quality is highly dependent on available computing power, especially in the early versions of a new paradigm. So reducing available computing power by a factor of 10-100 could easily make the difference between a runaway superintelligent AI winning or losing a fast-paced battle with the humans trying to stop it.
Having to check in online once a week would be annoying enough by itself that there would be strong pressure against extending the scheme to consumer hardware.
Compliance can be verified through random inspections, and verification at the hardware level makes exemptions for particular users difficult (approaches based on legally mandated shutdown rather than technical shutdown lack this all-or-nothing property, which makes them likelier to slide into exemptions for militaries and the like).
Hardware regulation is already being seriously considered, though usually through the framework of export controls, which are essentially a “we trust our side, but not the other” philosophy. Leopold Aschenbrenner once argued that the US should race to gain a decisive advantage, then force China to sign an agreement limiting the number of boxes they are allowed to run. This approach seems dangerous to me, and perhaps combines the flaws of multipolar competition with centralization. If we have to restrict people, it seems better to restrict everyone on an equal basis, and try to cooperate to organize this work, rather than one side trying to dominate everyone else.
d/acc techniques in AI risk
Both strategies (liability and a hardware pause button) have holes, and it's clear that both are only temporary stopgaps: if something can be done on a supercomputer at time T, it can likely be done on a laptop at time T + 5 years. So we need something more stable to buy time for, and many d/acc technologies are relevant here. We can think of their role this way: if AI does take over the world, how would it do it?
It hacks our computers → Cyber defense
It causes a superplague → Bio defense
It manipulates us (into trusting it, or into distrusting each other) → Information defense
As briefly mentioned above, liability rules are a naturally d/acc-friendly form of regulation, because they can very effectively motivate the whole world to adopt these defenses and take them seriously. Taiwan has recently been experimenting with liability for false advertising, which can be seen as an example of using liability to encourage information defense. We shouldn't be too eager to impose liability everywhere, and we should remember the benefits of plain freedom in letting the little guy participate in innovation without fear of lawsuits; but where we do want to push harder on safety, liability can be quite flexible and effective.
IV. The role of cryptocurrency in d/acc
Much of d/acc goes well beyond typical blockchain topics: biosecurity, BCI, and collaborative discourse tools seem far from what crypto people usually talk about. However, I think there are some important connections between cryptocurrency and d/acc, in particular:
d/acc is an extension of the underlying values of crypto (decentralization, censorship resistance, open global economy and society) to other areas of technology.
Since crypto users are natural early adopters and have consistent values, the crypto community is a natural early user of d/acc technology. The heavy emphasis on community (both online and offline, such as at events and pop-ups), and the fact that these communities are actually doing high-stakes things rather than just talking to each other, makes the crypto community a particularly attractive incubator and testbed for d/acc technologies that are fundamentally geared toward groups rather than individuals (e.g., a large portion of information defense and bio defense). Crypto people just do things together.
Many crypto technologies can be applied directly to d/acc subject areas: blockchains for building more robust and decentralized financial, governance, and social-media infrastructure; zero-knowledge proofs for privacy; and so on. Today, many of the largest prediction markets are built on blockchains, and they are gradually becoming more sophisticated, decentralized, and democratic.
There are also win-win opportunities to collaborate on crypto-adjacent technologies that are extremely useful to crypto projects and also key to achieving d/acc goals: formal verification, computer software and hardware security, and adversarially robust governance technologies. These make the Ethereum blockchain, wallets, and DAOs more secure and robust, and they also serve important civilizational defense goals, such as reducing our vulnerability to cyberattacks, including those that could come from superintelligent AI.
Cursive is an app that uses fully homomorphic encryption (FHE) to let users identify areas of shared interest with other users while preserving privacy. It was used at Edge City in Chiang Mai, one of Zuzalu's many offshoots.
Beyond these direct intersections, there is another important point of common interest: funding mechanisms.
V. d/acc and public goods funding
One of my long-standing interests is coming up with better mechanisms for funding public goods: projects that are valuable to very large groups of people but that lack a naturally viable business model. My past work here includes my contributions to quadratic funding and its use in Gitcoin Grants, to retro PGF, and most recently to deep funding.
Many people are skeptical of the idea of public goods. This skepticism generally comes from two sources:
Indeed, public goods have historically been used as an excuse for heavy-handed central planning and government intervention in society and the economy.
It is widely believed that public goods funding lacks rigor, operates on social-desirability bias (what sounds good rather than what is actually good), and favors insiders who can play the social game.
These are important criticisms, and good ones. However, I believe that effective decentralized public goods funding is essential to the d/acc vision, because a key d/acc goal (minimizing central points of control) inherently frustrates many traditional business models. It is possible to build successful businesses on open source (several Balvi grantees are doing so), but in some cases it is hard enough that important projects need extra ongoing support. So we have to do the hard thing: figure out how to fund public goods in a way that answers both criticisms above.
The solution to the first problem is credible neutrality and decentralization. Central planning is problematic because it hands control to elites who can abuse their power, and because it tends to become overfit to the present, growing less and less effective over time. Quadratic funding and similar mechanisms are precisely about funding public goods in a way that is as credibly neutral and as (architecturally and politically) decentralized as possible.
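For readers who haven't seen it, here is a minimal sketch of the quadratic funding rule mentioned above, in Python; it is illustrative only, ignoring real-world details like matching-pool caps and collusion defenses. A project's total allocation is the square of the sum of the square roots of individual contributions, so broad support beats a single deep pocket.

```python
import math

def quadratic_match(contributions: list[float]) -> float:
    """Matching subsidy under the (idealized) quadratic funding rule."""
    total = sum(math.sqrt(c) for c in contributions) ** 2
    return total - sum(contributions)  # paid out of a shared matching pool

# 100 donors giving $1 each draw a far larger match than 1 donor giving $100:
print(quadratic_match([1.0] * 100))  # 9900.0
print(quadratic_match([100.0]))      # 0.0
```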
The second problem is more challenging. A common criticism of quadratic funding is that it quickly turns into a popularity contest, forcing those seeking funds to spend huge energy on public outreach. Furthermore, projects that are "in plain sight" (e.g., end-user applications) get funded, while more obscure projects (the proverbial "dependency maintained by someone in Nebraska") get nothing at all. Optimism's retro funding relies on a small number of expert badge holders; there, the popularity-contest effect is weakened, but the social effect of having close personal relationships with badge holders is amplified.
Deep Funding is my latest effort to address this problem. Deep Funding has two main innovations:
Dependency graphs. Instead of asking each juror a global question ("How valuable is project A to humanity?"), we ask a local one ("Which is more valuable to outcome C: project A or project B? And by how much?"). Humans are notoriously bad at global questions: in one famous study, when asked how much they would be willing to pay to save N birds, respondents answered roughly $80 whether N was 2,000, 20,000, or 200,000. Local questions are much more tractable. We then combine the local answers into a global answer by maintaining a dependency graph: for each project, which other projects contributed to its success, and by how much?
AI as distilled human judgment. Each juror is assigned only a small random sample of all the questions. There is an open competition in which anyone can submit an AI model that attempts to fill in all the edges of the graph efficiently. The final answer is a weighted sum of the models most compatible with the jury's answers (a toy sketch follows below). This lets the mechanism scale to very large sizes while requiring only a small number of "bits" of information from the jurors, which reduces opportunities for corruption and ensures each bit is high-quality: jurors can think long and hard about each question instead of quickly clicking through hundreds. And by holding an open competition between AIs, we reduce the bias of any single AI's training and management process. The open market of AIs is the engine; humans are the steering wheel.
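Here is a toy sketch, in Python, of that aggregation step under assumed data shapes: each competing model predicts a weight for every edge of the dependency graph, models are scored on the jury's spot-checked edges, and the final graph is an agreement-weighted average. The scoring rule and all names are illustrative, not the actual deep funding mechanism.

```python
# Toy sketch of deep-funding-style aggregation (illustrative assumptions only).
# An edge (a, b) means project b contributed to project a's success; its
# weight is the judged share of credit. Each model maps every edge to a weight.

Edge = tuple[str, str]

def combine_models(
    models: list[dict[Edge, float]],
    jury_answers: dict[Edge, float],  # the small random sample jurors answered
    eps: float = 1e-6,
) -> dict[Edge, float]:
    # Score each model by mean squared error against the jury's sampled edges;
    # models that better match human judgment get more weight.
    raw = []
    for m in models:
        mse = sum((m[e] - v) ** 2 for e, v in jury_answers.items()) / len(jury_answers)
        raw.append(1.0 / (mse + eps))
    total = sum(raw)
    weights = [w / total for w in raw]

    # Final answer: per-edge weighted average over all models.
    edges = models[0].keys()
    return {e: sum(w * m[e] for w, m in zip(weights, models)) for e in edges}

# Demo with two hypothetical models and one jury-checked edge.
m1 = {("ethers.js", "ethereum"): 0.30, ("wallet", "ethers.js"): 0.50}
m2 = {("ethers.js", "ethereum"): 0.10, ("wallet", "ethers.js"): 0.45}
jury = {("ethers.js", "ethereum"): 0.28}  # jurors judged only this edge
print(combine_models([m1, m2], jury))     # stays close to m1, which matched the jury
```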
But deep funding is just the latest example: ideas for public goods funding mechanisms have come before, and more will come after. allo.expert catalogs them well. The underlying goal is to create a social gadget that funds public goods with accuracy, fairness, and open access that at least approaches how markets fund private goods. It doesn't have to be perfect; the market itself is far from perfect. But it should work well enough that developers of top-quality open-source projects that benefit everyone can keep doing that work without feeling pressured into unacceptable compromises.
Today, most of the leading projects in d/acc subject areas (vaccines, BCIs, "edge BCIs" like wrist myoelectric sensing and eye tracking, anti-aging drugs, hardware, and more) are proprietary. That has big downsides for public trust, as we have already seen in several of these areas. It also shifts attention toward competitive dynamics ("our team must win this critical industry!") and away from the greater contest: making sure these technologies arrive fast enough to protect us in a world of superintelligent AI. For these reasons, robust public goods funding can be a powerful driver of openness and freedom. This is another way the crypto community can help d/acc: by making a serious effort to explore these funding mechanisms and make them work well in its own context, paving the way for broader adoption of open-source science and technology.
VI. Future outlook
The coming decades will present major challenges. I have been thinking about two challenges recently:
Powerful new waves of technology, especially strong AI, are emerging rapidly, and these technologies come with important pitfalls that we need to avoid. “AI superintelligence” may take five years to achieve, or it may take fifty. In any case, it is not clear that the default outcome is necessarily positive, and as this and the previous article have shown, there are multiple pitfalls to avoid.
The world is becoming less cooperative. Many powerful actors that seemed to act at least sometimes in accordance with high-minded principles in the past (cosmopolitanism, freedom, common humanity, etc.) are now more openly and actively pursuing personal or tribal self-interest.
There are silver linings to each of these challenges, however. First, we now have very powerful tools to do the remaining work faster:
AI can be used, now and in the near future, to build other technologies and to serve as a component in governance (as in deep funding or info finance). It is also highly relevant to BCI, which can itself further boost productivity.
Large-scale coordination is now more possible than before. The internet and social media have expanded the scope of coordination, global finance (including cryptocurrencies) has increased its power, and now information defense and collaboration tools can improve its quality, and perhaps soon human-to-human forms of BCI can increase its depth.
Formal verification, sandboxing (web browsers, Docker, Qubes, GrapheneOS, etc.), secure hardware modules, and other technologies are constantly improving, allowing for better cybersecurity.
Writing any kind of software is much easier than it was two years ago.
Recent fundamental research on how viruses work, especially the simple realization that the most important form of virus transmission is airborne, has provided a much clearer path for how to improve biodefense capabilities.
Recent advances in biotech (e.g., CRISPR, advances in bioimaging) are making all kinds of biotech more accessible, whether for defense, longevity, super happiness, exploring multiple novel biological hypotheses, or just doing something really cool.
Advances in computing and biotechnology are together enabling the emergence of synthetic biology tools that you can use to tweak, monitor, and improve your health. Cyber defense technologies such as cryptography make personalization more feasible.
Second, now that many of the principles we hold dear are no longer the preserve of a narrow minority, they can be reclaimed by a broad coalition open to anyone in the world. This may be the biggest upside of the recent political "realignments" around the world, and it is worth taking advantage of. Crypto has already exploited this well and found global appeal; d/acc can do the same.
Access to tools means we can adapt and improve both our biology and our environment, and the "defense" part of d/acc means we can do this without infringing on others' freedom to do the same. The principle of liberal pluralism means we can have great diversity in how we do it, and our commitment to common human goals means that it should get done.
We humans are still the brightest star. The task we face to build a brighter 21st century, protecting human survival, freedom, and autonomy as we move toward the stars, is a challenging one. But I believe we can do it.