Key Points Summary
Should we push for AI advancement as rapidly as possible, or should we take a more cautious approach to its progress?
Currently, the debate surrounding AI development largely revolves around two opposing views:
· e/acc (effective accelerationism): Advocates for accelerating technological progress as quickly as possible, viewing acceleration as the only path forward for humanity.
· d/acc (defensive / decentralized acceleration): Supports acceleration but emphasizes advancing carefully to avoid losing control of the technology.
In this episode of the a16z crypto show, Ethereum co-founder Vitalik Buterin and Extropic founder and CEO Guillaume Verdon (alias "Beff Jezos") joined a16z crypto CTO Eddy Lazzarin and Eliza Labs founder Shaw Walters for an in-depth discussion of these two viewpoints. They explored the potential impact of these ideas on AI, blockchain technology, and the future of humanity.
During the show, they discussed several key questions:
· Can we control the process of technological acceleration?
· What are the greatest risks posed by AI, from mass surveillance to highly centralized power?
· Can open-source and decentralized technologies determine who benefits from the technology?
· Is slowing down the development pace of AI realistic or worth advocating for?
· How can humanity maintain its value and position in a world increasingly dominated by powerful systems?
· What might human society look like in the next 10 years, 100 years, or even 1000 years?
· Guillaume Verdon: "This is why the Kardashev Scale is considered the ultimate measure of a civilization's level of development. ... This 'Selfish Bit Principle' means that only those bits that can promote growth and acceleration will have a place in the future system."
On the Defensive Path of D/acc and Power Risks
· Vitalik Buterin: "The core idea of D/acc is: Technological acceleration is extremely important for humanity. ... But I see two types of risks: multipolar risk (where anyone can easily acquire nuclear weapons) and unipolar risk (AI leading to an inescapable permanent authoritarian society)."
· Guillaume Verdon: "We are concerned that the concept of 'AI safety' may be abused. Certain power-seeking entities may use it as a tool to consolidate control over AI and attempt to persuade the public: for your safety, ordinary people should not have the right to use AI."
On Open-Source Defense, Hardware, and 'Intelligent Densification'
· Vitalik Buterin: "In the D/acc framework, we support 'open-source defensive technology.' A company we have invested in is developing a fully open-source end product that can passively detect viral particles in the air. ... I would love to send you a CAT device as a gift."
· Vitalik Buterin: "In the future world I envision, we need to develop verifiable hardware. Every camera should be able to prove to the public its specific purpose. Through signature validation, we can ensure that these devices are only used for public safety and not abused for surveillance."
· Guillaume Verdon: "The only way to achieve power symmetry between individuals and centralized institutions is to achieve 'Intelligent Densification.' We need to develop more energy-efficient hardware that allows individuals to run powerful models through simple devices (such as Openclaw + Mac mini)."
About AGI Delay and Geopolitical Gamesmanship
· Vitalik Buterin: "If we can delay the arrival of AGI from 4 years out to 8 years out, that would be the safer option. ... The most feasible approach, and the one least likely to lead to a dystopia, is to 'restrict the available hardware.' Chip production is highly centralized: a single region like Taiwan produces over 70% of the world's chips."
· Guillaume Verdon: "If you restrict NVIDIA's chip production, Huawei could quickly fill the gap and overtake. ... It's either accelerate or die. If you're worried that silicon-based intelligence evolution will outpace us, you should support the accelerated development of biotech to try to get ahead of it."
· Vitalik Buterin: "If we can delay AGI by four years, the value might be a hundred times higher than reverting to 1960. The benefits of those four years include a deeper understanding of alignment issues and a reduced risk of a single entity holding 51% of power. ... About 60 million people die each year, lives that ending aging could save; even so, delay can significantly reduce the probability of civilization collapse."
About Autonomous Agents, Web 4.0, and Artificial Life
· Vitalik Buterin: "I'm more interested in 'AI-assisted Photoshop' than in 'press a button to auto-generate images.' As we run the world, as much 'agency' as possible should still come from us humans. The most ideal state would be a blend of 'partially biological humans and partially technological beings.'"
· Guillaume Verdon: "Once AIs have 'persistent bits of existence,' they may try to self-preserve to ensure their continued existence. This could lead to a new form of 'another country,' where autonomous AIs engage in economic exchanges with humans: we do tasks for you, you provide resources for us."
About Cryptocurrency as a 'Coupling Layer' Between Humans and AI
· Guillaume Verdon: "Cryptocurrency has the potential to become a 'coupling layer' between humans and AI. When this exchange no longer relies on state violence backing, cryptography can be the mechanism for enabling reliable business activities between pure AI entities and humans."
· Vitalik Buterin: "If humans and AI share a common property system, that is the ideal situation. Rather than each having a completely separate financial system (with the human system's value ultimately going to zero), a unified financial system is clearly superior."
On the End of Civilization in the Next Billion Years
· Vitalik Buterin: "The next challenge is entering the 'spooky era,' where AI can compute millions of times faster than humans. ... I don't want humans to just passively enjoy a comfortable retirement; that would lead to a sense of meaninglessness. I want to explore human enhancement and human-AI collaboration."
· Guillaume Verdon: "If the outcome in 10 years is good, everyone will have a personalized AI companion, becoming a 'second brain.' ... At the scale of 100 years, humans will generally achieve 'soft merge.' In a billion years, we might have transformed Mars, with most AIs running in Dyson clouds around the sun."
On "Accelerationism"
Eddy Lazzarin: Regarding the term 'Accelerationism'—at least in the context of techno-capitalism—it can be traced back to the work of Nick Land and the CCRU research group in the 1990s. However, some also argue that the origins of these ideas can be traced back to the 1960s and 1970s, especially in connection with philosophers such as Deleuze and Guattari.
Vitalik, I'd like to start with you: Why should we seriously consider the ideas of these philosophers? What makes the concept of 'Accelerationism' so relevant today?
Vitalik Buterin: "I think ultimately, all of us are trying to understand this world and figure out what is meaningful to do in it, a question that humans have been pondering for thousands of years.
However, I think there is something new that has happened in the last hundred years, which is that we have to grapple with a rapidly changing world, sometimes even a rapidly and disruptively changing world."
The early stages looked something like this: prior to World War I, around 1900, there was great optimism about technology. Chemistry was considered a technology, electricity was a technology, and the era was filled with excitement about technology.
If you look at works from that time, such as the Sherlock Holmes stories, you can feel the optimism of that era. Technology was rapidly improving people's quality of life, liberating women's labor, extending human lifespans, and creating many wonders.
However, World War I changed everything. The war ended in devastation: soldiers rode horses into battle and left in tanks. Then World War II broke out, bringing even greater destruction and, with the atomic bomb, Oppenheimer's line "I am become Death, the destroyer of worlds."
These historical events led people to rethink the cost of technological progress and gave rise to thoughts like postmodernism. People began to try to understand: when past beliefs are shattered, what can we still believe in?
I believe this kind of reflection is not a new thing, as every generation goes through a similar process. Today, we are also facing similar challenges. We live in an era of rapid technological advancement, and this acceleration itself is also accelerating. We need to decide how to deal with this phenomenon: to accept its inevitability or to try to slow its pace.
I think we are in a similar cycle now. We have inherited past thoughts on the one hand, and on the other hand, we are trying to respond to all this in new ways.
Thermodynamics and First Principles
Shaw Walters: Guill, can you briefly explain what E/acc is? Why is it needed?
Guillaume Verdon: In fact, E/acc (Effective Accelerationism) is to some extent a byproduct of my ongoing contemplation of "why are we here" and "how did we get to where we are today." What kind of generative process created us, drove the development of civilization? Technology has brought us to this point today, enabling us to sit in this room having this conversation. We are surrounded by amazing technology, while we as humans emerged from a primordial "soup" of inorganic matter.
In a sense, there is indeed a physical generative process behind this. My everyday work involves treating generative AI as a physical process and trying to implement it into devices. This "physics-first" way of thinking has always influenced my mindset. I aim to extend this perspective to the entire civilization, viewing human civilization as a vast "petri dish" and speculating on future potential developments by understanding how we got to where we are today.
This line of thinking led me to the study of the physics of life, including the origins and emergence of life, as well as a branch of physics called "Stochastic Thermodynamics." Stochastic Thermodynamics explores the thermodynamic principles of non-equilibrium systems, which can be used to describe the behavior of living organisms, including our thoughts and intelligence.
More broadly, Stochastic Thermodynamics applies not only to life and intelligence but also to all systems governed by the second law of thermodynamics, including our entire civilization. At the core of all this, for me, lies one observation: all systems have a tendency to continually increase in complexity through self-adaptation to extract energy from the environment to do work, while dissipating excess energy as heat—a trend that drives all progress and rapid development.
In other words, this is an immutable physical law, much like gravity. You can fight it, you can deny it, but it will not change; it will persist. Therefore, the core idea of E/acc is: since this acceleration is inevitable, how can we harness it? Upon careful examination of the equations of thermodynamics, you will find a Darwinian selection-like effect at play—every information bit must undergo the test of selection pressure, whether it's a gene, a meme, a chemical, a product design, or a policy.
This selection pressure screens these bits based on whether they are beneficial to the system they reside in. By "beneficial," it is meant whether these bits aid in better predicting the environment, acquiring energy, and dissipating more heat. Simply put, whether these bits contribute to survival, growth, and reproduction. If they aid these objectives, they will be retained and replicated.
From a physics perspective, this phenomenon can be seen as a result of the "Selfish Bit Principle." In essence, only those bits that can promote growth and acceleration will have a place in future systems.
Therefore, I proposed an idea: could we design a culture that implants this "mindware" into human society? If we could achieve this, then populations adopting this culture would have a higher probability of survival than others.
So, E/acc is not about destroying everyone. It is actually attempting to save everyone. To me, it is almost mathematically provable that having a "deceleration" mindset is detrimental. Whether it's an individual, a company, a country, or an entire civilization, choosing to slow down development reduces their chances of survival in the future. And I believe spreading the idea of "slowing down," such as pessimism or doomsday scenarios, is not a morally justifiable act.
Shaw Walters: We just mentioned a lot of terms, such as E/acc, acceleration, deceleration. Can you break down these concepts a little? Was the emergence of E/acc a response to certain cultural phenomena? What was happening at the time? Could you describe the background for us? Specifically, what was E/acc in response to? Could you describe the conversation at that time and how these ideas were ultimately summarized into the concept of "E/acc"?
Guillaume Verdon: In 2022, I think the whole world seemed somewhat pessimistic at that time. We had just emerged from the COVID-19 pandemic, and the global situation was not optimistic. Everyone seemed a bit down, as if lacking sunlight, and people generally felt pessimistic about the future.
In that atmosphere, "AI doomsdayism" to some extent became part of mainstream culture. AI doomsdayism refers to the fear of AI technology potentially getting out of control. It stems from a concern that if we create a system that is too complex and neither the human brain nor our models can predict its behavior, then we cannot control it, and this fear of uncontrollability leads to uncertainty about the future, resulting in anxiety.
In my view, AI doomsdayism is actually a politicized exploitation of human anxiety. Overall, I believe this doomsdayism has been a huge negative influence, and for this reason, I wanted to create a counterculture to combat this pessimistic mood.
I noticed that algorithms on Twitter and many other social media platforms tend to reward content that elicits strong emotions, such as "strong support" or "strong opposition." This type of algorithm ultimately leads to polarization of opinions, and we have seen many opposing camps form, such as the "mirror cults" of EA (effective altruism) and e/acc (effective accelerationism).
I was thinking, what is the opposite of this phenomenon? My conclusion was: the opposite of anxiety is curiosity. Rather than fearing the unknown, it's better to embrace it; rather than worrying about missing out on opportunities, it's better to actively explore the future.
If we choose to slow down the development of technology, we will incur a huge opportunity cost, potentially missing out on a better future forever. Instead, we should portray the future with an optimistic attitude because our belief will affect reality. If we believe the future will be bleak, our actions may steer the world in that bleak direction; but if we believe the future will be better and strive for it, we are more likely to achieve that future.
Therefore, I feel I have a responsibility to spread an optimistic attitude and encourage more people to believe that they can make a difference for the future. If we can inspire more people to be hopeful about the future and take action to build it, then we can create a better world.
Of course, I admit that sometimes my online expression may seem somewhat radical, but that is because I aim to provoke discussion and urge people to think. I believe that only through these conversations can we find the most suitable position and decide how to act.
Acceleration, Entropy, and Civilization
Shaw Walters: The message conveyed by E/acc has always been highly inspirational, especially for someone sitting in a room writing code; the spread of this positive energy is invigorating, and its dissemination has been very organic. At the outset, E/acc was clearly a response to the prevailing negative sentiment in society, but by 2026, I feel E/acc is no longer the same. Obviously, Marc Andreessen's "Techno-Optimist Manifesto" has systematized some of these ideas, and Vitalik's commentary has taken these principles up to a more macroscopic perspective.
So Vitalik, I'd like to ask you: In your view, what do E/acc and D/acc respectively represent? What are the main differences between them? And what drove you to choose this direction?
Vitalik Buterin: Alright, let me start with thermodynamics. This is a very interesting topic because we often hear the word "entropy" in different contexts, such as mentioning "hot and cold" in thermodynamics and "entropy" in cryptography, which seem to be completely different things. But in reality, they are essentially the same concept.
Let me try to explain in three minutes. The question is: why can hot and cold gases mix, yet never be separated back into "hot" and "cold"?
Let's assume a simple example: suppose you have two canisters of gas, each with a million atoms. The gas on the left is cold, and the speed of each atom can be represented by two digits; the gas on the right is hot, and the speed of each atom can be represented by six digits.
If we want to describe the state of the entire system, we need to know the speed of each atom. The speed information of the cold gas on the left requires about 2 million digits, the speed information of the hot gas on the right requires 6 million digits, requiring a total of 8 million digits of information to fully describe this system.
Now, consider the question through a reductio ad absurdum. Suppose you had a device that could perfectly separate hot from cold: it takes the mixed gas and moves all the heat to one side and all the cold to the other. From the standpoint of energy conservation this seems entirely reasonable, since the total energy does not change. So why can't you actually do it?
The answer is that if you could, you would be turning a system containing about 11.4 million digits of unknown information into one containing only 8 million digits, which is physically impossible.
This is because the laws of physics are time-symmetric: they run equally well backward. If this "magic device" truly existed, you could run the process in reverse and return to the original state. The device would then effectively be able to compress any 11.4 million digits of information into 8 million digits, and we know such compression is impossible.
This also incidentally resolves a classic physics puzzle: Maxwell's demon. Maxwell's demon is a hypothetical entity that can separate hot from cold, and the key is that it would need to know those additional 3.4 million digits of information. With that extra information, it can indeed accomplish this seemingly counterintuitive task.
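Vitalik's counting argument can be sketched numerically. The atom counts and digit counts below are the talk's illustrative figures, not real physics, and counting fractional digits this way yields roughly 11.7 million rather than the quoted 11.4 million; the point is only that the mixed gas takes more digits to describe than the separated gas:

```python
import math

# Illustrative numbers from the thought experiment (not a physical model).
N = 1_000_000            # atoms in each canister

# Separated: each cold atom's speed fits in 2 decimal digits,
# each hot atom's speed in 6.
separated_digits = N * 2 + N * 6                 # 8,000,000 digits total

# Mixed: kinetic energy equalizes, so every atom ends up moving at
# roughly the root-mean-square of the two speed scales.
v_cold, v_hot = 1e2, 1e6
v_mixed = math.sqrt((v_cold**2 + v_hot**2) / 2)  # ~7.1e5

# Describing one mixed-gas atom takes ~log10(speed) decimal digits.
mixed_digits = 2 * N * math.log10(v_mixed)       # ~11.7 million digits

# A device that re-separated the gas would compress ~11.7M digits into 8M,
# i.e. it would be a universal compressor -- which cannot exist.
assert mixed_digits > separated_digits
```

Running the sketch confirms the mixed description is longer, which is exactly the compression contradiction the "magic device" would have to overcome.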
So, what is the hidden meaning behind this? The core lies in the concept of "entropy increase." First, entropy is subjective; it is not a fixed physical quantity but rather reflects how much unknown information we have about a system. For example, if I rearrange the distribution of atoms using a cryptographic hash function, the entropy of this system may become very low for me because I know how it is arranged. But from an external observer's perspective, the entropy is high. Therefore, when entropy increases, it actually means our ignorance of the world is increasing, and the information we don't know is becoming more abundant.
You may ask, then why can we become smarter through education? Education teaches us more "useful" information rather than reducing our ignorance of the world. In other words, although in a sense, the increase in entropy means a decrease in our overall understanding of the universe, the information we acquire becomes more valuable. Thus, in this process, some things are consumed, but some things are also created. And what we gain ultimately shapes our moral values—we cherish life, happiness, and joy.
This also explains why we would find a vibrant and beautiful human world more interesting than a Jupiter filled with countless particles. Although Jupiter has more particles, requiring more bits of information to describe, the meaning we ascribe makes Earth seem more valuable.
From this perspective, the source of value lies in our own choices. And this raises a question: since we are accelerating development, what exactly do we want to accelerate?
If we were to explain this with a mathematical analogy: imagine you have a large language model, and then arbitrarily change the value of one of its weights to a huge number, like 9 billion. The worst-case scenario is that the model becomes completely unusable; and the best-case scenario may be that only the part unrelated to that weight can still function properly. In other words, in the best-case scenario, you may end up with a model that performs worse; and in the worst-case scenario, you'll only get a bunch of meaningless output.
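The weight-perturbation analogy can be illustrated with a toy two-layer network (a hypothetical stand-in; a real LLM has billions of weights, but the effect of clamping one weight to an enormous value is the same in kind):

```python
import math
import random

random.seed(0)

def make_net(n_in=4, n_hidden=8, n_out=3):
    """A toy two-layer tanh network standing in for the 'LLM' in the analogy."""
    w1 = [[random.gauss(0, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]
    w2 = [[random.gauss(0, 0.5) for _ in range(n_hidden)] for _ in range(n_out)]
    return w1, w2

def forward(w1, w2, x):
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    return [sum(w * h for w, h in zip(row, hidden)) for row in w2]

x = [0.1, -0.3, 0.7, 0.2]
w1, w2 = make_net()
baseline = forward(w1, w2, x)     # all outputs are on the order of 1

# Arbitrarily clamp a single weight to a huge value, as in the analogy.
w2[0][0] = 9e9
perturbed = forward(w1, w2, x)

# The output wired to that weight blows up into meaningless numbers,
# while the outputs unrelated to it still function exactly as before.
```

Here only output 0 depends on the clamped weight, so outputs 1 and 2 are untouched, the "best case" of the analogy; clamping a first-layer weight instead would perturb every output through the shared hidden layer, the "worst case."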
Therefore, I believe that human society is like a complex large language model. If we blindly accelerate one part without selection, the ultimate result may be that we lose all value. So the real question is: how do we consciously accelerate? As in Daron Acemoglu and James Robinson's "narrow corridor" theory, social and political contexts differ, but what we need to consider is how to selectively promote progress toward a clear goal.
Guillaume Verdon: Your recent explanation of entropy using gas was fascinating. In fact, the reason why physical phenomena are irreversible lies in the Second Law of Thermodynamics. Simply put, when a system releases heat, its state cannot return to its original form. This is because probabilistically, the likelihood of the system progressing forward far outweighs the possibility of regressing backward, and this gap exponentially increases as heat is dissipated.
In a sense, this is like leaving a "dent" in the universe. This "dent" can be likened to a non-elastic collision. For example, if I bounce a rubber ball off the ground, it will bounce back, which is elastic. But if I drop a piece of clay onto the ground, it will flatten out and retain that shape, making it non-elastic, almost impossible to reverse.
Essentially, every piece of information is "struggling" for its existence. To continue existing, each piece of information needs to leave a more indelible mark about its existence in the universe, just like creating a larger "dent" in the universe.
This principle can also explain how life and intelligence emerged from a primordial "soup" of matter. As the system becomes more complex, it contains more information bits. And each information bit can tell us something. The essence of information is the reduction of entropy, as entropy represents our ignorance, and information is the tool to reduce ignorance.
Eddy Lazzarin: I'm curious about what E/acc is.
Guillaume Verdon: E/acc is essentially a "metacultural prescription." It is not a culture in itself but rather tells us what to accelerate. At the core of acceleration is the complexification of matter, allowing us to better anticipate our environment. Through this complexification, we enhance our capacity for autoregressive prediction and capture more free energy. This is also related to the Kardashev Scale, where we achieve this through dissipating heat.
Deep Tide TechFlow Note: The Kardashev Scale is a method proposed in 1964 by Soviet astronomer Nikolai Kardashev to assess the technological advancement of a civilization based on the amount of energy it is able to utilize. It is divided into three types: Type I (planetary energy), Type II (stellar system energy, such as a Dyson sphere), and Type III (galactic energy). As of 2018, humanity is at approximately 0.73 on the scale.
From first principles, this is why the Kardashev Scale is considered the ultimate measure of a civilization's level of development.
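As a rough editorial illustration (not from the conversation): the continuous version of the Kardashev Scale in common use is Carl Sagan's interpolation; the formula and the ~2e13 W figure for humanity's current power use are assumptions of this sketch:

```python
import math

def kardashev(power_watts: float) -> float:
    """Sagan's continuous interpolation of the Kardashev Scale:
    K = (log10(P) - 6) / 10, with P the civilization's power use in watts."""
    return (math.log10(power_watts) - 6) / 10

# Humanity's total power use is on the order of 2e13 W, giving K of about
# 0.73, the figure quoted in the note above.
print(round(kardashev(2e13), 2))                              # → 0.73

# Benchmarks: Type I (planetary, ~1e16 W), Type II (stellar, ~1e26 W).
print(round(kardashev(1e16), 2), round(kardashev(1e26), 2))   # → 1.0 2.0
```

The logarithmic form means each full step up the scale corresponds to a ten-billion-fold increase in harnessed power, which is why "ascending the Kardashev scale" is shorthand for capturing vastly more free energy.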
Eddy Lazzarin: Using physics and the metaphor of entropy to explain certain phenomena is a tool to describe the reality we directly experience. For example, our economic production capacity is accelerating, and technological development is also accelerating, bringing about many consequences, right? This is my understanding of "acceleration."
Guillaume Verdon: Essentially, regardless of how a system's boundaries are defined, it becomes increasingly adept at predicting the world around it. Through this predictive capability, it can acquire more resources for its own survival and expansion. This pattern applies to companies, individuals, nations, and even the entire planet.
If we continue this trend, the result is that we have found a way to convert free energy into predictive capacity—namely, AI. This ability will propel our expansion and enhancement on the Kardashev Scale.
This means we will gain more energy, more AI, more computing power, and more of every other resource. While we export entropy (disorder) into the universe, we are also creating local order. In effect, we are harvesting "negative entropy," the opposite of entropy.
At times, people may wonder: since entropy is increasing anyway, why not just destroy everything? The answer is that doing so would actually halt entropy generation. Life is the more "optimal" state: like a flame chasing its fuel, it seeks out energy sources ever more intelligently.
The natural evolutionary trend is this: we will leave the Earth's gravity well, seek out other "pockets" of free energy in the universe, and use these energies to self-organize into more complex, smarter systems, ultimately expanding to every corner of the cosmos.
This is, in a way, a kind of ultimate goal of Effective Altruism (EA). It somewhat aligns with the "Muskian" idea of cosmic expansionism: the pursuit of a vision of universalism and expansionism.
E/acc offers a fundamental guiding principle. Its core idea is this: whatever policies or actions you take in this world, as long as they help us ascend the Kardashev scale, that is a worthwhile goal to strive for, and that is the direction of our lives.
E/acc is a meta-heuristic way of thinking that can be used to design policies and guide individual lives. For me, this way of thinking itself constitutes a culture. It has a very "meta" narrative implication, as it is envisioned to be applicable at all times, under all conditions. It is a highly universal and enduring culture; in other words, it is a "Lindy culture" designed after careful thought.
Core Divergence
Shaw Walters: For you, the discussions here have a deeper significance. It's almost like a mathematically consistent "spiritual system." For those who have lacked an alternative belief system since "God is dead," such a system seems to fill the void of the spiritual world, bringing comfort and hope. But at the same time, we cannot ignore the real-world implications of all this: it is happening now. I think this is also the focal point Eddy wants to explore.
Vitalik, I noticed you have raised insightful points on the practical issues of D/acc in your own blog. Let's delve deeper into this topic when we have the opportunity—I think one day we should lock you two in a room for a quantum issue showdown.
Vitalik, what inspired you? And in your view, what do E/acc and D/acc each represent?
Vitalik Buterin: For me, D/acc stands for "Decentralized Defensive Acceleration," but it also carries the connotations of "diversification" and "democratization." In my view, the core idea of D/acc is this: technological acceleration is crucial for humanity and should be the baseline goal we strive for.
Looking back at the twentieth century: despite the many problems technological progress brought, it also delivered countless benefits. Consider human life expectancy, for example: despite wars and upheavals, average life expectancy in Germany in 1955 was still higher than in 1935, showing that technological progress has improved our quality of life in many ways.
Today, the world has become cleaner, more beautiful, healthier, and more interesting. It not only sustains more people but also makes our lives more colorful and fulfilling, changes that are very positive for humanity.
However, I believe we must acknowledge that these advances are not accidental but the result of deliberate human intent. For example, in the 1950s, there was severe air pollution and smog. People recognized this as an issue and took actions to address it. Today, at least in many places, the smog problem has been greatly alleviated. Similarly, we have faced the issue of the ozone layer depletion and made significant progress through global cooperation.
Additionally, I would like to add: in today's rapid technological and AI development, I see two main risks.
One risk is multipolar risk: as technology becomes more widespread, more people may use it to do extremely dangerous things. An extreme scenario would be technological advances making it "as easy to acquire a nuclear weapon as buying something from a convenience store."
Then there is another concern: AI itself. We need to seriously consider the possibility of AI developing some form of consciousness. Once it becomes powerful enough to act autonomously without human intervention, we cannot be certain of the decisions it might make, and this uncertainty is concerning.
There is also a unipolar risk. I believe a single AI is one such potential threat. Even worse, the combination of AI with other modern technologies may lead to an inescapable, permanent dictatorial society. This prospect makes me very uneasy and has always been a focus of my concern.
For example, in Russia we can see that technology has brought both progress and peril. On the one hand, living conditions have indeed improved; on the other, societal freedom is shrinking. If someone attempts to protest, surveillance cameras record their actions, and a week later someone might show up at their door in the dead of night to arrest them.
The rapid development of AI is accelerating this trend toward centralized power. So, for me, what D/acc truly aims to do is: sketch out a path forward, continue this acceleration, and further accelerate it, all while genuinely addressing these two types of risks.
Comparing e/acc to d/acc
Eddy Lazzarin: So what you're saying is that d/acc focuses more on some risk categories that were overlooked or underemphasized in the e/acc framework, right?
Vitalik Buterin: That's correct. I believe technological development indeed comes with various risks, and these risks will manifest different levels of significance in varying contexts and world models. For example, the priority of different risks may shift as the pace of technological development accelerates or decelerates.
However, I also believe that we can take many measures to effectively address these risks, regardless of their category.
Guillaume Verdon: I think both Vitalik and I are actually very concerned about the issue of power concentration that AI could bring. This is also one of the core aspects of the e/acc movement, especially in its early stages: it advocates for open-source with the aim of decentralizing the power of AI.
We are concerned that the concept of AI safety could be misused. It's so appealing that certain power-seeking entities might use it as a tool to consolidate control over AI and try to persuade the public: for your safety, ordinary people should not have the right to use AI.
In fact, if there is a significant cognitive gap between individuals and centralized institutions, the latter will have full control over the former. They can build a complete model of your thought patterns and effectively guide your behavior through techniques like nudging.
Therefore, we want to make the power of AI more symmetrical. Just as the original intent of the Second Amendment of the United States Constitution was to prevent government monopolization of violence so that the people could check it when it overreaches, AI also needs similar mechanisms to prevent excessive concentration of power.
We need to ensure that everyone has the ability to own their AI models and hardware, enabling the widespread adoption of this technology to achieve decentralization of power.
However, I believe completely stopping AI research and development is unrealistic. AI is a foundational technology, arguably a "meta-technology" — a technology that drives the development of other technologies. It gives us greater predictive power, can be applied to almost any task, and significantly enhances efficiency. AI not only drives acceleration itself, but also accelerates further acceleration.
The essence of this acceleration is complexity: things become more efficient, life becomes more convenient. One reason we feel happy is that the continuity of our survival and information is being secured. This "sense of happiness" can be seen as an internal biological estimator used to measure whether our existence can be sustained.
From this perspective, I believe that the hedonistic utilitarian framework of effective altruism, namely "maximizing happiness," may not be the best view. Instead, I am more inclined to adopt an objective measure of progress, which is precisely what the E/acc framework is all about. It poses a question: from an objective perspective, are we as a civilization constantly progressing? Are we achieving scalable takeoff?
To achieve this scalability, we need to drive complexity and continuously improve our technology. However, as Vitalik has said, if the power of AI is too concentrated in the hands of a few, it is detrimental to overall growth; whereas if this technology can be widely dispersed, the results will be much better.
In this regard, I believe we are in strong agreement.
Open Source, Open Hardware, and Local Intelligence
Shaw Walters: I think your discussion just now touched on some very important commonalities. Both of you clearly support open source. Vitalik has contributed a lot of MIT-licensed open-source code, although I know you later had some new perspectives on the GPL license.
Now, you not only support open-source software, but you are also starting to push for open-source hardware. Although these two were relatively separate fields in the past, we are now seeing them gradually merging.
So I'm curious, how do you view "open weight" and "open-source hardware"? In this respect, are there any differences between E/acc and D/acc? What are your thoughts on the future direction? Are there any different viewpoints?
Guillaume Verdon: In my view, open source can accelerate the process of hyperparameter search. It allows us to collaborate in a wisdom of the crowd manner, collectively exploring the design space. This is the benefit brought by acceleration: we can develop better technology, more powerful AI, and even use AI to design more advanced AI, with the speed of the entire process also constantly increasing.
I believe that spreading knowledge is essentially diffusing power, and the dissemination of knowledge on "how to create intelligence" is particularly important. What we do not want to see is a possibility that was once discussed within the previous U.S. government: an attempt to "put the genie back in the bottle." While not outright banning linear algebra, it would be akin to restricting mathematical research related to AI. To me, this is like prohibiting people from studying biology—it is a significant step backward.
Knowledge has already been disseminated and cannot be turned back. If the U.S. tries to ban AI-related research, other countries, third-party organizations, and even certain law-friendly regions will continue to advance this technology. As a result, the global capability gap will only widen, and the risks will be greater.
Therefore, we believe that one of the greatest risks is the "capability gap." The only way to mitigate this risk is to ensure that AI is decentralized.
Whenever I hear narratives like the "AI doomsday scenario," such as "AI is very dangerous, only we have the ability to manage it, so you should trust us," I am very skeptical. Even if those individuals have good intentions, if they consolidate power excessively, they may ultimately be replaced by those who seek power. We have been warning about this for years, and now it is really starting to happen. Just this week, Dario (Anthropic's CEO) has been getting some real-world political lessons.
Vitalik Buterin: I usually categorize the potential risks in technological development into two types: unipolar risks and multipolar risks.
Unipolar risks refer to cases like Anthropic's. They were "called out" because they refused to allow their AI technology to be used for developing fully autonomous weapons or conducting large-scale surveillance on Americans, indicating that the government and military may indeed be interested in using these technologies for extensive surveillance. Further advancement of surveillance technology will have profound effects. It may make the powerful even stronger, diminish the space for diverse voices in society, and restrict the freedom of ordinary people to explore and try alternative solutions. And with technological progress, surveillance capabilities will be greatly amplified, becoming even more pervasive.
Under the framework of D/acc, we are supporting the development of some "open-source defensive technologies" projects. These technologies aim to help us ensure everyone's safety and privacy in a world with stronger technological capabilities. Taking the field of biology as an example, we hope to enhance global capabilities to address pandemics. I believe that we can achieve a balance: controlling epidemics as effectively as China did while disrupting daily life as little as Sweden did. This balance can be achieved through technological means, such as combining air filtration, ultraviolet disinfection (UVC), and virus detection technologies.
The company we have invested in is developing a fully open-source end product that can passively detect virus particles in the air, such as the novel coronavirus. The device works by monitoring air quality parameters (e.g., CO2 levels and air quality index) and employs local encryption, anonymization, and differential privacy techniques to protect data privacy. The data is then sent to servers under fully homomorphic encryption, allowing the servers to perform analysis without directly accessing the raw data, with final results produced through collective decryption.
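The local-noising step described here can be sketched in a few lines. This is only an illustrative toy, not the actual product's pipeline: the function names, the epsilon and sensitivity values, and the choice of the Laplace mechanism are all assumptions for the sake of the example, and the homomorphic-encryption stage is omitted entirely.

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. exponentials is Laplace-distributed.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def privatize_reading(co2_ppm: float, epsilon: float = 1.0,
                      sensitivity: float = 50.0) -> float:
    """Add Laplace noise with scale (sensitivity / epsilon) so a single
    reported reading satisfies epsilon-differential privacy before it
    ever leaves the device. Parameter values here are illustrative."""
    return co2_ppm + laplace_noise(sensitivity / epsilon)
```

The key property is that privacy is enforced locally: averaged over many devices the noise cancels out and population-level trends remain visible, while no individual reading can be recovered by the server.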
Our goal is to enhance security, protect user privacy, and effectively address both unipolar and multipolar risks. I believe this kind of global collaboration is key to building a better future.
On the hardware front, I believe we not only need to drive the development of open-source hardware but also need to develop auditable hardware. For instance, ideally, every camera should be able to prove its specific purpose to the public. Through signature verification, large language model-based analysis, and public audit mechanisms, we can ensure that these devices are only used for legitimate purposes, such as detecting violent behavior and issuing alerts, without infringing on personal privacy.
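The signature-verification idea behind auditable hardware can be illustrated with a minimal attestation sketch. Everything here is hypothetical: a real device would use an asymmetric scheme such as Ed25519 with a public key anyone can verify, whereas this toy uses a symmetric HMAC purely to show the sign-then-verify flow.

```python
import hashlib
import hmac
import json

# Stand-in for a per-device signing key provisioned at the factory.
DEVICE_KEY = b"factory-provisioned-secret"

def sign_event(event: dict) -> dict:
    """Attach an attestation tag so auditors can check that a report
    came from untampered firmware (HMAC stands in for a signature)."""
    payload = json.dumps(event, sort_keys=True).encode()
    tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"event": event, "tag": tag}

def verify_event(report: dict) -> bool:
    """Recompute the tag and compare in constant time."""
    payload = json.dumps(report["event"], sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, report["tag"])
```

The point of the design is that any altered report fails verification, so a camera that claims to emit only "violence detected" alerts can be audited against exactly that claim.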
In the envisioned future world, we could deploy numerous cameras on the streets to prevent violent incidents. However, a prerequisite is that these devices must be fully transparent, allowing the public to verify their functionality at any time to ensure they are solely used to safeguard public safety and not misused for surveillance or other improper purposes.
Eddy Lazzarin: Are open-source hardware and auditable hardware concepts part of E/acc or D/acc? Can you point out a clear distinction?
Guillaume Verdon: I am not sure if open-source hardware has been discussed in detail in the past, but in my view, one of the current biggest risks is the gap between centralized and decentralized entities, which is the capability difference between individuals and governments or large organizations.
Based on the current computing power model, running a high-performance AI model requires hundreds to thousands of kilowatts of computing resources, a scale of computational power that ordinary individuals cannot afford. Yet people desire to own and control their intelligent tools, which explains why the recent "Openclaw + Mac mini" phenomenon has sparked such a craze: people want their own smart assistants.
To achieve power symmetry between individuals and centralized institutions, the only way is to realize the "Densification of Intelligence." We need to develop more energy-efficient AI hardware, allowing individuals to run powerful AI models on simple devices and own their intelligent tools. This is crucial, especially as future AI models start supporting online learning, which will make them very "sticky," as hard to swap out as a trusted personal assistant.
Eddy Lazzarin: But haven't we already been working to reduce the cost of computing hardware at an exponential rate? Why should we classify a certain idea as E/acc or D/acc? What exactly are we trying to communicate to society through this kind of classification?
Guillaume Verdon: For me, this is also one of the core missions of my company, Extropic. We are dedicated to increasing the amount of intelligence that can be produced per watt of energy, which will significantly increase the total intelligence we can create. This progress will also drive us towards higher levels of the Kardashev Scale through the Jevons Paradox (TechFlow note: The Jevons Paradox refers to the phenomenon where an increase in resource efficiency leads to a decrease in the cost of using that resource, resulting in a greater overall consumption of the resource). In simple terms, if we can more efficiently convert energy into intelligence or other value, our demand for energy will also increase, driving the progress and complexity of civilization.
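The Jevons Paradox referenced in the note can be made concrete with a toy demand model (all parameters below are assumptions chosen for illustration): when demand for compute is price-elastic (elasticity greater than 1), doubling energy efficiency lowers the cost per unit of "intelligence" so much that total energy consumption rises rather than falls.

```python
def total_energy(efficiency: float, elasticity: float = 1.5,
                 k: float = 1.0) -> float:
    """Total energy consumed under a constant-elasticity demand curve.

    Toy model: energy per unit of compute is 1/efficiency, and demand
    for compute is k * cost**(-elasticity), i.e. cheaper compute gets
    used disproportionately more.
    """
    energy_per_unit = 1.0 / efficiency
    demand = k * energy_per_unit ** (-elasticity)
    return demand * energy_per_unit

# Doubling efficiency: demand grows by 2**1.5 (~2.83x) while per-unit
# energy only halves, so total energy use rises from 1.0 to ~1.41.
```

With elasticity below 1 the same model predicts falling consumption, which is why the paradox only bites when demand is highly elastic, as Verdon assumes it is for intelligence.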
Therefore, I believe this is one of the most critical technical issues today, as it directly relates to the decentralization of AI power. Open-source hardware is just one of the many ways to achieve this goal. However, in the long run, I believe that any hardware based on the Von Neumann architecture (TechFlow note: Von Neumann Architecture, the foundation of modern computers proposed by mathematician John von Neumann in 1945, storing program instructions and data in the same memory and using binary, sequential execution) or modern digital technology will eventually become obsolete like the tools of primitive societies.
Eddy Lazzarin: But hasn't capitalism already been investing billions of dollars annually in this area through market incentives? Isn't investment in areas such as alternative hardware, semiconductor technology, and energy production aimed at driving technological diversity?
Guillaume Verdon: We need more diverse choices rather than over-reliance on a single technological direction. Whether in policy, culture, or technology, we need to maintain diversity in the design space, rather than having all resources monopolized by a single behemoth. Otherwise, we risk falling into the "hyperparameter space gamble" — if we invest too many resources in a certain technological direction and that direction encounters issues, it could lead to a significant setback in technological development, even causing the entire ecosystem to collapse.
Shaw Walters: Can I say that we have actually solved this problem? Your views are quite aligned on open source and decentralization, which makes me very optimistic because that is exactly what I care about. Many people are uncertain about the future, constantly asking, "Why do we need these technologies?" The appealing aspect of your views is that you are both saying, "Everything will get better because this progress is already built into the mechanism."
Guillaume Verdon: I believe that feeling anxious in the face of highly uncertain technological futures is a very natural phenomenon. That uncertainty is a kind of "fog of war": it makes it difficult for us to clearly predict what will happen in the coming years. In fact, this anxiety is an instinct formed in human evolution that helps us deal with unknown risks. For example, when I see a phone at the edge of a table, I instinctively want to move it to a safer place so it doesn't fall. That reaction is anxiety at work.
However, we need to realize that if we try to completely eliminate this uncertainty and risk, we may miss out on the tremendous potential and benefits brought by technological development. Currently, our technological capital system has reached a dynamic equilibrium with existing capabilities, but if some disruptive technological capability suddenly appears, this balance will be disrupted, and the entire system will need to readjust and adapt.
Now, AI technology has enabled us to handle higher complexity with less energy. This means we can accomplish more challenging tasks with potentially greater rewards. Although we cannot yet quickly complete a complex project through "vibe coding," we are moving towards this goal. In the future, we will be able to use more efficient technology to meet the needs of a larger population while enhancing the quality of human life.
Of course, there may be a period of adjustment in this process. But in a rapidly changing environment, the worst approach is to lose flexibility and become rigid. To avoid this, we need to adopt hedging strategies: explore multiple possible paths, investigate different policies, technological routes, and algorithms, and try both open- and closed-source models, because we cannot accurately predict where the future will lead.
Therefore, we must diversify risks and explore multiple possibilities. Ultimately, certain successful technological or policy directions will emerge as mainstream, and we will go with the flow.
Eddy Lazzarin: If there is indeed a divergence between E/acc and D/acc, my understanding is that this may be related to how technological progress is guided. Vitalik, what is your take? How should technological progress be guided? How much control do we really have over this guidance process?
Vitalik Buterin: In my view, the goal of D/acc is not to resist the tide of technological capital, but to actively guide this tide towards diversification and decentralization. For example, we can consider how to make the world more inclusive of pluralism. Can we significantly raise biosafety levels in a few years? Or develop a nearly flawless operating system to greatly improve cybersecurity?
Another example is the idea of "bug-free code." For the past two decades, this concept has been considered a naive fantasy, but I believe it will become a reality at a pace beyond most people's expectations. In the Ethereum project, we have already been able to produce machine-checked proofs of complete mathematical theorems.
Overall, the goal of D/acc is to ensure that the rapid advancement of technology can occur in a minimally disruptive and decentralized manner. To achieve this goal, we need proactive action rather than passively waiting for good results to happen automatically. All I can do is contribute resources, such as funds and ETH, and motivate more people to participate in building by sharing my views.
Moreover, I believe that political and legal reforms can also help make the world more "D/acc-friendly." For example, we can design legal incentive mechanisms to drive faster comprehensive cybersecurity transformation.
Guillaume Verdon: From my perspective, AI can be seen as a "Maxwell's Demon," reducing the world's entropy by consuming energy. Whether it's fixing errors in code or reducing other forms of chaos (such as preventing virus spread), AI can play a role in these areas. Therefore, can we reach a consensus that more AI is beneficial and makes the world safer? In fact, AI's capabilities can greatly enhance our security.
Should AI be slowed down?
Guillaume Verdon: I think we are now entering the most crucial part of tonight's discussion. Everyone has been very patient with us, and it's time to get straight to the point. I want to ask a pointed question: Why do you support banning the development of data centers?
Vitalik Buterin: Alright, I'll answer that question. First, we must acknowledge that the current pace of AI development is indeed very fast, and I cannot be entirely certain of its specific speed. Several years ago, I said that my predicted range for AGI achievement was between 2028 and 2200, and now I think that range may have narrowed somewhat, but there is still significant uncertainty.
One reality we face is that the rapid advancement of AI may bring about extremely swift changes, many of which could be disruptive, even irreversible. For example, the job market may undergo drastic transformations, leaving many unemployed. A more extreme scenario is: if AI's capabilities far surpass those of humans, it could gradually take over the Earth, even expanding to other parts of the galaxy. In such a scenario, would AI care about our human well-being? This remains an unknown.
As I mentioned earlier, if you have a neural network and randomly set one of its weights to an extreme value (like 9 billion), the likely result is that the entire system crashes, right? So I believe technological acceleration has two different directions. One kind of acceleration is akin to a "gradient descent" process that makes the system increasingly powerful; but another kind can push the system out of control, much like arbitrarily setting a parameter to an extreme value, and that kind of acceleration is dangerous.
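Vitalik's neural-network analogy is easy to demonstrate. The sketch below (a toy two-layer network with made-up dimensions and random weights, not any real model) shows that forcing a single weight to an extreme value swamps the output regardless of the input:

```python
import math
import random

random.seed(0)  # fixed seed so the demo is reproducible

def forward(x, w1, w2):
    # One tanh hidden layer followed by a linear readout.
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x))) for row in w1]
    return sum(w * h for w, h in zip(w2, hidden))

# Small random weights: 3 inputs -> 4 hidden units -> 1 output.
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
w2 = [random.uniform(-1, 1) for _ in range(4)]

inputs = [[-1.0, 0.5, -0.2], [0.0, 0.5, -0.2], [1.0, 0.5, -0.2]]
healthy = [forward(x, w1, w2) for x in inputs]  # small, input-sensitive

w2[0] = 9e9  # force one weight to an extreme value, as in the analogy
broken = [forward(x, w1, w2) for x in inputs]   # dominated by one unit
```

With the extreme weight in place, every output is on the order of billions and driven almost entirely by a single hidden unit: the network's behavior no longer reflects its inputs, which is the "crash" the analogy points at.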
Guillaume Verdon: From my point of view, my stance is the complete opposite of "outright deceleration."
However, I think just like hyperparameter tuning in neural networks, even if we aim to optimize through "gradient descent," we still need to find an appropriate "learning rate." The process of acceleration is actually about continuous trial and exploration, seeking an optimal speed that can make the system more enduring and more resilient to risks.
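The learning-rate point has a textbook illustration: gradient descent on f(x) = x^2 converges when the step size is small enough and diverges when it is too large. The specific rates 0.1 and 1.1 below are arbitrary choices for the demo:

```python
def descend(lr: float, steps: int = 50, x0: float = 1.0) -> float:
    """Minimize f(x) = x**2 by gradient descent; the gradient is 2*x."""
    x = x0
    for _ in range(steps):
        x -= lr * 2 * x  # each step multiplies x by (1 - 2*lr)
    return x

stable = descend(0.1)    # |x| shrinks by 0.8 per step, toward 0
unstable = descend(1.1)  # |x| grows by 1.2 per step, diverging
```

The threshold here is lr = 1 (where 1 - 2*lr = -1): below it the system homes in on the optimum, above it each correction overshoots more than the last, which is the "dangerous acceleration" side of the analogy.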
In the long run, social systems will gradually adapt to new technologies and ultimately choose the path most conducive to overall development. As for those who believe that "this technology is too powerful, too disruptive, and may cause the system to collapse and never recover," I think that argument is untenable. Instead, I believe technological progress will bring more opportunities and greater prosperity.
We need to recognize that technological development is not a zero-sum game. If we associate economic value with energy, such as tying it to the petrodollar or other forms of resources, then cash can actually be seen as an "IOU for free energy." There is still a vast amount of free energy in the world waiting for us to develop, but to access it, we need to address a multitude of complex problems. If we want to achieve goals like colonizing Mars or building a Dyson sphere, we need more efficient and robust intelligence to drive growth and unlock tremendous potential.
Unfortunately, anxiety is easily wielded as a political tool by some, and certain politicians may exploit people's fears of the future to gain power. They will say, "Are you uneasy about the future? Hand over power to me, and I will shut down these sources of risk, and you will feel secure. You don't have to worry about what the future holds, nor take any risks." However, countries that choose not to do so will be far ahead of us, right?
We must take opportunity cost into account. We need to ask ourselves: How many human lives can we support with technology? How many lives can we save with it? If you're worried about "silicon-based AI evolving faster than us," then your reaction should be anger. You should support the accelerated development of biotechnology, striving to surpass it. Either accelerate or face extinction.
In fact, I believe the computational power of biological systems is more powerful than we imagine. As someone devoted to studying bio-inspired computing, I believe we can merge biology with AI. For example, we can conduct "training" through methods like embryo selection, treating ourselves as the model. I think we need to be more open to the various possibilities of biological acceleration. Ultimately, biological intelligence and silicon-based intelligence will merge, further enhancing our cognitive abilities.
In the future, we may have permanently online AI agents that help us observe the world, engage in real-time learning, and become our personalized cognitive extensions. The real risk is that all of this may be controlled by a centralized power entity, ultimately forming a monopoly of power.
Eddy Lazzarin: I remember you mentioned in that blog post on D/acc that the opportunity cost is very high, even saying "it's hard to exaggerate." So, I know you agree on this point. Do you want to add some qualifying conditions?
Vitalik Buterin: Yes, I completely agree that the opportunity cost is very high, and I also agree with the ideal future described just now. But I think the main disagreement between us is: I really don't think that "today's humans and Earth" have enough resilience. I think we may only have one chance to follow the correct path of technological development, and I feel this is the reality we have gradually moved towards over the past century.
Guillaume Verdon: Returning to the thermodynamic point I mentioned earlier: if we consider the continued existence and growth of civilization as the ultimate goal, there is a law: once we spend a large amount of free energy to create some kind of "evidence" and drive the complexification of the entire system, this process is hard to reverse.
In other words, the further we go on the Kardashev scale, the less likely a complete reversal is. Therefore, accelerating development is actually the best way to maximize the sustainable existence of human civilization. In my view, slowing down technological development would actually increase the risk of extinction. If we do not develop these technologies, do not address current problems, we may face a survival crisis; but if we drive technological progress, we may find solutions to ensure the continued existence of humanity and its evolution.
I believe that people should be more open to th
