Editor's Note: At 3:45 a.m. on April 10, a 20-year-old man threw a Molotov cocktail at Sam Altman's residence, then walked to the OpenAI headquarters and threatened to burn it down.
The attack sent shockwaves through the tech and investment communities. It was not just a matter of personal safety: it pulled an extreme narrative, one that had long lingered in essays and online communities, into reality.
Start from the highly deterministic assertion that "AI will cause human extinction," reason through "we must reduce the risk at all costs," and the logic slides step by step toward justifying real-world action. When a worldview keeps reinforcing its narrative of "existential threat" and uses that narrative to reorder moral priorities, the boundaries of action get redrawn: speech that was once low-cost begins to carry the possibility of being carried out.
This article traces the evolution of the AI doomsday community: from the "purity spiral" that drives ever-escalating risk estimates, to moral condemnation of the people building the technology, to the compression of complex reality into a "trolley problem" decision model. These seemingly rational deductions converge on a consistent but perilous mindset: as long as the outcome is defined as "saving humanity," the means can keep expanding.
In this sense, the event is not an isolated one. It is more like an early stress test, one that probes not the technology itself but the moment when the narratives, beliefs, and actions surrounding it begin to lose restraint.
The following is the original text:
Who is the Arsonist?
On Friday at 3:45 a.m., a 20-year-old man threw a Molotov cocktail at Sam Altman's residence. He then walked roughly three miles to the OpenAI headquarters and threatened to burn it down. He has since been arrested on suspicion of attempted murder.

He is not a "lone wolf." He is an active member of PauseAI, holding six roles in the community. His username on Discord is "Butlerian Jihadist."
His Instagram is almost entirely filled with doomsday content: a power law curve captioned "If we don't act soon, we're all dead," along with a Venn diagram placing reality at the intersection of The Matrix, Terminator, and Idiocracy.
Four months before the attack, he recommended to his followers Yudkowsky and Soares's book "If Anyone Builds It, Everyone Dies."


His name is Daniel Moreno-Gama.
He also has his own Substack. As early as January this year, he published an article titled "AI Existential Risk," estimating the probability of "human extinction caused by AI" as "almost certain." He refers to this technology as "an active threat to anyone using it, especially to those building it." His conclusion is: "We must address this threat first before asking other questions."
He has also written a poem imagining the children of AI developers dying and asking why their parents did nothing. Of the people building these technologies, he wrote: "May hell have some pity on such vile creatures."
PauseAI has already removed his related messages from their Discord.

I know this is not what most readers expect to see in an investment newsletter. I am writing it to explain where my worldview comes from, so the longer-term judgments that follow are easier to place. The "New New Deal" proposal I have put forward is a direct response to this development.

What I did was simply extrapolate their model one step further and connect the dots.
Doomsday Narrative of AI Doomers
Let's start with determinism. Yudkowsky's position, laid out in the book mentioned earlier, is that once someone builds a sufficiently powerful artificial intelligence, every single person on Earth will die. Not "maybe," not "possibly": everyone, including your child and his repeatedly mentioned daughter Nina.
He has made this argument in Time magazine and in a book titled "If Anyone Builds It, Everyone Dies." He has even argued for bombing data centers, holding that the risk of nuclear conflict is more acceptable than a single full training run.
The "Purification Spiral," a continuously escalating radical behavior. Within this community, members prove their "resolve" by continuously raising the intensity of their stance: estimates of the "Human Extinction Probability (P(doom))" have climbed from 50% to 90% and all the way up to 99.99999%.
A national spokesperson for the Center for AI Safety once said in front of the camera that the right response would be to "walk into labs across the country and burn them down." PauseAI even initiated a so-called "Warning Shot Protocol," designating a certain AI model as an "extinction-level weapon." A leader at PauseAI even said that an Anthropic researcher "deserved everything about to happen to her."
When someone questioned such statements in PauseAI's Discord, the administrator directly deleted that message.

The day before the attack, Nate Soares, Yudkowsky's co-author, tweeted that Altman was "doing some really bad things."

Next, "Cheap Talk" began to face a reality check.
In game theory, there is a term called "Cheap Talk": referring to statements with little to no cost, but ultimately constrained by reality. Initially, everyone was just making low-cost extreme statements, but once the issue was framed as a "human survival crisis," these views could be taken seriously, thus legitimizing extreme behavior.
These are not isolated incidents but a series of escalating, mutually reinforcing claims built around an apocalyptic ideology. Taken to its extreme, the logic can even entertain "sacrificing 99% of the population to save the last 1%."
As things escalated, it was only a matter of time before someone took these ideas literally and acted on them. The young man read the book, joined the community, and wrote his manifesto. In a self-reflection essay for a community college English class, he described himself as a consequentialist: "If the outcome doesn't match up, I'll hardly believe in the motive." He took the name "Butlerian Jihadist." On December 3, he wrote on PauseAI's Discord: "We are nearing midnight; it's time for real action."
And then, he took action.
They presented him with a "trolley problem": one life versus all of humanity. He pulled the lever.


One final irony is worth noting. If these "doomers" truly believed their own judgments to the degree they claim, they would be more forthright about what those beliefs imply.
Just weeks before the attack, a journalist had asked Yudkowsky: if AI is so dangerous, why don't you attack data centers? The answer, relayed via Soares, was: "If you saw a news report that I did that, would you think, 'Wow, AI has been stopped, we are safe'? If not, then you already know it wouldn't work."

Notice what this answer did not say. It was not "because violence is wrong"; it was "because it wouldn't work right now." The restraint is strategic, not moral. And the community knows it. Beneath the surface lies an unspoken consensus: the young man's biggest "mistake" was simply his timing.
This is exactly my point: intelligence does not equal power. And that is the deepest flaw in the entire doomer worldview.
Yudkowsky's framework rests on a single premise: a sufficiently smart AI will inevitably gain the ability to destroy humanity, because "intelligence automatically converts into capability." Yet many of his followers have no technical background. They do not build AI systems or do alignment engineering. What they possess is a particular kind of verbal intelligence, the ability to construct elaborate risk arguments, and this convinces them that they hold a kind of "priestly authority" over the technology. They can build arguments, but they cannot build systems.
That is no accident; it is written into the movement's foundational texts. Yudkowsky's "Harry Potter and the Methods of Rationality" depicts a world in which the best reasoner deserves to stand above every institution. "The Sequences" then supply the doctrine: a small group of "correct thinkers," superior in both cognition and morals, whose rationality entitles them to decide what everyone else may build. This is less a safety movement than a priesthood with a creation myth.
Yudkowsky can distance himself from the young man who threw the Molotov cocktail, but he cannot distance himself from the syllogism. If the builders will kill everyone, then stopping the builders is self-defense. That is his core proposition, plain and clear. The only question has ever been when someone would take it seriously.
So when their own logic shows up at 3:45 a.m. with a bottle of gasoline, they should not act so surprised.

