In May 2026, in a federal courtroom in Oakland, the veil over OpenAI was gradually lifted.
What unfolded before the jury was a tangled web of deceit:
Greg Brockman's diary, intertwining anxiety and calculation; Elon Musk's insistence on control; Sam Altman dancing on the edge of integrity; Microsoft's looming presence between compute and capital; and the boardroom coup at the end of 2023 that left everyone breathless yet was hastily resolved.
Amidst this chaos, there was one seemingly grand but oddly specific question that made its way to the courtroom: Did OpenAI live up to its promise of "benefiting all of humanity"?
As of May 15, 2026, the trial has not reached a final verdict, and the jury's opinion still hangs in the balance. But one thing has undeniably occurred: OpenAI has been dragged out of the realm of myth and back into reality.
Over the past few years, OpenAI has often been portrayed as a story about the future. ChatGPT took the world by storm, Altman traveled the globe, large models seeped into offices, schools, phones, and corporate processes. It was a company born with a quasi-religious sense of grandeur, always speaking about humanity's destiny, AI's awakening, the boundaries of safety, and the dawn of tomorrow, like a lighthouse pre-built for humanity.
But the court cared not for these narratives. The court sought the truth.
The "All Humanity" Witness Stand
In 2015, when OpenAI was founded, it was still pristine.
It claimed to be a non-profit AI research company with a goal to advance digital intelligence for the maximum benefit of all humanity, free from the pressure of financial returns.
Altman and Musk served as co-chairs, Brockman as CTO, and Ilya Sutskever as research director. Back then, OpenAI seemed to carry Silicon Valley's last speck of idealism: the brightest minds were not serving any single corporation but safeguarding the future for humanity.

Ten years later, this commitment was brought to trial.
On Musk's side, it was argued that Altman, Brockman, and OpenAI leveraged the non-profit mission to obtain his funding and trust, only to later shift towards a for-profit structure benefiting individuals and Microsoft.
OpenAI's side said that Musk's money was a donation with no specific conditions attached; that he had long known a for-profit structure was under discussion but never obtained control; and that he is suing now because he regrets leaving and because his own xAI has become a competitor to OpenAI.
Both sides' accounts are unsparing.
Musk casts himself as the guardian of the mission. OpenAI casts him as a founder who lost control. One side says, "You stole from a charity," while the other says, "You just couldn't control it." In the end, the most awkward part is not which side tells the better story, but that the repeatedly invoked "all of humanity" has never truly had a seat at the table.
The term "entire humanity" appears in the founding announcement, charter, speeches, and media reports, occupying the moral high ground.
Yet in court, it was broken down into evidence: Does Brockman's diary reflect the true intent? What does the 2017 email reveal? What did OpenAI LP transfer in 2019? Did Microsoft's cloud and money change the company's direction? Can Altman's integrity issue continue to support the company in saying "trust us"?
The more an AI company likes to claim it represents humanity, the more it should be asked specific questions: Who is included in the humanity you mention? Who signs on behalf of these people? Who can remove you? Who can audit you? Who can say no to you?
The court did not answer these questions on behalf of the public, but it forced these questions to be raised.
As a result, OpenAI's story no longer reads like the growth history of a company of the future, but more like an old ledger being settled. And once the ledger was opened, people found that the cracks had not appeared only after ChatGPT took off.
Cracks in 2017
OpenAI did not change suddenly.
If one only looks from ChatGPT onward, it is easy to assume that OpenAI, like many companies, was pushed off course by money after its success: starting with ideals, then turning to business.
However, the trial rewound the clock to 2017. At that time, OpenAI did not have the prominence it has today, and AGI was not yet a commonly used term. But the founding team had already run into a problem: donations and enthusiasm alone were nowhere near enough to actually build general artificial intelligence.
This was Silicon Valley idealism's hardest moment. The bigger the ideal, the larger the bill. The larger the bill, the harder it is for an organization to stay clean. Those on-stage words about a vision for all of humanity ultimately come down to chips, servers, engineering salaries, cloud resources, and long-term capital. Without these, AGI remains a wish; with them, the nonprofit structure starts to become unsustainable.
In 2017, OpenAI internally began discussing various paths: a for-profit affiliate, a B-corp, collaboration with existing companies, or attaching itself to Tesla. Musk had suggested that OpenAI rely on Tesla as a source of funding. OpenAI, however, countered that Musk was not purely opposed to a for-profit structure; rather, control was his non-negotiable demand.
That year also saw a scene worth remembering: Dota.
After OpenAI's bot defeated top human players in Dota 2 1v1, the team realized more keenly than ever that this technology might indeed have great potential. Testimony at the trial described a discussion at Musk's San Francisco home, later referred to as the "haunted mansion" meeting, where they celebrated the technological breakthrough and debated whether OpenAI should move toward a for-profit structure.
Many companies reinterpret themselves only after their product succeeds. OpenAI did it earlier. Before it became the behemoth it is today, the founders already knew that a non-profit structure couldn't sustain the AGI narrative. OpenAI's vision from the beginning required a heavier machine to feed it.
Thus, an organization that appeared to be about science and safety quickly entered into negotiations over control.
Who would steer the ship? Musk or Altman? The non-profit board, or future investors? Or the "global public" that never truly showed up?
Look at Musk again: he was certainly an early major funder, and he did help establish OpenAI's non-profit narrative. But he was also one of the earliest to see how much power AI could confer in this story. And once he saw it, he wanted to hold it firmly in his own hands.
Musk's Steering Wheel
In the trial, Musk repeatedly emphasized one thing: OpenAI was stolen.
The wording is powerful. It compresses a complex organizational transformation into a sentence anyone can understand: a charity meant to serve humanity morphed into a massive commercial machine. It sounds like misappropriation of assets, and also like moral betrayal.
But the courtroom had a more nuanced story.
OpenAI's lawyers, during the cross-examination of Musk, focused on dismantling his pure victim image. They presented emails and documents, asking him whether he had known early on that OpenAI might need a for-profit structure and whether he had considered absorbing OpenAI through Tesla or gaining control in other ways.
Musk did not like this dissecting approach. He said in court that the other side's questions were an attempt to trick him. The judge repeatedly asked him to answer directly. When he tried to steer the conversation toward the risk of AI-driven extinction, the judge reminded him that this case would not dwell on extinction.
These scenes vividly illustrate Musk himself.
He is used to telling grand narratives. The destiny of humanity, AI risk, Mars, freedom of expression, the continuation of civilization: these are the topics he loves to discuss. The court, however, required him to answer smaller, sharper questions: when did you know, did you consent, did you intend to take control, was your contribution to OpenAI a donation or an investment...
The contradictions within Musk are precisely the contradictions of the OpenAI story. He may genuinely fear AI getting out of hand, and he may also genuinely believe that OpenAI has deviated from its mission. Yet, this does not prevent him from wanting this company to operate according to his own will.
The more a person believes they are saving humanity, the more likely they are to stubbornly believe they should be the ones at the wheel.
This is not just Musk's issue. This is the underlying theme of many grand narratives in Silicon Valley. They like to portray personal will as a human mission, control as a sense of responsibility, and organizational power as a future necessity. Musk just externalizes and intensifies this matter, making it more visible.
Therefore, in this case, Musk is not only the accuser but also the evidence itself.
Brockman's Diary
Greg Brockman was not originally the most prominent figure in this drama.
Musk is too dramatic, Altman too central, Sutskever too tragic, Microsoft too big. Brockman sits in the middle: a core early founder of OpenAI who later played a crucial role in the company's day-to-day operations. But this trial has thrust him into the spotlight, because his private diary became evidence.
In the second week of the trial, Brockman was repeatedly questioned about his diary, emails, and messages. Musk's side used the material to argue that he and Altman had harbored selfish motives from early on. OpenAI's side argued that Musk was taking things out of context.
The diary contains wealth goals, anxieties about the company's revenue path, and sentences like "making the billions." More strikingly, it includes self-reminders about not taking the "non-profit" away from Musk, and about the risk of moral bankruptcy. Musk's lawyers repeatedly seized on these passages in their questioning. Brockman denied deceiving Musk and said the text was not meeting minutes but stream-of-consciousness personal writing.
A diary is not a verdict. It cannot by itself prove fraud. It may simply contain rough thoughts written down in a state of exhaustion, anxiety, and self-reflection. Every writer knows that a personal note is not a final position, let alone the complete truth.
The real significance of the Brockman diary is not in proving any crime but in illustrating that they knew where the boundaries were. The early key figures at OpenAI did not blindly move towards commercialization. They knew the ethical weight of the "non-profit" facade, knew that Musk's early funding came with a trust relationship, and knew that if they were to switch to a different structure a few months later while still claiming to be committed to non-profit, it would seem dishonest.
Knowing does not mean stopping.
In court, Brockman disclosed that his stake in OpenAI is valued at close to $30 billion.
This amount is not cash, nor is it realized wealth; it is equity value based on valuation, still dependent on the company's prospects and deal structure. But the symbolism is strong enough. A person who once worried about ethical boundaries in a private diary later sat in court and was asked about a stake in OpenAI worth nearly $30 billion. The public-interest mission and personal wealth were, at that moment, placed on the same table.
Like many key figures in strong organizations, Brockman is smart, dedicated, capable, and not without a sense of shame; he simply convinced himself, bit by bit, to keep moving forward.
The most complex aspect of OpenAI lies here. It is not a group of villains conspiring to destroy ideals. It is more like a group of intelligent individuals who can find a reason to continue at every juncture, ultimately trapping the initial commitment in a system that they themselves may not fully control.
And at the heart of this machine is Altman.
Altman's Trust Debt
In this trial, Sam Altman was not merely questioned about the truthfulness of particular statements. Musk's side attacked the credibility of his leadership itself.
In the closing argument, Musk's lawyer, Steven Molo, placed Altman's integrity at the core. He told the jury that Musk, Sutskever, Murati, Toner, and McCauley—five individuals who had worked with Altman for years—all referred to him as a "fraud."
These five names are more significant than the accusation itself.
Musk is a rival and can be seen as having a conflict of interest. However, Sutskever is a co-founder and former Chief Scientist of OpenAI; Murati was the CTO and briefly served as interim CEO in 2023; Toner and McCauley are former board members. They are all part of the internal power structure at OpenAI.
We cannot simply say that Altman is a good or bad person.
The feelings towards Altman within OpenAI are evidently complex. He has been able to propel the organization onto the world stage but has also made some key figures uneasy. His strong organizational, fundraising, media, and political acumen has brought the company to where it is today.
When Altman was ousted by the board in 2023, the official reason OpenAI gave was that he had not been consistently candid in his communications with the board. Altman returned a few days later. In 2024, OpenAI released a summary of the WilmerHale investigation, acknowledging a breakdown of trust between the former board and Altman, while also suggesting that the board had acted hastily: without prior notice to key stakeholders, without a thorough investigation, and without giving Altman an opportunity to respond.
All of these stories weave together into Altman's real trust debt.
He is not a hero in the traditional sense. He has the air of Silicon Valley's new elite: he can articulate a mission, raise money, organize talent, handle the media, negotiate with big companies, and turn a lab into a world-class company.
The more capable he is, the bigger the problem: if a company relies on his personal credibility to promise the world "we are here to benefit all of humanity," then his credibility becomes not just a matter of personal character but a matter of public governance.
Altman mounted counterattacks of his own in court. He stated that Musk had tried multiple times to have Tesla acquire OpenAI, which itself ran against OpenAI's mission. He also claimed that OpenAI had, in fact, created charitable value at an enormous scale.
This is the dilemma of OpenAI. It can claim to still be controlled by a nonprofit, or it can say that commercialization has allowed the nonprofit to have a greater impact; however, when the average person hears this, it is hard not to ask: if a public mission relies on a massively valued company and a strong-willed CEO to uphold it, is it a mission or a trust loan?
The board attempted to call in this loan in 2023. It failed.
Mission Yields to Reality
The OpenAI board is not entirely powerless.
On paper, a nonprofit board holds mission oversight. When OpenAI LP was established in 2019, OpenAI explained that this was a capped-profit structure, where employee and investor returns were capped, with the excess going to the nonprofit, which still retained overall control. This design sounded like a compromise that could raise funds without fully relinquishing the mission.
The problem is that reality has evolved much faster than the bylaws.
After 2019, OpenAI became increasingly intertwined with Microsoft. Microsoft provided funding, offered cloud and supercomputing resources, and obtained commercialization rights. Court documents show that a significant amount of OpenAI's intellectual property and employees shifted to the for-profit entity. By the time of ChatGPT, OpenAI had become not just a research institution, but a commercial system connecting users, clients, developers, cloud resources, investors, and global competition.
Such a system is not something that can be switched off at the push of a button.
During a court hearing, Microsoft CEO Satya Nadella was asked about Microsoft's $13 billion investment in OpenAI and the potential $92 billion return if successful. His response essentially indicated that if the pie grew, the nonprofit would also benefit.
This logic is quite typical: commercialization is not a deviation from the mission but an expansion of the mission's funding sources.
However, in the same set of testimonies, Nadella and Altman's text messages regarding the launch of a paid version of ChatGPT were also mentioned. Nadella inquired about the timing of the paid version, to which Altman replied that the computing power was insufficient and the experience was not good enough yet, but Nadella was eager, saying the sooner, the better.
Once OpenAI and Microsoft were bound together, product pace, customer commitments, compute constraints, and commercial returns became entangled. The board could discuss the mission, but Microsoft had to ensure the customer experience; the board could worry about safety, but users and businesses were already using the product; the board could dismiss the CEO, but employees, investors, partners, and public opinion would react immediately.
Nadella's perspective on the 2023 board crisis is also crucial. He said he was not given a clear reason for Altman's dismissal and criticized the board's handling as being like "amateur city." More importantly, he was prepared at the time to bring Altman and the other departing employees into Microsoft if they could not return to OpenAI.
This is the reality. The nonprofit board seems to be holding the steering wheel, but the engine, the accelerator, the fuel, and the passengers in the car are no longer under its control. When an AI company is already connected to a massive valuation, cloud providers, enterprise clients, employee options, and global users, the mission-driven board finds it difficult to truly hit the brakes.
The larger the AGI narrative, the bigger the compute bill; the bigger the compute bill, the greater the reliance on cloud behemoths; and the greater the reliance on cloud behemoths, the less the mission can be safeguarded by the bylaws alone.
In the AI era, compute is not just a backend resource. Compute itself is power. Whoever provides the compute has a say in how fast a company can go, where it can go, and whom it can serve. Whoever can absorb the cost of failed training runs can demand a share of the successes. Whoever can keep enterprise clients signing has more say in a crisis than the board does.
What this trial really lets us see is the whole picture: it was not a single person who compromised the ideal. An ideal without a robust institutional framework will eventually have a framework built for it by reality.
That framework may not be evil, but it is certainly no longer naive.
Users Are Not Bystanders
Musk, Altman, Brockman, Nadella: these are names far removed from our daily lives. Claims of hundreds of billions of dollars in damages, nearly $30 billion in equity value, $13 billion in investment, $92 billion in potential returns: these numbers are so large that they stop feeling real. Ordinary people sit in offices, squeeze onto the subway in the morning, scroll through TikTok at night, and their interaction with AI may simply be opening an app and asking: help me edit a proposal, write a piece of code, translate an email.
But here lies the problem.
OpenAI is no longer a distant laboratory. Its models are entering writing, translation, programming, search, customer service, education, office software, and business processes. An ordinary person may not necessarily know whether OpenAI is an LP, LLC, or PBC, or care much about whether Altman or Musk tells a better story, but they have been using AI.
A child uses it to do homework, schools have to decide how to deal with AI-generated essays; programmers use it to write code, companies have to decide how to measure human output; journalists use it to research, outline, and edit headlines, while readers are faced with more content whose sources are increasingly indistinguishable; companies integrate it into customer service and approval processes, and employees find their time and performance being reshaped by the system.
We once thought of ourselves as mere users. However, users use tools, and tools are also shaping users.
What the model can and cannot answer; what content is considered safe and what is considered risky; which companies can access more powerful models, and which can only use encapsulated versions; which languages, professions, regions, and knowledge are better supported, and which are treated roughly. These questions may seem very technical, but in the end, they all come down to the lives of ordinary people.
So, the OpenAI trial is actually a window. Through this window, people can see that the construction site of the future infrastructure is not clean or transparent. There are smart people, ideals, fears, ambitions, equity, cloud bills, boardroom conflicts, and some private documents that were never thought to be publicly read.
Water, electricity, roads, schools, hospitals, search engines, mobile operating systems—once these things enter daily life, they are no longer just commercial products. AI is also moving towards this position. It may not be as stable as water and electricity yet, but it has already begun to be relied upon like water and electricity. A person may not use a particular chatbot, but it is very difficult to permanently avoid the AI-transformed workflows, information gateways, and organizational rules.
Regardless of who wins in this trial, the ordinary user will most likely continue to use AI the next day. Students will still use it to revise their essays, programmers will still use it to patch code, businesses will still integrate it into their systems, and entrepreneurs will still build applications around it.
But the court has at least torn open a layer of the packaging. It tells us that the AI entering daily life did not grow out of a machine that operates transparently, stably, and solely for the public good. It comes from a specific group of people, a set of complex contracts, cloud computing bills, a boardroom coup, a few private diaries, and a power struggle.
This is not a story that can be summed up by the phrase "capital corrupts ideals." The more real and unsettling part is that AI is becoming the infrastructure of ordinary people, but the steering wheel is still held by a few.
When the future starts to be turned into a product, ordinary people cannot just be users.