
On October 23rd, 2022, Elon Musk texted Sam Altman a link to a story from The Information about OpenAI seeking more funding from Microsoft. This was five weeks before ChatGPT arrived, and about five months before Musk launched xAI. “I was disturbed to see OpenAI with a $20 billion valuation,” Musk wrote then. “De facto, I provided almost all the seed A and most of B round funding.”
“This is a bait-and-switch,” he added.
That’s the core allegation underlying the spectacle that’s been playing out in federal court, and across our Twitter timelines, for the past two weeks. Musk alleges that both he and the entire world were duped by OpenAI’s founding nonprofit mission and betrayed by the company’s decision to create a for-profit entity, and that OpenAI and its founders have been unjustly enriched thanks to their initial deception and subsequent maneuvers to make OpenAI one of the most valuable for-profit ventures on earth (though actual profits may still be theoretical).
Altman responded to Musk in 2022 by writing the following:
I agree this feels bad—we offered you equity when we established the cap profit, which you didn’t want at the time but we are still very happy to do if you’d like.
We saw no alternative to a structure change given the amount of capital we needed and still to preserve a way to ‘give the AGI to humanity’ other than the capped profit thing, which also lets the board cancel all equity if needed for safety.
Fwiw I personally have no equity and never have. Am trying to navigate tricky tightrope the best I can and would love to talk about how it can be better any time you are free. Would also love to show you recent updates.
The first two paragraphs above are the essence of the OpenAI defense. On one hand, the corporate transformation that appears inconsistent with OpenAI’s original mission was in fact existentially urgent and unavoidable given the capital-intensive nature of AI development. On the other, Musk, who provided $38 million in funding to OpenAI between 2015 and 2020, was offered equity in the for-profit entity and declined to participate. As the text message above shows, Musk received the same offer in 2022 and still declined.
Closing arguments in the trial came Thursday and the case will go to jury deliberations next week, before later being decided by Judge Yvonne Gonzalez Rogers (the jury provides an “advisory” verdict that’s not binding on Rogers, though she has said she will likely follow the jury’s recommendation). A separate remedies phase will follow, with Judge Rogers presiding independent of the jury.
I’ll spare everyone the suspense as to my own judgment: Musk is wrong here and he should lose. He brought an impressively insane 26 claims at the outset and his proposed remedies (may it please the court, fire Sam Altman) are admittedly hilarious. Still, his allegations are an insult to the intelligence of anyone who has the energy to review the evidence. Across a two-week trial where Musk and his lawyers have worked every single day to put Altman’s dishonesty at the forefront of the jury’s mind, Elon is the most dishonest character in that courtroom, at least with respect to the facts at hand.
The Truth About OpenAI and Elon
First, the shape of the trial. From Bloomberg:
Musk is seeking as much as $134 billion in damages that he has asked be directed to OpenAI’s charitable arm, if he wins at trial. He also wants a court order restoring the firm’s status as a nonprofit research organization and wants a judge to order that Altman and Brockman both be removed from their roles at OpenAI. Altman is chief executive officer and Brockman serves as president.
The trial will be divided into two phases. During the first portion, a jury will hear arguments and testimony about Musk’s allegations, which now focus on two claims — unjust enrichment and breach of charitable trust.
The panel will issue an “advisory verdict” that will not be binding on Gonzalez Rogers, who will ultimately decide whether Musk proved his claims.
The key thing to know about this case, and about OpenAI’s transition from a pure nonprofit to a nonprofit that also ran and controlled a for-profit enterprise: Elon himself recognized that such a transition was the only way OpenAI and all its talent could possibly remain relevant in AI.
Altman’s first message to Musk pitched a company that could “stop Google from doing [AGI] first.” That piqued Elon’s interest. But by 2017, it was clear the mission would require tremendous amounts of compute, which would be incredibly expensive. Creating a for-profit enterprise, and trading equity for compute commitments with a company like Microsoft, would be the only way to attract the money and infrastructure required to develop leading-edge AI.
There’s lots of contemporaneous evidence, surfaced by OpenAI in 2024, that makes clear that Elon understood the problem, supported a plan to develop a for-profit entity, and that his relationship with OpenAI only broke down when the founders did not want to put Elon in charge or subsume OpenAI into Tesla:
June 13, 2017: “Let’s figure out the least expensive way to ensure compute power is not a constraint…”
July 13, 2017, via Shivon Zilis: “[A conversation with Elon] turned into talking about structure (he said non-profit was def the right one early on, may not be the right one now — ilya and I agree with this for a number of reasons”
July 21, 2017: “[China] will do whatever it takes to obtain what we develop. Maybe another reason to change course [with respect to structure]. I have a tentative game plan that I’d like to run by you.”
September 13, 2017: “The three common stock seats (you, Greg and Sam) should be elected by common shareholders. … I think that the Preferred A investment round (supermajority me) should have the right to appoint four (not three) seats. … like I said I would unequivocally have initial control of the company, but this will change quickly.”
September 2017: Elon creates Open Artificial Intelligence Technologies, Inc, a for-profit Delaware corporation.
September 20, 2017: (After OpenAI’s co-founders reject his terms): “Guys, I’ve had enough. This is the final straw. Either go do something on your own or continue with OpenAI as a nonprofit.”
January 31, 2018 (proposes absorbing OpenAI into Tesla): “OpenAI is on a path of certain failure relative to Google. There obviously needs to be immediate and dramatic action or everyone except for Google will be consigned to irrelevance. The only paths I can think of are a major expansion of OpenAI and a major expansion of Tesla AI. Perhaps both simultaneously.”
December 26, 2018: “My probability assessment of OpenAI being relevant to DeepMind/Google without a dramatic change in execution and resources is 0%. Not 1%. I wish it were otherwise. Even raising several hundred million won’t be enough. This needs billions per year immediately or forget it.”
Anything can happen in a jury trial, and Judge Rogers will ultimately make the final decision, but the evidence above (and supporting testimony) should be problematic for both of Musk’s claims.
With respect to unjust enrichment, Musk has to show that there was a quasi-contract with OpenAI arising from his original donations and OpenAI’s founding commitments, and that OpenAI unjustly repurposed those donations to derive inequitable benefits from a new for-profit entity that deviated from the original purpose. The problem is that the conduct alleged as unjust—using charitable assets to develop IP that was later diverted to a for-profit entity—was suggested by Musk repeatedly and characterized as essential to the future of the company and its mission.
OpenAI’s founders agreed with him, but faced with Musk’s demands for control of the new entity, they were content to remain a nonprofit until he left of his own accord. When the for-profit entity was later created without him, Musk was informed of the change and offered equity in that business. He declined. Nothing that happened there appears all that deceptive or inequitable. (There’s also a two-year statute of limitations on unjust enrichment claims that may leave this charge DOA regardless.)
As to the claim that OpenAI breached Musk’s charitable trust: leaving aside whether there was an explicit and narrowly defined charitable purpose and whether Musk’s donations were conditioned upon said purpose, there is a more basic problem. It’s hard to argue that OpenAI’s creation of a for-profit subsidiary breached any charitable terms from its largest donor when the documents make clear that the donor himself was not only attempting to create a for-profit entity and did not consider it a breach, but also repeatedly warned the founders, in writing, that their entity and its mission were “on a path of certain failure relative to Google” without “immediate and dramatic action” and “a major expansion of OpenAI.”
I should emphasize here that I’m not predicting what the jury or judge will do, nor am I an expert in California state law. We’ll see what happens. But at a general level, everyone should be clear that Musk’s zingers at trial (“How can I have equity in a nonprofit?” he said when asked about OpenAI’s offer of equity) are directly undermined by his actions in 2017 and 2018, when he literally created a corporate entity to house a for-profit subsidiary of OpenAI and proposed a capital structure that included himself as the majority shareholder with unilateral control.
Heads Musk Wins, Tails Sam Loses
Musk, to be clear, has already succeeded. Regardless of what actually happened and what’s decided from here, he’s inflicted real pain. The trial has been an opportunity to make Altman and OpenAI’s founders look like greedy, sociopathic liars, which is a narrative that much of the public wants to believe.
Elon’s attorneys are playing the hits. Ousted board member Helen Toner sat for a video deposition played in court, while former OpenAI chief scientist Ilya Sutskever took the stand this week and testified that he had been plotting for a year to fire Sam before going to the board with his concerns over Altman’s candor (and drawing on former OpenAI CTO Mira Murati’s evidence as he did so).
We also learned that OpenAI President Greg Brockman kept a diary in 2017, ignoring Stringer Bell’s advice, and wrote to himself while the company was still a nonprofit, “We truly have a chance to make this happen financially. What will take me to 1,000,000,000?” Later, after a conversation with Sutskever, Brockman lamented in his diary that, “the true answer is we want [Musk] out,” “it’d be wrong to steal the nonprofit from him. To convert to B Corp without him,” and “[I] can’t see us turning this into a for-profit without a very nasty fight.”
On that last one, he was right!
But Brockman’s diaries, while damning out of context and one big reason this case went to trial at all (they were cited by Judge Rogers at the summary judgment stage), reflect real and understandable tension that arose from a) OpenAI realizing the scale of its opportunity and its capital requirements, and b) Elon’s desire to control the company as it transitioned. Brockman felt guilty pushing him out, but how could Musk have stayed if he wasn’t willing to bend on his demands?
Sutskever testified this week that in 2018 he thought merging OpenAI with Tesla “would kill the dream” of the founders. Brockman testified about an eruption where he thought Musk would hit him, and elsewhere in Brockman’s diary, he wrote of a phone call with Musk and said, “I found this [part] of the conversation to be so distasteful that I had a knot in my stomach writing these notes.”
Musk has had plenty of wins too. Brockman’s diaries will leave a mark, while Altman was forced to publicly confirm that he “misspoke” when he told Congress he has no equity in OpenAI (he holds a passive stake through Y Combinator). Likewise, Musk’s lawyers introduced evidence highlighting Altman’s investments in OpenAI partners to create the appearance of self-dealing, and now that story is national news again as Congress prepares for its own inquiry into those deals.
With an OpenAI IPO expected in the near future, none of this is convenient, and all of it looks like very satisfying revenge for Musk. The past two weeks have been a public addendum to the New Yorker profile I wrote about last month. Can you really trust Sam? Was OpenAI founded by liars? Let’s ask some of the biggest names in tech and AI, all of whom were called to testify to those questions: Musk, Brockman, Murati, Toner, Tasha McCauley, Satya Nadella, Sutskever, Bret Taylor, Kevin Scott, and Altman himself (among others).
I have to confess, though: reading highlights from what in theory is a wildly entertaining showdown between two of the most polarizing humans in the history of technology, I’ve mostly been bored.
The case offers good, prurient fun for anyone who’s been reading the news over the past few years and somehow isn’t tired of these people, but I’m not one of them. Musk and his attorneys can’t pound the law or the facts, so they are pounding the table and calling Sam Altman a liar, putting on a show for the jury and the world, creating a record that will make Altman’s life difficult even if OpenAI wins the case. I get it. My problem is that the lessons here are stale and have never held up to much scrutiny.
A Moment of OpenAI Clarity
In a December 2015 blog post, OpenAI announced itself to the world:
OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.
We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible. The outcome of this venture is uncertain and the work is difficult, but we believe the goal and the structure are right.
It turns out the structure was not quite right, and now here we are. So let me ask: at what point can we all move on?
It goes without saying that every bit of unhelpful OpenAI evidence seems to have been amplified on Twitter, another company Musk owns. The trial has delighted the throngs of Musk cultists who cheer Elon in everything he does, as well as wide swaths of the political spectrum that are convinced that Sam Altman is the antichrist. But personally, I find myself wishing everyone could, please, just grow up.
OpenAI’s not a perfect company. This isn’t a blanket defense of everything the company’s ever done or Altman’s ever said (his equity answers were obviously misleading). I’m sure they’ll do plenty to drive me crazy in the future. But with respect to the company’s origins, specifically, the story here is actually pretty simple. The company realized its original charter was unworkable as it began to make progress scaling toward superintelligence, and from there, leaders made mostly rational choices when faced with difficult questions that arose from the original promise to forgo profits, a promise that would inevitably have foreclosed the large-scale investments required to succeed.
Former OpenAI VP Dario Amodei, now CEO at Anthropic, once complained of Altman in his notes and wrote, “Everything was a rotating set of schemes to raise money.” That raises two questions. First of all, did all these guys keep journals? And second, I’m sure it’s true Altman was trying to raise money in all kinds of different ways, but is that wrong? AI spending in the first quarter of 2026 from Google, Meta, Microsoft, and Amazon was more than three times that of the Manhattan Project. Developing this technology is a very expensive hobby!
Imagine if, tomorrow, a group of researchers announced that they were starting a nonprofit to develop AGI, raise tens of billions of dollars in investment every year, and open-source AI products that would never be commercialized. Who in the world would take that project seriously? Meanwhile, OpenAI does commercialize its products, nearly a billion people use ChatGPT today, while OpenAI’s nonprofit entity still exists and is one of the most valuable charitable enterprises on earth.
So go back to 2017. Would the world be better off if OpenAI had remained a nonprofit and slowly withered on the vine while being outspent 100 to 1, as Google secured a monopoly in the most powerful technology of the 21st century? And in that scenario, how much longer does it take for us to get great AI products if Google has no competitive pressure to get its act together? There was also, of course, the Elon option. Perhaps Musk could have succeeded at subsuming OpenAI into Tesla and aligning it with Tesla’s mission. Would that have been a better outcome than a deal for Microsoft’s compute? Or if Altman, Brockman and Sutskever had relented and Musk had been named CEO of a standalone OpenAI for-profit entity, how does that story end? Musk is brilliant in a variety of ways and last month I compared him to Steve Jobs, but his instincts for pure software are consistently atrocious and xAI has been a mess for three years as researchers run for the hills.
Now consider the employees that OpenAI, with Elon’s help, recruited for the original mission. Another line from OpenAI’s founding blog post reads: “We hope this is what matters most to the best in the field.” Indeed, the primary benefit of the original structure, and why it wasn’t necessarily a mistake, was that the novel structure and noble purpose served as a recruiting tool for the most brilliant researchers in the world, all of whom could have made far more money at Google. Researchers like Amodei, then, have a much better case for arguing there was a bait-and-switch when OpenAI pivoted to prioritizing money and fundraising. On the other hand, many of those employees left OpenAI to work for Anthropic, itself a trillion-dollar company, and, notably, not a charity.
There’s a tendency to confuse cynicism with insight in mainstream tech conversations. Because OpenAI’s CEO has said evasive and misleading things and the company itself has evolved from its original mission to become something almost entirely different, it’s very easy to be cynical about what’s happened here and confidently conclude that OpenAI is just another big tech scam. It’s perhaps the most audacious scam of them all! That’s a good story that feels emotionally true to people who are worn out by tech and scared of AI, but it’s not quite what happened.
The reality is knottier. Had the OpenAI founders not launched with a nonprofit structure in 2015, they probably never recruit the talent required to compete with Google. And had they done anything other than exactly what they did in 2018 and 2019, all of computing would be less interesting today, and the company probably wouldn’t exist eight years later. Musk’s trial has been clarifying on that point, at least for me.
We’ll see what the jury thinks.
For now, one other point of clarity for everyone this weekend—if there’s a chance you may encounter litigation in the near future, do not, under any circumstances, keep a diary.
Note: I’ll be off next week because of scheduling constraints, but we’ll get back to the regular schedule the following week. Also, if you’re reading this site most weeks but haven’t yet subscribed, you can do so here. And thank you very much to everyone who already has; the response has been great so far.
Sharp Text is an extension of the Stratechery Plus podcasts Sharp Tech, Greatest of All Talk, and Sharp China. We’ll publish once a week, on Fridays. To subscribe and receive weekly posts via email, click here.
