$1 billion is not enough; OpenAI Foundation must start spending tens of billions each year

By Davidmanheim @ 2026-03-25T10:53 (+52)

OpenAI is now a public benefit corporation, with a charter that demands they use AGI for the benefit of all, and do so safely. To justify this structure to the Attorneys General of Delaware and California, they split off the nonprofit OpenAI Foundation and, instead of full ownership, gave it 27% equity, worth well over $150 billion - what some have called the largest theft in human history. They also convened a commission to advise them on how to give away that money, and last year announced the first tranche of that giving, evidently funded with their equity.

Announcement

This week, they announced a team, and an even larger commitment: giving "at least $1 billion" over the coming year. I argue that everyone should agree this is far too little.

That's because their plan is to use their massive endowment, currently worth over $150 billion, for charity - though technically, it's not an endowment. Which is convenient, because if it were, they would need to give away 5% of their assets every year - currently over $7 billion per year. But even that seems quite conservative, given the possible trajectory of their holdings.
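For concreteness, the 5% figure works out as follows (a back-of-envelope sketch; the $150 billion valuation is from the post, and 5% is the standard minimum annual payout rule for US private foundations):

```python
# Back-of-envelope arithmetic for the 5% payout rule mentioned above.
# Assumes the Foundation's stake is worth $150B, per the post.
endowment = 150e9       # USD value of the Foundation's equity stake
payout_rate = 0.05      # typical minimum annual payout for US private foundations
required_annual_giving = endowment * payout_rate
print(f"${required_annual_giving / 1e9:.1f}B per year")  # → $7.5B per year
```

Anything above $150 billion in actual holdings pushes the required figure correspondingly higher, which is why the post says "over $7 billion."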

Possible Futures

It seems like there are three relevant possibilities:

First, OpenAI may be successful in turning a large profit based on continued marginal improvements in AI systems, so that their company continues getting much more valuable, far faster than 5% annual growth, meaning that the endowment would grow in value, that is, on net accumulating rather than distributing wealth. In this case, OpenAI’s equity will appreciate greatly; it would be irresponsible for the nonprofit not to try to spend large parts of that increased value.

Secondly, they may be even more successful in building significantly more powerful AI, transforming the world. Obviously, the nonprofit would become far wealthier, but given OpenAI’s mandate, it also becomes irrelevant.

Third, the company’s value may stagnate or crash, in which case money not given away soon may never be given away at all.

Regardless of which one it is, the argument for faster spending seems clear.

Actual Plans

So, are they doing that? They have started. Last year they announced a $50 million program - 0.2% of the commitment - and have since announced $40.5m in grants from that program. And this week, they announced $1b in planned giving for the coming year, with program officers who have some experience doing that type of giving.

Obviously, the planned "at least $1 billion" in 2026, starting a year after they launched, is a slow start for such a large program. And at the start, they committed to giving away $25 billion… eventually, with no mention of planned sales. But why would they even need to commit to giving away $25 billion? They are a charity that thinks ASI is coming soon, so they should be planning on spending everything quickly, not eventually. 

But in the real world, the (unfortunately typical) process of committing money without significant action is not acceptable, especially given their legal commitments and the demands of the attorneys general overseeing the Foundation. 

Yes, $25 billion would be a good start; if it’s done in the next two years, I will admit they are doing their jobs[1]. Unfortunately, what I expect instead is one of two outcomes: either they give less than $25 billion over the coming few years while their stock holdings appreciate by far more, so that the foundation keeps growing, or a crash makes even the $25 billion commitment impossible. Either way, they would be failing their mission if they do not use a substantial portion of the wealth very soon.

I hope I’m wrong, but we’ll see.

Questions and Answers

Q: Shouldn’t the nonprofit save money to spend on AI alignment when it’s more needed?
A: That’s explicitly the purpose of the public benefit corporation. Given the structure of OpenAI, the Foundation should absolutely demand that such work be done, but should not need to fund it separately.

Q: Won’t the OpenAI Foundation dilute their influence or lose control of the company if they sell too many shares? 

A: Not according to the agreement with the Attorneys General about the structure. As long as they hold Class-N shares, they have sole authority to appoint the board members of the company.

Q: Isn’t the OpenAI Foundation board identical to the corporation’s board, so they have no incentive to do this correctly? 

A: Yes, this seems to be a severe drawback of their current governance model and the board’s composition.

Q: Even if they sell shares, why spend the money immediately?

A: Optimal timelines for giving are complex, but given the expected trajectory of OpenAI - especially if they are correct that we’re close to beneficial ASI - it seems very hard to justify saving most of the money for later.  But as argued above, regardless of what they believe the future holds, they should be rapidly giving away money - and the opportunities exist already. 

Q: Couldn’t selling shares cause the price to collapse, making this a self-fulfilling reason for OpenAI to decline?
A: If liquidating shares could itself collapse the price, then the fundamental value of the product and expected revenue don't support the company's valuation, and it's a bubble. Selling could be seen as evidence that OpenAI lacks confidence, but that reading is far more likely if they sell without clear reasons - that is, if they aren't actually spending the cash.

Q: Even if the company is solid, wouldn’t liquidating the shares cause the value of employee and investor shares to go down?
A: Probably, but that’s not the Foundation's responsibility, and protecting share value at the mission's expense would be contrary to their recent agreements with state attorneys general. Their responsibility is to do charitable things with their assets; if they nevertheless decided that safeguarding OpenAI employees’ short-term profits overrides the mission, the nonprofit is explicitly not doing its job.

(Edit to add) Q: Is there room for this much marginal funding?

A: Yes, and they could do so effectively! As GiveWell notes, they have very significant room for more funding at their current bar - around $5,000 per life saved. How much? Maybe it's only a billion dollars - but with ten times that, GiveWell could raise the bar to, say, $6,000 per life saved, and instantly have room to fund many more projects at organizations they already support, along with a huge number of organizations they have evaluated and think could be promising, but that don't currently make their top tier.

GiveDirectly could also use additional billions.

And Coefficient just reorganized to enable them to partner with other organizations that want to do significant giving - another $10 billion per year would certainly stretch their capacity for any of their programs, but I would be surprised if they could not quickly use a reasonable fraction of that.

Thanks to Max Dalton, Ozzie Gooen, Jakob Graabak, and Nuño Sempere for comments on an earlier draft.

  1. ^

    They gave $50m in 2025 and plan $1b in 2026, so if they keep accelerating at their recent trajectory, that's 20x per year. Perhaps they'll plan on giving $20b in 2027, and, given that they expect AGI before then, they could maintain the acceleration and give $400b in 2028.
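The footnote's extrapolation can be sketched directly (a hypothetical projection of the 20x-per-year trajectory, not anything OpenAI has announced):

```python
# Extrapolating the footnote's trajectory: $50m in 2025, then 20x per year.
giving = 50e6  # USD given in 2025, per the post
for year in range(2025, 2029):
    print(f"{year}: ${giving / 1e9:,.2f}B")
    giving *= 20
# → 2025: $0.05B, 2026: $1.00B, 2027: $20.00B, 2028: $400.00B
```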


Tobias Häberli @ 2026-03-25T17:29 (+7)

I agree it would be bad if the OpenAI Foundation were still giving under 5% per year several years from now. But I don’t think 'they should spend 5%+ in year one' follows.

Directing billions well is really hard, especially for a new foundation. Coefficient Giving says it directed over $4 billion from 2014 to mid-2025, and that 2025 was the first year it directed more than $1 billion. Their 'endowment' is much smaller (~10x smaller?) than OAF’s but it still points towards allocating money well at that scale being genuinely hard. I wouldn't call a new foundation planning to deploy $1 billion in its first year "conservative".

What I'd most like to see is OAF committing to aggressive, public ramp-up targets - maybe something like reaching 5% of assets by 2028.

Davidmanheim @ 2026-03-26T09:10 (+2)
  1. This is year two, not year one.
  2. See the new Q&A item addressing the need to build capacity; they could give to GiveWell, GiveDirectly, or via Coefficient's funds specific to their goals. They could also give via the Gates Foundation, etc. They can do this while building up their internal capacity, so they really don't need to delay additional years.
  3. They have incredibly short AGI timelines, so per their own views, they can't afford to move slowly. If they are giving less than 5% of assets after they already claim AGI, that's a huge failure. So in my view, your proposed 2028 target - giving so little that they are more than doubling assets yearly - is insanely conservative, not at all "aggressive, public ramp-up targets."
  4. That said, yes, I already agreed that actually ambitious public ramp-up commitments could be sufficient; as I said in the post "if it’s done in the next two years, I will admit they are doing their jobs" - but they didn't announce any such plans, and as noted in the post, the total giving commitment is a cash total certainly worth less than 1/6th of their (current rapidly growing) funds; that's insanely low given that it is their total eventual commitment!
Tobias Häberli @ 2026-03-26T18:30 (+2)

Good points, thank you!

They have incredibly short AGI timelines, so per their own views, they can't afford to move slowly. If they are giving less than 5% of assets after they already claim AGI, that's a huge failure.

Do we know whether this is true for the OAF board?[1] Sam Altman is on it, and he definitely believes something along these lines, but it's less clear for the others. Here's a ChatGPT and a Claude answer on this, which point towards the others being less bullish & concerned (but also towards a lack of information about what they believe). I expect there to be a range of views on timelines & transformativeness of AGI among the board members – which probably makes it more likely that their spending targets are compatible with the foundation's mission.

  1. ^

    Bret Taylor (Chair), Adam D’Angelo, Dr. Sue Desmond-Hellmann, Dr. Zico Kolter, Retired U.S. Army General Paul M. Nakasone, Adebayo Ogunlesi, Nicole Seligman, Sam Altman

Davidmanheim @ 2026-03-26T18:48 (+2)

That's a really good point, thanks! Though if they don't have short timelines, it seems like they are being quite irresponsible as board members in not preventing Sam from making increasingly large bets on scaling. Of course, they might not be willing to cross him; the current board presumably learned the lesson from Ilya's ill-fated decision.

Also, you need what are currently considered almost implausibly long timelines to think that spending more quickly doesn't make sense.

Davidmanheim @ 2026-03-25T11:53 (+6)

To forestall an obvious objection, I do not endorse the decision of OpenAI to use this structure, and there are many other problems. However, the above arguments should apply according to the views they profess, which seems important.

Michael Townsend🔸 @ 2026-03-25T15:41 (+4)

First, OpenAI may be successful in turning a large profit based on continued marginal improvements in AI systems, so that their company continues getting much more valuable, far faster than 5% annual growth, meaning that the endowment would grow in value, that is, on net accumulating rather than distributing wealth. In this case, OpenAI’s equity will appreciate greatly; it would be irresponsible for the nonprofit not to try to spend large parts of that increased value.

In this scenario, wouldn't it be much better if the non-profit didn't spend its money now? By holding onto the money now, it'd have much more to give later. Put another way: imagine if the grantees receiving the money were asked "would you prefer $100 today or $10,000 in 6 years?" many would take the latter. 

One frame that might make this argument more compelling is that if OAI ends up building AGI and ends up having astronomical value, then the foundation is sitting on humanity's endowment. Spending it down now before it's realized its value could be very costly.

Davidmanheim @ 2026-03-25T16:16 (+2)

No, it would not. Per the frame that makes the argument more compelling, as I said; "Secondly, they may be even more successful in building significantly more powerful AI, transforming the world. Obviously, the nonprofit would become far wealthier, but given OpenAI’s mandate, it also becomes irrelevant."

But within the first option, if they are actually more than doubling their value yearly (as implied by 100x in 6 years, which matches their current revenue growth continuing at its current rate), then giving away $20 billion per year, starting at their current valuation of $150 billion, means giving away only about 13% of today's value each year - an ever-shrinking fraction of the endowment as it grows. And given that it's hard to spend even 13% of $150b effectively, it's going to be far harder to spend any large percentage of their $15 trillion endowment in later years!
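The arithmetic in that scenario can be sketched as follows (assuming, per the comment above, ~100x growth over 6 years, i.e. an annual factor of about 2.15, and a flat $20B/year of giving; these are illustrative figures, not a forecast):

```python
# Fraction of the endowment that a flat $20B/year represents, if the
# equity grows 100x over 6 years (annual factor 100**(1/6) ≈ 2.15).
value = 150e9                 # current valuation of the stake, USD
growth = 100 ** (1 / 6)       # ≈ 2.154x per year
for year in range(1, 7):
    print(f"year {year}: endowment ${value / 1e12:.2f}T, "
          f"$20B is {20e9 / value:.1%} of it")
    value *= growth
# Year 1: $20B is 13.3% of a $150B endowment; by year 6 it is under 0.3%
# of an endowment near $7T, on its way to $15T after the sixth year.
```

This is the point of the reply: the giving that is hardest to do well (a double-digit percentage of assets) happens now, while in later years even much larger absolute amounts become a rounding error on the endowment.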