The OpenAI Cash Burn: Why the Rocketship is Leaking

December 31, 2025

[Image: OpenAI rocket ship metaphor, showing massive fire and cash-burn smoke during launch]

When we see OpenAI in the news, it always appears to be a trailblazer, a rocket ship of progress that other companies can only dream of emulating. The reality isn’t quite so clear-cut. That rocket ship has more than a few fuel leaks, even if it is still accelerating.

This is not to suggest that the company is about to explode into history’s most spectacular firework display, not even close; too many have already invested too much for that to be allowed to happen. However, some aspects of the company don’t look great up close, and once you add competitors into the mix, it gets more concerning.

Let’s look at the hard data points first.

In 2025, OpenAI achieved revenue of $13B, with a run rate of $20B expected by year end. Amazing, right? Well, not exactly, but let’s keep going on the positives. They will have hit that $20B milestone faster than Google and Facebook, which took five and six years respectively to achieve the same result.

So what’s the problem? Well, this isn’t a high-margin SaaS business (yet), and the simple truth is that OpenAI pays more to answer a prompt than it earns from it. For every dollar that comes in, more than a dollar goes out just to keep the servers on. In the simplest terms, that is unsustainable long term.
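To make that unit-economics point concrete, here is a minimal back-of-envelope sketch in Python. Every figure in it is a made-up illustration, not an actual OpenAI number; the takeaway is simply that when the cost of serving a prompt exceeds the revenue it brings in, more usage digs a deeper hole.

```python
# Illustrative unit economics only: all figures below are assumptions, not OpenAI's real numbers.

revenue_per_prompt = 0.005            # assumed average revenue attributable to one prompt (USD)
inference_cost_per_prompt = 0.007     # assumed average compute cost to answer that prompt (USD)
prompts_per_year = 1_000_000_000_000  # assumed annual prompt volume (hypothetical)

margin_per_prompt = revenue_per_prompt - inference_cost_per_prompt
annual_result = margin_per_prompt * prompts_per_year

print(f"Margin per prompt: ${margin_per_prompt:.4f}")        # negative per-prompt margin
print(f"Annual gross result: ${annual_result / 1e9:,.1f}B")  # volume scales the loss, not the profit
```

With those made-up numbers, every dollar of revenue arrives with $1.40 of compute cost attached, and scale only widens the gap.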

The $1.4 Trillion Bet

OpenAI is doing something new, and that “new” is getting everyone’s attention.

They are effectively transitioning from a software company into something that resembles a tool for society, something so profoundly applicable that it sheds the product label entirely, or at least could do one day.

None of this comes easily, however. The power requirements for their servers alone are a monstrous 30-36 gigawatts. To put that into perspective, that’s enough to power roughly 22.5 million homes, at a cost of $1.4 trillion over ten years. Current plans call for adding 1 GW of capacity each week, at a cost of up to $50B per GW.
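As a quick sanity check on those headline numbers, here is another rough sketch. The average household power draw is my own assumption (roughly 1.3 kW, in the ballpark of a typical US home); the rest just divides the quoted figures against each other.

```python
# Rough sanity check on the quoted capacity and cost figures.
# The household draw is an assumption; the capacity and total cost come from the figures above.

capacity_gw = 30                  # lower end of the quoted 30-36 GW range
avg_home_draw_kw = 1.33           # assumed average household power draw (~1.3 kW)

homes_powered = (capacity_gw * 1_000_000) / avg_home_draw_kw    # 1 GW = 1,000,000 kW
print(f"Homes powered: {homes_powered / 1e6:.1f} million")      # ~22.6 million, close to the 22.5M quoted

total_cost_usd = 1.4e12           # the quoted $1.4 trillion over ten years
implied_cost_per_gw = total_cost_usd / capacity_gw
print(f"Implied cost per GW: ${implied_cost_per_gw / 1e9:.0f}B")  # ~$47B, consistent with 'up to $50B per GW'
```

So the quoted figures hang together: the homes comparison and the per-gigawatt cost both fall out of the same 30 GW and $1.4 trillion.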

Interestingly, OpenAI are moving away from over-reliance on Nvidia toward a new partnership with Broadcom. What is currently driving Nvidia’s success might also be its undoing, as there are numerous reports of other AI companies, such as Meta, looking to use custom silicon in place of Nvidia’s chips because of cost.

The interesting point here is that Nvidia was always a middleman, with a roadmap pointing toward being cut out at some point. Given its sky-high margins, it is an obvious move. So OpenAI are preparing to integrate vertically, working with Broadcom to take more direct control of that cost and claw back the margin that previously went into Nvidia’s pockets.

The End of Model Hegemony

For a long time, it was a no-brainer that OpenAI had THE product to use, with the other options being “also-ran” entrants. Today that is no longer the case, so much so that the difference between Google’s Gemini and ChatGPT is negligible at best, with Gemini arguably ahead at worst.

The halo around ChatGPT is flickering, and users are increasingly shopping around for the best model for their needs. OpenAI has responded to the increased competition with GPT-5.2, but the company has a history of shipping “improvements” that land less well with end users than it might have hoped. I personally found GPT-5 and 5.1 somewhat annoying in several respects compared with GPT-4o, and I’m sure I wasn’t the only one.

Enter DeepSeek, an unexpected entrant into the market that reportedly cost 98% less to train, and whose R1 model matched OpenAI’s reasoning performance in benchmarks. Not only that, but it’s open source and can easily be downloaded for local installation, which pleased the enthusiast community… well, until they asked it about certain topics.

So overall, the once-open seas have become stormy and congested for OpenAI, and it doesn’t appear to be getting better with time. What, then, can they do to tip the balance of cost versus revenue into a more favourable state?

The Monetization Playbook

At present, 95% of ChatGPT users don’t pay a penny; they opt for the free tier because it meets their needs. That isn’t surprising, and it won’t stay that way for long. We have seen plenty of upstarts offer a very compelling product at a low price point and then steadily hike pricing or pare back the lowest tier over time; I’m looking at you, Netflix and Spotify.

So what’s the plan for OpenAI? Well, agents are one of the next avenues they hope are paved with gold. They are already partnering with commercial outlets such as Walmart and Etsy to offer instant checkout from the chat window. Whether this will be a big success remains to be seen, but it’s an ongoing pursuit for now.

The other main venture is AgentKit, and if it succeeds, it will drive improved revenue for OpenAI but also job losses for humans. Wow, thanks, Sam! AgentKit offers companies the option of automating certain roles with agents that reportedly beat humans in 70.9% of professional tasks. That figure should be taken with a pinch of salt for now, as many of these agents still need a lot of babysitting, but it won’t stay that way forever. Once they no longer need babysitting, an industry-wide wave of cost-cutting will begin, and the major cost to be cut will be jobs.

And speaking of jobs, that brings us nicely on to the former best mate of Steve Jobs himself, Sir Jony Ive, and oh, has he been busy. Under the name Project Gumdrop, Jony has been working on something described as an “anti-smartphone”. What exactly this product is, we still aren’t sure, but if Jony still has a knack for making cool things, it could be a game-changer for consumers as well as the market. That said, I can’t see any new halo product with such promise being alone in the market for long. Depending on the success of the new toy, it wouldn’t take Google or Meta long to bring out their own versions and dilute the impact for OpenAI, so let’s put a pin in that one for now.

The ROI or ROAI Cycle

So then, where does this leave us?

Are OpenAI a safe bet for continued dominance into 2027? I personally don’t think so, not unless they can find a way to decouple revenue growth from compute cost.

The competitive market is only getting tougher, and for the first time we are seeing alternatives beat OpenAI’s offerings. I get Microsoft Internet Explorer vibes from this, like it’s 1999 again. While it can rightly be argued that the comparison isn’t apples to apples, the point still stands. We have seen trailblazers run out of steam before and eventually be overtaken by a better-executed alternative that was once dismissed as a mere contender.

I don’t believe OpenAI has the right talent at the top in Sam Altman, and I also suspect the board is pulling the strings, with Mr Altman acting as something of a figurehead. History has shown many times that front-man CEOs don’t do well when innovation is the top requirement.

Time will tell, but one thing is a positive about all of this: competition drives innovation, and that is a good thing. Hopefully.

Why not sign up to our budding rocket ship below while we’re still on the ground? 🙂
