Most AI startups are doomed
Just because it matters doesn’t mean it’s defensible or profitable
The statement that most AI startups are doomed might sound fairly mundane. After all, most startups are doomed, just by the numbers.
I’m trying to say something more provocative: almost all startups formed out of the post-ChatGPT hype that specifically label themselves as “AI startups” are doomed.
Now, I am a VC who has been investing in AI for a long time—and, in fact, originally left the hedge fund world because I saw so much happening in AI. So, I’m definitely not an AI skeptic.
That being said, I fundamentally think most of what’s getting funded in the current hype cycle is valueless from an investor’s perspective.
If you built it over a weekend, so can someone else
Let’s tackle the easiest case.
I’ve met numerous startups that essentially glue together a few generative AI APIs, do some prompt engineering, and slap a front-end user interface on top. Some of these products are quite impressive in their polish and in what they can do.
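To make concrete just how thin this layer is, here is roughly what the entire “technology” of such a company can look like. This is a minimal sketch, assuming the OpenAI Python client; the product name, prompt, and use case are invented for illustration:

```python
# A hypothetical "weekend AI startup": a thin wrapper around someone
# else's model. All the heavy lifting happens inside one API call.
# Sketch only; assumes the openai package is installed and an
# OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are MarketingCopyBot. Rewrite the user's product description "
    "as three punchy marketing taglines."
)

def generate_taglines(product_description: str) -> str:
    """The entire 'proprietary' product: one prompt plus one API call."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": product_description},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(generate_taglines("An app that tracks houseplant watering schedules."))
```

Everything of substance here lives on the other side of the API. The prompt is the only “IP,” and a competitor can reproduce it from the product’s output alone.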
These companies are also all doomed either to be perfectly fine businesses (but not startups, by Paul Graham’s classic definition) or to die.
Obviously, if you built it over a weekend, someone else can do the same. Now, let’s say you’re a coding genius. A veritable 10X programmer prodigy! It might take everyone else in the world several weekends… but it’s going to get built.
If you give your product away for free and are just having fun, no big deal.
However, if you start charging for it and customers come to rely on it, others can come in and just slightly undercut you. Maybe yours is still nicer, and being nicer does often drive adoption and the choice of one product over another.
But if the product is actually important (i.e., commands a high willingness to pay and is used frequently), this is where the curse of economics and competition comes into play. People are going to copy you and compete away your profits.
No defensibility and no differentiation = no profits. That’s basic economics.
Not even Alphabet, Meta, or OpenAI has any defensibility
Ok, so that was Econ 101 and also Startup 101. It’s not particularly unique to this area. Every hype cycle is essentially characterized by people forgetting that these rules exist, and then rediscovering them to their chagrin at the end of the cycle.
However, note that I’ve mainly been talking about startups that just glue APIs like ChatGPT’s together into UIs. Those obviously have very little differentiation and defensibility. Even if your UI is nicer, someone else can just come along and copy it.
My point is broader than just those trivial examples, though.
Let’s now apply this same logic to the underlying technology itself: LLMs like ChatGPT, Bard, and LLaMA.
If I told you I had a fantastic technology that everyone will want to use, and that to create it, all I had to do was:
Gather all the text on the internet
Train a model on all of it, using tons of GPUs and millions of dollars
Build it on well-known technologies, most of which are open source
Is that defensible? Points 1 and 2 might pose some technical or logistical difficulty for small startups, but neither is particularly insurmountable for other large companies—especially when combined with Point 3. All of these models are built on the same underlying transformer architecture. These LLMs have no real moat; any large internet company can replicate them.
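A rough back-of-envelope sketch shows why the “millions of dollars” in Point 2 is real money but not a moat. This uses the common ~6·N·D FLOPs approximation for transformer training cost (Kaplan et al., 2020); the model size, token count, GPU utilization, and hourly price below are illustrative assumptions, not vendor quotes:

```python
# Back-of-envelope LLM training cost via the ~6*N*D FLOPs approximation.
# All concrete numbers are illustrative assumptions.

params = 70e9        # model size N: ~70B parameters (LLaMA-2-70B scale)
tokens = 1.4e12      # training data D: ~1.4T tokens
flops = 6 * params * tokens          # ~5.9e23 FLOPs total

peak_flops = 312e12  # A100 peak BF16 throughput, FLOPs/s
utilization = 0.4    # assumed sustained fraction of peak
gpu_hours = flops / (peak_flops * utilization) / 3600  # ~1.3M A100-hours

price_per_hour = 2.0  # assumed cloud rate, $/A100-hour
print(f"GPU-hours: {gpu_hours:,.0f}")
print(f"Approx. cost: ${gpu_hours * price_per_hour:,.0f}")
```

A few million dollars of compute is prohibitive for a seed-stage startup but pocket change for any large internet company, which is exactly the point.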
And, indeed, even Alphabet/Google has said this internally.
The same applies to all image and video generative AI. Just replace the text in Point 1 with images or video (side note: video may be an exception if Alphabet can choke off easy access to YouTube).
But what if I have the best version of an AI?
Ok, so we’ve established that it’s not super useful to just build a front end on top of other people’s technology (our trivial case). We’ve now also talked about why the less trivial case of building the LLMs themselves is fundamentally indefensible.
What if I flex Point 3 above and come up with the best version of an LLM? Or its equivalent in some other field of AI?
Well, in theory, that’s interesting. Except, of course, for how fast the technological frontier of the entire industry is moving.
It’s like having the fastest CPU… in the 90s
What if I told you in the 1990s that I had the best CPU? Mine is, like, 3 times the speed of Intel!
Given the expense and incredible difficulty of developing a CPU, that is indeed quite technically impressive! Of course, the question then is: can you repeat that feat year after year? Because your problem is that, given how fast semiconductor technology was moving at the time (Moore’s Law), you have an advantage for a year or two (maybe) before Intel—and everyone else—equals your performance. If you have some special sauce that lets you continuously stay ahead, that’s one thing, but more likely than not, you simply stumbled on a particular set of optimizations that everyone else will adopt quickly.
The same issue exists in AI today. The frontier is moving too fast, and the frontier of the entire AI academic and industry research community almost certainly has more firepower than your single company.
By the way, when we’re talking about firepower, this challenge applies even at the most massive scales. For example, China—by all quiet, anecdotal accounts—is not keeping up with the global (and US-concentrated) research community in the speed of AI development. Basically, everyone who branches off into proprietary models falls behind quickly and ends up adopting the global state of the art anyway. AI is even worse than semiconductors in this respect, because so much of it tends to be open source, which makes it that much harder to hold any long-standing advantage in algorithmic prowess.
As such, you don’t get any lasting value unless you’re able to make that year-or-two advantage actually count toward building a durable moat—which, if you think about it, is extremely hard.
Wait, so what IS defensible?
Ok, so we’ve gone through this process of elimination. What’s actually left?
Monstrous, Godzilla-scale compute
Well, you can have something so compute-intensive that only you could possibly do the training or inference economically. That is unlikely, in my opinion, given how AI has progressed in bringing down the quantity of data and compute required to achieve a given result. Note, however, that my opinion is somewhat unpopular; you can decide for yourself whether this point is true (which I talk about in the linked article above). But even if it is a true advantage, at least from an investor’s perspective, I’m uncertain whether I’m thrilled about a startup strategy of accumulating more GPUs/ASICs/FPGAs than Google, Facebook, Baidu, what have you…
Real-world, proprietary data
Secondly, you can operate in a place where you can’t simply harvest the data off the internet. For example, healthcare data that is siloed in hospitals, or not even collected at all today. Or protein-folding or pharmacokinetic reaction data that has to be painstakingly collected through real-world experiments. Or a ton of other things… all of which share the characteristic that they don’t exist in the purely digital world and can’t simply be scraped off the internet.
That is where I see most of the value of AI startups being generated: places where you can’t simply decide to go collect the data without prohibitive cost, time, and physical-world messiness. These startups can simply ride the wave of AI improvements—it doesn’t matter, the algorithms are all commoditized anyway—but they are the only ones who own and hold that proprietary, next-to-impossible-to-get real-world data.
Value created doesn’t mean value captured
Note that I said startups. Many forget that just because value is created on a societal level, that doesn’t necessarily mean the value is captured by any company at all. The Internet boom of the 1990s created tons of networking infrastructure, but the companies that built it realized massively negative ROI on their investments. It was great for bringing communities online, but that’s a societal benefit, not company ROI.
In a more recent case, did you know that Azure actually runs tons of private blockchains? It’s hard to break this out in their financial results for various reasons, but many large companies run this stuff on Azure, making Microsoft one of the big winners in blockchain. (Yes, there’s a separate question of whether a private blockchain is really any different from, say, a database. But that’s irrelevant to my point here.)
The same thing is likely to happen with OpenAI, which mostly looks like an R&D lab for Microsoft. Microsoft provides the computing resources in Azure, and in return, OpenAI develops the tools that Azure will then provide as hosted services. Azure can then make a bunch of money from ChatGPT and other offerings via pay-as-you-go API calls. The same dynamic, of course, is emerging with Bard and Google Cloud, and so on.
This principle runs through most of the AI sector today. There’s going to be a lot of value generated that just accrues to society, and isn’t captured by any private player. Which is wonderful, by the way—this is how technology becomes one of the few “free lunches” we have in society and macroeconomics.
There’s also going to be a lot of value generated that is simply captured by existing industry incumbents, using their market power and scale. That’s not a free lunch for society, but is also just how capitalism works and often still generates “surplus” (Econ speak for “good stuff”) for society.
Finally, there’s a fairly narrow slice of value that will be generated by, and accrue to, new, young companies, which can ideally then come along and replace the incumbents (which is how healthy market turnover happens).
Those are the companies that will generate outsized returns and become tomorrow’s well-known tech names—and, of course, they are what VCs are theoretically looking for. In reality, most investors are being rather indiscriminate in slinging money at AI startups (and even big, public companies that claim to have an “AI strategy”) right now, and as such are mostly flushing money down the drain.
AI is going to change the world. But most AI startups are doomed.