One of the first things lost in a gold rush or hype cycle is the basics of a successful business or startup. Amid dreams about massive markets and all the money to be made, people forget that profitability (how much more you make than you spend to make it) is what matters in the long run.
But how does that square with startups? They rarely make money at the start.
The difference is that successful startups will eventually make significant profits. And the reason they will make profits—or not—is defensibility, because defensibility is what gives you pricing power.
We’ll discuss the types of defensibility, apply them (and assess their strength) in several AI areas, and look at how they fundamentally change the go-to-market strategies companies require.
Basic economics
Between political rhetoric about “greedflation” and widespread misunderstandings about economic theory, pricing power has gotten a bad name. However, pricing power is what draws startup companies to tackle hard problems that require significant time, capital, and effort to solve.
In reality, the world is complicated and there are different levels of pricing power. To give an example, those who have gone to popular tourist destinations—especially ones in hot areas that require lots of walking—will be familiar with independent water hawkers, walking around selling bottles for $1-2.
This looks like a case of perfect competition. In reality, each water bottle from a nearby drugstore or supermarket likely costs $0.30. If one hawker succeeds in selling bottles at a very high price, say, $10, and bottles are still selling, someone else might decide to also make the trip to the store to buy water bottles to sell. Competition brings down the price.
However, we never get to $0.31. $1-2 is probably reasonable because you also have to account for the hawker’s labor and the time spent lugging those heavy bottles out to somewhere the passersby can’t buy them themselves. If it’s annoying enough to get to the location, $5 might be a perfectly reasonable price, which may be annoying to the tourists, but it says something that they still buy.
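The pricing logic above can be sketched as a toy model. All the numbers here are illustrative, taken from (or made up in the spirit of) the example: $0.30 wholesale cost per bottle, plus a fixed cost standing in for the hawker’s time and labor per trip.

```python
def hawker_profit(price, bottles_sold, wholesale=0.30, trip_cost=10.0):
    """Profit for one trip: revenue minus bottle cost, minus the fixed
    cost of the seller's time and labor hauling bottles to the site."""
    return bottles_sold * (price - wholesale) - trip_cost

# At $0.31, the penny of margin per bottle never covers the trip itself.
loss = hawker_profit(price=0.31, bottles_sold=50)   # roughly -9.5
# At $1.50, the same trip is clearly worth making.
gain = hawker_profit(price=1.50, bottles_sold=50)   # roughly 50
```

The point the model makes concrete: competition pushes the price down toward cost, but “cost” includes the labor of delivery, so the street price settles well above the shelf price.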
However, this isn’t a defensible market. Anyone can become a seller, and the sustainable price is proportional to direct costs—meaning the labor and time required to bring them out.
There are levels of defensibility
Weak Barriers
Most B2B SaaS (business-to-business software-as-a-service) startups do not actually have hard barriers to entry. They have weak ones, mostly involving lock-in.
For example, if you’ve picked Slack as your messaging provider, there is friction in moving your organization to Microsoft Teams or Discord or something else. However, there’s no fundamental reason, no (significant) learning curve, nor anything else that would make changing providers that hard.
To build intuition: if you were the economic decision maker, you probably would never bother switching over a few dollars’ difference per month. If the service were insignificant to your overall costs, you might not even care if it were two or three times as expensive as an alternative, so long as it’s working fine; don’t fix it if it isn’t broken.
However, if it were something that actually cost you a lot, or were related to your scaling costs (i.e., part of your marginal cost to serve customers, say, an API call to an LLM), you would suddenly care a lot. In that case, a bit of inconvenience is nothing compared to the price savings.
This weak barrier means you can never charge that much, and the only saving grace in B2B SaaS is that these companies usually have near-zero marginal cost for serving customers anyway.
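The switching decision described above reduces to a simple break-even comparison: switch only if the cumulative savings over your planning horizon outweigh the one-time pain of migrating. This is my own sketch with invented numbers, not figures from the text.

```python
def should_switch(current_price, alt_price, units_per_month,
                  switching_cost, horizon_months=24):
    """True if cumulative savings over the horizon beat the one-time
    cost (disruption, retraining, migration) of switching providers."""
    savings = (current_price - alt_price) * units_per_month * horizon_months
    return savings > switching_cost

# Chat tool: $4/seat/month cheaper, 10 seats. Savings are trivial
# next to the organizational disruption, so nobody bothers.
should_switch(12, 8, 10, switching_cost=50_000)            # False

# LLM API: half a cent cheaper per call at 5M calls/month. The same
# friction is now nothing compared to the savings.
should_switch(0.02, 0.015, 5_000_000, switching_cost=50_000)  # True
```

The asymmetry is the whole point: when the service is part of your marginal cost to serve customers, volume multiplies any price gap until the weak barrier stops mattering.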
Medium Barriers
There’s another level of barrier that is quite difficult to surmount. This typically has to do with overall convention, education/learning curves, or similar kinds of more painful switching costs. These barriers almost always incorporate a “light” network effect as well, which in its strong form is a fairly powerful moat.
For example, Adobe software has relied on this kind of barrier for years. Adobe software costs a lot, but many students are used to it (thanks to generous educational licensing and the company’s push in educational contexts), and industry players are used to it too. File formats and the like add to this, but more importantly, users hate changing how they work with an everyday tool after gaining mastery of it. This is self-reinforcing and thus creates a bit of a network effect as well.
That being said, there’s nothing really stopping everyone from simply coordinating to boycott and move away from the Adobe ecosystem. If they did that, Adobe’s barrier and its pricing power would immediately fall. The problem is coordination. Or, alternatively, without coordination: finding a new use case that creates a “restart” in which another standard can take root (this is actually part of why Canva has been such a massive deal for Adobe).
I’ve talked a lot about how CUDA is a pretty high barrier that actually reinforces NVIDIA’s hardware (more than their actual hardware does). Even with all of that, it’s still just a “medium” barrier. Low-level mathematical libraries are written to be optimized in CUDA. New AI model and algorithm writers write their bottleneck code with CUDA. It works well. It’s a lot of effort to learn a new technology instead… and most of the alternatives aren’t great either.
However, in theory, as many folks on Hacker News pointed out commenting on my article, there’s nothing actually stopping everyone from just… using something else. The problem is: they don’t. Coordination is too hard. There’s no individual incentive to switch.
On the other hand, if you ended up with an entirely different paradigm from the massively parallel compute that most of these things run on (which I argue is “co-evolutionary” between GPUs and AI models/algorithms), you could have an opportunity to introduce a new standard that would itself require a learning curve to switch away from. That is part of what a lot of these AI hardware startups are counting on (though, as I keep saying, I’m skeptical).
Hard Barriers
The last class of barriers is basically insurmountable once established. These are the “classic” barriers of economic theory. (In a more abstracted, perfectly “Econ” world, weak and medium barriers shouldn’t really exist or make much impact, even though in reality they are common and, especially in the medium case, can create pretty material pricing power.)
These are:
Economies of scale: The biggest company, with the most expensive fixed-cost infrastructure, has the lowest cost base and can undercut anyone. (Note: this only works if the market actually requires heavy fixed costs.)
Tech or IP moat: No one else can do what you do. You’re just underpricing it to grow faster and will jack up prices once you get more market penetration.
Regulated monopoly: The government says no one else can do what you do, or effectively says so through regulation so onerous that no new startup can ever pay the cost to comply and scale (Europe is the master at granting these to US tech companies).
In general, most deep tech companies, including AI companies, exhibit these effects, especially the first two.
For example, the foundational model companies (to their folly, in my opinion) are competing on economies of scale, where the high fixed cost of compute brings down the cost of inference and makes the market more winner-take-all. Unfortunately for them, Microsoft, Meta, Alphabet, Amazon, etc. have all beaten them to the punch by already having the infrastructure.
For AI companies with hard-to-get proprietary data—for instance, specific uncompressed/unprocessed medical data that can be used to drive AI clinical decision support or diagnosis—the data is the IP barrier that can’t be replicated. And, of course, for things involving hard biotech or materials science, if these companies succeed, they also have strong barriers within their exact area thanks to patents and sheer difficulty of the tech.
A Longer Side Note on Network Effects
There is a final one that is newer in economic theory, which is:
True network effects: the purest examples of these are social networks, where after you acquire customers, each customer basically generates value for every subsequent customer (i.e., people used to be on Facebook to see other people’s Facebook posts and to have other people see theirs. If the people leave, there is no value, but the more people you know who are on it, the more valuable it is).
Ironically, few companies actually have this true flywheel of “more users = more value.” Network effects usually fall into the weaker, medium category of “everyone is kind of already using it,” which generates some of this effect but isn’t nearly as direct. Additionally, the jury is still out on how many markets can actually exhibit “true network effects” outside social media.
Uber/ride-sharing apps, Airbnb, and other two-sided markets are supposed to have it. And it certainly does exist to some extent, but these effects (as Uber found to its detriment) are fairly localized and lock in Uber only within certain markets. This allowed Gojek/Didi/Grab to muscle into other markets first and lock Uber out without Uber having a translatable advantage.
Anyway, this should theoretically be strong, but in practice most networks have turned out weaker than expected, especially in terms of the “each user = more value” dynamic. Most cases, including the sharing/two-sided marketplaces, are more of a “threshold” effect (beyond which additional users add only marginal network value) than a true reinforcing cycle that doesn’t drop off. As such, most are not worth the level of user-acquisition spending of a true network-effect startup, and they do not generate as much economic value as investors in these companies hoped.
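The gap between the two cases can be made concrete with two toy value curves. This is my own illustration, not a model from the text: a “true” network effect where every pair of users creates value (the Metcalfe-style assumption), versus a “threshold” effect that saturates once critical mass is reached.

```python
import math

def true_network_value(n):
    """Metcalfe-style: every pair of users creates value, so value
    grows roughly with the square of the user count."""
    return n * (n - 1) / 2

def threshold_value(n, threshold=1_000):
    """Value climbs toward a ceiling as the network approaches critical
    mass; users past that point add almost nothing."""
    return 1 - math.exp(-n / threshold)

for n in (100, 1_000, 10_000, 100_000):
    print(n, true_network_value(n), round(threshold_value(n), 4))
```

Under the first curve, 10x the users means roughly 100x the value, which is what justifies enormous acquisition spending. Under the second, users beyond critical mass barely move the needle, which is exactly the “threshold” behavior most two-sided marketplaces have exhibited.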
How do these barriers change strategy?
Weak barriers relying on lock-in are mainly a game of cost-efficient customer acquisition and ability to serve customers cheaply. This is why so many B2B SaaS companies rely so heavily on sales/marketing (it’s where all of their spend is), and why they must be low-touch/low-customer-service software with near-zero marginal cost.
Medium barriers are usually sustained through continuous investment. If Adobe stopped pushing their software in education and industry, or if NVIDIA stopped investing in CUDA, their advantage would survive for a while but likely fade over time (first slowly, then fast). It’s hard to just “magically arrive” in this position as a startup, though; to some degree, it requires some level of incumbency to make this play. It also requires continuous investment, though, on the bright side, relative to hard barriers, this means you can “renew” your moat.
Finally, hard barriers vary a bit by type. However, apart from the regulated monopoly (which is government fiat, and which I won’t discuss much), they generally require higher upfront capital but enjoy more significant protection, with both high earnings and lower customer acquisition costs on the tail end.
A natural monopoly (a limited market with very high fixed costs) is the most powerful form of economies of scale. Morris Chang, as one of the founders of the modern semiconductor industry, saw fabs as a natural monopoly and eventually created TSMC: eye-watering costs up front, but huge profits on the back end once the monopoly was achieved. Weaker versions of economies of scale follow similar trends of heavy up-front spending (necessitated by the high fixed costs, which are inherently the barrier itself), just at lower scale.
Tech/IP barriers are, if assessed correctly, a “unique” technology, dataset, or similar that is the sole way (or the best way by a wide margin) to tackle a certain problem. They usually require significant effort to develop and commercialize (classic deep tech) but are then extremely challenging to replicate or catch up with. The nicest part of the AI-application versions of these with proprietary data is that they can also incorporate a self-reinforcing cycle: if usage of the platform itself generates useful data, the leader sustains a permanent (and likely growing) advantage the longer market leadership is held. In general, however, these barriers disappear once the tech leadership is gone or the tech is surpassed. The ultimate example is therapeutics, where there is an absolute barrier (due to patents) that rapidly disappears and cannot be renewed once the patent expires.
Many AI companies exhibit none of these
My now quite popular and quoted/misquoted article on why AI companies are doomed still captures my feelings on much of the industry, but now that we have this framework, let’s walk through some broad categories.
AI Foundational Models: For those that aren’t effectively owned by a big tech company already (i.e., OpenAI by Microsoft, Anthropic by Amazon, and Gemini, which is definitely owned and not just pseudo-owned by Alphabet/Google), the proposition of giving them tons of money is usually that they will hand that money over to NVIDIA and build up compute. The problem is that they will, at best (and probably not even that), match the incumbent big-compute companies. That sounds like a terrible plan for creating a barrier and generating profits.
AI Frontends: Although slightly less popular than they were some months ago, there are still many companies that are effectively front-end wrappers for LLMs, image generation, or similar. The best they can achieve is weak lock-in, but unlike B2B SaaS, they have fairly high marginal costs due to API calls or (if they run their own open-source model) inference costs. As such, their fate is either determined by the pricing decisions of their provider, or, at best, they are doomed to be lower-margin versions of the SaaS companies of the past.
AI Companies Without Enough Data: I’ve turned down hundreds of AI companies within drug discovery at this point. It isn’t that I think this is impossible or a bad business; it’s just that there isn’t good, detailed-enough data within biology and chemistry. There are numerous industries like this, where “pure AI” startups cannot succeed. A better approach would be a company that can collect the data while making money from doing so. Unfortunately, that usually means having a physical or otherwise costly component that takes it away from “pure AI.”
So what does work? It’s been my hobbyhorse for a while: AI companies that have proprietary data (or are creating it) from high-friction applications where data is difficult to get. The friction is the defensibility. If you think about it, all tech/IP barriers are like that.
That’s why I’ve been watching those sectors, whether in physical dexterity (robotic control), biotech (drug discovery and its numerous offshoots), healthcare (specific indications related to biomarkers), or many others.
Applications have become a deeply underestimated area of AI—too many people think about AI in the same way as software, wanting platforms that seem to scale infinitely and at no marginal cost. This attracts them to foundational models. But you have to go back to basics—SaaS itself never worked that way, and historically, all companies that generated significant value had to find where their moat was to maintain pricing power. It’s an old way of thinking… but that’s just because software has dominated the VC and startup ecosystem for so long that people have mistaken the mental shortcuts of “tech startups” and applied them too generally.