Who's Vulnerable to AI Disruption?
Some companies will be more vulnerable to AI disruption than others.
In Part 1, I argued that AI-first startups will be as different from existing tech companies (think Airbnb) as those companies are from their predecessors (think Hilton), because they will be both enabled by AI and will directly benefit from it.
In Part 2, I argued that what allows AI-first companies to benefit directly from improvements in the underlying large language models (LLMs) is that they rely directly on quality decision-making to create value.
In Part 3, I argued that there are four specific factors underpinning good decision-making (raw intelligence, good data, collective intelligence, and wisdom) that can guide AI-first entrepreneurs as they’re building their startups.
In this essay, we will look at what makes some existing companies more vulnerable to AI disruption than others.
We can think of choosing what AI-first companies to build in two different ways. One is to start with products or services that AI enables and that couldn’t be built before. For example, the Limitless AI wearable pendant is an AI-enabled innovation.
But another way is to look at companies that serve proven markets in predictable ways and see which of them are particularly good targets for AI disruption. An example of this approach is Visual Electric, a high-quality stock photo generator for designers.
An AI vulnerability framework
Let’s build a framework to help us think about AI disruption targets. While every company will be affected by the transition from cheap compute to cheap intelligence that we’re going through thanks to AI development, some will be affected more than others. Let’s consider six factors:
Intelligence-heavy cost structure (the biggest one)
Low risk profile
Clear inputs and outputs
Overpriced existing products
An improvement pathway
Early adopters in the market
If higher-quality decision-making lies at the core of AI-first startups, it stands to reason that the companies most affected will be those that rely on expensive human intelligence to make high-quality decisions. This is the biggest factor.
But that’s not nearly enough. The target should also have a low risk profile for failure: it’s far easier to adopt an AI system you don’t need to trust. For example, if Visual Electric fails and generates a bad image, it’s not a problem at all; the user will simply try again. But the same mistake in a medical setting could have far bigger consequences.
It’s also important to have inputs and outputs that are easy to define and process. For example, translating books is a process with a clearly defined input and output: digital, specific, easy to work with. But doing, say, a strategy review at a business would be harder: lots of potential inputs (documents across many different systems, interviews, etc.) with plenty of opportunities for things to go wrong.
Another important factor is the potential for a cheaper, “good enough” solution. Instead of using AI to make something 10x better, we may start by building a solution that competes with an expensive incumbent by offering a “good enough” product at a fraction of the price.1 If an expensive headhunter won’t accept a job for less than a £30,000 fee, will an AI recruiter offer a “good enough” service for a fraction of that price? Btw, a friend of mine is building Alfa AI doing exactly that. Try it.

It’s also important to see how an AI product can evolve from doing simple tasks to handling complex tasks over time. For example, an AI software development system like Replit Agent can build relatively simple things today2, but over time it’ll learn to build far more complex projects.
Finally, it’s far easier to work with customers and markets that are ready to try AI-powered solutions. Some customers are more risk-averse than others. For example, UHNWIs who are used to trusting the Swiss investment bankers they’ve personally known for decades will probably not be ideal early adopters for AI-first financial management products.
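For readers who like to make such comparisons concrete, the six factors above can be sketched as a rough scoring rubric. This is purely illustrative: the factor names and weights are my own assumptions (the essay only says the first factor matters most), not a validated model.

```python
# Illustrative rubric: weighted share of the six factors a target ticks.
# Weights are assumptions; the essay only states that an
# intelligence-heavy cost structure is the biggest factor.
FACTORS = {
    "intelligence_heavy_costs": 3,  # the biggest factor
    "low_risk_profile": 1,
    "clear_inputs_outputs": 1,
    "overpriced_incumbents": 1,
    "improvement_pathway": 1,
    "early_adopters": 1,
}

def vulnerability_score(company: dict) -> float:
    """Return the weighted fraction (0..1) of factors the company ticks."""
    total = sum(FACTORS.values())
    ticked = sum(w for f, w in FACTORS.items() if company.get(f))
    return ticked / total

# Legal services, as discussed below, ticks every box.
legal_services = {f: True for f in FACTORS}
print(f"{vulnerability_score(legal_services):.2f}")  # → 1.00
```

A company with clear inputs and outputs but an incumbent-friendly, risk-averse customer base would score much lower, which matches the intuition that no single factor is sufficient on its own.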
Let’s look at specific examples
So, which kinds of companies tick all the boxes? Let’s start with the obvious ones, because it’s already happening there: software development and legal services. A good example is DraftPilot, built by a friend of mine.

Both software development and legal services currently rely on smart, expensive humans making complex decisions. Both have a low risk profile because there’s a human in the loop (a lawyer or a developer can always reject a suggestion). They have pretty clear inputs and outputs (e.g. a contract to be reviewed).
They have expensive incumbents (no lawyer will join a Zoom call for less than a few hundred quid) and a clear pathway from simple to complex tasks. Today, DraftPilot helps lawyers review contracts, but tomorrow it’ll learn to do far more complex legal projects, e.g. handling an entire Series A fundraise.
Here are other examples that come to mind that fit all or most of the criteria:
Professional services. A “good enough” AI McKinsey for the price of a coffee?
Financial advisory. Do wealth managers really have to set minimum wealth requirements?
Creative services. Sure, hire Ogilvy if you have the budget or use AI.
Market research. Do these Forrester reports have to cost a few grand each? I’m sure Google’s Deep Research will catch up soon.
Equity research. Why do humans still listen to public companies’ quarterly calls?
Course content creation. The days of anyone writing a course curriculum from scratch are numbered.
Grant advisors. Professional grant writers can really help win grants, but does it need to be so expensive? Surely not.
Patent lawyers. Filing patents is a pain and also expensive. I’m sure it doesn’t need to be.
All these examples have a validated market and are ripe for AI disruption by cheap, “good enough” solutions. AI-first companies can go to the existing buyers and offer them an AI alternative, and, over time, eat the incumbents’ lunch.
I’m currently speaking to a number of startups looking to incorporate AI into their businesses, and to a few founders looking to start AI-first companies.
Check out the presentation I built to outline what I offer and what I’m looking for.
If you would like to discuss bringing AI into your business, drop me a line. Right now I’m in the middle of a 10-day meditation retreat, so I’ll get back to you in February.
1. Read The Innovator’s Dilemma by Clayton Christensen; it’s as relevant as ever.
2. I say that, but if you’d asked me two years ago, I wouldn’t have believed how complex these “simple things” are… It’s all relative.