What is an AI-first startup?
AI-first companies will be as different from today's tech companies as Uber is different from a taxi firm.
(This is Part 1. Here is Part 2.)
I have little doubt that AI will significantly influence nearly every aspect of our lives going forward. In particular, I imagine that we’ll soon see many AI-first companies: companies that are enabled by AI and benefit from its progress. Let’s imagine what they might look like.
This essay is longer than usual. Sorry. We’ll cover:
Why it’s important for a startup to be enabled by and benefit from tech
Cheap compute vs cheap intelligence
The shape of AI-first startups
They will naturally benefit from AI progress
They will operate in an ecosystem of agents
They will thrive in unprecedented uncertainty
Examples of AI-first ideas
Where does it leave humans?
Enabled by and benefitting from technology
First, let’s consider how tech companies of our times are different from their predecessors, founded before the internet. They are enabled by and benefit from tech progress, in particular exponentially cheap compute, storage and bandwidth. That is, they couldn’t have existed before and they get better as technology develops.
YouTube could not have existed before cheap broadband internet. It gets better as more people get cheap, fast internet, because some of them become creators, which attracts more people to the platform.
Ride-hailing apps could not have existed before smartphones with GPS and internet. The more people have smartphones and the better the technology, the better the service they can provide. The next step is autonomous driving, making taking a taxi cheaper and safer.
You can easily think of many more examples that follow the same pattern: a business is enabled by and benefits from progress in a certain field.
It’s important that these businesses have, by and large, been started from scratch instead of evolving out of incumbents.
Airbnb could not have evolved out of Hilton.
Wikipedia could not have evolved out of Britannica.
Spotify could not have evolved out of Sony Music Entertainment.
Why? Because you can’t just “add technology” on top of a business that doesn’t have technology in its DNA and magically turn a bookstore into Amazon[1].
So, what does it mean to “have technology in DNA”? It means that every key aspect of the business is enabled by a certain technology and benefits from its progress.
From cheap compute to cheap intelligence
AI, by which I mean Large Language Models (LLMs) like GPT-4 or Gemini, is a game-changer for how startups are built, because it threatens companies that rely on intelligence[2] being rare and expensive.
If it takes a lot of time, effort and money to assemble a team of smart and capable people who will spend more time, effort and money to build a business, what happens if intelligence becomes cheap?
We’ve seen a similar shift before: technology that made compute[3] and connectivity cheap enabled a radically new kind of tech company that wasn’t possible earlier. Google replaced the Yellow Pages and TikTok replaced the 9 o’clock news.
Now, AI is making intelligence cheap, which will disadvantage companies that rely on intelligence being expensive.
The shape of AI-first startups
In the next few years, we will see AI-first startups that have AI in their DNA, just like tech companies such as Google have technology in theirs. This means that they will be both enabled by AI and benefitting from AI development.
This is starkly different from most companies on the planet today, which are neither enabled by AI (duh!) nor benefitting from it[4]. Chegg, which lost 99% of its value almost overnight thanks to ChatGPT, might be an outlier, but many companies that don’t benefit from AI progress are already on a similar trajectory.
It’s important to note that nothing below depends on AGI (Artificial General Intelligence), the singularity or any other magical event that “changes everything”. I’m convinced that arguing about what exactly AGI is and when exactly we’ll see it is a pointless distraction. Instead, we’ll see steady and accelerating progress. As one writer put it recently:
The evolution of AI systems is likely to accelerate exponentially over the next 12 months, rendering any discussion of a singular ‘AGI’ moment moot. The transition from narrow AI to increasingly capable systems will be measured in months, weeks, or even days.
It will simply get more powerful gradually, with inevitable setbacks and breakthroughs. But we aren’t waiting for any Day X. The future is already here.
So, let’s look at a few ways in which AI-first companies will be different.
AI-first startups will naturally benefit from AI progress
Sure, every company now has a board deck slide that shows how they’ll take advantage of AI by making parts of their operations more efficient. That’s good. However, most are doing it because they know that if they don’t, they’ll be outcompeted. Deep down, I suspect many executives are more scared than excited about AI: it threatens the entire playbook of how to build a business in a world of expensive intelligence.
The alternative is antifragility: a system becoming stronger as the result of an external shock[5]. In our case, AI-first startups will actually become stronger as intelligence gets cheaper and faster.
Many founders are wondering what makes a startup defensible if AI models get better every month. If your business is a sophisticated wrapper that compensates for an LLM’s shortcomings, the next version of that LLM might make your efforts obsolete: it will simply do the job out of the box. That is not an AI-first startup.
An AI-first startup naturally becomes stronger every time it gets access to stronger intelligence. This is the most important property of an AI-first company.
AI-first startups will operate in an ecosystem of AI agents
Just as GPT-4, NotebookLM or MidJourney would have looked like science fiction to nearly everyone three years ago, most people today think AI agents are science fiction[6]. Soon, they will not be.
There’s no universal definition of an AI agent, but it’s safe to say that we’re a step away from an AI bot that can plan and execute complex tasks, delegating sub-tasks to other agents and coordinating with them. Sam Altman, CEO of OpenAI, recently said (h/t to Azeem’s brilliant essay on AI agents):
I expect that in 2025, we will have systems that people look at, even people who are sceptical of current progress, and say ‘Wow, I did not expect that’. Agents are the thing everyone is talking about for good reason. This idea that you can give an AI system a pretty complicated task, the kind of task you’d give to a very smart human, that takes a while to go off and do and use a bunch of tools and create something of value. That’s the kind of thing I’d expect next year. And that’s a huge deal. If that works as well as we hope it does, that can really transform things.
However this unfolds, going forward we’ll routinely work with a large number of increasingly intelligent AI agents: next to us personally, within the companies we work for, at other companies, and at large.
AI-first companies will have people who are skilled at working with AI agents: configuring and managing them, just as today we know how to build and manage teams of humans. These people and their agents will interact with other people and other agents in the world in ways that are impossible to predict.
Today, a CEO can ask their CMO to plan a complex marketing campaign, which the CMO will plan with their team and external agencies and execute in different markets. It might involve dozens of people, hundreds of meetings and thousands of emails.
Tomorrow, the same task might involve three times fewer people and a few hundred agents across several companies, all collaborating to make it a reality. Managing this process has to be different from managing people, and everyone in an AI-first company will need to learn it[7].
Likewise, we’ll interact with agents externally. We’ll learn to sell to them, and they will sell to us. Instead of humans writing phishing emails, we’ll be facing malicious agents looking for ways to steal our private keys or poison our training data.
So, an AI-first company will naturally have a high ratio of AI agents to humans, who will be skilled at running teams of agents and interacting with similar teams of agents externally.
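To make the idea of running a team of agents concrete, here is a minimal, entirely hypothetical sketch in Python of an orchestrator that delegates sub-tasks to specialised agents and collects their results. Every name here (Agent, Orchestrator, the sample capabilities and tasks) is invented for illustration and does not refer to any real framework; a real agent would call an LLM where this sketch returns a canned string.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A hypothetical AI agent that advertises one named capability."""
    name: str
    capability: str

    def run(self, task: str) -> str:
        # A real agent would call an LLM or a tool here; we simulate the result.
        return f"{self.name} completed: {task}"

@dataclass
class Orchestrator:
    """Routes each sub-task to whichever agent advertises the right capability."""
    agents: list = field(default_factory=list)

    def delegate(self, subtasks: dict) -> dict:
        results = {}
        for capability, task in subtasks.items():
            # Pick the first agent that claims this capability.
            agent = next(a for a in self.agents if a.capability == capability)
            results[capability] = agent.run(task)
        return results

# A tiny "marketing team" of agents, in the spirit of the CMO example above.
team = Orchestrator(agents=[
    Agent("copy-bot", "copywriting"),
    Agent("media-bot", "media-buying"),
])
campaign = team.delegate({
    "copywriting": "draft launch copy",
    "media-buying": "book ad slots in three markets",
})
print(campaign["copywriting"])  # prints "copy-bot completed: draft launch copy"
```

The point of the sketch is the shape of the job, not the code: someone has to decide which agent gets which sub-task and how results are combined, which is exactly the management skill described above.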
AI-first companies will assume constant change
Of course, every startup navigates uncertainty all the time. However, given accelerating technical progress, there are fewer and fewer things entrepreneurs can be confident about long-term.
AI-first startups will take the game to an entirely new level that’s not possible today. Can an app design and build a unique UI for every single user on the fly? Can a website be AI-generated on the fly for every visitor? Can a team of agents be configured to look for opportunities to build a business in the real world, the way trading bots constantly look for arbitrage opportunities in the markets?
This might challenge the very foundations of entrepreneurship. What if, instead of coming up with an idea, an entrepreneur builds a system that will do whatever is possible today to make money in a legal and ethical way?
What if, instead of doing customer development before building one app, it’s easier to have AI build 500 different apps and see which does best?
What if your customers are AI agents: how do you market, sell and build for them?
At this level of uncertainty, existing structures that we have as a template for building and running companies might require a serious update. Yearly planning, quarterly OKRs and weekly management meetings — all of this was built for the era of human intelligence, not AI intelligence. I don’t know what my agents will use, but probably not Gantt charts.
These are provocations, not predictions, but my point is simple. AI-first companies will thrive at the level of uncertainty that’s unimaginable to today’s companies thanks to cheap and powerful intelligence.
What AI-first startups might be built?
So, let’s imagine what some AI-first companies might look like.
AI investment fund. Instead of operating on a 10-year horizon, it operates on a monthly or quarterly horizon. AI agents pitch their requirements to AI investors, explaining how and why they will make money. AI agents invest in other AI agents that operate on much faster timescales, delivering faster returns (or failing fast).
AI dark web hunter. A team of AI agents goes onto the dark web to trick criminals into revealing their identities, automatically collecting incriminating evidence that can later be used in court.
AI agents locating rare knowledge. If there’s an old or obscure piece of equipment or code, it might not have a manual anymore. But an old engineer living in a retirement community might remember how to use it. An AI agent might make hundreds of phone calls to trace the path to that rare knowledge.
AI interface to real people. For every question or action where an AI agent needs a human, there needs to be an interface to make it happen. Can someone take a photo of the Empire State Building right now? Can someone describe their experience of visiting a restaurant in real time as they’re eating? How does the same perfume smell in different climates? Like Mechanical Turk, but for the entire world.
AI requests broadcast. If an AI agent doesn’t know how to do something, some other AI agent probably does. How do you find them? Could there be a decentralised protocol enabling agents to broadcast their needs and capabilities to other agents who might then “make an intro” if they come across a suitable match?
AI emergency helper. If there’s a climate- or war-related emergency where you need to leave your home, maybe forever, your AI agent should monitor all data sources in real time, telling you where to go, what to take and what to pay attention to. Hyper-personalised real-time assistance in emergencies.
AI real-time insurance. As you’re going about your day, your AI agent constantly and temporarily insures you. If I’m flying for three days, I get travel insurance in the background for three days. If I’m driving on a road with higher accident rates, I get extra insurance for that half hour. All negotiated in real time with other AI insurance agents.
AI maintainer of threatened languages. An AI agent tasked with preserving a language finds every available source to train on, traces people who still speak the language and talks to them on the phone in it, learning it in the process. Then it teaches the language to others, ensuring it doesn’t die.
AI real estate agent. An AI agent that can fly drones to film a neighbourhood, call people selling their homes and match them to buyers, whose preferences it will know better than anyone. And if the AI agent needs to hire a local real estate agent to close the deal in person, there’s another AI agent that can facilitate that.
AI that finds overemployment. An AI can monitor whether any of your human employees is lying to you and actually holds two full-time jobs, by analysing minor differences in their code, the timing of their messages or their calendar availability.
AI doing performance reviews on other AIs. How do you evaluate other AIs? How do you decide whom to trust in an AI agent world? An AI can collect information about the past performance of other AI agents and make informed decisions about how much to trust each one.
AI making board decks for you. If it has seen previous board decks, has been present in all your management meetings and has access to the company’s internal data, it might take only a short voice conversation with the CEO and a few others to produce a set of documents ready for a board meeting.
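Several of the ideas above, the requests broadcast in particular, reduce to agents advertising capabilities and matching them against needs. Here is a minimal, entirely hypothetical sketch in Python of such a matchmaking hub; the class, method names and capability strings are invented for illustration, and a real decentralised protocol would of course be far more involved.

```python
from collections import defaultdict

class BroadcastHub:
    """Hypothetical matchmaking hub: agents announce what they can do,
    and the hub "makes an intro" when a need matches a capability."""

    def __init__(self):
        self.providers = defaultdict(list)  # capability -> list of agent names

    def announce(self, agent: str, capability: str) -> None:
        # An agent broadcasts a capability it offers to others.
        self.providers[capability].append(agent)

    def request(self, agent: str, need: str) -> list:
        # Return every agent (other than the requester) that offers the need.
        return [p for p in self.providers[need] if p != agent]

hub = BroadcastHub()
hub.announce("translator-bot", "translate:latin")
hub.announce("archive-bot", "search:archives")

matches = hub.request("research-bot", "translate:latin")
print(matches)  # prints ['translator-bot']
```

A decentralised version would replace the central registry with gossip between agents, but the core data structure, a mapping from capabilities to providers, stays the same.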
Now, I’m not saying that any of these is a good idea (it’s all off the top of my head, really), but each of them is enabled by AI and benefits from cheaper and faster intelligence. At some point, we might get a single god-like AI that can do all of that and more out of the box, but I suspect that before we get there, we’ll have plenty of time to play with building AI-first businesses.
Where does it leave humans?
That’s a question for another long post, but I’m sure humans will stay relevant and important, although not necessarily economically. AI excels at some kinds of intelligence, but not others, and there are plenty of things it can’t do (I’m sure I won’t live to see the day when an AI will be doing my yoga asana adjustments instead of a human teacher).
However, I do think that people who learn how to build and work at AI-first companies will advance very quickly, while others will increasingly struggle to find good employment. As Yuval Noah Harari presciently put it in 2017,
In the 21st century we might witness the creation of a massive new unworking class: people devoid of any economic, political or even artistic value, who contribute nothing to the prosperity, power and glory of society. This “useless class” will not merely be unemployed — it will be unemployable.
So, I expect some people to do extremely well and others to struggle to adapt. I expect significantly worse inequality.
Think about it this way: imagine going back to 2010 and giving two people $1,000 to invest. One buys and holds Bitcoin; the other picks equities that do extremely well. However well the second person does, they’ll be way behind the person who put $1,000 into BTC in 2010. I think something similar will happen over the next decade between people who learn how to build and thrive in AI-first companies and everyone else.
We don’t know what kinds of skills will be more relevant in the new world. What makes people successful today — education, connections, IQ, EQ, LQ, etc — may or may not be relevant in AI-first companies. We’ll find out soon enough.
In conclusion
What sparked this essay wasn’t just knowledge; it was understanding. Yes, I’ve been following AI progress since the ChatGPT moment. Yes, I’ve seen impressive demos. Yes, I’ve used AI almost daily for two years. Yes, I even had a module on AI as part of my CS degree a long time ago, and that was fun.
But there’s a difference between knowing and the penny dropping. At some point we have moments when we realise our mortality, when we realise the scale of climate change, and when we realise the trajectory of AI. We shift from knowing to understanding what it actually means.
I’m not suggesting that everything is going to change overnight, though. Things take time to implement, people take time to learn, infrastructure takes time to build and regulations take time to change. So I’m thinking about, maybe, the next decade or so. But significant AI-first companies of 2030 and 2035 will be founded around 2025.
I’d love to hear from you if you’re thinking about the same questions. Hit Reply or leave a comment and let’s discuss.
(This is Part 1. Here is Part 2.)
[1] Yes, I know, there are companies like Nokia that started as a paper mill before making phones much later. But I’m inviting you to look at the overall pattern, not rare exceptions.
[2] Yes, there are many kinds of intelligence, but here I’m talking about the subset of intelligence that LLMs are particularly good at.
[3] Computational capacity or power; the ability to perform calculations.
[4] However, they might have other formidable moats, e.g. regulatory, infrastructure, key asset ownership etc. So I’m absolutely not suggesting that every company will lose 99% of its value like Chegg. However, companies that rely on specific types of intelligence being expensive, and on assets like software as moats, might find the next decade more challenging.
[5] For example, our muscles and bones are antifragile: they grow stronger when we use them. This is different from simply being strong. E.g. a stone might be very hard to break, but it doesn’t get stronger every time I try to break it. In fact, it becomes more likely to break.
[6] I hope I won’t insult anyone by saying that most people don’t think about it much at all, just as most don’t think about climate change. Maybe Don’t Look Up was actually about AI and not climate?
[7] Today, we take management for granted, but it’s anything but. It’s not a natural skill to manage other people, let alone manage managers. We learned (kind of) to do it over the last century or so, really.