Your Six-Figure Salary Depends on Not Understanding This
Before AI takes our crown as the most intelligent creatures, which may happen in the near future, we must still make one intelligent decision: clearly realise where things are going and what that means for us.
Last week I spoke at Full Stack Founder AI Bootcamp in London, which was an amazing two-day event bringing together dozens of founders building their businesses in an AI-first way. I hope the organisers will run it again because everyone loved it so much! This is a slightly polished version of the talk that I gave.
I believe we're going through a profound shift in how startups are built. Every single one of us learned how to build companies in the era of slow and expensive intelligence.
By this, I mean us, humans. We are intelligent, but we're slow to think, slow to hire, expensive, and can only work for a fraction of a day. Slow and expensive intelligence shaped everything we know about building companies: raising money, managing people, communication, ideation, finding product-market fit, scaling. Every aspect of our skillset has been shaped by this constraint.
For the rest of our lives, we'll be living and building businesses in the era of fast, cheap, and powerful intelligence. Intelligence that is smarter than us on many dimensions and certainly faster and cheaper. Therefore, the entire playbook of how to be an entrepreneur in this new world will be reinvented. We will learn to build AI-first startups, which I expect to be as different from today's tech companies like Tesla or Airbnb as those companies are from the giants of the previous era like BMW or Hilton.
The Tech-First Paradigm: Lessons from Netflix vs Blockbuster
Before we dive into AI, let's consider the classic case study of Netflix and Blockbuster. At the end of the nineties, both companies were renting DVDs. You want to watch a movie, they send it to you. Then you return your DVDs and get new ones back. Very exciting.
Netflix, however, realized that the future wasn't going to look like mailing DVDs. They correctly noticed that broadband speed and penetration were improving rapidly. They realized that if it kept improving, a decade or two later, people would be watching movies on handheld devices on the go instead of renting DVDs. So they started building a streaming business. Tech progress became a tailwind.
For Blockbuster, however, tech progress was a headwind. The better the broadband penetration, the fewer people wanted to watch DVDs. By the time Blockbuster realized their mistake and tried to launch streaming, it was too late. They went bankrupt. They were not a tech-first, but a tech-supported business: a business that used technology to optimize what they were doing without reinventing it.
By contrast, Netflix is a tech-first business. They used technology to reinvent the very business they were in. Their streaming business was enabled by and benefited from fast broadband. Their business got effortlessly better as the tech improved.
This is what it means to be tech-first. Not to use technology to optimize what you're doing, but to have it as a tailwind. This is why Airbnb is a tech-first company, but Hilton is not. Why Wikipedia is a tech-first company, but Encyclopaedia Britannica is not.
What Makes a Company AI-First
Now, apply the same thinking to AI-first startups. The most important thing about AI-first startups is that they are enabled by and benefit from AI progress. AI progress is a tailwind and an enabler. The better the AI becomes, the better the product becomes. And it couldn't have existed without AI.
There are three other properties I consider important:
They use a large number of AI agents. We're living in the era of AI agents already. We can ask Deep Research to go do research for us. We can use AI agents to get a summary of updates from our competitors and email it to us every morning. We can build AI agents that follow complex workflows with multiple steps and branches to accomplish complex tasks (a minimal sketch of what I mean follows this list). ChatGPT just launched Agent, which combines a chatbot, Deep Research, and Operator in one system.
What this means is that AI-first startups will have a very high ratio of agents to humans. Every human will learn how to build, configure, and manage AI agents and swarms of agents working together. There will be far, far more agents than humans. If this sounds far-fetched, consider that not too long ago it wasn't obvious why everyone would need a personal computer at home. Today, we're surrounded by tech everywhere. Just like people learned how to hire and manage other people, we'll learn how to hire and manage AI agents.1
They learn as they deliver value. Of course, every organization learns, but I'm certain AI will take this to a completely new level. Humans are very slow learners. We don't like learning, truth be told. We have to fight against our desire for comfort and stability. AI, on the other hand, can be a voracious learner. Everything that happens in the business, every customer interaction, every email conversation, every sale is an opportunity to learn and improve in an automated way. And this can compound quickly.
At this point, this is more of a vision than a reality. Today's AI systems don't learn from every conversation, beyond perhaps saving some static memories about you; their weights aren't updated on every run the way biological brains are. However, this will surely change. I can't speculate on the exact technical mechanism, but I'm confident it will change.
They leverage human wisdom. I believe that human wisdom—not labor, cognitive or physical—will remain an important input to build successful AI-first startups. By wisdom, I mean the ability to know what matters and why. I'll come back to this point.
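To make the agent idea above a bit more concrete, here is a minimal sketch of the competitor-digest agent mentioned earlier: a multi-step workflow with branches. Everything in it is a placeholder; `llm`, `fetch_updates`, and `send_email` stand in for whatever model API, data source, and email service you actually use.

```python
# A toy agent: fetch competitor updates, decide whether anything changed,
# summarise what did, and email a digest. The three helpers below are
# placeholders -- swap them for your real model API, data source, and mailer.

def llm(prompt: str) -> str:
    """Placeholder for a call to your language model of choice."""
    return "yes - shipped a new pricing page (placeholder answer)"

def fetch_updates(competitor: str) -> str:
    """Placeholder for scraping a changelog, blog, or news feed."""
    return f"(raw updates scraped for {competitor})"

def send_email(to: str, subject: str, body: str) -> None:
    """Placeholder for your email service."""
    print(f"To: {to}\nSubject: {subject}\n\n{body}")

def competitor_digest_agent(competitors: list[str], recipient: str) -> None:
    sections = []
    for name in competitors:
        raw = fetch_updates(name)
        # Branch: only summarise competitors where the model thinks something changed.
        changed = llm(f"Did anything meaningful change here? Answer yes or no.\n\n{raw}")
        if changed.strip().lower().startswith("yes"):
            sections.append(llm(f"Summarise these updates from {name} in three bullets:\n\n{raw}"))
    if sections:  # another branch: skip the email entirely on quiet days
        send_email(recipient, "Morning competitor digest", "\n\n".join(sections))

competitor_digest_agent(["Acme Corp", "Globex"], "founder@example.com")
```

In practice you'd schedule this to run every morning and wire it to real tools, but the shape is the point: steps, branches, and a human configuring and managing the agent rather than doing the work.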
Why Smart People Miss the AI Revolution
All of this sounds completely obvious to me, and yet I regularly meet smart people who don't get it. I don't mean people who have principled and well-articulated objections—that's most welcome—but people who look at AI and conclude that "it's just another chatbot," or a fad, or yet another technology like cloud or crypto.
They aren't stupid, and yet they're making the mistake of dismissing AI. I came to believe that this is because they try to fit AI into their worldview instead of adapting their worldview based on their experience of using AI.
To see what I mean, consider the mistake I made about crypto. For years, I looked at crypto and thought: it's money, but it's too volatile; it's a ledger, but we already have ledgers; it's smart contracts, but they don't quite work; it's a store of value, but it doesn't have any intrinsic value. I tried to put it into my existing worldview, and of course it didn't fit, because it can't!
What people smarter than I did was to actually use crypto day to day, which helped them realize that crypto is neither money, nor a store of value, nor a ledger, nor a smart contract, but all of this together in a unique way.
Even software developers, who understand and use AI better than most, are often in denial about its implications. The famous quote from Upton Sinclair is apt: "It's difficult to get a man to understand something when his salary depends on his not understanding it." Software development is both accelerated and threatened by AI, but many software developers are trapped in thinking along the lines of "I'm a smart and experienced software engineer who knows that delivering great software is about far more than just writing code, which is why I deserve a six-figure salary."
The problem is that the salary of such a software developer literally depends on not understanding the predicament they're in. If they actually understood it, they would be learning how to build software in an AI-first way like their hair was on fire! But there's not enough market demand for that yet, so their six-figure salary depends on sticking to their current job at some bank that doesn't allow AI anywhere near the codebase.
Zero-Based Thinking: Building from Scratch
For entrepreneurs and developers alike, the right question to ask is not how to do what you're already doing more efficiently. The first question must start with zero-based thinking: what would my business look like today if it were built from scratch with AI at its core? And only then think about efficiency.
If Blockbuster had asked this question 25 years ago, we could all be watching Blockbuster on our iPads instead of Netflix. Instead, they used technology to improve the efficiency of ordering DVDs. We must apply zero-based thinking to build products and companies that will have a chance to survive and thrive in the next decade.
It doesn't mean starting a new business. Netflix didn't start as a streaming company, but it became a tech-first one. Replit didn't start as an AI-first company a decade ago, but today it is one. It means approaching what you already do in an AI-first way.
I often think about AI as a dark thin line on the horizon. I want you to visualize what a tsunami looks like in real life. At first, it looks like nothing. There's a thin dark line far away, people are filming it and discussing it. Twenty seconds in, some people are starting to get worried. Thirty seconds in, the impact comes. That's what we're living through: exponential change. It looks like nothing, then still nothing, and then it's the impact.
So be careful when people tell you that it's just a thin dark line on the horizon, nothing big. When it becomes big, it'll be too late to prepare.
Finding AI Disruption Opportunities
How do you actually adapt your worldview? You don't just decide to do it. You must use AI tools on a daily basis to understand what they're good at, what they're bad at, what tools to use when.
To give you an example of a small mindset shift, let me tell you about how ChatGPT helped me with recruitment. Some time ago, I needed a freelancer with a particular skillset to start the following day. I wasted hours on Sales Navigator and Upwork before realizing I could ask ChatGPT Deep Research to find me the right person, who then started the next day. I wouldn't even have thought about using ChatGPT for recruitment if I hadn't been using it extensively for many other tasks.
Let me give you another example of how to use Deep Research in non-obvious ways. I used it to identify AI disruption opportunities by formulating a disruption thesis, defining the shape of the solution, and asking it to find specific examples.
My thesis was that small companies in the UK with strong product-market fit and human-intensive value creation are particularly vulnerable to AI. Why come up with new ideas if you can take an existing PMF and rebuild the product in an AI-first way, undercutting the incumbents on price? Small companies are easier targets than big companies: they're less innovative and have simpler products, but may have good revenues that would be attractive for a bootstrapped small team.
Deep Research found me many specific companies, including one that turns over about £10m a year doing reference checks for big companies. They verify CVs by calling every previous school or employer and asking them to confirm the details. Can't AI do that? Voice generation is already scary good. Of their £10m in revenue, probably £8m goes to people making phone calls. What if the price could be slashed by 70% and the cost by 90%, delivering a couple of million quid in free cash flow a year to a pair of bootstrapped founders?
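A quick sanity check of that arithmetic, assuming (generously) that the AI-first entrant wins the same volume at 30% of the incumbent's price. The figures are the rough estimates from the paragraph above, not anyone's real accounts:

```python
# Rough numbers from the reference-checking example above.
incumbent_revenue = 10_000_000      # ~£10m a year in revenue
incumbent_people_cost = 8_000_000   # guess: ~£8m of that goes to people making calls

price_cut = 0.70   # undercut the incumbent's price by 70%
cost_cut = 0.90    # AI voice agents cut the delivery cost by 90%

entrant_revenue = incumbent_revenue * (1 - price_cut)   # £3.0m
entrant_cost = incumbent_people_cost * (1 - cost_cut)   # £0.8m
free_cash_flow = entrant_revenue - entrant_cost         # £2.2m

print(f"~£{free_cash_flow / 1e6:.1f}m a year for a pair of bootstrapped founders")
```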
My point is that to understand what AI is good at, we must use it a lot. We must use it daily to form a mental model of what's possible.
The Rise of Donkeycorns
This brings me to donkeycorns. Unicorns are so 2010 that I'm not interested anymore. Instead, I'm fascinated by the concept of donkeycorns, which grind like a donkey but party like a unicorn. They are small businesses with modest revenues but very high margins, tiny teams, and no external investment. Imagine two people running a £2M ARR business at a 90% margin.
Historically, building donkeycorns was very hard, so they are rare. But AI changes the equation. Before, we raised money to hire engineers to get to product-market fit and cash flows, hoping to sell the business for enough money to make the diluted stake worth it.
But if AI-first building tools allow founders to go from idea to prototype in days, why raise money? If the PMF can be proven quickly, we can get to revenue fast. And if our costs are low because we don't need expensive engineers, that's the path to bootstrapping the entire thing.
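To see why that changes the calculus, here's a toy comparison of the two playbooks. The donkeycorn numbers are the ones from above (£2M ARR, 90% margin, two founders); the VC-path figures are invented purely for illustration, not a claim about typical outcomes:

```python
# Donkeycorn: bootstrapped, no dilution, cash flows every year.
arr, margin, founders = 2_000_000, 0.90, 2
donkeycorn_per_founder_per_year = arr * margin / founders   # £900k each, per year

# Hypothetical VC path: assume several rounds of dilution leave each founder
# with 15% and the company eventually exits for £30m (made-up numbers).
exit_value, stake_after_dilution = 30_000_000, 0.15
vc_per_founder_at_exit = exit_value * stake_after_dilution  # £4.5m, once, if it happens

print(f"Donkeycorn: £{donkeycorn_per_founder_per_year:,.0f} per founder, every year")
print(f"VC path:    £{vc_per_founder_at_exit:,.0f} per founder, once, at exit")
```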
There's nothing wrong with building unicorns, but this is not the only playbook anymore. Consider what Daniel built at DraftPilot, helping lawyers redline contracts. They’ve got an incredibly lean team and are already profitable with an impressive ARR without raising any VC because they built the product themselves in little time!
And the crazy thing is that Daniel, or any of us, can build several AI-first products like this, learning fast and iterating quickly. Venture capital will keep its place, but it will focus on truly venture-scale ideas that genuinely require venture funding rather than being the default playbook for everyone.
Philosophy for the AI Era: Beyond Intelligence
Where is our edge as human beings in the era of cheap and fast intelligence? Let's digress into philosophy for a bit.
Your happiness in the AI era will be defined by how you experience and think about yourself, that is, by your philosophical perspective. Until recent advances in AI, most people could afford not to think about this. Not anymore.
What do you think defines you as a human? Today, most people take it for granted that our cognitive and communicative abilities define us as humans. We tend to see our very civilization as evidence of the value and power of intelligence and language. We tend to consider ourselves special or valuable because we have these skills. Furthermore, we derive a sense of safety from them: "I'm smart and articulate, so I'll figure it out."
Now, this is a philosophical position, whether you think about it in these terms or not. You assume that one of your aspects or skills is essential to who you are as a human being. What happens if we keep thinking of ourselves as defined by our intelligence while living in a world where intelligent systems are plentiful and much smarter than us?
Let me illustrate by analogy. Imagine living in the pre-industrial world, where nearly everyone does physical labor because it's needed for everything, from fetching water to growing food to building a house. Imagine you're living well because you have a strong and healthy body. Maybe you're a hunter. You make good money. You're well-known and respected for your craft. You probably think of your ability to track game, read the wilderness, and make precise shots as essential to your identity.
What happens to your sense of self if suddenly someone invents machines that can do the hunting? Imagine seeing thermal imaging scopes, GPS tracking devices, automated traps, and drone scouts being invented all at once.
If you've invested your identity in your ability to hunt with skill and instinct, you'll have two options. You can either get very depressed because your sense of the world has been uprooted, or you can adapt to the new reality by, say, realizing that you can use your mind instead of your wilderness instincts.
So, instead of making a living by tracking and shooting game yourself, you now run a business that operates or sells thermal scopes and automated hunting systems. You stop thinking about yourself and your worth in terms of how keen your senses are and instead start thinking in terms of how smart you are. After all, GPS devices and thermal cameras can't think for themselves, right?
This adaptation seems obvious to us because we're so used to thinking about ourselves as intelligent and articulate. So, let's apply the same logic to living in a world where we aren't the most intelligent or articulate anymore.
Like it or not, competing with AI in intelligence is like competing with a forklift in weightlifting, only worse, because forklifts don't get stronger all the time, as AI does.
So what happens to the sense of identity of a person who thinks about themselves primarily as smart and intelligent, like most of us do, when AI inevitably starts doing cognitive work better, faster, and cheaper than us? It will most likely feel scary and disorienting, as if the rug were pulled from under our feet, a big shock to our entire sense of the world.
However, it doesn't have to be this way. Our intelligence does not define us any more than our physical strength does. Our ability to think is not central to what makes us human. It is simply one of the many things we are capable of. It is an understandable but unfortunate fact that so many of us see intelligence as central to who we are without noticing it.
So what makes us human, then?
If you ask this question to a Zen master, they might slap you across the face as the answer. The point wouldn't be to insult you but to make you feel the answer rather than think about it. The answer lies in the difference between slapping you across the face and slapping your laptop across the screen. The computer wouldn't feel anything.
But there's more depth to it. There's a difference between being slapped by someone else and accidentally bumping your forehead into a door. When someone slaps you across the face, even if they are a Zen master, not only do you feel something, but you feel it in the context of a relationship with another living being.
I argue that these two aspects of our existence are essential to us as human beings: the ability to experience life and feel connected to or affected by other people. Technology will make it easier and cheaper to process what can be digitized and processed digitally. But what's truly essential to us—feeling and connecting to others—cannot be digitized.
One day, we may look back and relate to our obsession with intelligence the same way we now relate to the old obsession with physical prowess. There's nothing wrong with running marathons, but no one does it to get from A to B faster. We have cars for that.
Yes, we'll probably lose our crown as the most intelligent creatures. Still, I bet that it will give us more opportunities to embrace being human, just like giving up most of the physical labor allowed us, as a civilization, to exercise our intelligence and wisdom.
Wisdom, defined here as knowing what matters and why, is born not from intellectual knowledge but from lived experience, from doing things together with other human beings and feeling the joys and sorrows inherent in the process. That is why wisdom will remain within the human domain. Computers can be infinitely smart, but I've yet to see any evidence that any of them can be wise.
Agency Trumps Intelligence
But I think another critical human factor is agency: our ability to set goals and to be proactive, self-reliant, and adaptive will be our edge over AI.
In the era of slow and expensive intelligence, we used to outsource our agency to others by getting jobs. Someone else could deal with uncertainty, and we could have a comfortable job at a big company using our smart brain to solve complex problems. I think this model is now very fragile.
We used to think that entrepreneurship is risky and jobs are safe. If my startup fails, I'll go get a job, right? Entrepreneurship used to be the exception and jobs the norm. Soon, it will be the reverse. That doesn't mean everyone will literally register a company with Companies House, but people in jobs will have to act like entrepreneurs, because the company won't be able to tell them what to do beyond offering high-level context and basic infrastructure. They will still have jobs on paper, but they will be learning, adapting, and constantly evolving like entrepreneurs.
Most people think about the impact of the Industrial Revolution when they try to understand the impact of AI on jobs. But for me, a more recent and relevant example is the fall of the Soviet Union, which I remember as a child.
When the planned economy collapsed, many jobs disappeared overnight. Skills that used to guarantee a solid salary became useless in the market economy. The transition from a planned economy to market forces was brutal for many people. Yet it was those with entrepreneurial skills, with agency, with the capacity to adapt and learn, who stood a better chance of navigating that transition. It took Russia about ten years to adapt to the new reality.
After decades of state-guaranteed employment, people suddenly discovered their skills were obsolete. Jobs once seen as safe, even essential, evaporated as the planned economy imploded. Entrepreneurship, virtually nonexistent under Soviet rule, became a lifeline. People pivoted careers rapidly: from chemistry teachers to market sellers, taxi drivers, restaurant owners, even banana importers from Ecuador. That flexibility was a matter of survival.
I think something similar will happen in the coming years thanks to AI. We will still need our intelligence, but our wisdom and agency will matter even more than before. Better to be fast and imperfect than perfect and late.
The Path Forward
So here is what I would like you to remember. Whatever you build, make sure it's AI-first: it benefits from and is enabled by AI progress. Use AI daily to adapt your worldview and see what's possible. Forget VCs unless you truly need them. Think about donkeycorns instead! They're less risky and more exciting. And finally, don't forget that wisdom and agency are some of the things that make us human and give us an edge against AI.
The future belongs not to those who can think the fastest or process the most information, but to those who know what matters, can adapt quickly, and maintain their humanity in an increasingly automated world. The AI revolution is already here.
PS: If you’re in for an overview of where the world is likely going, check out this video:
As I’m editing this essay, I’m also waiting for my AI coding agent to complete the task. Most of my time in the last few weeks has been spent learning how to manage AI coding agents well. It’s surprisingly similar to what managing people looks like minus all the psychology: clear context setting, clear goals, clear thinking, clear vision, clear everything.