What Smart People Don't Get About AI
They underestimate how different it is from anything they've seen before and place it in the wrong mental category, which leads to wrong conclusions.
I spend a lot of time talking to smart people about how AI is affecting their businesses and how I can help them build an AI-first business. One thing has puzzled me: despite all of them being smart and successful, some of them clearly get what AI transformation means and others clearly don’t. But why?
I’ve considered a number of explanations. Dunning-Kruger effect? No. Lack of education or experience? No. Some mistake in reasoning they’re making? Again, not exactly.
The key divide between those who grasp AI’s impact and those who don’t is this: the former adapt their worldview, while the latter try to fit AI into their existing one.
This shows up in both software developers and businesspeople.
A turning point for software devs
I think most of the software development industry is in denial about what’s going on with AI. Sure, nearly everyone uses Copilot or ChatGPT to help with code instead of consulting StackOverflow, but they try to fit AI tools into their existing worldview like this:
I am a smart and experienced software developer who has a track record of solving complex problems. Sure, AI tools can help me, but they can’t replace me because building software is about much more than writing code.
When I hear something like this, I think of the famous quote:
“It is difficult to get a man to understand something, when his salary depends on his not understanding it.”
Upton Sinclair
Devs thinking like this are trying to view AI as yet another tool, fitting it into their existing worldview instead of starting with Zero-Based Thinking (ZBT). ZBT means starting with a question, “If I were to start over again today, would I do anything different?”
A ZBT approach for developers means asking, “What would the software development process look like if it were reinvented from scratch today?” The answer would look very different to how we’ve been developing software for the last few decades.
This is true even despite the many limitations of current AI products. Yes, they hallucinate; yes, their capabilities are often not obvious; yes, the underlying models behind an API change without warning. Yet all of this is improving quickly.
Why did I say that their salary depends on their not understanding it? Because honestly answering that question would mean facing the fact that the 10 years of professional experience justifying their six-figure salary will soon be irrelevant unless they reinvent their worldview.
Their salary depends on not understanding it because understanding it would mean stopping what they’re doing now and, therefore, risking their income.
Innovator’s Dilemma in software
About thirty years ago, Harvard professor Clayton Christensen wrote The Innovator’s Dilemma, a book that The Economist later called “one of the six most important books about business ever written.” Why? It brilliantly articulated how established companies get disrupted by startups. In short:
A startup builds a solution that is inferior to and cheaper than an existing product. Or maybe it solves a slightly different problem, or serves a different market.
Established companies don’t (can’t) react because they don’t want to lose revenue selling an inferior product for less.
This gives the startup the time to improve its product in an adjacent market that the incumbent doesn’t want to (or can’t) serve.
Soon, the startup’s product is good enough to eat the incumbent’s lunch: now the incumbent is overpriced, underperforming and impotent.
Here’s how it is currently playing out for software devs:
AI-first workflows are inferior to the traditional software development process in many contexts.
Software developers don’t want to start using them because it might threaten their income.
Over time, AI-first tools and workflows keep improving exponentially, as do the devs using them. The very definition of software development changes.
Soon enough, devs thinking they “deserve” a six-figure salary because they have 10 years of experience will struggle to get a £50k job if they aren’t using an AI-first approach.
This is not to say that developers will disappear or will be replaced by AI tomorrow. However, the skill set required to be productive is evolving rapidly to focus on leveraging AI-first tools to build software instead of using them as helpers.
Those who understand this dynamic are madly in love with AI-first tools like Cursor and Replit. Those who don’t are turkeys waiting to celebrate Christmas. Sorry.
Up until now, software developers benefitted from tech progress. Better tech meant more fun, more demand for devs, higher salaries and free food and ping-pong at fancy offices. Isn’t it fun to build tech to disrupt other industries?
This is the first time in history, I believe, that devs are the ones being disrupted. That’s why it’s so counter-intuitive: it has never happened before.
What about businesspeople?
Many businesses are about to experience the Innovator’s Dilemma first-hand and not in a good way, but that’s a topic for another post. What can make their owners, smart and experienced people, blind to the impact of AI?
Again, it’s that difference between trying to fit AI into their worldview and adapting the worldview to AI.
The thing about AI is that it’s so different from everything we already know that we can’t accurately place it into any of the mental categories we built in the pre-AI era.
Here’s the trap. Imagine a very successful and very busy person. Maybe they try ChatGPT a couple of times and reasonably conclude that it’s not going to help them with most of their tasks: high-stakes negotiations, complex strategy — whatever they’re doing. So they assume that it’s a “tech thing” that should be delegated to the tech department, or that it’s “just a chatbot”, or whatever. They fit AI into their existing worldview.1
What they’re missing is a chance to reinvent their worldview through using modern AI tools. I’d go as far as to say that a person who isn’t using AI daily for a meaningful proportion of their tasks is not qualified to make good judgements related to AI regardless of their other experience. Just like a car driver is not qualified to fly a plane regardless of how many years of experience driving cars they have.
How do I tell the difference? In the questions they ask. People who don’t get AI ask questions like, How can AI improve the efficiency of my business? They assume that their business will largely have the same shape. They don’t see this:

People who get AI ask questions like, What would my business look like if it was built from scratch in an AI-first way? They know their business might look very different very soon.
And then they also ask questions about efficiency. Because we can’t reinvent everything from scratch in an AI-first way in existing organisations. We can’t just blow up everything and rebuild. A transition has to be gradual. However, it must be guided by a very different vision based on an accurate understanding of what’s happening.
So, what to do?
Reflect deeply on this quote:
“In the beginner's mind there are many possibilities, but in the expert's there are few.”
Shunryu Suzuki, “Zen Mind, Beginner's Mind”
If you aren’t using AI a lot in your day-to-day work, you can safely assume that your ideas and assumptions about what AI is, what it’s capable of and how it works are incorrect. If you’re listening to people who have opinions about AI but aren’t using it a lot, they are almost certainly wrong.
Regardless of how smart or experienced you are, you must use AI regularly to build a new mental model of what it is. It doesn’t matter whether AI is helpful or not. The point is to learn when it’s helpful and when it’s not. Then, you’ll be in a far better position to make good decisions about what it means for you, your career and your business.
If you’d like to discuss with me what AI means for your business, start here.
Something similar happened with crypto, and I made this mistake myself. Crypto can’t be placed into an existing mental category, so lots of people reasoned, “Crypto is like money, but it’s too volatile… It’s like a ledger, but we already have ledgers… It’s like smart contracts, but they don’t quite work in the real world… It’s a store of value, but it doesn’t have inherent value like gold…” They forced crypto into the wrong mental category and reached the wrong conclusion: “it’s an overhyped gimmick”.
Those who instead spent time working with crypto directly realised that it’s neither money, nor a store of value, nor a ledger, nor code, nor a security, but all of those together in a unique way. They built a new mental model for crypto and bought bitcoin when it was £10/BTC. They’re sending their regards from their 18-room mansion in Dubai.