It is meaningfully useful to me personally and to many people I know. I'm absolutely amazed I'm paying just £20/month for the value I'm getting from ChatGPT or Claude. The reason AI products like Cursor have $200M+ in ARR is that they're useful to countless people already today. Clay.ai is super-useful today to countless salespeople. DeepL uses AI for (excellent) translation. Everyone I know swears by Granola and WisprFlow: not as novelties but as day-to-day tools. And that's not to mention that AI tech is quietly inside every popular product, making it better: Figma, Instagram, Canva, etc. I could go on and on: it's not the future, it's already here.
The claim "Nobody is currently selling an AI product right now that is meaningfully useful beyond novelty" is just factually incorrect based on what I'm seeing in the industry.
OK, so it's useful as a productivity tool in some industries and seems to be a better "Instagram algorithm" than the algorithm that already existed. Neither of those things is world-changing.
None of what you just said in any way implies a society-redefining end of usefulness for human workers or any other wild disturbance. Being a useful productivity tool puts AI in the same category as Microsoft Word, which obviously didn't redefine society as a whole even if typists lost their careers after it came out.
I see what you mean now. If things stayed as they are, you'd be right, but the technology is on an exponential growth trajectory. E.g. if my body temperature increased by 0.1°C every day, I wouldn't feel any problem for a few days, but soon I'd be dead. A crude analogy, but still.
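To make that concrete, here's a minimal sketch of the arithmetic (the numbers are illustrative assumptions for the analogy, not medical facts):

```python
# Illustrative numbers only: a steady daily change feels harmless
# at first, then crosses a critical threshold.
NORMAL = 37.0   # assumed starting body temperature, in °C
DANGER = 40.0   # assumed threshold where things get serious

for day in range(1, 61):
    temp = NORMAL + 0.1 * day          # +0.1 °C per day
    if temp >= DANGER:
        print(f"day {day}: {temp:.1f} °C, past the danger threshold")
        break
    if day % 10 == 0:
        print(f"day {day}: {temp:.1f} °C, still feels fine")
```

Each individual step looks negligible; the move from "fine" to "dangerous" takes about a month.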
There's a very detailed but accessible analysis of this question in this book: https://the-coming-wave.com/ I can't recommend it highly enough. If you read it and disagree, fair enough, but it's worth hearing the argument.
I am well aware of the "exponential growth" argument about AI. I am equally aware that the same or similar arguments were made for blockchains, big data, digital twin, digital thread, the metaverse, etc.
Extrapolation is a very risky prediction tool precisely because the current trajectory isn't evidence of the future trajectory, nor does it say anything about the real limits of a technology.
I am not making a definitive statement that AI is a giant fraud or anything. I'm just saying that until someone shows me something that isn't a neat productivity tool or a "slightly better Spotify algorithm", I'm not going to take claims of it ending society as we know it seriously. Those claims are pure imagination and should be treated as such, given the tech industry's past of making these claims to hype products.
I'm sorry I assumed you weren't familiar with the exponential argument. I should have asked.
Overall, I think time will tell. As I've said, I'm struggling to make accurate predictions, but I'll be surprised if things stay substantially the same.
The question you should be asking is why you assume current "exponential growth" means "exponential growth in the direction I have picked using my imagination".
For example: why would exponential growth in generative AI lead to AGI rather than just exponentially faster generative AI? It is not rational to expect that a complex algorithm that predicts the next letter of a sentence, based on reading a huge amount of existing documents, would somehow become sentient and be able to deduce information or consider contexts it was never trained on. You have to provide evidence that this is possible, let alone likely, rather than just imagining something sensational and leaning on "exponential growth" to paper over the argument.
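For reference, the mechanism being argued about boils down to a loop like this. A toy sketch, with word-level counts standing in for a neural network over tokens (a deliberate simplification, not how any real model is built):

```python
# A toy next-word predictor: count which word follows which in the
# training text, then repeatedly append the most likely next word.
# Real LLMs use neural networks over tokens, but the generation loop
# ("predict next, append, repeat") has the same shape.
from collections import Counter, defaultdict

words = "the cat sat on the mat the cat ate the snack".split()
counts = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

out = ["the"]
for _ in range(5):
    out.append(counts[out[-1]].most_common(1)[0][0])  # greedy choice
print(" ".join(out))  # everything it "knows" came from the counts
```

Whether scaling that loop up produces reasoning beyond the training data is exactly the point of disagreement here.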
My doubts come from ~15 years of work experience with these new tech hype products. They all contain the same markers. There is one use case that at least superficially makes sense, a lot of hype, and nobody selling a finished product that the seller already knows how to implement to improve a business. Instead they expect you to pay them "to work with you" to figure out how "new hype thing" could maybe do "some vague beneficial idea" for a business.
I agree that scale won't necessarily lead to sentience (and we haven't even touched on what that would even mean), but I don't think sentience is necessary.
Rather, my thinking goes that we've seen many technologies in history completely transform all aspects of society (language, printing press, gunpowder, electricity, fossil fuels, transistors, flight, internet) and this technology is developing at an incredible rate by historical standards.
For armies, it means swarms of self-guided drones and much more advanced cyberwarfare. For knowledge workers, it means competing for jobs with technology. For biotech, it means cheap and ubiquitous gene editing, etc., etc.
Already today, my business (makers.tech) is forecasting to hire far fewer people than we would have done before gen AI. Great for the business, but if I imagine every business doing the same, the societal effects have to be pronounced.
My point is, I don't think any sentient AGI is needed: we just need to keep applying what we already have to our existing problems, and that will be a big enough transformation. It would have been more manageable if it were happening slowly, like it did with previous tech, but it's spreading like wildfire, by historical standards.
You are not convincing here. You run an AI booster company that sells the exact kind of non-product indicative of the previous tech hype ideas. You aren't selling a discrete, useful, and immediately actionable AI tool. You are selling a service to "help companies use AI" for "a vague idea of something beneficial" or, seemingly even worse, to "provide education for leaders on how to leverage AI".
Contrast this with a company like Siemens or Dassault. When their sales rep shows up at my engineering office to sell their software, they have a list of talking points on the software's new features. Those features map to problems engineers have with existing software and to capabilities we can implement right now. I give them money and they give me the useful thing.
You keep talking in assumptions and beliefs rather than clear, actionable problems and solutions, and you are actively selling "education on a vague idea of how to react to what may come to pass."
Well, time will tell what the future looks like. In my world, much of it is already here.