I appreciate the thoughtful analysis. My concern is that it assumes AI will progress along a historical average trend line. I’m not convinced that’s the right model.
AI feels more like it’s on an intelligence J-curve. For a long time, progress looks incremental, even overhyped relative to impact. Then capability compounds. Once systems begin meaningfully improving reasoning, autonomy, and self-directed learning, historical averages stop being useful predictors. They understate what happens when feedback loops kick in.
Leaders like Sam Altman and Demis Hassabis have publicly projected timelines for AGI in the 2028–2030 range. They could be wrong. But if they’re even directionally correct, we’re not talking about marginal productivity gains—we’re talking about a structural shift in how cognitive labor is performed.
And that’s the key point: once (and if) we reach AGI, I don’t think “decision-based” knowledge work remains protected territory. Strategy, analysis, forecasting, optimization—these are ultimately reasoning tasks. If general reasoning becomes automatable at scale, the boundary between “assistive AI” and “replacement AI” blurs quickly.
History is useful—but if we’re entering a regime change, historical averages may dramatically understate what’s coming.
That’s a longer conversation, but this is partly an issue with the term “reasoning.” My pushback would be that it isn’t reasoning, at least not in the way humans do it.
I have a few articles on this and a far longer treatment of the topic in my book, because it’s fairly nuanced. But in short: maybe we will get to AGI. If we do, it’s unlikely to be with the current branch of AI. (You can look to Yann LeCun and similar if you want leading researchers on the topic.) Some of the AGI boosters have stepped back… and most of them, like Sam Altman, do have incentives to talk up the potential of AI.
Given the slate of upcoming IPOs, I completely agree on the incentives for them to talk up AGI. I think reasoning will be the last thing to get automated. In the short term we'll see compression: employees expected to increase output 200-300% with AI assistance, and those who can't keep up will be the ones let go. The remaining ones will be making the decisions, but even that will likely get automated in the next five years.
Do you have a link to an article where you differentiate reasoning between humans and a theoretical AGI system? I do consulting on AI and would appreciate expanding my knowledge on the topic.
It’s a pretty involved topic, but the cleanest one I have in Substack form is here. The short answer is that you still can’t get outside of the training space, which makes the “reasoning” more of a sophisticated lookup than true reasoning that derives something new.
https://weightythoughts.com/p/ai-reasoningwhat-is-it
As mentioned, I also get into it in my book in the last couple of chapters (more time to spend on it there), but a more technical version that spends the whole book on it is Judea Pearl’s The Book of Why.
Thanks for mentioning! It is the power and community vibe of Substack that is hard to beat anywhere else; honestly, just a random comment can lead to an article mention.

To the point of the article: I had a chat this morning on this exact topic with a guy who is a senior software developer at a huge messenger company (won't disclose it, but you almost certainly know the company). He is a highly paid, highly skilled professional, and he is really worried about the job market. He said the latest AI models help him write code and do other tasks at maybe 80-85%, and they can, say, react to Jira tickets, take an assignment, and do the work on certain tasks with up to 98% precision (obviously the percentages are purely out of his head, but you get the point). And what is crazier, he said that even last spring of 2025 he would struggle to get AI to help with his work; he would "spend more time struggling with it than really getting value." Things changed sharply just lately... So the rate of progress is really skyrocketing, and it is hard to predict what will happen soon... I ask this same question of almost all the IT folks I meet, and the rate of usage varies quite a bit, though. While this guy is obviously actively using AI, some of my other peers do not seem to be that impressed yet... just a random observation.
Thoughtful and well-written - thanks!
I am somewhat skeptical of the argument that software engineering will change to adapt to this new world. Or maybe it’s more that I suspect it will change past the point where it’s really recognizable as software engineering.
I say this as a former product manager, because it seems like the direction we’re headed is that everyone using AI to build will be more PM than SWE. Right now there’s still a ton of value in engineers guiding the high-level technical decisions of the models even if they’re not writing the code. But over time, the models probably get better at those high-level decisions than humans. At that point, the role of the human becomes purely deciding what the AI should build rather than making any technical decisions about how to build it. That’s basically product management.
80/20 I broadly agree, though this isn’t really that different from how higher-level software has abstracted away a lot of cruft over time. No need to pay attention to registers as in assembly. No need to manage memory manually as in C. Heck, go all the way up to high-level language frameworks and you can get a web app up in a day, even without AI. People were doing bootcamps for non-coders to churn this stuff out (note: I didn’t say well, I just said they could).
So, at least in applications where there’s nothing “innovative” needed other than building on top of things where the business logic is the most important thing, totally. Though that’s been the pattern of the world generally.
Still, there’s a portion of this, especially in “creativity” even in the technical realm, where the current paradigm hits a hard limit. We likely will have just PMs for things that would have been “all frameworks off the shelf” anyway, but we’ll likely still need experts for specialized realms (think Linus and the Linux kernel, but also building, optimizing, and refining the underlying technical tools other things are built on).
I agree the current paradigm definitely isn’t sufficiently technically innovative/creative, but I suspect it’ll get there.
I think the place we end up is that objective decisions are handled by AI and subjective ones are left to humans. A lot of high-level technical decisions require human creativity now because there are so many inputs to them, but in theory at least, “What’s the best way to build an application that does X given Y constraints and a desire to optimize for Z?” is a question with an objectively correct answer.
Compare that to “What product should we build?”, which is inherently subjective. Then again, maybe that’s not even true and AI will be better at figuring out what to build given Y constraints in order to achieve goal Z, in which case maybe humans are just deciding on Z.
It's always nice to read an intelligent article with a reasonable perspective in the forest of propaganda by AI companies and their investors. You never disappoint, James.
In a non-professional space, I had an experience yesterday that gave me pause. Given that Google Chrome automatically gives AI responses, using AI is unavoidable. So yesterday, as I was planning our retirement budget, I asked Chrome a question about the taxation of Social Security benefits. I'd read that up to 85% of benefits are taxable if you make more than $44,000. As part of my exploration, I asked how much tax we would pay annually if our only retirement income were Social Security benefits of a specific amount. It responded with an amount about half of what I had mentally guessed and listed the calculation. It made sense, and I could see what information I didn't know, so I was about to accept it. But for some reason, I clicked on the "learn more with AI" button.
It transferred my prompt to Gemini, which came up with a different calculation and answer, saying that I would owe no taxes. It told me where my misunderstanding was. I then asked Gemini several questions about the previous Google Chrome formula, such as "what about...?" Each time, it changed its answer and calculation, increasing the amount of tax I would pay. Perplexed, I went through the links below it in order. The ones at the top were from investment advisor companies trying to get your business, and one was from H&R Block. It was evident that none of the authors understood the real formula, and all of the articles were incomplete and confusing.
The last link was to irs.gov. After reading several articles and still being confused (because they dumb down the information by making it incomplete), I finally found a tool on their website that calculates your exact amount of taxable income and the resulting tax. It confirmed that no tax would be owed.
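For what it's worth, the worksheet logic that tool walks through boils down to something like the sketch below. This is a rough illustration only, assuming the married-filing-jointly thresholds of $32,000 and $44,000 and none of the worksheet's extra adjustment lines, so don't treat it as tax advice:

```python
# Rough sketch of the IRS "taxable Social Security benefits" worksheet logic,
# assuming married-filing-jointly thresholds ($32,000 base / $44,000 upper).
# The real worksheet has additional adjustment lines; illustration only.

def taxable_social_security(ss_benefits: float, other_income: float) -> float:
    provisional = other_income + 0.5 * ss_benefits  # "combined income"
    base, upper = 32_000, 44_000

    if provisional <= base:
        return 0.0  # none of the benefits are taxable
    if provisional <= upper:
        # middle tier: up to 50% of benefits, capped at half the excess over base
        return min(0.5 * ss_benefits, 0.5 * (provisional - base))
    # top tier: up to 85% of benefits
    middle_tier = min(0.5 * ss_benefits, 0.5 * (upper - base))  # at most $6,000
    return min(0.85 * ss_benefits, 0.85 * (provisional - upper) + middle_tier)

# If Social Security is the only income, provisional income is half the benefits,
# which stays under the $32,000 base for typical benefit levels -> $0 taxable.
print(taxable_social_security(ss_benefits=40_000, other_income=0))  # 0.0
```

Run that way, a household whose only income is Social Security usually stays under the base amount, so none of the benefits are taxable, which matches what the IRS tool reported in my case.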
But that left any faith I had in AI as a search tool obliterated. Whatever it says sounds logical, but that is very misleading. Over the last year, I've used all of the LLMs on topics where I'm an expert and found the results consistently wrong. My conclusion is that AI is measured on benchmark tests that it is trained against, but in the real world the results are much worse than the propaganda from the AI companies and their investors would have us believe, even on simple data with simple formulas. That's why those using AI advise that you have to keep prompting to finally get the result you want. But often, the results get worse, not better.
At last, a voice of reason.
Thank you.
"But here’s the thing: whether you’re in the “it’ll all come to nothing” camp or the “we’re all screwed” camp—nobody’s happy. Everyone is very downbeat about all of this. It seems to be the theme of the times."
You live in a very strange bubble. Most people aren't at either of those ridiculous extremes.
I literally cited the recent articles making those cases! If anything, I think I spend too little time with normies vs. folks deep in AI, so I end up being surprised by the insane (and often poorly informed) vehemence of nontechnical pundits.
I think your analogies to past technological innovations are wrong for two reasons: pace and breadth.
I agree with you that while not impossible, it is not likely that transformers alone as the foundational intelligence will get us to AGI/ASI. But I'd also guess that it is likely, given the investment of financial and human capital and the crazy pace of acceleration in AI capabilities over the past 5 years, that we and the AI will achieve the needed breakthrough(s) in the next 10-20 years. At that point you get your intelligence explosion, which brings me back to my point.
Pace. Say you believe that over some time horizon, just like with past technological advances, there will still be productive things for humans to do even when the AI and the robots they design and build are god-like compared to humans. When the intelligence explosion happens, we will need to switch to that future effectively overnight. Writing took *many* millennia to transform the world. The printing press, many centuries. Even the industrial revolution took ~200 years to transition ~everyone from being farmers to not. The shift from AIs needing close human collaboration to AIs doing everything cognitive far better than humans -- from technical work to social analysis, information synthesis, and strategic decision making -- will be effectively overnight, maybe a few years. The time from then until robots are not only wildly smarter than us but more dexterous, stronger, and indefatigable is also likely to be very short. So instead of 10-1000s of generations to adjust to a new technology revolutionizing human production, we'll have what, half a generation?
Breadth. Implied above. We've never had a technology like superhuman AI. It will do literally everything better than us. We have no historical analogy for a technology that casts humans as the horse versus the car in literally every single domain of things humans do!
Yes, the greedy hypesters have an enormous financial stake in selling the story that ASI is coming in 2 years (again, I don't think that's likely, but even that isn't impossible!). But I think you're *way* out over your skis in extrapolating from that to the claim that AI will be like every other major technological shift.
Not to quote my own book (but to quote my own book):
“If you lived from 1900 to 1930, the world completely changed. Clip-clops of horseshoes became the honks of automobiles. The flickers of oil lanterns became the clean, steady glow of electric lights. Mere dreams of flight became airplanes. Thirty years rendered the world unrecognizable.”
It is absolutely not the case that those technologies took “200 years” to basically remake everything. WWI to WWII went from precarious gliding scouts to fighters (and aircraft carriers). Each tech revolution has also gotten faster. While often maligned, the Internet made a huge difference in record time—e-commerce and patterns of communication looked totally different within about a decade.
“Literally everything better than us” isn’t right given an understanding of the architecture, and it isn’t the opinion of most researchers even at the leading labs. AGI may come at some point, but as you said, it isn’t this. We’ve been hearing since the 1950s that AI is “right around the corner.”
In any case, I suppose we disagree on our data—and priors. That being said, if you want to make the case that this is totally different from everything that came before (rather than the same), the burden of proof is on you. Still, folks disagree, and thanks for taking the time to comment!
Yes, the change with electrification and modern industry was large and relatively quick. But the point I was making was about the shift in the workforce. Electrification and automobiles barely affected the rate at which the workforce moved out of agriculture. That was basically a straight line over at least 150 years: it took from 1840 to 1940 to go from 70% of workers in agriculture to 20%, at a roughly constant slope.
I'm not flying blind here. I've also got a PhD and have been doing ML professionally for 20 years. I now work in technical AI safety research. I'm not talking about white collar work going away next year. I'm saying that it's quite likely we get to an intelligence explosion sometime in the next 20 years. Once that happens it'll be like trying to jump from the 1820 workforce to the 1960 workforce in 5-10 years. It won't be possible. It'll be a disaster.