Discussion about this post

Rob Carpenter:

I appreciate the thoughtful analysis. My concern is that it assumes AI will progress along a historical average trend line. I’m not convinced that’s the right model.

AI feels more like it’s on an intelligence J-curve. For a long time, progress looks incremental, even overhyped relative to impact. Then capability compounds. Once systems begin meaningfully improving reasoning, autonomy, and self-directed learning, historical averages stop being useful predictors. They understate what happens when feedback loops kick in.

Leaders like Sam Altman and Demis Hassabis have publicly projected timelines for AGI in the 2028–2030 range. They could be wrong. But if they’re even directionally correct, we’re not talking about marginal productivity gains—we’re talking about a structural shift in how cognitive labor is performed.

And that’s the key point: if and when we reach AGI, I don’t think “decision-based” knowledge work remains protected territory. Strategy, analysis, forecasting, optimization—these are ultimately reasoning tasks. If general reasoning becomes automatable at scale, the boundary between “assistive AI” and “replacement AI” blurs quickly.

History is useful—but if we’re entering a regime change, historical averages may dramatically understate what’s coming.

Brent Naseath:

It's always nice to read an intelligent article with a reasonable perspective in the forest of propaganda by AI companies and their investors. You never disappoint, James.

In a non-professional setting, I had an experience yesterday that gave me pause. Since Google Chrome now automatically surfaces AI responses, using AI is unavoidable. So yesterday, as I was planning our retirement budget, I asked Chrome a question about the taxation of Social Security benefits. I'd read that up to 85% of benefits are taxable if you make more than $44,000. As part of my exploration, I asked: if our only retirement income were Social Security benefits of a specific amount, how much tax would we pay annually? It responded with an amount about half of what I had mentally guessed and listed the calculation. It made sense, and I could see what information I hadn't known, so I was about to accept it. But for some reason, I clicked on the "learn more with AI" button.

It transferred my prompt to Gemini, which came up with a different calculation and answer, saying that I would owe no taxes. It told me where my misunderstanding was. I then asked Gemini several follow-up questions based on the previous Chrome formula, such as "what about...?" Each time, it changed its answer and calculation, increasing the amount of tax I would pay. Perplexed, I searched through the links below it in order. The ones at the top were from investment advisory companies trying to win your business, and one was from H&R Block. It was evident that none of the authors understood the real formula, and all of the articles were incomplete and confusing.

The last link was to irs.gov. After reading several articles and still being confused (because they "dumb down" the information by leaving it incomplete), I finally found a tool on their website that calculates your exact amount of taxable income and the resulting tax. It confirmed that no tax would be owed.
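[Editor's note: the rule at issue here can be sketched in a few lines. This is a simplified reading of the IRS worksheet in Publication 915 for a married couple filing jointly; the $32,000/$44,000 thresholds are the long-standing, non-indexed base amounts, and real returns involve additional adjustments this sketch ignores.]

```python
# Simplified sketch of the taxable-Social-Security calculation (IRS Pub. 915),
# married filing jointly. Illustrative only: real returns have more adjustments.

def taxable_social_security(benefits: float, other_income: float) -> float:
    """Return the taxable portion of annual Social Security benefits."""
    base, upper = 32_000, 44_000  # long-standing MFJ thresholds (not inflation-indexed)
    # "Provisional" (combined) income counts only half of the benefits.
    provisional = other_income + 0.5 * benefits
    if provisional <= base:
        return 0.0
    if provisional <= upper:
        # Up-to-50% tier
        return min(0.5 * (provisional - base), 0.5 * benefits)
    # Up-to-85% tier
    return min(0.85 * benefits,
               0.85 * (provisional - upper)
               + min(0.5 * (upper - base), 0.5 * benefits))

# With no other income, even $60,000/yr of benefits gives provisional income
# of $30,000, under the $32,000 base, so none of it is taxable.
print(taxable_social_security(60_000, 0))  # 0.0
```

This matches the commenter's finding: when Social Security is the only income, provisional income is half the benefit amount, so a couple stays under the $32,000 base unless benefits exceed $64,000, and even then the standard deduction would likely eliminate any tax due.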

But that obliterated any faith I had in AI as a search tool. Whatever it says sounds logical, but that is very misleading. Over the last year, I've used all of the major LLMs on topics where I'm an expert and found the results consistently wrong. My conclusion is that AI is measured on benchmark tests that it is trained against. But in the real world, the results are much worse than the propaganda from the AI companies and their investors would have us believe, even on simple data with simple formulas. That's why AI users advise that you have to keep prompting to finally get the result you want. But often, the results get worse, not better.

