Today’s AI critics don’t understand the history of technology
But is AI different from other technologies?
February ended up being a busy month! I’ll be writing new pieces later this month and onward, but in the meantime check out a post I published last year over at Creative Ventures on why AI will not destroy jobs and industries (at least not in the way some might think), and how today’s criticism of AI echoes the criticism leveled at previous groundbreaking technologies.
As more people talk about using AI in their work and lives, this topic is more relevant than ever.
Stop me when you know what I’m talking about: It’s a technology that will put thousands of educated, skilled workers out of a job and allow a single individual to do the work of dozens, or even hundreds, completely automatically. It will make entire businesses completely change how they operate and finally buy the high-tech computer systems they had been holding off on.
Is it AI? GPT-3? Or even GPT-4? No. It was VisiCalc (and then later Excel and other competitors), the “visible calculator,” better known today as the computer spreadsheet.
The advent of spreadsheets did not put office workers, actuaries, accountants, or many other impacted professions out of work. If anything, the productivity improvements made tracking things systematically much easier, and thus more prevalent across all kinds of businesses and industries that might not have bothered previously.
The same thing happened with Photoshop and other editing software gaining the ability to automatically enhance photos or crop subjects out of backgrounds. And yet again, with random social media apps that teenagers use gaining more powerful video editing capabilities than the highest-end, most expensive editing software of yesteryear. It didn’t make editors less in demand — it helped fuel a boom in new genres of content and the advent of 24/7 streams of information/entertainment.
If anything, as we’ve discussed at Creative Ventures, manual labor is much harder to automate. Physical dexterity and manipulation are not native to computers/algorithms, and they struggle mightily in that unfamiliar territory. Even there, though, we’ve seen advancements in AI/robotics (especially in reinforcement learning and advanced control systems). That process is far slower than “digital” automation, but we still don’t expect to see mass unemployment. The advent of the shovel didn’t suddenly put manual laborers out of jobs. Neither did the excavator. And robotics won’t either.
Is this time different?
“AI is different.” That’s the most common response from those confronted with the fact that technology and productivity improvements have simply not led to mass unemployment. Relative to everything else, this is a “paradigm shift.”
One of the things we’ve learned over time is that human beings are very bad at predicting what happens in cases of massive change. We tend to have limited imaginations when extrapolating the impact of technology: if better technology means you need fewer people, the logical conclusion seems to be that eventually you won’t need any people. What people fail to imagine is that entirely new industries, specializations, and further technologies will emerge that, if anything, employ more people productively than ever.
But is AI different? It’s a pretty different technology in the sense that it’s “intelligent” (it is, after all, in the name!). There are two responses to that.
The first one is factual.
Artificial intelligence just isn’t that intelligent
Talk to any AI researcher, and they will likely tell you that “AI” is kind of not “AI.” We’ve adopted the terminology and made it interchangeable with “ML” (machine learning) to the point that it’s pointless to fight it, but it doesn’t change that AI isn’t what science fiction imagines it to be.
AI/ML today is fancy statistics. It is a combination of algorithms and statistical techniques that cleverly “fit” data: either to predict from it (typically what people think of as AI), or to replicate the fitted statistical distribution (generative algorithms, like ChatGPT or Midjourney). Despite Microsoft’s Sydney chatbot describing itself to the contrary and promising not to annihilate humanity if it doesn’t have to, it is really just a statistical algorithm generating what a human might plausibly say.
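To make the “fancy statistics” point concrete, here’s a toy sketch with made-up numbers, using a single Gaussian as a stand-in for models that are vastly more complex: fit a distribution to data, then either predict from it or generate new data that resembles it.

```python
# Toy illustration only: "AI" as statistics. Fit a distribution to observed
# data, then use the fit either to predict or to generate.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=1_000)  # "training" examples

# "Fit": estimate the distribution's parameters from the data.
mu, sigma = data.mean(), data.std()

# Predictive use: a point prediction for the next value.
print(f"predicted next value ~ {mu:.2f}")

# Generative use: sample new data that resembles the training data,
# loosely analogous (at a vastly smaller scale) to what ChatGPT or
# Midjourney do when they generate text or images.
samples = rng.normal(mu, sigma, size=5)
print("generated samples:", np.round(samples, 2))
```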
There has been a long philosophical debate over whether intelligence is an “emergent” phenomenon, whether consciousness could eventually arise from enough simple processes stuck together. Some think maybe. Some think not, and believe that our current path with machine learning will never create true AI. Either way, nearly everyone agrees that even if it’s possible, current “AI” isn’t true AI and isn’t anywhere close yet.
As such, it’s really just a tool, just like the IT tools that emerged out of the 90s. Or a shovel.
Now, that brings us to a hypothetical:
Even if artificial intelligence were intelligent
Even if our current intelligent agents were actually intelligent, it’s unlikely that would simply lead to mass unemployment. People might scale back how much they work. After all, remember that once upon a time we didn’t produce enough food to let anyone stop working, even on weekends. More recently, there have been experiments with four-day work weeks. Leisure time isn’t inherently a bad thing, and it’s a luxury partly afforded us by greater productivity.
But in terms of stopping work entirely? That’s unlikely, since even an artificial intelligence will have its own strengths and weaknesses and is potentially quite different from humans. Things get thornier around what intelligence is, what humanity (or consciousness) is, and what does or doesn’t deserve rights… but at least for the economics question, it’s pretty clear that when you have different entities (whether people or countries), they self-organize to produce things more efficiently. This is why developing countries ended up specializing as they did in extraction and manufacturing (putting aside whether or not this was “good” for certain countries), while rich, advanced economies specialized more in knowledge work, lower-volume advanced manufacturing, and services. The U.S. can literally do everything better than certain small, poor countries (actually, a lot of them, no offense), but that doesn’t mean those countries just stop doing anything, stop participating in the world economy, or give up on pulling themselves out of poverty.
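The economics at work here is comparative advantage. Here’s a toy sketch with made-up numbers showing why the less productive party still has something worth doing:

```python
# Hypothetical numbers, purely for illustration: country A out-produces
# country B at *both* goods (absolute advantage), yet both still gain
# when each specializes where its opportunity cost is lowest
# (comparative advantage).
output_per_worker = {
    "A": {"software": 10, "textiles": 5},
    "B": {"software": 1, "textiles": 2},
}

for country, goods in output_per_worker.items():
    # Opportunity cost of one unit of textiles, in units of software forgone.
    cost = goods["software"] / goods["textiles"]
    print(f"{country}: 1 unit of textiles costs {cost:.1f} units of software")

# A forgoes 2.0 units of software per unit of textiles; B forgoes only 0.5.
# So B specializes in textiles and trades, even though A is better at
# everything in absolute terms; B doesn't simply stop participating.
```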
Ultimately, this latter part is pretty speculative. It simply isn’t where we are. But regardless, critics of today’s AI fundamentally don’t understand how technologies get integrated into society. I’d encourage them to keep a more open mind. It would have been impossible for someone born in 1900 to imagine what our lives are like today. Hell, it would have been impossible for them to imagine what their own lives would be like 30, 40, or 50 years later. It’s worth being a bit more humble about our ability to imagine what our lives will be like in the future as well, regardless of what path this technology takes.
I think we already know that AI actually IS different. I don't mean that anyone knows what the longer-term outcomes are going to be, but rather that we already have very good evidence that we have a uniquely fast feedback loop on our hands.
It's not the first technological feedback loop. You could argue that the entire industrial revolution was such a loop. Very roughly, harnessing fossil fuels, steam, and machines made industrial operations more effective, which in turn made us better at fossil fuel extraction and machine production. However, that feedback loop was slow because it involved lots of laborious and long lead time work like digging mines and building factories. The current AI loop involves writing code, coming up with new algorithms, and training better AIs, and it has already made some of those processes on the order of 10% more efficient. We're also investing trillions in compute, model training, and related improvements, rapidly growing the amount of technology and capital we can apply to AI.
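As a rough, purely illustrative sketch of why even modest per-cycle gains matter in a feedback loop (the 10% figure and the number of cycles here are assumptions, not measurements):

```python
# Assumed numbers, just to show the shape of a compounding feedback loop:
# if each cycle of AI-assisted R&D makes the next cycle ~10% more efficient,
# the gains multiply rather than add.
efficiency = 1.0
for cycle in range(1, 11):
    efficiency *= 1.10  # each cycle is ~10% more efficient than the last
    print(f"cycle {cycle:2d}: {efficiency:.2f}x baseline efficiency")
# After 10 cycles you're at roughly 2.6x baseline, and cycles that consist of
# writing code and training models turn over far faster than digging mines
# and building factories.
```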
Based on AI-driven productivity and investment, it seems pretty clear to me that we are already in the fastest technology adoption and deployment cycle in history. In contrast, VisiCalc adoption didn't directly improve VisiCalc. Spreadsheets have only improved incrementally over the last 20-30 years. Would we expect the same for the next 20 years of AI?
In terms of outcomes, I agree that the result isn't going to be some binary scenario like no work for anyone. Outcomes by type of work will also depend heavily on how elastic demand for that type of work is. For instance, say software development becomes 10X more efficient, but there is 20X untapped demand for more code if it can be produced faster and at lower cost. We could actually see software employment increase by 2X. But that will not be the case for most fields. If copywriting becomes 10X more efficient, it's likely that much of the gain will simply show up as cost savings. Just as the word processor made the typing pool obsolete, many forms of cognitive work will become obsolete or shrink to a small part of a bigger job.
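Here's a back-of-the-envelope version of that arithmetic, using the same hypothetical multiples:

```python
# Whether jobs grow or shrink depends on how much untapped demand there is
# relative to the productivity gain. Numbers are hypothetical.
def employment_multiplier(productivity_gain: float, demand_multiple: float) -> float:
    """Workers needed relative to today, if total demand grows to
    demand_multiple times current output and each worker becomes
    productivity_gain times as productive."""
    return demand_multiple / productivity_gain

print(employment_multiplier(10, 20))  # software: 20x demand / 10x productivity = 2.0x the jobs
print(employment_multiplier(10, 1))   # copywriting: flat demand -> 0.1x the jobs (cost savings)
```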
From our perspective, the disruptions of the industrial revolution seem relatively benign, but at the time they were big upheavals. I think we also need to keep an open mind to the possibility that what's coming down the road with AI, in terms of changing the future of work, will be significantly more dramatic.
The astonishingly sudden advance that GPT-4 represented, a moment when everyone was floored, astonished, and amazed, is already being forgotten. So the idea that AI is actually not AI is silly.
Second, the limiting factor in all these scenarios of change is how quickly society can assimilate these technologies: how quickly they can be translated into products that can be deployed, and how quickly employers can start to use them.