What LLMs Will Do To Jobs: All You Need is an Oracle
LLMs are Mainly Tools That Enhance Experts
This post is adapted from my upcoming book, “What You Need to Know About AI,” drawing primarily from one chapter, with excerpts from a few others.
People use LLMs for all sorts of things.
One person I know made a pros and cons list for her current relationship in order to evaluate whether she should leave or take the next step. Another used it to generate an analysis of his company’s value and sent that off to a potential public company acquirer for a deal potentially worth millions of dollars. My wife and I used it to pull studies and analyze the probability distribution of how many eggs we’d need to retrieve for IVF to likely have one successful pregnancy because we have complications from a sex-linked genetic condition.
That’s an awful lot of trust placed in what I’ve often called a glorified autocomplete. That is, after all, how transformer-based LLMs actually work—they fill in token by token to build a response that looks like something a human would probabilistically write.
And yet, these examples are all reasonable—because they share a single, crucial characteristic. An oracle.
Borrowing From Cryptography
In cryptography, an oracle reveals information to an attacker. That information may be vague or incomplete—like the pronouncements of the Greek Oracle at Delphi, from which the term derives. But even a little information can vastly simplify a hacker’s job.
How? Imagine a password prompt that tells you if your wrong password was too long or too short. That seems like a useful feature for a forgetful user, right? But that is a critical vulnerability that will make a password relatively easy to crack. With that bit of knowledge from that exposed password oracle, a hacker can rule out a vast number of password combinations.
How many? It’s not billions. It’s not trillions. It’s not even quadrillions. For a standard eight-to-sixty-four-character password, the number of eliminated possibilities has one hundred and twenty-six zeros. That’s many, many times the total number of atoms in the universe.
When modern attacks can sometimes attempt billions of passwords a second, an oracle like that is the difference between cracking a password taking “a while” versus “longer than the expected lifetime of the universe.”
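To put rough numbers on the claim above, here is a back-of-the-envelope sketch in Python. The 95-character printable-ASCII alphabet and the example revealed length of 12 are my assumptions for illustration; the text only specifies the eight-to-sixty-four-character range.

```python
# Back-of-the-envelope arithmetic for the password length-oracle example.
# Assumptions (mine, not the author's): passwords are drawn from the
# 95 printable ASCII characters and may be 8 to 64 characters long.

ALPHABET = 95          # printable ASCII characters
MIN_LEN, MAX_LEN = 8, 64

# Search space with no oracle: every possible length must be tried.
total = sum(ALPHABET ** n for n in range(MIN_LEN, MAX_LEN + 1))

# Suppose the oracle reveals the true length (say, 12 characters).
known_length = 12
remaining = ALPHABET ** known_length

# Every password of any other length is ruled out in one stroke.
eliminated = total - remaining

print(f"total possibilities:  ~10^{len(str(total)) - 1}")       # ~10^126
print(f"eliminated by oracle: ~10^{len(str(eliminated)) - 1}")  # ~10^126
print(f"still to search:      ~10^{len(str(remaining)) - 1}")   # ~10^23
```

The point of the comparison is how much of the space a single leaked fact removes, not the exact figures—one hint collapses a 127-digit search space down to a 24-digit one.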
I have borrowed this concept to understand when it’s reasonable to use LLMs—or other deep learning systems—and when it is not. The metaphor is evocative: even tiny amounts of information can make a difference measured in lifetimes.
Expert vs Not
To understand this, imagine writing an industry report as an analyst. Obviously, you can use OpenAI’s Deep Research. But how much can you trust it?
As Ben Thompson from Stratechery found when he used it to analyze a friend’s industry, it’s quite useful—as an expert. Deep Research found a lot of basic information but completely missed a major player in the supply chain. That entity happens to be a private company but is significant enough that any report that misses them would get a failing grade.
He was able to catch this easily. He’s Ben Thompson. If it were a real work product, he could have taken the broad report, edited in the relevant information, and still saved a significant amount of time.
But what if a junior analyst used it instead? That report likely would have gone out the door and been an embarrassment. The difference is Thompson is able to act as an oracle, and the junior analyst is not.
I’ve seen the exact same thing with programmers. An experienced programmer can fly with LLM help. A junior one quickly gets crushed by the weight of bugs and errors introduced by the LLM that the junior didn’t catch when reading the code.
Back to the Examples
My friend, who was making the pros and cons list for her relationship, already had a sense of what she wanted to do—and could easily have corrected the LLM if it spat out something nonsensical.
The person who generated the report on his company’s acquisition value obviously knew his own business—and besides, sent it to his investors, including me, who looked it over.
As for my wife and me, we were out and about with an energetic toddler, so having Deep Research write out the math for our IVF process and genetic condition was nice. I also could easily glance over the numbers—given I’m already distrustful of LLMs doing math—and recognize if something was off by a significant amount.
In all of these cases, the oracle makes it so the AI is useful. And, in all of these cases, the oracle is human. It doesn’t necessarily have to be, but with the limitations of deep learning and LLMs, it generally will be.
It’s poetic. The combination that works is the oracle and the know-nothing machine.
(Modern LLMs literally do “know nothing”—a contrast I’ve explained before between them and the expert systems of the 1980s. The short answer is that they train weights… and then generate plausible tokens. They do not actually memorize data; it merely looks like they do when you ask questions that draw from the “center of the distribution.” That can get really, really dicey when you ask for things that don’t just need to be “generally right” but need to be precisely right, like legal cases.)
What does this mean for jobs?
The Luddites were skilled textile artisans in early-1800s England who protested the mechanization of their industry. They were ultimately violently suppressed by the British government—imprisoned, executed, or worse, sent to Australia.
They were right about being obsoleted by machines. Hand-stitching and weaving, even today, can do things that machines can’t—but for the vast majority of cases, mechanized textiles are good enough… and much cheaper.
Mechanization down-skilled their trade. It allowed many more people to do what they did.
On the other hand, computers don’t exist anymore.
That might sound odd, but that’s because the profession of computer has been so thoroughly wiped out by electronic computers that few even remember that it used to be a title held by humans. During WWI and WWII, computers—mostly women, with men out on the war front—were critical for logistics, navigation, ballistics, and more. But by the early 1950s, when the Association for Computing Machinery (ACM) launched its now-famous journal, the profession of computer was essentially extinct.
Of course, that didn’t mean those people who were analytically competent were suddenly useless. In fact, the ability to get past mere calculation unlocked a huge slew of opportunities. There are vastly more STEM jobs today than there were in 1952, and those today who would have been computers are probably much more gainfully employed than they were then.
So what will LLMs do?
It’s fairly clear here, isn’t it? For the most part, we’re looking at up-skilling… though not entirely.
Certain tasks, especially low-stakes, simple translations, might be handled “well enough” by LLMs. High-stakes ones, like an international business contract, or a letter to your foreign in-laws? You probably still want an experienced human translator to at least look over the machine translation.
For the most part, though, this is shaping up to be a tool that makes experts even better. That may raise questions about greater income inequality, with highly skilled and productive people… getting even more productive.
That being said, I think mass unemployment is not on the horizon anytime soon—which, in our uncertain times, is at least something.
Thanks for reading!
I hope you enjoyed this article. If you’d like to learn more about AI’s past, present, and future in an easy-to-understand way, I’m working on a book titled What You Need to Know About AI that will be published later this year.
Sign up below to get updates on the book’s development and release!