Read it yourself here! Except, of course, it’s ridiculously long.
It’s easier to read the fact sheet, which is more like an executive summary. I do recommend skimming it, since you can get a feel for it, direct from the source, and it’s quick.
At this point, there are quite a few articles about the AI Executive Order, including an excellent one by Ben Thompson at Stratechery. Some analyses go more line-by-line, but I'd suggest reading the fact sheet above instead; if all you want is a summary, get it from the source itself.
I also started this article section by section, but decided that a broad synthesis of what I think about the order and its practical implications would be more useful.
What is the 80/20 of it?
The EO directs various agencies to think about and create regulatory guidelines around the use of AI, both to protect against potential harms and to "promote" the technology.
It also encourages the US Congress to step in and write legislation, since EOs can only go so far; given Congress's current state, that seems unlikely.
Why don't I think anything will happen? Besides the circus nature of Congress lately, privacy legislation, despite broad bipartisan interest, has repeatedly failed to be taken up. If that can't get done, this kind of targeted legislation getting anywhere seems far-fetched.
Does it all kick the can down the road?
Now, one of the more specific pieces of the EO "requires that developers of the most powerful AI systems share their safety test results and other critical information with the US government." This is supposedly based on the Defense Production Act (which was recently used for the COVID response), so it's maybe legal (?). But regardless of legality, it's unenforceable in practice.
There's a rough compute threshold defining "large-scale," set larger than anything trained today, which adds a little specificity and official imprimatur to the order. But computing tends to make static numbers look dumb over time (at one point, 1 megabyte of RAM was "more than anyone could need"). For all we know, this could end up covering all models, or none, as algorithmic sophistication increases.
However, even if this were a reasonable limit, is the government going to audit every computing system or cloud run to ensure that all "large-scale" training runs are reported?
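For a rough sense of where that line sits, here's a minimal back-of-envelope sketch. It assumes the order's reported threshold of 10^26 operations and the common ~6 × parameters × tokens approximation for dense-transformer training compute; the example runs are entirely hypothetical.

```python
# Back-of-envelope: would a given training run cross the EO's reported
# 1e26-operation line? Uses the common ~6 * params * tokens estimate of
# dense-transformer training FLOPs (an approximation, not the order's
# own accounting, which simply counts total operations).

EO_THRESHOLD = 1e26  # operations

def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough total training compute for a dense transformer."""
    return 6 * n_params * n_tokens

# Purely hypothetical runs, for illustration:
for name, params, tokens in [
    ("70B params, 2T tokens", 70e9, 2e12),      # ~8.4e23: far under
    ("500B params, 15T tokens", 500e9, 15e12),  # ~4.5e25: still under
    ("2T params, 20T tokens", 2e12, 20e12),     # ~2.4e26: over
]:
    flops = training_flops(params, tokens)
    status = "must report" if flops >= EO_THRESHOLD else "exempt"
    print(f"{name}: {flops:.1e} ops -> {status}")
```

Note how the line is static while the inputs are not: as training efficiency improves, the same capability arrives at ever-lower operation counts, which is exactly the "megabyte of RAM" problem above.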
Posturing points are also thrown in
Given the current administration's political leanings, the order also says things about standing up for civil rights, workers, consumers, patients, students, and privacy (well, why not, especially if legislation isn't getting done on it?).
It also specifically calls out algorithmic discrimination in housing (landlords) and the criminal justice system. The silly part is that simple statistical models already do that "job" of skewing results and creating unfair outcomes (Weapons of Math Destruction is a good book on the topic). There's nothing new about AI here that a linear regression from a high school statistics class can't do, as the sketch below illustrates.
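To make that concrete, here's a minimal, entirely synthetic sketch: ordinary least squares, no neural networks anywhere, faithfully learning a discriminatory pattern baked into its training data. Every variable and number here is hypothetical.

```python
# A plain least-squares regression, fit on biased historical data,
# happily reproduces the bias. All data below is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical applicants: income, plus a zip-code flag that proxies
# for a protected group (no protected attribute is given to the model).
income = rng.normal(50, 15, n)
in_redlined_zip = rng.integers(0, 2, n)

# Biased historical outcome: past decisions docked the redlined zip
# independently of income.
past_score = 0.8 * income - 20 * in_redlined_zip + rng.normal(0, 5, n)

# Fit ordinary least squares on [intercept, income, zip flag].
X = np.column_stack([np.ones(n), income, in_redlined_zip])
coef, *_ = np.linalg.lstsq(X, past_score, rcond=None)

print(f"learned zip-code penalty: {coef[2]:.1f}")  # ~ -20: bias learned
```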
Finally, to balance this all out and strike a moderate tone, it also talks about promoting innovation and competition.
Talent “Surge”
We’ve had various surges of military deployments, COVID-19 PPE supplies, etc. over the years, but the EO talks about surging the hiring of AI professionals.
In theory, this could push toward easing government hiring requirements to bring in more relevant technical talent. At least a few people who have worked in both tech and government mentioned that this could have an impact. In theory. Then again, how much of it will be implemented, and whether it will matter, remains to be seen.
You’re rather flippant about this
Well, this is the EO that was informed by acclaimed AI expert, Tom Cruise, in Mission: Impossible Dead Reckoning Part 1.
More seriously, there isn’t much that has actually happened.
In the article I linked above, Ben Thompson at Stratechery had a take I think is right:
The point is this: if you accept the premise that regulation locks in incumbents, then it sure is notable that the early AI winners seem the most invested in generating alarm in Washington, D.C. about AI. This despite the fact that their concern is apparently not sufficiently high to, you know, stop their work. No, they are the responsible ones, the ones who care enough to call for regulation; all the better if concerns about imagined harms kneecap inevitable competitors.
The current regulatory push by the big AI companies is not driven by any true belief in existential threats (the primary thrust of the Center for AI Safety letter they all signed).
If it were, I think they would be quite disappointed.
The executive order doesn’t talk about it, and Kamala Harris, who went to the UK’s summit on AI regulation, specifically stressed that the US cared about near-term harms, not existential threats. This is in contrast with Rishi Sunak, the UK PM and host of the summit, who was mostly focused on existential threats.
Instead, this effort resembles a lot of tech regulation, whose practical impact we saw with the European GDPR: it imposes compliance costs that often prevent entrants, like startups, from threatening the businesses of large incumbents.
Is this good or bad?
In general, I'll quote Ben Thompson again, with whom I unfortunately can't disagree.
We should accelerate innovation, not attenuate it. Innovation — technology, broadly speaking — is the only way to grow the pie, and to solve the problems we face that actually exist in any sort of knowable way, from climate change to China, from pandemics to poverty, and from diseases to demographics. To attack the solution is denialism at best, outright sabotage at worst…
In short, this Executive Order is… rooted in the past, yet arrogant about an unknowable future; proscriptive instead of adaptive; and, worst of all, trivially influenced by motivated reasoning best understood as some of the most cynical attempts at regulatory capture the tech industry has ever seen.
I think that's probably right. The current AI incumbents had self-interested reasons to sign on; as I describe, they have tended to be, and will likely continue to be, the same incumbents as in large-scale computing and the internet.
If you’re someone who’s hoping big tech will be reined in, unfortunately, this is an effort to entrench big tech in the new frontier of technology.
Additionally, to think that the government, without expertise, will somehow hit the right balance between the upside of innovation and preventing harm is… optimistic. I’m not a “government shouldn’t get involved in anything” kind of person, but this is kind of like inviting a tax accountant to do electrical work in your house.
I also agree that this EO is not really about the upside or encouraging uses/competition.
If it were, the language wouldn't mostly be "there's scary stuff out there." What else should we read into significant ink spilled on scary biological agents, chemical attacks, and preventing dangers to vulnerable parts of our society, while "innovation and competition" get only vague, occasional mentions?
That being said…
It's probably not going to do anything. Congress hasn't gotten a real privacy bill done, years after supposedly being late to that party.
EOs exist for the administration they're in, and there's no guarantee this one will survive the next election. Even if it does, there's no guarantee that anything the EO requests will be done quickly or make a meaningful impact.
Now, I do think there is a danger of overregulating AI. AI has a lot of wealth-creation and standard-of-living potential for a society that needs more of both. We also have big problems, like aging populations and climate change, where AI (not as a buzzword, but as the first tool for supplementing aspects of human intelligence) truly can make a huge impact. That said, I also think some regulation is likely a good idea.
However, if I had to fear one direction over the other, it would be overregulation.
The direction of AI is that it's becoming easier and easier to do interesting things with more accessible amounts of data and compute. Bad actors will soon be able to do whatever they want with AI on computing equipment that is hard to monitor (e.g., gaming PCs), so overregulation won't even protect us from the worst outcomes. All it will do is prevent us from creating countermeasures and realizing the benefits of AI. A quick bit of arithmetic below shows why.
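To see why compute-based reporting can't reach that hardware, here's a quick hypothetical calculation. It assumes the order's reported 10^26-operation threshold and a generous, made-up figure of ~100 teraFLOP/s sustained for a high-end gaming GPU.

```python
# Hypothetical scale check: how long would one gaming GPU take to hit
# the EO's reporting threshold? (Both numbers below are assumptions.)

GPU_FLOPS = 100e12            # ~100 TFLOP/s sustained, a generous guess
THRESHOLD = 1e26              # reported "large-scale" compute line
SECONDS_PER_YEAR = 365 * 24 * 3600

years = THRESHOLD / (GPU_FLOPS * SECONDS_PER_YEAR)
print(f"~{years:,.0f} GPU-years to cross the threshold")  # ~31,710 years
```

In other words, the machines that are hardest to monitor sit many orders of magnitude below the line the reporting regime watches, while still being plenty to run or fine-tune capable open models.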
When that point comes, and we have “real” legislation, I think we’ll have to watch what happens carefully.
For now, this isn’t it.