News Roundup: May 27, 2024 🧴
Why does Google suck so much, Microsoft Copilot Everywhere, and Sam Altman
It’s another round-up of interesting news with my commentary. As always, some links are paywalled, but I’ll give enough context that reading more is helpful but not required.
Another day, another PR disaster for a Google AI launch
It didn’t have to be this way. I’ve said in the past that Google has one of the strongest cases for a differentiated “foundational model”—specifically, because of its treasure trove of YouTube data, and because technically every other company breaks YouTube’s Terms of Service when it uses that data. The data is so valuable that other companies can’t really do without it: Greg Brockman, OpenAI’s president, reportedly personally collected videos for the company, knowing this was at best a legal grey area (and probably illegal).
All of that promise. All of that talent. All of those resources.
Instead, soon after Google launched its AI search, journalists are writing articles about how “Google is paying Reddit $60 million for F*cksmith to tell its users to eat glue.” Basically, Google’s AI grabbed a parody Reddit post about using Elmer’s glue to adhere cheese to pizza… and served that parody as the real answer to a search query.
This is not the only example of insane answers. Going along with the food theme, eating rocks for nutrients comes up as another… interesting query result.
After Google I/O, everyone was initially worried about how Google’s new AI search would destroy journalism by burying actual articles and publishers (a valid concern, and one that is genuinely on Google to balance—I’ve repeatedly brought up the challenge of how one would display ads “fairly” and transparently in this new paradigm as well).
Instead, after Bard’s debut as a pathological liar, the rebranded Gemini promptly generating racially diverse Nazis, and now glue-pizza, one really has to wonder: will Google truly squander such a massive advantage?
There’s a great article from Ed Zitron about one episode in how Google search was “destroyed.” I’d say that’s just one point in a continuum of decisions that rendered search largely a page full of ads. It’s also why I, personally, now search thing-I’m-searching-for + “reddit” to read human reactions: I don’t trust (or, more to the point, no longer find helpful) what Google surfaces.
If this does ultimately lead to Google’s downfall, it proves—again—that even nigh-insurmountable advantages can be squandered by terrible execution.
Microsoft AI will be everywhere
Microsoft Build last week saw a lot of exciting stuff launch.
One of the most noted features was Copilot Recall, which continuously captures what’s on your screen so you can search everything you’ve ever done. As some note, this is basically spyware/a keylogger (essentially, the sort of thing that a decade or two ago would have shipped in viruses), now sold as a feature.
Ever since reading Chip War soon after it came out in late 2022, I’ve talked in multiple podcasts about AI being, essentially, what semiconductors were in the 1990s and 2000s. I make the comparison more literal (to having the fastest CPU in the 1990s) in “Most AI Startups Are Doomed.” Basically, companies can hold onto their proprietary data while freely swapping in newer, shinier AI models, because the model side will be commoditized.
Microsoft thinks about it the same way (excerpting from a conversation between Ben Thompson and John Gruber about Thompson’s interviews with Microsoft’s CEO and CTO):
Traditionally, the ideal spot to be if you’re building a software project is pushing the limits of hardware slightly ahead of the state of the art, assuming that Moore’s law will catch up… by the time you’re done building the software, the hardware will catch up. Taking that to an AI context, getting ahead of the models… this was a big theme that Microsoft was pushing overall… [Optimizing] is wasted effort. It’s not going to help you win in the market if your competitors are building features that barely work, and then magically work better 18 months later. Kevin Scott was definitely pushing on this point. Build features where the model is a bit too dumb to pull it off. The model will be smart enough to pull it off sooner than you think.
I’ve said this before, too: compute has been overrated as the source of improvements for AI. It’s really been that the frontier of the models has been shifting ridiculously fast. From my article on compute:
Substantive improvements are dictated largely by bottlenecks. If you double compute for the same cost today, you won’t get double the performance of today’s models (assuming you could even properly measure “performance”). The next frontier of AI shock and awe is more likely to come from the upcoming techniques and models already being developed today.
Microsoft is making the right bet, in my opinion. AI models will be like semiconductors and be the platform (but also commodity) that everything else is built on. Microsoft aims to own the Windows of the new era, not the CPU.
As such, Microsoft will happily use OpenAI as a loss-leader to own the platform that everything runs on, whether it’s Windows, Microsoft 365, or Azure. And I think that’s brilliant and a continued example of Microsoft leading the pack on AI.
Sam Altman is the most unpopular man of the week (and a real problem for Microsoft)
Sam Altman had a terrible, horrible, no good, very bad day (actually, week). From the whole Scarlett Johansson voice affair casting a pall over OpenAI’s otherwise well-received GPT-4o unveiling, to the departures from OpenAI’s Superalignment team (the team was supposedly for safety, but I think Hyperdimensional has good context here), to getting savaged over things that were already known but are now cast in a new light, like Altman’s stake in OpenAI (no “direct equity,” but he is far from uninvested—Gary Marcus has a great list here of this and other challenges)… it’s been a bad week for him and OpenAI.
It doesn’t really matter that OpenAI got a kind-of puff piece in the Washington Post to “clarify” the Scarlett Johansson matter. It was a bad look to start with, and—kind of like the list in Gary Marcus’s article above—the piece spins but doesn’t fundamentally address OpenAI’s core problem. Specifically: 1) Altman did reach out to Johansson, and 2) more deeply, this was always going to be an area that generates a lot of anxiety and bad PR regardless. Anyway, the ScarJo thing isn’t that important in and of itself; what matters is that it has become a pattern. Let’s not forget the OpenAI–New York Times lawsuit.
OpenAI and Sam still have a lot of the “move fast and break things” culture that Microsoft, despite partnering with OpenAI, probably wishes they didn’t. Scandals on OpenAI’s side (let’s also not forget Microsoft being blindsided by the Sam Altman firing) are probably among the biggest threats to Microsoft’s future dominance as the AI vendor/money printer.
Obviously, a lot of that culture is what got OpenAI to where it is, but now that it has seized dominance, that same culture poses a real danger: turning the public against AI and attracting regulator attention.
OpenAI, honestly, has always been a terrible deal in its valuation, its weird terms, and its wonky structure, practically from its very beginning, for everyone except Microsoft and maybe some of the earliest folks and YC. The bad deal extends to employees (the revelation that employees kind of don’t own their shares in any meaningful way was yet another no-good, very bad thing to come out this week).
Competition is still hot. Google really shouldn’t remain this much of a disaster. And regardless, there’s still Apple (its rumored partnership with OpenAI notwithstanding), Meta, and Amazon/Anthropic (plus wannabe competitors like Mistral; as for xAI, I’m skeptical but will wait and see). OpenAI needs to keep its shine both to keep attracting talent and to keep governments from coming down on it.
And yes, it was supposed to be 2024!