News Roundup: May 27, 2024
Why does Google suck so much, Microsoft Copilot everywhere, and Sam Altman
It's another round-up of interesting news with my commentary. As always, some links are paywalled, but I'll give enough context that reading more is helpful but not required.
Another day, another PR disaster for a Google AI launch
It didn't have to be this way. I've said in the past that Google has one of the strongest cases for a differentiated "foundational model": specifically, because of its data treasure trove of YouTube, and because technically everyone else is breaking the Terms of Service when they use that data. It's so valuable that other companies can't really do without it. Greg Brockman, OpenAI's president, reportedly personally collected videos for the company, knowing this was at best a legal grey zone (and probably illegal).
All of that promise. All of that talent. All of those resources.
Instead, soon after Google launched its AI search, journalists were writing articles about how "Google is paying Reddit $60 million for F*cksmith to tell its users to eat glue." Basically, Google AI grabbed a parody Reddit post about using Elmer's glue to adhere cheese to pizza… and answered a search query with that parody answer as the real answer.

This is not the only example of insane answers. Going along with the food theme, eating rocks for nutrients comes up as another… interesting query result.
After Google I/O, everyone was initially worried about how Google's new AI search would destroy journalism by burying actual articles/publishers (a valid challenge, and one that is actually on Google to balance; I've repeatedly brought up the related challenge of how one would display ads "fairly" and transparently in this new paradigm as well).
Instead, after Bard debuting as a pathological liar, the rebranded Gemini promptly generating racially diverse Nazis, and now glue-pizza, one really has to wonder: is Google truly able to squander such a massive advantage?
There's a great article from Ed Zitron about one instance of how Google search was "destroyed." I'd say that's just one episode in a continuum of decisions that rendered search largely a page full of ads, and the reason why I, personally, now search thing-I'm-searching-for + "reddit" to read human reactions, because I don't trust (or, more to the point, find helpful) what Google surfaces anymore.
If this does ultimately lead to Google's downfall, it does prove, again, that even nigh-insurmountable advantages can be squandered by terrible execution.
Microsoft AI will be everywhere
Microsoft Build last week saw a lot of exciting stuff launch.
One of the most noted features was Copilot Recall, which can search through everything you've ever done on your PC. As some note, this is basically spyware/keyloggers/etc. (essentially, stuff that a decade or two ago would have been featured in viruses), now sold as a feature.
Ever since reading Chip War soon after it came out in late 2022, I have talked on multiple podcasts about AI being essentially the same thing as semiconductors in the 1990s and 2000s. I also talk about this more literally, in terms of having the fastest CPU in the 1990s, in "Most AI Startups Are Doomed". Basically, companies can hold on to proprietary data and freely swap in new, shinier AI models, because the model side will be commoditized.
Microsoft thinks about it the same way (excerpting from a conversation between Ben Thompson and John Gruber about Thompson's interviews with Microsoft's CEO and CTO):
Traditionally, the ideal spot to be if you're building a software project is pushing the limits of hardware slightly ahead of the state of the art, assuming that Moore's law will catch up… by the time you're done building the software, the hardware will catch up. Taking that to an AI context, getting ahead of the models… this was a big theme that Microsoft was pushing overall… [Optimizing] is wasted effort. It's not going to help you win in the market if your competitors are building features that barely work, and then magically work better 18 months later. Kevin Scott was definitely pushing on this point. Build features where the model is a bit too dumb to pull it off. The model will be smart enough to pull it off sooner than you think.
I've said this before, too: compute has been overrated as the source of improvements for AI. It's really that the frontier of the models has been shifting ridiculously fast. From my article on compute:
Substantive improvements are dictated largely by bottlenecks. If you double compute for the same cost today, you won't get double the performance of today's models (assuming you could even properly measure "performance"). The next frontier of AI shock and awe is more likely to come from the upcoming techniques and models already being developed today.
Microsoft is making the right bet, in my opinion. AI models will be like semiconductors and be the platform (but also commodity) that everything else is built on. Microsoft aims to own the Windows of the new era, not the CPU.
As such, Microsoft will happily use OpenAI as a loss leader to own the platform that everything runs on, whether it's Windows, Microsoft 365, or Azure. And I think that's brilliant and a continued example of Microsoft leading the pack on AI.
Sam Altman is the most unpopular man of the week (and a real problem for Microsoft)
Sam Altman had a terrible, horrible, no good, very bad day (actually, week). It started with the entire Scarlett Johansson voice affair casting a pall over OpenAI's otherwise well-received GPT-4o unveiling. Then came all the departures from OpenAI's Superalignment team (supposedly, the team was for safety, but I think Hyperdimensional has good context here). And he got savaged over things that were already known but are now cast in a new light, like Altman's stake in OpenAI (no "direct equity," but he is far from uninvested; Gary Marcus has a great list here of this and other challenges)… it's been a bad week for him and OpenAI.
It doesn't really matter that OpenAI got a kind-of puff piece in the Washington Post to "clarify" the Scarlett Johansson matter. It was a bad look to start with, and, much like the items on Gary Marcus's list above, the piece spins but doesn't fundamentally address OpenAI's core problem. Specifically, 1) Altman did reach out to Johansson, and 2) more deeply, this was always going to be an area that prompts a lot of anxiety and bad PR regardless. Anyway, the ScarJo thing isn't that important in and of itself; it's mainly that this has become a pattern. Let's not forget the OpenAI-New York Times lawsuit.
OpenAI and Sam still have a lot of the "move fast and break things" culture that Microsoft, despite partnering with OpenAI, probably wishes they didn't. Scandals on OpenAI's side (let's also not forget Microsoft being blindsided by the Sam Altman firing) are most likely one of the biggest threats to Microsoft's future dominance as the AI vendor/money printer.
Obviously, a lot of that culture is what brought OpenAI to where it is, but now that it has seized dominance, that same culture poses a real danger of turning the public against AI and attracting regulator attention.
OpenAI, honestly, has always been a terrible deal: the valuation, the weird terms, the wonky structure, practically from its very beginning… except for Microsoft, and maybe some of the earliest folks and YC. This bad deal includes employees (the revelation that employees kind of don't own their shares was yet another no-good, very bad thing to come out this week).
Competition is still hot. Google really should not be that much of a disaster. But even so, there's still Apple (regardless of its rumored partnership with OpenAI), Meta, and Amazon/Anthropic (plus wannabe competitors like Mistral; finally, I'm skeptical but will wait and see on xAI). OpenAI needs to keep its shine, both to keep up its ability to attract talent and to keep governments from coming down on it.
And yes, it was supposed to be 2024!