8 June 2025
News
OpenAI moves up the stack
OpenAI added tools to connect to Google Docs, Teams and a few other third-party services, including the ability to record, take notes and summarise Teams calls. This is a bunch of startups’ entire product (remember ‘thin GPT wrapper’?) - but of course, when you build on a platform you should always be building at right angles to what the platform itself provides (remember ‘why won’t Google do this?’). LINK
Conversely, Anthropic caused a fuss by cutting access to its models for the coding tool Windsurf, because of rumours that OpenAI might be buying it. This is probably not the best way to persuade API buyers that you’re a reliable partner. LINK
Advertising automation
Mark Zuckerberg has been talking about AI ad-creation for a while now, and the WSJ says Meta is now working on a fully automated creation system, in which many variants of ads would be dynamically generated, complete with all the creative, and tested and refined in real time. As Zuck said, tell us the aim and give us a budget and we’ll do the rest. This seems a little simplistic about the aims of marketers, but Meta would certainly like to lower the barriers to entry and optimisation for its SME advertisers, which make up about half of revenue. LINK
Meanwhile, WPP has launched its own model for generating segmentation, creative and media planning (people in advertising talk a lot about the scope to automate all of the media buying, which is still very manual). LINK
Data licensing
Bloomberg reports that three major labels are in talks to license work to Udio and Suno, which can generate new music ‘in the style of…’ This is an interesting point on a matrix of copyright issues vis-à-vis training data: these companies’ outputs are not piracy per se, but they are competitive with and substitutional for the training data, as opposed to (say) using an LLM trained on novels to automate accounting systems. LINK
Conversely, Reddit is suing Anthropic for scraping its forums for general and specific training data despite being told not to. LINK
And third, as part of the NY Times lawsuit against OpenAI, the NYT speculated that people might be getting verbatim chunks of its news stories in responses to ChatGPT prompts, so it got a judge to order OpenAI to retain all logs of all users’ outputs, which seems like a huge overreach and potential privacy problem. LINK
The week in AI
The FT reports that Apple plans to use Alibaba’s models for ‘Apple Intelligence’ in China, but it’s stuck in the approval process due to Trump’s trade wars. There’s a narrative here that Apple is losing share in China to local Android brands, which are shipping a blizzard of new AI features. LINK
Meta, like Microsoft a while ago, has signed a deal to buy electricity for data centres from a nuclear power plant. LINK
Unsurprisingly: Amazon is testing humanoid robots for delivery. LINK
Ideas
This week’s viral AI research paper comes from a team at Apple, arguing that the new wave of ‘reasoning’ models, or ‘LRMs’, can be made to collapse in predictable ways under test conditions, indicating that they’re not actually ‘reasoning’ at all. This is of course the general question about whether LLMs can become general intelligence: are they actually ‘intelligent’ (whatever that word means) or do they just look like it, and if they give good enough results, does it matter? LINK
Steven Sinofsky on the paper above: don’t anthropomorphise LLMs. LINK
Another interesting research paper, trying to calculate precisely how much of the training data is explicitly encoded inside an LLM. LINK
The ‘big four’ accounting firms all want a business auditing models and products. LINK
North Korea hacks the US with the help of witting or unwitting Americans, just trying to get by. LINK
A fascinating little New York Times profile of a Twitter politics troll. He gets a cut of revenue from Twitter, but not actually that much - about $160k since 2023 despite being prominent enough to get an invitation to the White House. LINK
Hailey Bieber (me neither) sold her cosmetics brand Rhode to Elf this week for $1bn. That sounds like a headline from the great D2C boom of the late 2010s, but it’s still possible. LINK
Reflecting the above, Bain’s annual report on new D2C brands. LINK
How Expedia is talking about LLMs replacing search. LINK
Outside interests
RIP Bill Atkinson. LINK
Demis Hassabis and Darren Aronofsky discuss AI film-making tools. LINK
Data
Epoch AI released a dataset on AI training… what do we call them? Datacentres? Factories? LINK
MrBeast now has 400m subscribers. LINK
Unsurprisingly, it looks like Temu’s US usage has halved since the tariff changes. LINK
Column
Does AI kill Apple?
Ten years ago, an idea went around the tech industry that machine learning (which we then called ‘AI’) was an existential threat to Apple. This was clearly the New Thing; it depended on data, and Apple deliberately didn’t collect user data (‘privacy’). Apple was bad at services. Apple’s culture of secrecy would make it hard to hire researchers. Google would use machine learning to turn Android into an iPhone-killer.
That isn’t what happened. It turned out that machine learning was a technology, not a product: Apple used it to build new features, without needing to collect user data (or rather, by leaving user data on your device), and meanwhile everyone from Google to Snap to TikTok used machine learning to build new things that you ran on your iPhone. Indeed, a lot of machine learning features were driven by the camera, which meant a high-end phone, which mostly meant an iPhone (especially in the USA).
Today LLMs bring the same kinds of worry for Apple, but with a much sharper focus. Machine learning was never really a product that a consumer would use (any more than SQL was) - it had to be wrapped in a product. But there is a very widespread view that LLMs themselves are a product, and can replace large chunks of existing software use-cases and create lots of new ones as well. If you can ask ChatGPT to book you a trip, get you an Uber from the airport, and order dinner delivered for when you get home, then the app ecosystem that drives thousand-dollar hardware sales at 40% margins faces new questions. Meanwhile, of course, Apple has dropped the ball on LLMs, badly. Jony Ive said the iPhone is a ‘legacy product’ and wants to create a new device category with LLMs. And regulators are trying to force open insertion points on the iPhone for third-party services, allowing competition that wasn’t possible before.
Now there’s another historical comparison: Microsoft. When the consumer internet took off, 30 years ago, software development moved from Win32 to the web, and it didn’t use Microsoft platforms to do that. Microsoft tried some development tools that didn’t win in the market, and it built a bunch of web properties that didn’t give it strategic leverage (who today remembers that Microsoft created Expedia?). Microsoft did manage to crowbar its way into dominance in web browsers for a while, but it turned out that web browsers weren’t the key point of leverage (at least, not then and not for Microsoft). Microsoft failed at the web. But you needed a computer to use the web: Apple was tiny and consumer Linux never happened, so that meant a Windows PC. The web meant that Microsoft lost its dominance of tech, but it also meant that Microsoft became ten times bigger.
This gets us to a couple of scenarios. The base case, perhaps, is that this really does play out like machine learning: LLMs drive new apps, and features in existing apps, and features for Apple, and maybe other companies’ apps are a bit better sometimes, but not enough to matter (how many people switched to Android to get native Google Photos, say?). And meanwhile, Apple’s hardware design team shouldn’t have a problem ensuring that iPhones still have the best camera, screen and battery life - indeed they can also have the best edge compute to run local models.
The problem with this, ironically, is exactly what Apple proposed last year: a very clear vision of how an LLM assistant, with all the context that comes from seeing everything you do on your phone, could change what a smartphone means. That vision gives an alternative device a different basis to compete with Apple, and if Apple can’t ship it and Google or OpenAI can, then Apple would have a new kind of problem.
On the other hand, Apple may have radically over-promised in its ‘Siri 2’ demo (which turned out to be a mock-up), but no one else has that working yet either. Apple proposed a hybrid cloud/on-device, agentic, tool-using system that privately indexes most of your online activity and can seamlessly integrate with third-party apps, and that’s ready to be deployed, mostly error-free and with no prompt injection attacks, to over a billion people who’ve never heard of ‘hallucinations’. Plenty of other people have done concepts and demos of that too, but no one has shipped it. Broadening that point, if you live in tech and use five LLMs every day, it’s really important to understand that most people don’t, and most people who do use an LLM only pick it up once every week or two. Apple might lose a lot of share to the vision or visions I outlined above on a five-year view, but that vision isn’t going to drive a lot of switching to Android in 2025. Half the cool new Google stuff won’t work on most Androids anyway, and half the rest works on iPhones too.
The broader point, perhaps, is whether you think Apple itself is drifting. As I wrote when the delay was announced, the screw-up with Siri 2 is not that it doesn’t work yet but that they went on stage and announced it without realising it wasn’t working. A while ago I saw Apple described as being run like a family business - Uncle Eddie is off spending money in Hollywood and running Music into the ground, but Cousin John couldn’t get budget for GPUs. Why on earth did it ship the Vision Pro? Why has it been so obstinate over app store commissions? Why is Apple News doing so well… wait, sorry, that’s the wrong narrative. And there are always narratives, and you can find facts to fit them.
The more useful point, I think, would be to say that it’s still very early and still very unclear how any of this is going to work. We don’t actually know what the right interaction models and product strategies are for LLMs, and we never do at the beginning (half of Silicon Valley thought Android would crush the iPhone because it was ‘open’), and none of this exactly works yet. So Apple does have time. It does have to get this right, whatever it announces tomorrow, just as Google has to get this right and OpenAI does not, because Google and Apple have existing products with pre-existing expectations and they can’t break them. It might be that the real pivot point is in glasses: that’s a product that really might be an entirely new hundred-billion-dollar category that could pull sales away from iPhones, and really does have to be lit up by AI from the ground up, in a way that smartphones do not. But then we’re talking about a piece of hardware that has to be beautiful, power-efficient and private - almost like a watch. What’s the best company to make that?