30 March 2025
News
War signals
You don’t need this newsletter to tell you that a group of Trump officials used Signal, a smartphone messaging app, to discuss plans for military action in Yemen, but it’s worth noting why this was very very stupid from a smartphone security perspective.
It’s not just that they added a journalist by mistake, but that they were using personal smartphones at all. While Signal itself is (probably) secure, if your phone has been compromised then whoever did that can read all the messages in all the apps. Intelligence agencies find ways to hack the phones of specific individuals, and companies like NSO sell those exploits as tools (this is what some of those Apple and Android security updates are about), so some or all of these people’s phones might have been sending all their chats - these and others - to another country. This is why security rules exist, and why you get sacked or court-martialled for breaking them. LINK
Google leapfrogs
I don’t call every new model release (there are dozens of them), but Google’s new version of its Gemini model is notable because it’s at the top of a lot of the benchmarks for the first time, and by a significant margin. OpenAI is still probably first amongst equals, but the top of the field is getting more and more crowded. LINK
OpenAI is back in images
OpenAI’s DALL-E models were amongst the first image generators that were interesting outside research circles, but the company fell behind the cutting edge as attention moved to Midjourney (even though Midjourney has never really got around to building a product). Now OpenAI is back, with a really good image generator that allows you to refine the previous image (no more prompt slot-machine) and is even pretty good at text. It went viral this week with people doing Studio Ghibli style transfers, accompanied by equally viral outrage at this ‘theft’ or just ‘disrespect’ (I wrote about the IP challenges of these models a while ago). Meanwhile, try taking a photo of your living room and giving it to ChatGPT, along with a photo of a table or a lamp, and asking it to insert the lamp into the room. LINK, GHIBLI
AI middleware
OpenAI announced support for MCP, a standard protocol proposed by Anthropic a while ago to make it easier for AI agents to talk to other kinds of software (Uber, Instacart, Oracle, Salesforce). This is basically middleware, and developers always want that, but there are always two basic problems that are really hard to overcome.
First, you’re trying to abstract very different and complex pieces of software into a standardised universal layer, and that creates a ‘lowest common denominator’ problem - the middleware can never support all the features that the underlying tools provide (see Steve Jobs’ ‘Thoughts on Flash’). Second, why would, say, Instacart want to become a dumb API call for someone else’s trillion-dollar company? Instacart makes all its profits from ads, Uber wants to upsell you a black car and a subscription, and Salesforce wants you to use its new LLM tools - they don’t want someone else to control their user experience and own their customer. LINK
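To make the abstraction problem concrete, here’s a minimal sketch in Python - not the real MCP SDK and not any real retailer’s API, just an invented generic tool schema and dispatcher - of how an agent-facing middleware layer flattens a rich service into a lowest-common-denominator call: anything that doesn’t fit the schema (substitutions, promotions, upsells, ads) simply disappears.

```python
# A minimal, hypothetical sketch of MCP-style middleware: a generic tool
# definition plus a dispatcher. Not the real MCP SDK and not any real
# retailer's API - the names here are invented for illustration.
from typing import Any, Callable

# A generic, JSON-Schema-style tool description an agent could discover and call.
ADD_TO_CART_TOOL = {
    "name": "add_to_cart",
    "description": "Add an item to the user's grocery cart",
    "input_schema": {
        "type": "object",
        "properties": {
            "query": {"type": "string"},      # e.g. 'organic bananas'
            "quantity": {"type": "integer"},
        },
        "required": ["query"],
    },
}

def call_tool(tool: dict[str, Any], arguments: dict[str, Any],
              handlers: dict[str, Callable[..., Any]]) -> Any:
    """Validate a generic tool call against its schema and dispatch it.

    The lowest-common-denominator problem in miniature: substitutions,
    promotions, delivery slots and ads have no place in the schema, so the
    middleware layer can never express them.
    """
    missing = [k for k in tool["input_schema"]["required"] if k not in arguments]
    if missing:
        raise ValueError(f"missing required arguments: {missing}")
    return handlers[tool["name"]](**arguments)

# A toy backend standing in for the real service.
def add_to_cart(query: str, quantity: int = 1) -> dict[str, Any]:
    return {"status": "added", "item": query, "quantity": quantity}

if __name__ == "__main__":
    result = call_tool(ADD_TO_CART_TOOL,
                       {"query": "organic bananas", "quantity": 2},
                       handlers={"add_to_cart": add_to_cart})
    print(result)  # {'status': 'added', 'item': 'organic bananas', 'quantity': 2}
```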
Xiaomi raises
Chinese OEMs won the Android market outside the USA, and as EVs disrupt the car market there’s a pretty good thesis that Chinese EV OEMs will do the same to cars, with no equivalent of Apple. And Xiaomi, one of the winning Android OEMs, has itself been working on cars for a while (see previous issues). Now it’s raising over $5bn to push harder. LINK
Cloud GPUs
CoreWeave’s IPO got out this week, but well under the initial ask, signalling, I think, how ambivalent or just nervous the market is about LLM infrastructure. Now Nvidia (which built up CoreWeave by giving it GPU allocation) may be buying a competitor for a few hundred million. LINK
The week in AI
Microsoft is still in an ambivalent position, making itself ‘the AI company’ yet still depending on OpenAI and lacking its own SOTA model family. It’s building, though, and just added ‘chain of thought’ to Copilot. LINK
Reve - the latest hot photo-realistic image generator, at least until OpenAI launched. LINK
Reddit is deploying AI translation at scale. LINK
OpenAI expects $12.7bn of revenue this year, and is raising a new round (mostly from SoftBank) that is apparently the largest private round ever. LINK
Perplexity says it wants to buy TikTok and open-source the algorithm. 🤷🏻‍♂️. LINK
‘Accent-neutralisation’ for call centres, from Krisp - things like this, plus translation and dubbing, are flattening the world. LINK
Virtual models at H&M
H&M is planning to produce AI versions of specific models, so that it can generate many different product shots for new SKUs more quickly and efficiently. Ikea was using 3D models for product shots 15 years ago: now it’s reached fashion. H&M is being careful to (at least try to) do the right thing, giving new rights to the talent and even letting them use their ‘digital twins’ for other companies, but as this scales it might be a big change to the bread-and-butter of the industry. LINK
TechCrunch goes… where?
In the 2000s TechCrunch was the new place for insider startup news. But the founder’s been gone a long time; it was sold to AOL, which is now part of Yahoo, and now Yahoo has sold it on to a PE firm. LINK
Elon discovers digital transformation
Like a lot of old, big and slow organisations (banks, airlines, tax authorities everywhere), the US social security system runs on mainframes with software dating back to the 1960s and 1970s. This means it’s very reliable, but it’s also very hard and expensive to change or improve anything. The challenge for all of these systems is that it’s even more expensive to re-platform the whole thing, and hard to get the budget for that, so they tend to drift on and on, expensively (again) tended by consultants and by IBM. Now Elon Musk’s government ‘efficiency’ posse wants to migrate it! Right now! Using AI! The trouble is, if you move slowly and carefully you’ll never do it, but if you move at Elon speed you’ll break things and a lot of people won’t get their cheques. LINK
Elon Musk restructuring
Elon Musk merged Twitter-as-was into xAI, his LLM company, paying the investors who helped him buy Twitter in xAI stock at roughly par to the purchase price - on the valuation he gives for xAI, of course. That seems like a pretty good result for them, swapping 100% of a turkey for a stake in a hot AI rocketship, though it is not clear to me how xAI will win in a world where foundation models are commodities. LINK
Ideas
The French news magazine the Nouvel Obs challenged Hervé Le Tellier, an experimental novelist and winner of the Prix Goncourt, to a writing match against ChatGPT, channelling the famous Kasparov versus IBM match of a generation ago. It won, kind of. LINK, COMMENT
Steven Sinofsky on MCP and the reasons middleware is harder than it looks. LINK
Great interview with Tony Xu on the early days of Doordash. LINK
Interesting use of OpenAI’s Deep Research to explore the thesis that TV shows get worse over time. LINK
The history publisher that got all its traffic from Google and lost half of that to AI. I’m thinking a lot about this - what happens to content that can be synthesised? What still gets click-through and why? LINK
Anthropic has new research on explainability of LLMs. LINK
A Variety interview with Ted Sarandos from Netflix, on winning streaming. LINK
Outside interests
For a 90s kid made good - you can buy Quentin Tarantino’s expired driving licence (people will collect anything). LINK
Data
Anthropic released more data on what users do with its tools. Heavily self-selected, obviously. LINK
Adobe survey data on how much consumers use generative AI for purchasing decisions. LINK
Column
AI and the death of links
As soon as ChatGPT took off, publishers started muttering about the social contract. Google indexes your content and makes money from that, and it sends you traffic and you can make money from the traffic, or try to. But if an LLM scans your content and everyone else’s, and then just gives people the answer, then you don’t get any traffic. The contract is broken: you do the work and they make the money. Most news sites have now blocked LLM crawlers in robots.txt.
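For what that blocking looks like in practice: a news site’s robots.txt just lists the AI crawlers by user-agent and disallows them. The crawler names below are the ones the operators publish; any given site’s actual list will differ.

```
# Illustrative robots.txt entries blocking AI crawlers (the exact list varies by site)
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: PerplexityBot
Disallow: /
```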
After a little more head-scratching, people started talking about ‘SEO for LLMs’ - if someone asks an LLM for recommendations in your space, where would your site and your brand be, and how might you change that? Flood the internet with copy about yourself? You could even use an LLM to generate it. (Conversely, it appears that Russia has been trying to stuff LLM training data with content pumping its own narratives.) The internet made distribution free, but not discovery; now AI makes creation free, but discovery gets even harder.
Last week, though, I spent a long time talking with a UK magazine publisher about what this means for the content that gets created, and why. I think it’s helpful to start by acknowledging that we don’t start from neutrality: why do newspaper websites have fifty ‘top ten city breaks!’ stories, and why are there a million chocolate chip cookie recipes on the internet? Because you could get traffic from Google. Now a lot of that will get abstracted and synthesised away, and there’ll be new patterns - we just don’t know what the new incentives are.
The easiest observation is that there are some things that can’t really be abstracted. As I observed writing about MCP above (and previously about things like Operator or Rabbit), the more specific and specialised an experience, the harder it is to squeeze it into a single, general-purpose UI. An LLM tells me ‘the answer’, but Booking or Instacart are about options, not answers.
This isn’t content, though. What kinds of content can’t be abstracted and generalised? The more that you care who’s saying it, how they say it, and what their brand or voice looks like, the less it can be abstracted. You didn’t read Hunter S. Thompson for advice on hotels in Las Vegas. What can’t be turned into ten bullet points by ChatGPT, because the bullet points aren’t what matters? Unless, of course, LLMs make it easier to express the same idea in a range of different tones and attitudes - what kind of authenticity would you like? Ask ChatGPT to ‘turn this generic cookie recipe into an inspiring story of family and togetherness’ (as George Burns said, ‘the key to success is sincerity. If you can fake that you’ve got it made.’).
On the other hand, I said above that discovery will now be harder for publishers, since people won’t need to click on the link, but LLMs change how search works in other ways too. Right now, the standard example that everyone in publishing uses is that the user will get a summary instead of clicking a link, but users can also ask new kinds of questions and get different kinds of links. You can now go to Walmart or Amazon and ask ‘what would be good to take on a picnic?’ - an LLM means the site can try to answer questions it couldn’t answer before, because they’re not database queries. Google was never as binary as that, and could always try to answer fuzzier kinds of questions, but I wonder now how LLMs will enable new kinds of searches and drive traffic to different kinds of content in new ways. The LLM can give you an answer to a search instead of ten blue links, but can it also enable new kinds of search and new links?
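As a sketch of the mechanics (with an invented catalogue and a stubbed-out model call, not any real retailer’s system): the LLM’s job here is to turn the fuzzy question into structured queries the site could already answer, and the links still come from the catalogue, not from the model.

```python
# Hypothetical sketch: an LLM decomposes a fuzzy question into structured
# catalogue queries, and the ordinary product search supplies the actual links.
# The catalogue, the URLs and llm_plan() are all invented for illustration.
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    category: str
    url: str

CATALOGUE = [
    Product("Picnic blanket", "outdoor", "/p/picnic-blanket"),
    Product("Insulated cool bag", "outdoor", "/p/cool-bag"),
    Product("Paper plates (50 pack)", "tableware", "/p/paper-plates"),
    Product("Office stapler", "stationery", "/p/stapler"),
]

def llm_plan(question: str) -> list[str]:
    """Stand-in for a model call that turns a fuzzy question into concrete
    search terms; a real system would prompt an LLM here."""
    # e.g. 'what would be good to take on a picnic?' ->
    return ["picnic blanket", "cool bag", "paper plates"]

def keyword_search(term: str) -> list[Product]:
    """The plain keyword search the site could always do."""
    words = term.lower().split()
    return [p for p in CATALOGUE if any(w in p.name.lower() for w in words)]

if __name__ == "__main__":
    for term in llm_plan("what would be good to take on a picnic?"):
        for product in keyword_search(term):
            print(f"{product.name}: {product.url}")
```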