27 July 2025

News

A new US AI Plan

Biden’s AI policy was explicitly based on the premise that Washington had ‘missed’ social media, that this was bad, and that it therefore had to get hold of AI, keep policymakers in control and contain a long list of vague and speculative ‘harms’. In practice, that meant a pretty basic hostility to open source and to startup involvement, and by default it meant handing the AI agenda to the incumbent ‘big tech’ companies. (The EU’s AI Act has much the same premises and consequences.)

Trump, on the other hand, managed to hire a bunch of AI advisors who come from Silicon Valley startups and VCs, and so now we have a new plan that swings in the opposite direction: remove ‘unnecessary’ restrictions, push for faster adoption within the US government, push for open source and open datasets, and create incentives for infrastructure investment. The diplomatic side has more continuity, aiming to push US influence (as much as is left after six months of Trump) and to try to restrict China. LINK

Leaky Sanctions

Part of this strategy is trying to slow down Chinese AI: we know that sanctions on Nvidia GPUs have been leaky, but this week the FT reported that smuggling alone added up to at least $1bn worth of compute (and that’s not counting Nvidia chips in Malaysian datacentres training models for Chinese customers). LINK

Intel pain continues

Conversely, see Intel’s turnaround news: the new CEO is in the pain stage, cutting 25k people, scaling back its global presence and investment and, most painfully, admitting that it has no significant customers for its entry into the foundry business (a strategy it only began in 2021, and into which it has put $25bn in each of the last two years). This is painful in its own right, but if the US sees AI as a geopolitical issue, then the ability to make its own cutting-edge chips outside of greater China must be part of that. LINK

China’s OSS AI boom

Meta’s Llama pioneered open-source LLMs, but nine of the top ten open-source models are now Chinese. LINK

Project Stargate?

Earlier this year, OpenAI and SoftBank announced plans to spend $500bn building datacentres - $100bn ‘immediately’. It probably shouldn’t be a surprise that only one small datacentre is actually getting built this year. The parallel infra partnership with Oracle seems to be going a lot better. LINK

The week in AI

OpenAI has started teasing ‘GPT-5’, to be released sometime in August. LINK

Google is pushing beyond AI Overviews at the top of search results and is experimenting with a feature that organises and summarises all of the search results. Why would you use a third-party LLM to search the web and summarise the results if Google has the best search engine (true) and can (maybe) summarise them better itself? LINK

Amazon bought a small company behind an AI wearable that listens to everything you say. LINK

Meanwhile, Meta’s wrist-based gesture control tech is getting better. LINK

A leaked memo from Anthropic has the founder Dario Amodei deciding to accept investment from nation-states - i.e. Persian Gulf monarchies - despite previously ruling it out on human rights grounds. This is a race for capital, and if others are willing to take the money then he can’t afford to fall behind. LINK

Things that feel bubbly - JP Morgan Private Bank is marketing an AI Plan for HNWIs. LINK

Alt media

The former NY Times journalist Bari Weiss, who went indie on an anti-woke platform three years ago, is apparently now looking for a $200m valuation. If LLMs subsume generic content, then brand and opinion (and paywalls) matter much more? LINK

The UK does age verification

Amongst other things, the UK’s Online Safety Act, which has been in gestation for a decade or so, requires websites with adult content to verify their users’ ages starting this week. It’s much harder to do that safely and securely than to pass a law requiring it, and one early result is that searches for VPNs have spiked in the UK. LINK, SEARCH

Ideas

Topical given this week’s US AI plan - a RAND analysis of China’s AI industrial policy. LINK

The FT digs into the question of whether AI is killing graduate jobs. TLDR: grad jobs are down, but probably not because of that, yet. LINK

A nerdy tech blog post from Netflix on its streaming infrastructure. LINK

Apparently, Amazon’s ‘Project Starfish’ is using LLMs to correct and improve the information given to buyers in Marketplace listings. LINK

McKinsey’s annual report on frontier tech. LINK

Outside interests

RIP Tom Lehrer. LINK

Data

Pew: about 34% of Americans have used ChatGPT at least once (no data on active use). LINK

Pew also found that people using Google click on fewer links if they get an ‘AI Overview’. This seems like a truism. LINK

Mistral released some data on its energy use. This is a hot topic for LLM deployment, but Mistral isn’t necessarily representative, and this is a moving target as the tech evolves on one side while efficiency increases on the other. LINK

Substack surveyed its writers on how they use AI. LINK

Pugpig, whose platform has mostly won the market for newspaper / magazine apps, released annual data from its customers. LINK

Column

Ten AI questions 

Trying to understand generative AI still feels a lot like trying to understand the internet in about 1995 - it’s clear that this is going to change everything, but not tomorrow. Everything is amazing, but most things don’t work quite as well as people want to believe. It’s very unclear what the mature state of the products, ecosystems, companies and value chains will look like. And it’s very unclear how this will affect everyone outside of the tech industry. As the joke goes, newspapers thought the internet would be great because they could reduce their printing budgets. 

This kind of uncertainty means that many questions that seem important today will end up being the wrong questions entirely. I always remember how often people in 2000 asked ‘what’s the killer app for 3G?’, when it turned out that the killer app for having the internet in your pocket was, well, having the internet in your pocket.

The big difference between LLMs and all previous platform shifts, though, is that we really don’t know what the raw technology itself will be able to do. In 1995 we knew how fast networks and PCs could get in the next year or two, and in 2007 we knew that the iPhone wasn’t going to have retina projection or roll up, but today we really don’t know how good generative AI will get in 12 or 24 months - indeed, that’s the question that determines everything else: how far will this keep scaling? But that aside, we’re getting to the point where we have some questions beyond the AI itself. Some things I’m wondering this summer:

How does web publishing work if LLMs can synthesise what you write? What’s ‘SEO for LLMs’? How much will publishers be able to block the new thing? Does this mean we move much more to brand and opinion, and to paywalls? What can’t be synthesised?

If an LLM can replace your junior staff, how many graduates do you hire, and how do people learn the job? Or does this work more like Excel, which resulted in far more analysts doing far more analysis? What careers and business models rely on leverage and aggregation of talent that will go away, or that can be automated in different ways?

For a consumer or a marketer, the internet has meant infinite product, infinite media, and infinite retail space. How does anyone know what to buy? How would you know that the thing you’d love even existed? And now, what happens if an LLM can synthesise some sense of what you’re interested in, and in parallel has ‘seen’ everything that there is? Does this unlock totally new kinds of product recommendation and discovery? How much does your phone really know about you?

And pulling some of these threads together: YouTube never knew what was in the video, Instagram didn’t know what was in the picture, and Amazon didn’t know what the SKU actually was. They each had metadata written by people, and they could look at the social graph around the thing (“people who liked this liked that”), but they couldn’t look at the thing itself. How far do LLMs change this - how far do they mean that YouTube can watch all the videos and know what they are and why people watched them, not just which upload they watched, and that Amazon can know what people bought, not just which SKU they bought? And how does that change what we buy, and what gets created?
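To make that metadata-versus-content distinction concrete, here is a minimal, purely illustrative sketch - none of it comes from the newsletter: the catalogue entries, the user’s interest and the crude word-overlap ‘embedding’ are all invented, and a real system would use an LLM or embedding model rather than the toy embed() function. The point it shows is that a match the seller-written metadata can never surface can fall out naturally once the system has a description of the thing itself.

```python
# Toy illustration (hypothetical data): matching on seller-written metadata
# versus matching on a machine-generated description of the content itself.
from collections import Counter

STOPWORDS = {"a", "an", "the", "for", "of", "and", "who", "to"}

def embed(text: str) -> Counter:
    """Crude stand-in for an LLM/embedding model: a bag of lower-cased words."""
    words = (w.strip(",.").lower() for w in text.split())
    return Counter(w for w in words if w and w not in STOPWORDS)

def similarity(a: Counter, b: Counter) -> float:
    """Word-overlap score between two bags of words (0 = nothing shared)."""
    shared = sum((a & b).values())
    total = sum((a | b).values())
    return shared / total if total else 0.0

# 'Old world': the platform only has seller-written metadata
# (plus a social graph, not modelled here).
metadata = {
    "sku_1001": "blue ceramic mug 350ml",
    "sku_1002": "running shoes size 9",
}

# 'New world' (hypothetical): an LLM has 'looked at the thing itself' and
# written a richer description of what the product actually is and is for.
content = {
    "sku_1001": "hand-glazed stoneware coffee mug, keeps drinks warm, gift-friendly",
    "sku_1002": "cushioned road-running trainers for long-distance training",
}

interest = embed("a present for someone who loves slow coffee mornings")

for sku in metadata:
    print(
        sku,
        f"metadata match: {similarity(interest, embed(metadata[sku])):.2f}",
        f"content match: {similarity(interest, embed(content[sku])):.2f}",
    )
# Only the content description surfaces the mug (via the shared word 'coffee');
# the metadata alone scores zero for both items.
```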

Benedict Evans