24 August 2025

News

The Intel turnaround

Intel is at risk of falling off the cutting edge of Moore’s Law, and probably needs $15-25bn that it does not have to get back. This week the US government took a 10% stake for $8.9bn, in the form of grants that had already been awarded under the (Biden) CHIPS Act but not yet paid. There won’t be a board seat. 

Broader context: Samsung is also at risk of falling off (though it does have money), and China’s SIMC is of course trying to get there - so there’s a real risk that either TSMC will have a monopoly, or that it will share one with China.  Either outcome would be bad both for the tech industry (most obviously, what happens to prices?) and for the strategic security of the USA. After that, nothing is clear - most of all, does Intel still give up on SOTA, and if not, how much more money does it need, where from, and how does a government stake change that? LINK

OpenAI India

OpenAI is launching a cut-price, cut-down version of ChatGPT5 in India for ₹399/month (~$4.5). There are 6-700m smartphone users in India, and every consumer tech company wants to be there. LINK

Prompt injection

Brave, the browser company, discovered that the AI sidebar in Perplexity’s Comet browser is vulnerable to prompt injection, which means that a web page you visit could contain text that tells Comet to use Gmail to pass on your bank details, and it would do that. The indeterministic nature of LLMs makes prompt injection a fascinating problem that’s surprisingly hard to fix. LINK

FreeGPT for the Feds

Google and OpenAI are both now giving employees of US federal agencies access for the next year for a nominal fee (whether those employees have a PC or internet access is another problem). There are lots of land-grabs going on right now - see also the India story. GOOGLE, OPENAI

Apple Gemini?

Apple is apparently looking at using some form of Google’s Gemini. This might mean actually powering new features (i.e. the much-delayed rebuild of Siri), or might just be an alternative to the existing ChatGPT integration, which works much like having Google as default search in Safari. It’s very likely that this year a judge will order Google to stop paying Apple a ~$20bn revenue share to be search default: it would be hilarious if that was replaced by a $20bn deal to be Apple’s chatbot default. LINK

Meta AI

Meta is still working out the new AI strategy - after spending hundreds of millions of dollars (or more?) to poach researchers from competitors, it’s also now doing a licensing deal with Midjourney. LINK

And, it’s also apparently done a deal to use Google Cloud, worth $10bn over six years. LINK

Amazon blocking bots

Amazon is now blocking AI scraping from Meta, Google and other LLM systems, which means they won’t send purchasing traffic to Amazon (or will send less), but also means they won’t have access to all of that SKU-level data and reviews about products. LINK

The UK backs down on Apple back doors 

Earlier this year it was reported that the UK was demanding (in secret) that Apple provide a back door to encrypted user data. This is a bad idea in general, but bizarrely it also emerged that the UK was trying to get this not just for users in the UK, where at least there is jurisdiction, but for all Apple customers globally. Apple reacted by pulling the product from the UK and going to court, but this week Trump’s director of national intelligence said that the UK has backed down.

I have far more sympathy than most people in tech for the reasons that intelligence agencies and law enforcement would like to read terrible messages from terrible people, but it was always untenable for the UK to try to order Apple to let UK spies read the messages of Americans in America - no US administration would be relaxed about that. LINK

Ideas

How screwed is Intel, why, and what does it need? Probably $15-25bn to get back onto Moore’s Law. LINK

A paper trying to work out how LLMs make product recommendations. LINK

Netflix released guidance for its suppliers on how it does and does not want them to use generative AI. LINK

The New Scientist used a FOI request to get access to the current UK tech minister’s ChatGPT history. LINK

Bloomberg on how Oracle’s cloud became more than a joke. LINK

A documentary on the industry smuggling Nvidia chips into China. LINK

John Collison, co-founder of Stripe, in a video podcast with Dario Amodei, founder of Anthropic. Interesting in its own right, but it also occurs to me that quite a lot of the insider conversations that happened on Twitter a decade ago are now happening in video podcasts instead, where you can’t have interesting input from new people, but you also don’t have people screaming at you. (Much of the rest of those conversations are now on private text chats). LINK

Outside interests

The designer of the James Bond 007 logo dies, aged 103. LINK

Nine ways to ‘fix’ the best ad of the 20th century. LINK

The Library of Things. LINK

Shelter. LINK

Data

Google released a detailed study of its energy use for LLM inference - running the models (as opposed to training). This is a topic where a lot of people are vague, or indeed just make up numbers, so it’s good to have something solid. Key points: Gemini is currently averaging about 0.24 Wh of electricity per text query (equivalent to about 9 seconds of running a TV set), and that’s reduced by 33x (!) in the last 12 months. LINK

Databricks is the latest $100bn private company. This is not a healthy long-term approach to funding. LINK

OpenAI now has $1bn monthly revenue. LINK

30% of US ad spend is now made directly by the advertiser, up from less than 10% in 2019. LINK

Column

No column this week - here's one for the archive I still think about a lot

When efficiency is bad 

A long time ago, I heard a story about OpenTable trying to launch in Spain. Apparently, they went to a whole bunch of family-owned restaurants and said, “Would you like a system that will give you a perfectly accurate record of all of your cash flow?” and the restaurants said, “Thank you very much, but no, we wouldn’t.” 

I had a similar experience a few years ago talking to somebody working on NFTs, who proposed that the fine art market would love to have a system that gave them perfect price transparency and liquidity: you don’t have to know very much about the art world to understand that nobody in that market wants any of that at all. 

I’m thinking about these kinds of questions now as I look at large language models and wonder what industries might find them most useful. Quite a few economists have done top-down analysis looking at things like labour productivity to try and identify sectors that might have a lot of boring grunt work that should be easy to automate, and hence have a lot of potential cost savings.  This is all very well as far as it goes (though rather too high-level for my taste), but you could also ask the same question from the opposite direction - what industries might suffer from cost savings? 

Newspapers were a good example of this in the past: some of them looked at the Internet and thought that this would be great because it would cut their printing and distribution costs, and didn’t realise that the printing and distribution was the barrier to entry that protected their business. So, what industries today are protected by inefficiency? 

One place to look for this now might be in health insurance in the USA, which has created many layers of complex and time-consuming paperwork, and there are already stories about patients and doctors using ChatGPT to speed up paperwork (with the appropriate caveats about the need to check the work). To generalise this, it might be useful to look not just at labour productivity but at regulatory burdens - how many rules have been applied to this industry, how many laws have to be checked and boxes ticked, what does that do to competition, and how much of that compliance could now be automated? 

Another angle is professional services that invoice on a cost-plus basis. If a marketing company or law firm can do a job in a week with five people instead of a month with ten, it might still be able to charge the same percentage markup on those people’s time, but the absolute value of that will be a lot less. 

On the other hand, internet publishing businesses are protected by the fact that you have to pay people to write things, and now LLMs will churn out generic ‘what anyone would probably say’ content at massive scale. This is the so-called ‘AI slop’ (though it seems to me that the internet has always had plenty of ‘slop’), which also makes people worry about model collapse, and will certainly make life harder for Google. And at the extreme, deep fake nudes are easier and more accessible than Photoshop, just as Photoshop was easier and more accessible than the darkroom. 

Of course, every time someone uses an LLM to write a letter to an insurance company, that insurance company will be using an LLM to read it and write a much longer reply. There was a joke last year that half of LLMs would be turning three bullet points into an email and the other half would be summarising an email into three bullet points. So we might end up with a lot of computationally expensive models talking to each other, and burning electricity instead of time. Maybe that’s still progress? 

Benedict Evans