30 November 2025

News

Google selling TPUs?

The WSJ and The Information both reported that Google is looking at selling its TPU AI accelerator chips to other companies, including Meta. This would be a big deal if true, given how far Nvidia is ahead of any publicly available competitors: TPUs are apparently very good, but only Google has access. After that, things get complex: for example, how much do other customers rely on Nvidia’s CUDA software layer as well as the hardware itself, and how much are TPUs tightly linked to Google’s systems? Ask a DC analyst. INFORMATION, WSJ

Chat Shopping

Following Google, OpenAI launched a shopping assistant experience inside its chatbot. Meanwhile, Amazon is expanding the third parties that it blocks from doing this.

There are a lot of overlapping questions here. How much can such a system remove and clarify product complexity, and do that better than Google? This is how people in Silicon Valley like to shop, but how well does that map against how other people shop? How does that offset losing the sophistication (which is complexity!) of the retailers’ own tools, UX, and recommendations? How much does the model really know you, and know you better than Amazon? And on the other side, which retailers will let you do this, because they want the traffic, and which retailers will block you because they want to own the experience and the customer? LINK

Amazon does US AI

Amazon announced a deal worth up to $50bn to build dedicated compute infrastructure for the US government. LINK

AI accounting

Michael Burry, known for shorting US mortgage bonds in the GFC, has got attention in the last week or two by claiming (amongst other things) that AI companies are inflating earnings by depreciating their data centres too slowly. I don’t do share prices anymore, but IMO this is a second derivative question, where the first derivative is what the useful life of AI accelerators (as sold by Nvidia) will be, and the primary question is the real steady-state capex needs for AI, for both new capacity and replacement capacity. 
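To see why the depreciation schedule matters, here is a minimal sketch of straight-line depreciation with purely hypothetical numbers (not any company’s actual figures): stretching the assumed useful life of the same hardware halves the annual charge, and the difference flows straight into reported operating profit.

```python
# Illustrative only: hypothetical figures, not any company's actual accounts.
def annual_depreciation(cost_bn: float, useful_life_years: float) -> float:
    """Annual straight-line depreciation charge, in $bn: cost / useful life."""
    return cost_bn / useful_life_years

capex_bn = 10.0  # hypothetical spend on AI accelerators

# If the chips are worn out or obsolete in 3 years vs. an assumed 6-year life:
charge_3yr = annual_depreciation(capex_bn, 3)  # ~$3.33bn per year
charge_6yr = annual_depreciation(capex_bn, 6)  # ~$1.67bn per year

# The longer assumed life cuts the annual charge, and so lifts reported
# profit, by the difference between the two schedules.
print(f"3-year life: ${charge_3yr:.2f}bn/yr")
print(f"6-year life: ${charge_6yr:.2f}bn/yr")
print(f"annual earnings uplift from the longer life: ${charge_3yr - charge_6yr:.2f}bn")
```

This is Burry’s mechanical point; the bigger unknown, as below, is what the real useful life and steady-state replacement capex actually are.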

The problem, for me, is that we don’t know how much more compute-efficient the models will get at achieving a given result, nor how much more compute-intensive new use-cases and applications (agentic, video) will be, nor how many people will use each of them, or how much. This is all very like trying to predict bandwidth needs in 1998 or 2000 when you don’t know if YouTube or Spotify will exist, never mind how much bandwidth they’ll need. And you don’t know the revenue either! In that context, worrying about depreciation policies seems like displacement: aren’t you looking at small variables you can analyse instead of much bigger ones you can’t?

Also, Burry closed his fund and started a $39/month Substack. LINK

Conversely, Amazon increased its depreciation charges. Stephen Clapham has a good write-up of the broader questions. LINK

Ideas

Billboard saw a copy of a fundraising deck from the AI music generator company Suno. It’s creating the equivalent of Spotify’s entire catalogue every two weeks. LINK

The Verge did a nice write-up of Hoto and Fanttik, the two hot new Chinese tool brands. LINK

Following last week’s reveal that a lot of deliberately divisive MAGA accounts on Twitter are not actually based in the USA, 404 points out that a lot of them are entrepreneurs chasing revenue shares. LINK

The FT points out that Oracle, Softbank, and Coreweave combined are on track to borrow over $100bn to build infrastructure for OpenAI contracts. Other People’s Balance Sheets is the new Other People’s Money. LINK

Interesting case-study of someone using AI to generate and sell fake articles to publications. The internet made content distribution ‘free’, and that had all sorts of unexpected consequences - now AI makes content creation ‘free’. LINK

This 90-minute Dwarkesh Patel interview with Ilya Sutskever (OpenAI co-founder, now SSI) got a lot of pick-up. I know I should say it’s fascinating, but I got 25 minutes in and lost the will to live. Sooo (Long. Pause.) Sloooow. Maybe I’ll try again at 3x speed. LINK

Conversely, this Ari Emanuel interview on entertainment, live events, sports, and AI is very good. He talks fast. LINK

Detailed case study from Booking.com on using agents in their guest messaging system. LINK

Laurene Powell Jobs asks Sam Altman and Jony Ive “what can you tell us about The Thing?” Not much, except it will involve ‘less distraction’. LINK

Statistics about tech energy and water consumption have a long and ignoble history of people creating their own statistics based on wild extrapolations of poor data, or just getting the units wrong, and ‘Empire of AI’, a bestseller focusing on water consumption, appears to have based a central argument on numbers that are wrong by several orders of magnitude. Of course, the companies themselves disclose so little that it’s hard to get this stuff right - the big platform companies do disclose primary data, but that tells us little about future AI use. LINK

ChatGPT’s public launch was three years ago today. LINK

Outside interests

Alan Yentob’s film for the BBC about Sir Tom Stoppard. LINK

UK Government report on the regulation and costs of building nuclear power plants. Read the section on fish protection at the top of page 68: £700m to save the lives of 0.083 salmon per year. LINK

For the friend with everything, a 3D-printed, light-up model of the Chernobyl plant. LINK

Tap-to-pay leaves street vendors and the homeless behind. LINK

Data

AlixPartners 2026 Data Centre outlook. Power and Water. LINK

Bain survey data breaking down generative AI deployment by sector and, crucially, splitting pilots from production. LINK

An analysis of LLM downloads arguing that China has overtaken US models. Not surprising. LINK

A fairly good Brookings Institution US survey on Generative AI usage. LINK

Conversely, the US Census does a bi-weekly business survey (BTOS), and since 2023 it’s asked about AI use. Since this is national, authoritative, and frequent, it gets reported a lot, and in the last couple of periods, it’s flat-lined. However, the methodology means this data has zero value. 

The definition of ‘AI’ in the question is “Examples of AI: machine learning, natural language processing, virtual agents, voice recognition, etc.”, which does not distinguish generative AI from systems built 10 or even 20 years ago and could cover almost anything, while the question asks only whether you used it ‘in producing goods or services’. The definition is so broad and open-ended that it will massively overcount, while limiting it to ‘producing goods and services’ means that use for, say, marketing or fraud prevention would be excluded. This is not helpful. LINK

Column

Agents, AI apps and the widget fallacy

Every five or ten years, somebody tries widgets again. Instead of having to dive into lots of different applications, with different interfaces and experiences, why not just have little units that show you the stuff you really need, in one standard UI? 

If all you want to do is see your next appointment or tomorrow's weather, this is quite useful. But if you want to do anything with the logic and data in those applications, you very quickly need to go back into the app. Those features are there for a reason, and yes, no one uses more than 75% of the features (make up your preferred percentage), but everyone uses a different set. So you open the app, or you open a website, and you forget about widgets.

Going a step back, you could suggest that for as long as we've actually had separate programs, engineers have wanted to find ways to abstract them back into a single layer. There was a moment in the early history of GUIs when people thought that the operating system would do all of this. Really, what was an app, but just a different view on data that was being stored in the file system by the OS? That also got you to things like OLE, which you have to be over a certain age to remember, when people thought that you could somehow embed... a spreadsheet or a CAD drawing or a Photoshop file into a document, and when you clicked on them, that UI would load, but the individual ‘apps’ would be very thin layers of code with the OS doing all the work. In today’s terminology, Bill Gates thought Office was ‘just a thin Windows wrapper’. 

This is also a little bit like the engineer’s fallacy that Airbnb is ‘just’ a CMS. Yes, everything from Uber to Tinder to Airbnb is ‘just’ a database. But there's an enormous amount of consideration, thought, optimisation and learning in how that UI works (which is also a problem in the concept of a ‘generative UI’). 

It seems to me that quite a lot of today’s discussion of agentic consumer experiences, and in particular OpenAI’s app platform, repeats a lot of this pattern. You can abstract away all of the complexity of these different services into a common layer! Except then you’ll want to do something that takes more than 30 seconds, and the common layer won’t handle it, and you’ll go back to the website. 

Meanwhile, you also need all those third-party services to build a widget for you (in OpenAI’s app platform), or at least not block you, for more agentic approaches (of course, it could be that ChatGPT could try to generate its own widgets dynamically in the future). They also need to invest in that experience. You can always get 20 logos on stage for launch - there are people at these companies whose job it is to get their logo into your launch event - but you need to deliver beyond that. And what you’re delivering, and the answer to the blocking question, is distribution. 

Presume for the moment that an agent, or widget or something else, can deliver a consumer experience that works. Does any third-party app or service actually want that? The trade-off is between distribution - more customers from this new agent’s user base and recommendation flows - and owning the customer and the customer experience. Anyone in that position wants to shape how a customer sees the product. They want to show options and sell the product in the best way. They also have other motives - Amazon sold over $65bn of ads in the last 12 months, all of Instacart’s profit comes from ads, and Uber wants to upsell you a black car. So, OpenAI can offer 800m WAUs, but are they buying, and do you want to give Sam Altman your destiny? Amazon doesn’t need OpenAI, at least not yet, and neither does it want to let OpenAI read all of its product listings and learn about a world of 750m or a billion SKUs. And then, of course, as we saw with Google and then Amazon, there’ll be ads, and your place at the top of the recommendations will be, um, helped by your ad spend.

As I find myself saying all of the time these days, all of this is pretty speculative. We don’t really know how any of this is going to work, but we do know two things. It’s a lot harder than it looks to abstract single-purpose products into general-purpose interfaces. And distribution comes at a cost.  

Benedict Evans