16 March 2025
News
Google does Robots
It seems like AI accelerating robotics is one of the stories of 2025, and this week Google announced that it’s using its Gemini models for robots too. The cool part of this is humanoid (or quadruped) robots that don’t fall over (where going to electric from hydraulic is also a big change), but I suspect the idea of more old-fashioned form factors that can now be much more flexible might open up more use cases? Another angle, meanwhile, is the idea that some kind of physical embodiment (or at least learning from the physical world) will be needed to take AI itself forward (on the theory that learning from text will top-out). LINK
Better, faster, cheaper...
Google released version 3 of its ‘Gemma’ open source models, claiming that it can outperform OpenAI’s o3-mini while running only on a single GPU. I don't write up every model release (that would barely be possible), but there seem to be two directions of travel here - ‘everything will run on city-sized AI factories’ and ‘this stuff will run on your phone. I don’t believe both of these will happen. LINK
White smoke from Intel
Intel has chosen a new CEO to lead a turn-around: Lip-Bu Tan, a semis veteran and VC who turned around Cadence. He says he plans for Intel to continue as a foundry. Meanwhile, he will also have to work out what Trump is doing with the CHIPS Act. LINK, CHIPS
The week in AI
Sam Altman says OpenAI has trained a model that can write better fiction, as part of the general push towards ‘creativity’ (see last week’s column). And yet, his example seems so predictable. LINK
As it rebalances its infrastructure away from Microsoft, OpenAI has signed a $11.9bn five year contact with CoreWeave ahead of its IPO, also taking a $350m stake. Is this a tech company or an SPV? LINK
As Meta tries to catch up to the hyperscalers on AI datacenter infrastructure, it is (finally) testing its own AI silicon. LINK
The NY Times put a number on Google’s stake in Anthropic: 14%, as per a lawsuit, with a cap of 15% and no votes or board seat. LINK
OpenAI’s latest lobbying push
OpenAI is pivoting from asking Biden to ban open source to asking Trump to liberalise copyrights for training data and for the US to treat AI as a central part of its geopolitical competition with China. Tell me the audience and I will tell you the argument for less competition. (It also called the company ‘state-controlled’, which annoyed some people but is sadly a truism in describing any Chinese company that attracts the state’s attention.) LINK
DeepSeek’s travel bans
Conversely, DeepSeek is apparently now banning its researchers for leaving the country, which is ironic given that the loudest US voices were claiming that DeepSeek did little more than rip off Llama and OpenAI. The truth, I think, is that a technology can be both a generic commodity in a rush to the bottom and also involve a lot of capex and a lot of very hard engineering (think of screens, say, or cloud hosting). LINK
Trump Antitrust
There was a brief moment when some people argued that Trump’s election meant that regulatory pressure on ‘big tech’ would be relieved. Nope: MAGA republicans were just as suspicious of big tech companies, though not always for the same reasons. Now the FTC is pushing forward with a wide investigation into Microsoft. (Ironically, the place where Trump’s election might lead to weaker tech regulation is the EU.) LINK
Pokemon Go sells
Niantic Labs sold its games division, including Pokemon Go, to Scopely, a games buyout house backed by the Saudi sovereign wealth fund. The underlying geospatial tech will be spun out into a new company (which presumably has big hopes for AR glasses). Remember when Angry Birds was the hot mobile games thing? LINK
Meta’s fact-checking replacement launches
Earlier this year Meta gave up on its manual, centralised fact-checking programme and said it would replace it with a bottom-up ‘community notes’ system in which users write and vote on corrections (a similar system is currently in use with limited success on Twitter). A lot of people got very upset about this decision, but the old system was really just theatre: it never checked more than a dozen or two posts a day, because after all, how could it? It was completely unscalable. The new system is theoretically scalable, but comes with its own problems, not least how you balance the voting system. We will see. LINK
Ideas
John Gruber, a well-connected and friendly-but-critical Apple blogger, gave Apple both barrels over its ‘vapourware’ announcement of a new Siri. LINK
Bloomberg has a detailed breakdown of MrBeast. The food business had sales of $250m last year and profits of $20m, where the actual videos had the same revenue and lost $80m. LINK
Everyone is selling AI agents but no-one has the same defintion. LINK
The AI talent wars mean million dollar stock awards and personal attention from Mark Zuckerberg. LINK
Why AI still struggles to extract data from PDFs. LINK
A good case study of why legacy services with modern front-ends still have to go down overnight. LINK
The origin of the term ‘prompt injection’ - a crucial concept to understand for any public LLM service and one possible cause of Apple’s delays. LINK
Outside interests
Christies is selling the art collection of Leonard Riggio, creator of Barnes & Noble. LINK
Data
Spotify’s annual report on music spending: $10bn paid out in 2024 and $62bn since it was founded. Key message: ‘look how much money we pay to rights holders: if you’re unhappy about how it’s distributed that’s not under our control!” LINK
McKinsey surveyed enterprise AI adoption: what teams, what methods, what delays? LINK
Column
New questions
Pretty much as soon as ChatGPT broke into the tech industry, a bit over two years ago, it was clear that there was a set of central, interlocked questions that determined how this would play out. How long would scaling work, yes, but also, compute needs, barriers to entry and network effects, access to chips (and Nvidia’s moat), error rates, China’s ability to catch up, the role of startups, the scope and availability of training data… you can probably make your own list, but all of these got you to a range of possible outcomes.
At one end, if this keeps scaling and keeps getting bigger and more expensive, and the models get more and more capable, then we get something that looks like a handful of giant world computers with dominant market power, that can run most of all of what we do with software today (in a sense this is also the ‘doomer’ scenario). At the other, recall the old line that AI is whatever doesn’t work yet, because once it works it’s just software. Machine learning is ‘just software’ now, and it may be that LLMs end up as just another commodity building block on AWS or your iPhone - that this will end up like databases or spreadsheets.
What’s happened since then? Well, both a huge amount and almost nothing: we don't have answers to many of those questions.
It now seems very clear that there are no moats at the model level - all that DeepSeek really did was demonstrate that anyone with a few hundred million dollars (and perhaps much less) can have their own SOTA model. This isn’t much of a surprise, though. But what else?
A vast number of papers are published every week and there are so many new models that I don’t bother trying to keep up. The escalating technical complexity reminds me a little of Moore’s Law, if there were a dozen Intels, except that the detail doesn’t matter to the rest of us: ‘computers get faster’. Much the same applies to the escalating complexity of AI data centres: it’s a lot of money and a lot of very clever engineering, but this has no broader strategic significance.
And while scaling has carried on so far, that doesn’t answer the question. There are plenty of options on how far this will go (I’m on the skeptical side, FWIW), but we really know no more than we did in 2012. And in turn, we don’t know whether this stuff will get to the point that it can answer any question or solve any problem, or even that it knows when it’s wrong and knows what it can’t do.
On the other hand, there’s an old English joke about a Frenchman who says ‘that’s all very well in practice but will it work in theory? You can spend too much time worrying about deep meaning instead of building, and there’s a lot of building going on. There are hundreds of SaaS companies building things with LLM APIs, and almost all the latest YC batch is doing the same. Coding is working, but so are hundreds of pilots and PoCs inside big companies. However, it seems to me that almost all of this activity is implicitly a bet against scaling. It’s a bet that the foundation model won’t be able to ‘do the whole thing’ and go to the top of the stack. It’s a bet that you need to wrap this stuff in tooling and a go-to market, which means it’s ‘just software’.