2 November 2025
News
Home robots
This week’s viral AI sensation - the $20k home robot from Neo. It can walk, but for it to do anything, a human remote operator acts as a puppeteer, driving the thing around your house and folding your laundry, all to generate training data in the hope that eventually AI will be able to do this instead. This is an extreme example of the Catch-22 that every ML company used to face - how do you make the product before you have user data, and how do you get user data before you have a product? See this week’s column. LINK
Results season
I don’t do results notes in general, but this week the quarterly numbers from the big tech companies had some common themes: continuing growth in demand for AI in the cloud, strong results for AI-optimised ad stacks, lots of vanity AI metrics (‘tokens produced’, which, I keep saying, is like reporting your bandwidth use in 1997) - and accelerating capex. Microsoft’s September quarter capex spiked to 45% of revenue, Meta and Microsoft said they expect the rate of growth of capex to rise next year, and Meta in particular unnerved the market. At this stage, the big four companies building AI infra (Google, Amazon, Meta, and Microsoft) have increased their 2025 data centre capex guidance from around $300bn at the beginning of the year to closer to $400bn today (exact numbers are hard because Amazon doesn’t report AWS and logistics capex separately). LINK
OpenAI’s new structure
OpenAI has finally worked out a new ownership structure. There’s a for-profit operating company, OpenAI Group, with a valuation of $500bn, which is owned 26% by the non-profit OpenAI Foundation and 27% by Microsoft. Apparently, an employee vehicle holds another 26% and the rest goes to other external investors.
As part of this, OpenAI contracted to buy a further $250bn of Azure hosting, while Microsoft gives up its right of first refusal to be OpenAI’s infrastructure provider (a right it has clearly already decided not to exercise, given all the other deals OpenAI has struck). Incidentally, Microsoft’s results now reveal OpenAI’s current burn rate: $11.5bn last quarter. Reuters reports that OpenAI is now considering an IPO in 2027, at a $1tr valuation. OPENAI, MICROSOFT, IPO
OpenAI moves up the stack…
At the same time as announcing the new structure, OpenAI held a one-hour live stream talking about future strategy, and three things stood out.
First, the chief scientist Jakub Pachocki said they expect to have systems that can do ‘AI research’ at the level of ‘a research intern’ by September next year. Like a lot of AI timelines, this strikes me as both too short and too long: the systems we have today don’t feel anything close to replacing someone at that level, but also, the field changes a lot in a year (yes, this is contradictory). But given that OpenAI is fond of claiming its models are at ‘PhD-level’ already (which DeepMind’s Demis Hassabis says is ‘nonsensical’), this may just be a matter of definition.
Second, the company showed a generic diagram of an ecosystem stack and said that it wants to build a standard software ecosystem, where it participates at each level but there are many independent third-party developers at each level - exactly as Microsoft, Google, or Apple operate. On one level this is predictable, but it’s far easier to say (of course they want that!) than to deliver. Stepping back, though: if ChatGPT is just one of many apps in many layers, built on top of many different models from OpenAI and others - which is what I believe - then that’s a repudiation of the thesis that these models can grow to the point that they can do ‘anything’ and we won’t need hundreds or thousands of individual apps. Logically, this is an acceptance that this is ‘normal software’, which seems a direct contradiction of the previous point.
And third, after the flurry of overlapping funding and infra announcements lately, the company helpfully clarified that today it has commitments for “a little bit over 30GW of capacity” for “about $1.4tr” (spread over years). It also said it has an aspiration to build a GW per week of compute capacity at some point (years) in the future, at $20bn per GW - so approximately 50GW and $1tr each year. For context, total global data centre capacity today is something in the region of 75-100GW (estimates vary). LINK
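As a sanity check on that annualisation, here’s a minimal back-of-the-envelope sketch. The per-GW cost and the GW-per-week rate are the figures quoted above; the 52-week year and the rounding are my own arithmetic, not OpenAI’s.

```python
# Back-of-the-envelope annualisation of OpenAI's stated aspiration.
# Inputs are the figures quoted in the livestream as reported above;
# the 52-week year and the rounding to ~50GW / ~$1tr are assumptions.
gw_per_week = 1
cost_per_gw_bn = 20          # $20bn per GW of capacity
weeks_per_year = 52

gw_per_year = gw_per_week * weeks_per_year          # 52 GW, i.e. roughly 50GW per year
capex_per_year_bn = gw_per_year * cost_per_gw_bn    # $1,040bn, i.e. roughly $1tr per year

print(f"{gw_per_year} GW/year, ${capex_per_year_bn}bn/year")
```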
Meta moves to debt
So far, model-building and data centre capex from big tech have been funded out of cash flow (OpenAI and Oracle are a different story), but the scale of the spending now envisaged, combined with the cost of capital, means that debt is becoming a bigger story. Meta, which is smaller and has less cash flow than Microsoft or Google, is now going to the capital markets: last week I noted a $30bn SPV (which keeps the debt off the balance sheet); this week it’s borrowing $25bn directly from the bond markets. LINK
Nvidia goes to Espoo
Nvidia continues to use its excess cash flow to buy market positioning (TSMC can’t or won’t add capacity as fast as Nvidia piles up cash). This week, a $1bn investment in Nokia. It’s a long time since tech paid attention to Nokia (there’s a solid telecoms equipment business that rolled up Alcatel and Lucent, including the old Bell Labs), but listening to Jensen Huang announce the deal, it was comforting to hear that Americans still don’t know how to pronounce it (Nok as in knock, not as in know). LINK
The week in AI
OpenAI did a deal with PayPal (ask your parents) for checkout inside the app - presumably the first of many. PayPal’s stock went up 13% on the day. Companies surging when they sign obvious deals on commodity items for things no one uses yet is a good sign of a certain stage in the cycle. LINK
Bloomberg says that the upcoming and long-delayed new version of Siri will be powered by Google’s Gemini models under the hood. LINK
OpenAI is speed-running a lot of platform conversations, and now it’s trying to sort out the problem of vulnerable people getting into mental health spirals with ChatGPT. LINK
Last week we heard that OpenAI is paying ex-bankers by the hour to create training data - this week, unsurprisingly, we hear it’s also doing that for management consultants. As I wrote last week, none of this is available as public training data, so paying to create it is an edge… but there’s a vast gap between seeing a model or deck and knowing how and why those choices were made for a particular project and a particular client. There may be an element of the cargo cult here - the outward forms are not really the work. LINK
AdWeek says many brands have a sudden interest in advertising on Reddit because they think that might be a way to feature in LLM recommendations, since it appears that LLM labs favour Reddit as training data. LINK
Amazon layoffs
Amazon is cutting 14k people (about 5%) from the corporate workforce (as opposed to warehouses), which it says is driven by culture - the CEO thinks that the org has become too bloated with too much bureaucracy. That may be true, but the track record of companies correcting a culture once this has set in is not good. LINK
Ideas
A profile of Bending Spoons, an Italian company now valued at $11bn that specialises in rolling up stalled but still cash-positive consumer tech - it just bought AOL for $1.5bn, and also owns Evernote and Meetup. LINK
The latest Jensen Huang two-hour keynote, for GTC in Washington DC. One notable meta-layer: the show opens with a four-minute video, full of uplifting music, all about how America is the home of tech innovation, and Huang name-checks Donald Trump a few times for helping to restore manufacturing to the USA. The political positioning is pretty transparent, and it’s fun to contrast it with the ‘Made in Taiwan!’ video from a year ago, a Korea video also from this week, and indeed the 2024 GTC keynote intro, which made a big deal of helping with extreme weather, renewable energy, the blind and disabled, healthcare, and even translating from (gasp) Spanish - all things no longer wanted in Trump’s America. THIS YEAR, TAIWAN, KOREA, LAST YEAR
An argument that before AI it took a lot longer to build something than to design it, but now AI-driven development (at least for a v1) is fast enough that you can ship faster than you can design, and designers are scrambling to catch up.
Outside interests
Inside NORAD in 1966. LINK
A YouTube education. LINK
Data
Wharton released a big survey on enterprise AI adoption. LINK
Pornhub says its UK traffic is down 77% since legal age verification requirements came in. The company argues, not unreasonably, that a lot of that traffic has probably been displaced to companies that aren’t bothering with verification (and probably care less about compliance as well). LINK
Useful Deloitte data on US media and internet use. LINK
Microsoft expanded its global, country-by-country estimates for AI use, derived from Windows telemetry (so lacking mobile data). LINK
Column
The ‘great demo’ phase of the cycle
Moravec’s Paradox is the observation that it can be very easy to get computers to do things that are very hard for people, such as, say, complex mathematics or chess, but hard or impossible to get computers to do things that any young child or even young mammal can do easily, such as walking or visual perception.
This can lead to anthropomorphism - we tend to see a ‘machine’ doing something that humans would find hard and imagine we’re seeing ‘intelligence’, forgetting that while a human would need many other capabilities before they could be very good at chess (starting with vision!), a chess program can only play chess, just as a washing machine washes clothes better than a person but doesn’t know what water is.
This is a risk in looking at today’s wave of humanoid robots. Machine learning (and better electric motors and batteries) means that we can now replace wheels with legs. We can make a bipedal robot that doesn’t fall over. Some of them can even do cartwheels. I can’t do cartwheels, but I can unload a dishwasher, and today’s robots cannot. This isn’t because they lack the manual dexterity (maybe they do today, but that’s a solvable mechanical engineering problem), but because navigating a strange kitchen is a completely different problem to replacing wheels with motors and gears.
Indeed, Steve Wozniak once suggested that the ability to make yourself coffee in a stranger’s kitchen is a good test for AGI. It sounds simple… but where is the coffee machine? Is that a coffee machine? Or is there a Moka pot somewhere? Perhaps drip? Does this need filters? Where are they? These kinds of questions are the reason why things like translation or image recognition were unsolved until machine learning came along: you think it’s easy until you try writing down all of the logical steps, and ten years later it still doesn’t work.
Machine learning solved a broad class of these kinds of problems by turning them from logic problems into statistics problems - instead of writing down logical steps to tell a computer how you know that’s a cat and not a dog, you give it a vast number of examples. Very crudely, you could say that today’s LLMs are trying to do this with reasoning - give the machine a vast number of examples of reasoning and make that a stats problem.
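To make the ‘logic versus statistics’ point concrete, here’s a toy sketch - entirely illustrative, and not how any lab’s vision models actually work. The two made-up features and the nearest-neighbour rule are my invention; the point is only that no one writes down rules for what makes a cat a cat, we just label examples and let proximity in the data decide.

```python
# Toy illustration of turning a logic problem into a statistics problem.
# Instead of hand-written rules for "cat vs dog", we label examples and
# classify new cases by their nearest labelled neighbour. The two features
# (ear pointiness, snout length) are invented for the sketch.
import random
random.seed(0)

def make_example(label):
    # Cats cluster around pointy ears / short snouts; dogs the opposite.
    if label == "cat":
        return ([random.gauss(0.8, 0.1), random.gauss(0.3, 0.1)], label)
    return ([random.gauss(0.4, 0.1), random.gauss(0.8, 0.1)], label)

training_data = [make_example(random.choice(["cat", "dog"])) for _ in range(500)]

def classify(features):
    # 1-nearest-neighbour: return the label of the closest labelled example.
    def dist(example):
        example_features, _ = example
        return sum((a - b) ** 2 for a, b in zip(example_features, features))
    return min(training_data, key=dist)[1]

print(classify([0.85, 0.25]))  # almost certainly "cat"
print(classify([0.35, 0.75]))  # almost certainly "dog"
```

The ‘knowledge’ here lives entirely in the labelled examples, not in hand-written rules - which is exactly why the approach struggles when the examples don’t exist, as the next paragraph argues.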
But the challenge for a domestic robot is that you don’t have hundreds of thousands of examples (even videos) of how to make coffee in a strange kitchen, or unload a dishwasher, or really anything else. LLMs can give you step changes around some of that, maybe, and in five years all bets are off. But right now, any humanoid robot is still a puppet - a Roomba on legs. It can’t do anything that a robot on wheels couldn’t do. And if you want it to make coffee, or unload the dishwasher, well, a human remote operator with a VR headset will need to control it for you.
To be fair, Neo, this week’s hot robot company, is perfectly up-front about this, and very clear that the purpose of selling this device reliant on human remote operation (no mention of unit economics for that, incidentally, which must be pretty entertaining even at $500/month or $20k for an outright purchase) is to gather that training data. This is an extreme example of the Catch-22 every ML company used to face - how do you make the product before you have user data and how do you get user data before you have a product? Autonomous cars give us two contrasting approaches to watch today: Waymo builds bottom-up slowly and does massive numbers of simulations to generate synthetic training data, while Tesla puts sensors in every car and hopes that eventually it will have enough data for the whole thing to work. Neo can’t really do either, and frankly, even if they had tens of thousands of these in the wild, would that produce enough training data? How many miles of driving did we need to get Waymo to where it is now?
Stepping back, one thing that often happens in platform shifts is that people try brilliant ideas about a decade before they’re ready, and it ends up being someone else, a decade later, that makes it work. RealPlayer tried streaming video over dial-up in 1995, and General Magic tried to make a whole smartphone in 1994. Maybe Neo really will be able to bootstrap this, and maybe LLMs will solve the whole thing. Or maybe we’ll be checking back on this space in 2035.