3 August 2025

News

Mark Zuckerberg’s AI manifesto

Mark Zuckerberg has written half a dozen ‘change the mission’ manifestos since he started the company - the last one was about the ‘metaverse’, and now, naturally, he’s written about AI. See this week’s column. 

Driven by that, capex for 2025 is now planned at $66-72bn, slightly up from the Q1 outlook of $64-72bn and compared to $39bn in 2024: they expect ‘similarly significant’ capex growth next year. Meanwhile, Meta’s ‘Reality Labs’ division had an operating loss of another $4.5bn - the cumulative total investment in xR since buying Oculus back in 2014 is now close to $100bn. LINK

Results season

I generally try to steer clear of covering results season in favour of more strategic and structural news, but there’s a very clear theme from the big tech quarterly earnings this week: generative AI is driving a lot of demand for enterprise cloud - Amazon, Microsoft and Google all say they still don’t have enough capacity - and it’s driving significantly improved ad revenue for Google and Meta, thanks to better targeting. LINK

Microsoft’s OpenAI contract

Microsoft seems to have a generally painful relationship with OpenAI, but one big specific issue is that under the current contract it will lose access to OpenAI tech if and when OpenAI produces ‘AGI’, where OpenAI can decide by itself whether that’s happened. Apparently, this is being renegotiated now. 

The underlying problem is that ‘AGI’ is a concept and a thought experiment, not any specific technology or benchmark that you could prove to a court. Most people would probably use ‘AGI’ to mean something roughly equivalent to human intelligence overall, not just being better at chess or Go (though, again, we have no solid definition of human intelligence), but OpenAI publicly defines AGI as ‘highly autonomous systems that outperform humans at most economically valuable work’ - which you could easily argue doesn’t require ‘human intelligence’ at all. Indeed, I think this is actually what people are trying to communicate with ‘super-intelligence’, another term that means whatever the speaker wants - again, see this week’s column. LINK

The week in AI

Microsoft is entering the AI browser game, adding a Copilot sidebar to Microsoft Edge (currently used for around 10% of US web traffic). This reminds me of search toolbars 20 years ago. Also, no one tell the DoJ. LINK

Samsung signed a deal to make chips for Tesla, worth $16.5bn through 2033. Bad news for Intel? LINK

Anthropic had to rate-limit some users of Claude Code, who were using it far more than the $200/month fee can support. LINK

YouTube does AI age analysis 

The UK’s launch of an online age verification requirement (only for adult content) got a lot of attention this week. YouTube is trying from the other direction, using AI to guess whether a user might be under 18 and adjusting the recommendations and ads accordingly. LINK

Robotics news

We’ve clearly reached some kind of turning point in limbs and robotics: China’s Unitree is now selling a $6,000 humanoid device that can turn cartwheels. This comes partly from AI and partly from better batteries and motors. However, we should remember Moravec's paradox: in general, what’s hard for people is easy for machines and vice versa. Backflips are a lot easier for a machine than making a cup of coffee, let alone finding the coffee in a strange kitchen. Giving your robot legs instead of wheels doesn’t make it any more intelligent than a Roomba, so where are legs actually useful? After all, if you want a robot to do your laundry, that’s a washing machine. LINK

Meanwhile, Waymo finally has autonomous cars (another ‘robot’) working, though only in strictly limited situations, and Aurora is now running trucks on freeways in Texas: 20k miles since May between Dallas and Houston (so only about one trip a day). LINK
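As a back-of-envelope check on that ‘one trip a day’ aside - assuming roughly 240 road miles between Dallas and Houston and roughly 90 days from May to early August, both my assumptions rather than figures from the piece:

```python
# Rough sanity check on Aurora's Texas freeway-trucking numbers.
# Assumptions (not from the source): ~240 miles Dallas-Houston, ~90 days since May.
miles_driven = 20_000
miles_per_trip = 240
days = 90

trips = miles_driven / miles_per_trip      # about 83 one-way trips
trips_per_day = trips / days               # about 0.9 - i.e. roughly one a day
print(round(trips), round(trips_per_day, 1))
```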

Ideas

Unilever is using generative AI to create an average of 400 creative assets per product: “Before, we’d be doing 20 assets per campaign, and now we’re doing hundreds.” The universal automation question: does this mean fewer people doing the same work, the same number of people doing far more work, or more people doing more work? Here - how many people do you need to hire to check all those generated assets? And do you hire more creatives to feed the prompts? LINK

Om Malik on Zuck’s AI manifesto, and the previous manifestos. LINK

With OpenAI exploring Shopify integration, retailers are wondering what LLMs mean for e-commerce - is this a new kind of referral or a new kind of aggregation? LINK

A Bollywood studio used AI to create a new, happy ending for a classic hit movie. The talent are not happy. LINK

Apparently, US intelligence agencies intervened to get the HPE/Juniper merger past antitrust scrutiny, as they badly want a counterweight to Huawei. LINK

Is cord-cutting starting to level out? Interesting discussion of different TVCo strategies from an industry veteran. LINK

Interesting a16z podcast on LLMs in software development. LINK

Today in AI use-cases - undertakers auto-write obituaries. LINK

China’s consumer drone boom. LINK

Outside interests

An interesting data-led analysis of how and where strategy consulting adds value. LINK

A searchable database of all the text on all the streets in New York. LINK

Ukraine used a cargo drone to deliver an e-bike to a soldier stranded behind Russian lines. LINK

Data

A Microsoft Research paper on what people were doing with Copilot in the first nine months of last year. Lots of selection bias, obviously. LINK

The UK’s age-verification requirement for adult content led to a surge in use of VPNs. LINK

OpenAI now has $12bn annualised revenue and 700m WAUs. LINK

Anthropic is raising at a $170bn valuation, and OpenAI raised at $300bn. LINK, LINK

Column

Meta moves to AI 

Meta is the only large consumer tech company that’s still run by the founder. Mark Zuckerberg has the authority (and preference shares) to make radical changes that a professional CEO would find a lot harder - he could buy WhatsApp and Instagram for what seemed like crazy prices, he could spend $100bn (and counting) on xR, and he can spend tens and hundreds of millions to hire individual AI researchers to get Meta back to the forefront of generative AI. 

People often criticise him for copying, which I think misses the point - social media is pop culture, and it’s protean, always changing, and Mark Zuckerberg is extremely good at surfing user behaviour - at keeping up with how much and how fast everything is changing all of the time. There’s a ferocious rigour to how Meta has been run over the past 20 years that has an originality all of its own, and looks much more like the Microsoft of Bill Gates than it does Google or Apple. He’s missed things and made mistakes, but when he says things about where the industry is going, we should probably listen. 

However, what he says isn’t that surprising. He comes down on the side of the argument that says that these models are certainly the next big thing, and will get a lot better, but that they probably won’t go all the way to a giant monolithic AGI that can do ‘everything’. The term ‘super-intelligence’ has emerged, I think, to denote a sort of intermediate point: systems that are far better than the ‘AI’ we have today - almost human, somewhat autonomous, somewhat general-purpose - without going as far as ‘AGI’, which people generally use to mean something like actual human intelligence, or beyond. In other words, this will just be more, better software. That's Mark's view, and it's shared by a lot of other people in tech.

What would Meta’s super-intelligence be, though? Well, more Meta: systems that are better at understanding you and recommending things you might like to see, and do, and of course buy. As I keep circling around, the internet has given us infinite products and infinite media, and a couple of billion humans with different needs and desires - so how do we connect those? Search and social have done their best, but a system that can learn over time and understand what all that media really means could connect things a lot better (and sell a lot more ads). 

This is also, of course, self-serving at a more structural level. Meta made Llama open source because it benefits from open source: it wants generative AI to be commodity infrastructure sold at marginal cost, so that Meta can use it to make Meta things on top. That’s the opposite of what OpenAI wants - OpenAI wants this to be unique, and sold at a healthy profit, and for generative AI itself to be the product, not infrastructure for products built by Meta (or Apple, or Microsoft). So open source is a mission, yes, but it’s also a tactic. 

However, part of the story of the hiring surge of the last few weeks, with Zuckerberg offering AI researchers at other companies eight and nine figure deals, is how many people have apparently turned him down, because they prefer the mission and the vision of building something amazing somewhere else. So Meta needs a vision too, and now maybe it has one, or at least a first draft: this won’t be one monolithic platform, or even Mark Zuckerberg’s platform, but many different tools for human empowerment, running on open source for everyone (though with a new mention of responsibility and caution, which may also be aimed at some of those researchers). 

All of this said, though, the challenge isn’t just to get back onto the LLM leaderboards alongside its closed-model peers in Silicon Valley, but also, as I pointed out last week, that there are now 5-10 impressive open source models from Chinese companies ahead of Meta on those lists. Meta can’t buy those companies, even with Zuck as CEO. 

Benedict Evans