15 June 2025
News
Meta does the first big AI acquisition
Meta will pay around $15bn in cash for a 49% non-voting stake in Scale.ai, which manages around 100k contractors creating and labelling training data for machine learning companies. It was founded in 2016 and serves most of the big model companies. The co-founder and CEO, Alexandr Wang (aged 28), will join Meta.
Meta’s last big acquisition was WhatsApp, which it bought for almost $20bn back in 2014 - about 10% of its market cap at the time, whereas this deal is less than 1% of today’s. Meta also plans to spend $65bn on data centre capex this year, mostly for AI. (It’s also spent close to $100bn on VR and associated investments and research since buying Oculus, also in 2014.) Meta woke everyone up to the possibilities of open source LLMs, but Llama 4 was a disappointment, and it’s fallen off the leaderboards and had a lot of staff turnover. It’s been reported that Meta is now doing a broader reset, creating a new AI lab to build ‘super-intelligence’ (whatever that means), and this deal is part of that. Mark Zuckerberg doesn’t like standing still.
However, one could also point out that he bought Instagram and WhatsApp because they were clearly winning the next big thing and had very clear winner-takes-all effects, and he wanted to capture that. Today generative AI is also clearly the next big thing, but it’s also clear, at least for now, that there are no winner-takes-all effects, and the only moat appears to be capital - and perhaps access to training data, especially as we run out of the web. In other words, Meta bought WhatsApp because there are moats, and bought (49% of) Scale because there aren’t. LINK, NEW LAB, FOUNDER
Meanwhile, Meta’s chief AI scientist, Yann LeCun, has been pretty vocal (and contrarian) in saying that he does not think that LLMs will scale to AGI (or super intelligence, or whatever term you prefer), and that we need new approaches based on observing and understanding real-world physics (one advantage of this view is that we have infinite video to use as training data). Now he’s launched a physical world model based on this view, V-JEPA. LINK
Disney and NBCU sue Midjourney
If you use Midjourney to make an image of Darth Vader, has Midjourney broken Disney’s copyright? Disney and Universal think so, and they’re suing. The filing is an interesting read, perhaps inadvertently, since it talks as though Midjourney is copying and distributing existing images, whereas what it’s actually doing is creating new images of existing, copyrighted characters, which isn’t quite the same thing. LINK, FILING
The week in AI
OpenAI’s scramble for capital and infrastructure means it will use Google Cloud as well as Microsoft. LINK
Mattel announced a ‘strategic alliance’ with OpenAI, which sounds like something ChatGPT would generate if you asked it for corp-dev ideas. LINK
The US military created a tech advisory group with the CTOs of Palantir and Meta (Shyam Sankar and Andrew Bosworth) plus Kevin Weil and Bob McGrew from OpenAI. The brief Silicon Valley taboo on working with the military (always weird, especially given the Valley’s roots in defence) seems to be dead. LINK
Meta sued a Chinese fake nudes app for advertising on Instagram under false names. LINK
WWDC
Apple held its annual developer event, which was fairly low-key after last year’s announcement of a new and exciting version of Siri that never shipped. There’s still no timeline for that (though, as I’ve pointed out, no one else has anything equivalent working either), and most of the announcements are features and iterations on a mature platform. Hence, there’s a visual redesign (‘Liquid Glass’), Yet Another windowing/multitasking model for the iPad (which might be laying foundations for AR), APIs to let developers run Apple’s own LLMs for free locally on iPhones, and the usual range of nice-to-have features that would have been witchcraft a decade ago (live translation in FaceTime, say). LIQUID GLASS, IPAD, EDGE MODELS, SIRI
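As an aside on the edge-model point: here is a minimal sketch of what calling Apple’s on-device model might look like in Swift, using the names from the newly announced Foundation Models framework - treat the exact types and signatures as assumptions until the SDK ships.

```swift
import FoundationModels  // Apple's announced framework for on-device models (assumed name; requires the new OS releases)

// Hypothetical helper: send one prompt to the local model and return the generated text.
// Inference runs on the device itself, so there is no API key and no per-token bill.
func summarise(_ text: String) async throws -> String {
    let session = LanguageModelSession()  // open a session with the built-in system model
    let response = try await session.respond(to: "Summarise in one sentence: \(text)")
    return response.content               // the model's text output
}
```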
Tangentially, Tim Cook gave an interview to Variety in which he asserted that the billions Apple spends on prestige TV shows aren’t self-funded marketing to sell iPhones, but are actually meant to make a profit in their own right. Why? LINK
Snap: “We’re still here”
Snap says it will launch its own AR glasses next year. Meta is pushing glasses research hard (and selling a LOT of Ray-Bans), Google has announced a whole new software platform, and Apple is pouring money into this space (even if no one bought the Vision Pro) - is this really the best use of Snap’s vastly smaller resources? LINK
Ecommerce does crypto?
Silicon Valley’s attention has moved on entirely from crypto to AI (and most of the tourists and scammers have left, also to focus on AI), but the hard core that remains is still busy building low-level protocols and infrastructure, and thinking about use cases. This week Stripe expanded its explorations by buying Privy, which makes a wallet for developers, while Shopify launched a stablecoin payment protocol with Coinbase and added stablecoin support to its own payments in partnership with Stripe. STRIPE, SHOPIFY PROTOCOL, SHOPIFY STABLECOINS
More in defence-tech
Two interesting stories to note. First, the ESA is looking for funding for a €1bn satellite surveillance network. LINK
Second, echoing Ukraine’s attack on Russian airfields last week, Israel’s attack on Iran this week also appears to have used pre-positioned consumer-grade drones (as well as more traditional systems). LINK
Twitter follies
Twitter has been pursuing the interesting ad sales pitch of threatening to sue companies that don’t buy ads (on a sub-scale network that never had good targeting and now has a major brand safety problem too). Now it emerges that the FTC has been quizzing companies that don’t buy them. This is, of course, totally normal behaviour. LINK
Ideas
The WSJ reports on declining search traffic at some news organisations as Google scales out AI Overviews. This feels like a new version of the pay-wall debate: do you want broad reach, or can you build a narrow, focused audience without relying on traffic from Google and Meta? LINK
McKinsey generalises this issue in an interesting paper looking at time spent, revenue and profit across every kind of media, from live sport to books and streaming TV. LINK
Jensen Huang is fed up with Anthropic’s Dario Amodei’s attempts at regulatory capture: “One, he believes that AI is so scary that only they should do it. Two, [he believes] that AI is so expensive, nobody else should do it … And three, AI is so incredibly powerful that everyone will lose their jobs, which explains why they should be the only company building it.” LINK
Sam Altman published a blog post saying that the S-curve has curved upwards and ‘super intelligence’ is almost here - whatever that means. The rest is mostly hand-waving, but it does give tangible data on OpenAI’s energy consumption, a field in which moral panic is accompanied by a lot of guesswork. LINK
Waymo, currently the only company running autonomous cars on public roads at scale outside China, published a technical paper analysing the scaling of its models with more data and computing power. Somewhat unsurprisingly, scaling works. LINK
The UK courts have warned lawyers against using generative AI to write briefs without checking them, after two recent cases where lawyers presented precedents that did not exist. These models create things that look like what someone would probably say, and that can include things that look like precedents. As with most ‘disruptive’ tech, LLMs are bad at the things the previous technology was good at, at least to begin with (but good at new things that matter in new ways). LINK
This week’s viral-but-complex story reflects exactly the same issues in more emotive ways: people who are mentally unstable and have conspiratorial fantasies ask ChatGPT about those ideas, and it tells them what they want to hear. Just as for the lawyers, the underlying problem is inherent in how these systems work: they generate a response in a pattern that seems to match the pattern of the question. Should that be filtered? Google and Meta ran into this a generation ago and still sometimes struggle: if you Google for good suicide techniques or proof that the Holocaust was faked, or look at lots of self-harm content on Instagram or Pinterest, the systems will give you what you ask for unless a human decides to filter it out. But how do you find these cases, and where do you draw the lines? LINK
The WSJ on how LVMH is using generative AI: mood boards, operations and customer service, to begin with. LINK
The US air traffic control system hopes to move off floppy disks. LINK
Ofcom, the UK’s TMT regulator, released an interesting qualitative report on the ‘manosphere’. LINK
Outside interests
London to Tokyo in just 36 hours! From 1953. LINK
Google partnered with Darren Aronofsky to make a short film incorporating footage created with Veo. LINK
Christie’s has a beautiful Hellenistic necklace from the 1st century BC… LINK
And a glass bed. LINK
Data
Deloitte’s annual digital consumer survey has a lot of useful data, especially around how people are using and understanding (or not) generative AI. LINK
OpenAI reports $10bn of ARR. LINK
Match Group’s annual US singles study. LINK
Column
AI’s step two
One of the classic ways to think about the deployment of a ‘platform shift’, or indeed any important new technology, is that it follows three stages. (I’m a consultant - everything is three bullet points.) First, we use it for the things we already do: we make the new tool fit our existing work, we automate the tasks we already know we have and we solve problems we can already see. In the enterprise, this generally means cost savings. Second, over time, we realise the new things that this makes possible - things that could not have been imagined before and that perhaps did not even look like problems. We move from bottom-line innovation to top-line innovation. Incumbents add new features, and startups unbundle incumbents. And third, sometimes, we have ‘disruption’ - someone works out how to change the question. Airbnb doesn’t sell software to hotels - it changes what ‘hotel’ means. And of course, this varies by industry: the internet disrupted travel agents but not airlines.
It seems to me that LLMs are mostly still at the first step, and people are starting to wonder about the second. There are some industries where this has strong and early product-market fit (software, marketing, perhaps customer support), and some where it will take a lot longer (law, healthcare). Some companies have lots of pilots and have put a few into production (and taken a few out of production), and there are others where a probabilistic system that doesn’t give predictable results is really hard to use.
But I’ve had a couple of interesting conversations recently, as I prepared to speak at corporate events (something I do 40-50 times a year, incidentally), that went something along the lines of ‘everyone here has had ten AI presentations now and wants to know what’s next?’ They’ve had the presentation from Bain or McKinsey, WPP, and Google or Microsoft. A lot of the more sophisticated consumer-facing companies have got a bunch of ‘step one’ things in deployment now - they launched LLM search, they’re doing review summarisation, they have a project for automated SKU tagging, and they’re evaluating the second wave of customer service bots. Now what? Conversely, a lot of companies gave everyone Copilot or ChatGPT last year and got a big bill with nothing to show for it (or at least, nothing they can measure) - but again, they want to know what's next.
This kind of thing also steers my own thinking and publishing: after creating an AI presentation once a year for the last couple of years, I did an entirely new deck this spring after only a few months (which I will present at the SuperAI conference in Singapore this Wednesday) and I’m now thinking much more about a ‘step two’. What comes after automation?
Part of that means returning to themes I spent a lot of time looking at before ChatGPT. We have infinite product, infinite media and infinite retail, so how do you know what to buy - how does discovery work? Now LLMs could change that entirely. The internet economy grew up around search, then social, then apps - if LLMs don’t drive traffic, how does content work? That gets us to one of this year’s talking points - ‘what’s the SEO for LLMs?’
Meanwhile, all the stuff we cared about before ChatGPT broke out is still there - TV advertising is unlocking, Amazon is the world’s third-largest media owner (outside China) and e-commerce is 25% of retail (in the UK, non-food retail is now 40% online). All of which means, as I pointed out in a slide in my last presentation, that ‘what’s our AI strategy?’ is actually a lot of different kinds of question. Is that a question for the CIO, CMO, CFO or CEO? Accenture? Or Bain / BCG / McKinsey? It moves from the CMO getting the CIO and CFO to authorise some experiments to the CEO changing their job.