18 January 2026
News
Assistants for Google and Apple
As rumoured for months, Apple will indeed use Google’s Gemini models to power the rebuilt personal digital assistant that it first demoed at WWDC in the summer of 2024, but then couldn’t build. As Apple originally planned, the models will run locally on Apple devices and in the cloud on Apple-controlled compute, with all of Apple’s usual privacy undertakings (Apple won’t see individual user data, and Google certainly won’t).
Meanwhile, Google itself launched its own first run at an LLM-powered virtual assistant, called ‘Personal Intelligence’, which connects Google’s core ecosystem (Gmail, Photos, YouTube and Search) to analyse your activity and offer help. See this week’s column. APPLE, GOOGLE
OpenAI finally does ads
After months of speculation, gossip and leaks, OpenAI announced an initial advertising strategy, with the free and cheapest subscriber tiers now having ads in the feed. For now, the ads will only be based on the current thread, not long-term knowledge about the user. Meanwhile, Google says it has no plans to put ads in Gemini. See this week’s column. OPENAI, GOOGLE
The week in AI
Anthropic launched ‘Cowork’, in which its desktop app can try to work with apps and files on your Mac or PC. I say ‘try’ - it is a very rough beta. I asked it to look at a folder to see if it contained some specific data and got a macOS popup telling me I needed to install the command line dev tools before I could use ‘git’. Then it did a plain text keyword search before using an LLM to summarise the first match it found. Yes, this will get better, but today we're still in demo-land. LINK
Yet more AI headcount drama: two of the co-founders of Mira Murati’s ‘Thinking Machines’ startup have left/been fired and gone back to OpenAI. Thinking Machines, as a reminder, raised $2bn at a $12bn valuation last autumn. LINK
The CDN Cloudflare wants to be an AI rights gatekeeper, screening LLM bots as they try to scrape publisher websites for content and training data. It’s adding to that by buying the UK startup Human Native, which is building a rights marketplace. LINK
Walmart is expanding its trials of Alphabet’s Wing delivery drones to another 150 stores. LINK
The Information reports that Microsoft is now on track to spend $500m a year with Anthropic (it bought a $5bn stake last year). LINK
Anthropic shuffles product
Anthropic moved its head of product (and Instagram co-founder) Mike Krieger to run an experimental ‘labs’ group, a step that looks very similar to OpenAI moving its own head of product, Kevin Weil (formerly head of product at Instagram), to run an experimental science lab. Given that Anthropic has no consumer use or product to speak of, and seemingly no interest in that, Krieger was an increasingly anomalous hire. Course correction at both, perhaps. That said, the new head of product, Ami Vora, has a background in product at Meta, most recently running product for WhatsApp. LINK
DeMeta
As rumoured last week, Meta cut about 10% of the headcount from the VR hardware and ‘Horizon Worlds’ VR social network teams, as it cuts costs here (almost $20bn of burn in the last 12 months) to invest in AI. It also closed three VR content studios, and will no longer produce new content for ‘Supernatural’, its VR fitness experience - note that this was the $400m acquisition that the FTC tried to block on anti-competition grounds. (This illustrates the inherent dilemma for tech competition policy: you can either wait until the market is clear, but then it’s too late, or try to get in early, but then you’re speculating on what might happen and will probably be wrong, as here.)
More broadly - as I noted above, there’s really nothing to say about VR and AR that we didn’t say when the Vision Pro launched, and when Facebook renamed itself Meta, and indeed a decade ago when this was all super-hot. The hardware isn’t good enough, it’s not clear when it will be or what ‘good enough’ would be, and even if we had perfect hardware, it’s not clear how many people will care. LAYOFFS, SUPERNATURAL
Ideas
Elon Musk’s lawsuit against OpenAI is getting close to court, and the disclosures are coming. Amongst other things, apparently Elon Musk wanted control. And an $80bn nest egg to fund a city on Mars. Oh, and he wanted his children to control ‘AGI’ (that could actually be a pretty large and varied committee, to be fair). Given that many of the participants are distinguished by a talent for intrigue and somewhat economical attitudes to the truth, this should be entertaining, but it’s not clear to me whether all this will really change very much. LINK
Bloomberg has a long piece on the new (very bubbly) concept of AI datacentres in orbit. TLDR: there’s no law of physics against it (free 24-hour solar power, and radiators to get rid of heat are feasible), but there would be enormous cost and engineering challenges. Back in the dotcom boom, people talked about building data connections in space (Teledesic), and that took 20 years: sometimes history rhymes? LINK
A bunch of people in Bay Area tech are up in arms about a proposed California state ballot initiative that would impose a one-off 5% wealth tax. I have some sympathy with the idea that the USA does not tax billionaires enough (though few of my peers seem to share that view), but one-off taxes are almost always bad governance. The real problem here, though, is the fine print, which says the levy applies to your share of voting rights. So, Larry Page owns ~3.5% of Google, but those are a special class of shares with 10x votes, giving him roughly 30% of the voting rights - and a 5% levy on that 30% works out at roughly half the value of his actual stake. Except that to pay it, he’d have to sell shares, paying state and capital gains tax at ~45% on the sale, so this supposed 5% tax would cost him something like 90% of his stake. The proposal probably won’t pass, but if nothing else, this kind of thing helps explain why people in Silicon Valley often seem to feel such contempt for governments. LINK
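Back-of-envelope, and with all the inputs rounded (the stake, voting share and tax rates are the approximate figures above, not anything from a filing), the arithmetic looks something like this:

```python
# Rough sketch of the paragraph above - every input is an approximation.
economic_stake = 0.035   # ~3.5% of Google's equity
voting_share = 0.30      # ~30% of votes, via 10x-vote shares
levy_rate = 0.05         # proposed one-off 5% levy, applied to voting rights
sale_tax_rate = 0.45     # ~45% combined state + capital gains tax on a sale

levy = levy_rate * voting_share                    # 1.5% of Google's value
levy_vs_stake = levy / economic_stake              # ~43%, i.e. roughly half his stake
stake_sold = levy_vs_stake / (1 - sale_tax_rate)   # gross up for the tax on the sale itself

print(f"Levy as a share of his stake: {levy_vs_stake:.0%}")   # ~43%
print(f"Stake sold to cover it:       {stake_sold:.0%}")      # ~78%, or ~90% if you round the levy up to 'half'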
Another take on the question of how AI is affecting entry-level employment points out that the change in employment started far too early to be triggered by ChatGPT. LINK
This is what happens when graphic designers do interaction design - a continuing series. LINK
Outside interests
Congestion pricing in New York is working well. LINK
China appears to have built a system that can position thousands of fishing boats (big ones) into vast barriers in the middle of the ocean. LINK
Why do police in America kill so many more people than those in other countries? Data and analysis suggest that it’s mostly because, on one side (obviously), there are far more guns, and on the other, there aren’t enough cops and they get far less training than is normal elsewhere. LINK
Data
JLL’s data centre outlook. LINK
The US Census has been asking companies ‘do you use AI to produce goods and services’, which is useless for at least three reasons. First, its definition of ‘AI’ includes things that have been around for a decade (machine learning) or longer (‘voice recognition’). Second, ‘use’ could mean anything from ‘we rebuilt the company around it’ to ‘the CEO has a ChatGPT account’. And third, most use cases are not ‘producing goods and services’ - if a bank uses LLMs to write code, power customer support, or rework its marketing, that would not be counted. This month it has tried to improve points one and three, but not two, and the data remains pretty useless - yet still very widely reported, for lack of anything better. LINK
The 2025 CB Insights report on venture capital. LINK
News publishers expect search traffic to drop 43% by 2029… LINK
And the Press Gazette says that search traffic to news publishers dropped by a third in 2025. LINK
Nat Bullard’s 2025 electricity, EV, decarbonisation, and energy use report. LINK
Luminate’s 2025 global music industry report. LINK
Sensor Tower released a report claiming that Amazon sessions using the Rufus chatbot had 3.5x higher conversion. I can think of a lot of ways to be sceptical of this, but… LINK
Column
Distribution
OpenAI still sets the agenda for new models, mostly. It has 800-900m weekly active users, it has the mind-share, and it has the consumer brand. But none of that is based on any fundamental, structural, competitive advantage. Half a dozen to a dozen companies regularly ship SOTA models now, and there are no network effects in LLMs, at least not yet. Only 5% of those 800m users pay, and 80% post fewer than three messages a day. That usage is a mile wide and an inch deep - which is to say, people can move very easily.
So, how can it turn that position into something durable? It has to make itself a daily habit, and it has to close the ‘capability gap’ - the fact that most people who use ChatGPT don’t find it very useful. Anthropic has gone for developers and the enterprise API market, but Sam Altman wants OpenAI to be the new Google, Microsoft, and Apple all rolled into one.
That makes ads an interesting choice. On one hand, every ChatGPT query has real marginal cost, and most users aren’t paying yet, so ads are a way to close that gap with the less engaged users (and perhaps tempt them to upgrade). Ads also make it easier to give free users more features that carry even higher marginal cost.
Indeed, contextual ads in a chatbot seem like continuity with search ads - everything else that’s happening with AI and the web is a fundamental change, but asking a question and getting an ad in the answer is the same model we got used to 25 years ago. There are much bigger ways that AI will change search, ads, and links than this.
More strategically, if ads are part of the future of chatbots, does that mean it’s better to start building that business sooner rather than later, and especially before Google runs away with it? Well, perhaps. But Google has decided the opposite: it won’t annoy Gemini users with ads, for now, concentrating instead on improving the product and the experience. After all, Google can afford it - this is a loss-leader for now.
But Google is posing much bigger questions than that for ChatGPT. It’s spent the last six months pushing Gemini to all the distribution points that it already has, taking on ChatGPT directly, and it’s also looking at use cases by adding it to products where OpenAI has nothing to say. Yes, I could set up ChatGPT to look at my email and my calendar and tell me what’s going on tomorrow, but Google already has that context ready to go, and can see it from the inside out. Meta can do the same in Instagram: I can ask ChatGPT what I’d like, but Meta can see what I’ve already liked and looked at.
The Apple Gemini deal, to power ‘Siri 2.0’, covers all of these stories. It’s cashflow for Google, and a vote of confidence, but not exactly distribution: the personal assistant that’s integrated with your own data on your iPhone or Mac will be branded ‘Siri’, though more ‘world-model’ questions, which today Siri 1.0 can send to ChatGPT, might go to Gemini instead (we’ll have to wait until the release to see exactly how Apple implements this). But this is really Apple, like Google, proposing a set of use cases where an LLM is a technology to power a feature, not a new tool in its own right.
I think this is a much more general question for OpenAI. Today a general purpose LLM is a commodity, as experienced by a normal consumer for normal, general purpose use cases. If you’re a developer or spend hours in these things every day they feel different, but if you’re posting three prompts a day they’re all the same, and we don’t know how that would change. So how do you compete? You can extend the chatbot itself to enable use cases and drive adoption (how?), or use it to power new features for your existing products (which OpenAI doesn’t have), or try to invent new, dedicated vertical use-cases, unbundling the raw chatbot and abstracting it into tools and products - which means competing with every startup in the world.