The standard way to talk about autonomous cars, shown in this diagram, is to talk about levels. L1 is the cruise control in your father’s car. L2 adds some sensors, so it will try to slow down if the car in front does, and stay within the lane markings, but you still need to have your hands on or near the wheel. L3 will drive for you but you need to be ready to take over, L4 will drive for you in some situations but not others, and L5 doesn’t need a human driver ‘ever’ and doesn’t have a steering wheel.
This seems pretty straightforward, until you start thinking about how you might actually deploy this - and about the fact that some places are easier to drive in than others.
As we can already see with the early tests being done with prototype autonomous cars (with their need for a human ‘safety driver’, today these are effectively L2 or at best L3), autonomy in one city is a different problem from autonomy in another - Phoenix is easier than San Francisco, which is easier than Naples or Moscow. This variability applies not just across different cities and countries but also within each urban landscape: freeways are easier than city centers, which might be easier or harder than suburbs.
It naturally follows that we will have vehicles that will reliably reach a given level of autonomous capability in some (‘easy’) places before they can do it everywhere. These will have huge safety and economic benefits, so we’ll deploy them - we won’t wait and do nothing at all until we have a perfect L5 car that can drive itself around anywhere from Kathmandu to South Boston. And so, if we call a car even L4, we have to say, well, where are we talking about? We might mean ‘most of this country’. But more probably, it will be L4 in one neighborhood, L3 in another and only L2 in a third - and a car might encounter all three of those on one journey. Put your route into the map and it will tell you if today is an L5 day or not.
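The route-planning logic above can be sketched in a few lines: the level the car can promise for a whole trip is simply the minimum level across the segments it passes through. This is a hypothetical illustration - the names (`Segment`, `trip_level`) and the level numbers assigned to places are made up, not from any real mapping API.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    # One stretch of a route, with the highest autonomy level
    # certified for that stretch today.
    name: str
    autonomy_level: int

def trip_level(segments: list[Segment]) -> int:
    # The car can only promise what its weakest stretch allows:
    # one L2 neighborhood in an otherwise-L4 route makes the
    # whole journey an L2 journey.
    return min(s.autonomy_level for s in segments)

route = [
    Segment("suburban Phoenix", 4),
    Segment("arterial road", 3),
    Segment("unmapped side streets", 2),
]
print(trip_level(route))  # 2 - not an L5 day
```

The point of the sketch is that ‘what level is this car?’ collapses into ‘what level is this route, today?’ - the answer is a property of the journey, not the vehicle.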
Hence, for example, a number of companies are working on autonomy for long-haul trucks on the basis that a highway is a much simpler and more constrained environment than a city street, and highways are 80-90% of the mileage of a long-haul truck, so just solving the highway part is worth doing even if you need a human driver to take over at each end, like a pilot taking a big ship into a harbour. So, that truck is L4 or L5 on the highway, but L2 or L3 on city streets.
Conversely, suppose that a medium-sized town says that the middle of town is autonomy-only in daylight hours, and the only vehicles allowed in during those hours are autonomous buses, cars and taxis. If those autonomous taxis never leave the reserved area (and never go on a highway), they might look more like golf carts than cars. If they can handle the center of Cambridge unaided (so long as there are no human drivers) but not rural roads, yet never leave the center of Cambridge anyway, are they L5 or not? What does ‘all use cases’ mean - all use cases for any car, or all use cases for this car?
Then, imagine a garbage truck that follows its crew down the street by itself, but needs to be manually driven from the end of the road to the depot. Is that ‘full autonomy’? Or is it ‘advanced cruise control’? Does it matter?
These might seem like purely definitional questions, but they get at the problem that it’s easy to ask ‘when can we have autonomy?’ (and everyone does) but the ‘when’ depends on the ‘where’ and the ‘what’, and these are a matter of incremental deployment of lots of different models in different places over time.
Moreover, this process of deployment will shape the final situation. The diagram at the beginning of this piece reminds me painfully of some of the conversations I had about the future of the ‘mobile internet’ in 1999 and 2000. Imagine if a bunch of telecoms people had sat down in 1998 and defined what a ‘partial multimedia terminal’ and an ‘advanced multimedia terminal’ would look like and what specs they would have, and then told the manufacturers, or rather, the ‘suppliers’, to go away and come back in a decade with products that matched the spec. That isn’t how it happened - rather, the process of creating the technology shaped the final result. We tried many paths, and found out which ones worked (and in the end the telcos were no longer in control and the ‘suppliers’ had all changed). We didn’t start with a predetermined conclusion and then go away and build it. In the same way, I sometimes think that talking about L4 or L5 presumes the shape of the outcome - instead, the outcome will be shaped by the process.
Clearly, the problem with this multi-faceted, evolutionary, process-based model is that autonomy actually poses some pretty basic binary questions - ‘Can I read instead of watching the road or not? Can I go to sleep in the back of the car or not?’ An evolutionary model can only take you so far - you do need to be able to tell people whether the car is ‘autonomous’ or not. A car that needs to transition from ‘autonomous’ mode to manual mode in the course of a trip poses some major challenges: human beings are not good at taking over driving a car at high speed with no notice. Indeed, some people argue that L1 is actually safer than L2 or L3 - with L1 you do know explicitly that it’s you that’s driving the car ALL THE TIME, but sometimes the car will stamp on the brakes when you weren’t looking and save a life.
This is why so much work is going into how the vehicle might communicate with the user - how does it say ‘this is an L5 journey and you can sleep’, or ‘I’ll drive myself for the next hour, and alert you 5 minutes before it’s time for you to take over’? Does that autonomous golf cart just refuse to cross an invisible line into a neighborhood where it’s not certified for autonomy? And can you push the Johnnycab driver out of the way? (These, incidentally, seem like Apple’s kind of questions.)
As with most issues in autonomy today, there are many more questions than answers - this is, again, like trying to predict the shape of the 2018 smartphone market in 2000. But my point here is that even the terms of the discussion might be misleading. Almost certainly, there will not be a moment in 2023 or 2027 when the first ‘autonomous car’ goes on sale, with a sticker that says ‘Level 4 Certified’. There might never be a ‘first’ L4 or L5 car, or there might be lots of different ‘firsts’.