When I moved to Silicon Valley from London, in 2014, I bought a second-hand German car from 2009. The dashboard reminds me very much of using a Nokia in 2000 - it's perfect, and clear, and easy to understand, and there's no software at all. There are features, some of which are shown on a monochrome screen, and powered by firmware, but no software.
Then, a few weeks ago, it needed to be serviced and the dealer lent me a brand new top-of-the-line version of the same model. This one was like using a Nokia from 2007 - they've added all the smart stuff, badly. There are so many buttons that even the buttons have buttons, and though each particular feature makes sense on its own, and might even be implemented quite well, when they're all added together the effect is absurd.
My new favorite site on the internet shows this extremely well, if unintentionally. 'My Car Does What?' is an attempt by the car industry to educate the public about the safety features that have been added to their cars over the past decade or so (I saw it advertised on a video screen at a gas pump). Unfortunately, what it really shows is that a proliferation of features has overwhelmed the 'job to be done'. The job is to stop the car crashing (or rather, to stop the user from crashing the car), but the implementation is 'give the user 37 different icons on their dashboard'. Indeed, it's not just the drivers that are confused - the dealers are too.
One way to look at this is to say that the car industry is just bad at software and human-computer interface design. That is probably true, and often you can see the org chart in the dashboard layout, but I think it misses a deeper point - what's really happening is that an interface model has been overloaded to the point that it's becoming top-heavy, and needs to be replaced by a new one. The industry has added more and more features on top of the product: if you'd only added one, it would make sense for it to be a separate feature with its own button or light, but when you've added dozens, you need to invert the model and put all of them underneath - like a melting iceberg rolling over. In computing, this is what happened first with PC GUIs and then with smartphones - interface features that had been added on top of the previous generation's interface disappeared underneath the new one. You need a new platform to build on.
This is a common theme in many classes of device: you start with a product that has a few electronic functions added, and then those functions are delivered with chips, and perhaps they gain an interface and then a screen, and more and more functions (and probably multi-function buttons) - and then, somehow, you've built a weird little custom computer without actually meaning to, and all the little silos of features and functions become unmanageable, both at an interface level and at a fundamental engineering level, and the whole thing gets replaced by a real computer with a real software platform. And this new computer is almost certainly made by a different company.
You could see this problem very clearly at Motorola, which developed as many as two dozen 'operating systems' - for phones, pagers, satellite phones, car-control, industrial devices, chip evaluation boards and so on and so on, and picked them for each device out of a metaphorical parts bin just as you'd choose a sensor or battery or any other component. And boy, they really knew how to write operating systems - they had dozens! With, probably, 'millions of lines of code'. This was exactly the right approach in 1995, but in 2005, again, the whole thing collapsed under its own weight, because they needed software as a platform rather than as a one-off component, and instead they had a mess.
In cars, part of this will be addressed by what's termed 'sensor fusion'. Rather than individual sensors triggering individual notifications, a car will have a single computer that takes input from all of the sensors on the car and builds a unified model of what's going on around it. (This is of course also a necessary building-block for autonomous cars.) We don't quite have sensor fusion yet, but Nvidia and others are now selling early versions to OEMs (including Tesla), each of which puts their own light layer of customisation and UI on top. So, instead of a sensor for the left blind spot, and another for the left passing warning, the car will know, at least in a crude, mechanistic sense, what's around it.
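The shift can be sketched in a few lines of entirely hypothetical Python - none of this reflects any real OEM's or Nvidia's system; the sensor names, zones and thresholds are invented for illustration. In the old model each sensor raises its own warning; in the fused model, all readings are merged into one picture of the space around the car, and a single decision falls out of it.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor: str       # e.g. "left_radar", "rear_sonar" (illustrative names)
    zone: str         # region of space the reading refers to
    distance_m: float

# The old model: each sensor independently fires its own warning light.
def per_sensor_alerts(readings, threshold_m=2.0):
    return [f"{r.sensor}_warning" for r in readings if r.distance_m < threshold_m]

# The fused model: combine all readings into one picture of the car's
# surroundings, keeping the closest object seen in each zone.
def fuse(readings):
    world = {}
    for r in readings:
        if r.zone not in world or r.distance_m < world[r.zone]:
            world[r.zone] = r.distance_m
    return world

# One decision derived from the unified model, instead of a light per sensor.
def decide(world, threshold_m=2.0):
    hazards = {zone: d for zone, d in world.items() if d < threshold_m}
    return "hold_lane" if "left" in hazards else "ok"

readings = [
    Reading("left_radar", "left", 1.5),
    Reading("left_camera", "left", 1.7),  # same object, seen by two sensors
    Reading("rear_sonar", "rear", 4.0),
]
print(per_sensor_alerts(readings))  # two separate warnings for one object
print(decide(fuse(readings)))       # one decision for the whole situation
```

The point of the toy: the same object in the blind spot produces two alerts in the per-sensor model and exactly one decision in the fused one - which is the difference between a dashboard of icons and a model of the world.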
However, though this might be a platform of sorts, it doesn't really change the interface problem at all - sensor fusion makes the sensors work together properly, but what should the car do with that? Should it show the same lights and sound the same warning chimes, just more reliably, with all the same buttons? A rich animation on your new fully-digital dashboard? Should there be a 'stick-shaker' that stops you changing lane if there's someone in your blind spot? What if you turn against the stick-shaker? Is all of this answered by more iteration of what car OEMs have already built, or does it call for a more fundamental rethink of the car UI? That is, again, how far can you keep adding stuff on top of the existing dashboard, and when do you need something new? And will car OEMs or their traditional suppliers be the ones to do this?
A good 'Occam's Razor' for this, I think, is Eric Raymond's adage that a computer should never ask you a question that it should be able to work out for itself. These alerts and warnings, and all those buttons, are questions. Just as Windows no longer asks you what sound card you have, and smartphones don't ask you where to save a file or what your password is, a car shouldn't ask either - what is a back-up warning but the question 'do you want to stop now?' Really, a car shouldn't have a back-up warning at all - it should just rubber-band to a halt. And that, in turn, is a step to autonomy - to level 3 and 4, the car that will try not to let you crash, and that will increasingly drive itself.
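The distinction between asking the question and answering it can be made concrete with a deliberately toy Python sketch - the thresholds and the linear ramp are invented for illustration, not any real system's behaviour. A chime hands the question to the driver; a rubber-band brake maps distance straight to braking force, so the car answers it itself.

```python
def chime(distance_m, threshold_m=1.0):
    # The old interface: a warning the driver must interpret and act on.
    return "beep" if distance_m < threshold_m else None

def rubber_band_brake(distance_m, start_m=2.0, stop_m=0.3):
    # The inverted interface: the car resolves the question itself.
    # Braking ramps linearly from 0 (at start_m) to full (at stop_m or closer),
    # so the car decelerates smoothly to a halt instead of beeping.
    if distance_m >= start_m:
        return 0.0
    if distance_m <= stop_m:
        return 1.0
    return (start_m - distance_m) / (start_m - stop_m)

print(chime(0.8))              # "beep" - still a question for the driver
print(rubber_band_brake(0.8))  # about 0.7 - the car just answers it
```

The chime is a question with exactly one sensible answer; the ramp simply gives that answer, which is why it's also a step towards the car driving itself.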
That is, the end-point is to have no interface at all. In a fully-autonomous, 'Level 5' car, with no steering wheel or manual controls at all, the only human-computer interface is when you say "take me home now". But most people in the autonomous driving field think that's at least 5 years away, and more probably 10 or more. In the meantime we have a transitional phase, as you go from lots of warnings to one, and ask what, fundamentally, that warning should be - and as you sit in a car where you need to be in the driving seat and steering, mostly, or ready to steer, but the car might stop you, or drive itself. Something that drives itself until it doesn't can easily become dangerous. So, my struggle to turn off the HUD on my borrowed car might become something rather more urgent.
This could, incidentally, be the best car opportunity for Apple. A car that you just tell to go home and forget about is Google's sweet spot, without much scope for Apple to add any unique insight as to how the experience should work. Conversely, a car that you still need to drive, somehow, but in radically new ways, seems like a fruitful place for thinking about how interfaces work, and that's Apple.