In 1999, when WAP was the future of mobile, the industry group behind SIM cards worked out a way to use the programmable space on a SIM to build a complete WAP browser. This meant that instead of having to wait for consumers to buy new phones with WAP built in, mobile operators could push a WAP browser onto every phone already in use, over the air, and get people using these services straight away.
This looked like genius - if you worked for the SIM industry group. The problem was that any phone that hadn't shipped with a WAP browser also, ipso facto, had no dedicated data network access (GPRS at the time), and so would be accessing these services over dial-up at something under 9.6 kbit/s (paying per minute for call time), and almost certainly had only a one- or two-line character-based screen. Adding WAP to such a phone was almost totally pointless.
This is an extreme example of a bridge product. A bridge product says 'of course x is the right way to do this, but the technology or market environment to deliver x is not available yet, or is too expensive, and so here is something that gives some of the same benefits but works now.'
Hence, retrofitting a WAP browser to existing phones was a bridge and, indeed, WAP itself was a bridge. It was self-evident even by 1999 that the ‘right’ approach was to put the web onto phones in some form (even if it was a mobile version of the web). But outside Japan, where NTT DoCoMo launched exactly this with i-mode in early 1999, phones and networks were not good enough to do that and WAP was something that could be done now.
In hindsight, though, not just WAP but the entire feature-phone mobile internet prior to 2007, including i-mode, with cut-down pages and cut-down browsers and nav keys to scroll from link to link, was a bridge. The 'right' way was a real computer with a real operating system and the real internet. But we couldn't build phones that could do that in 1999, even in Japan, and i-mode worked really well in Japan for a decade.
That is, all technology is a bridge in some sense - you're always shipping what you can ship now even though you know something better will be possible in the future. So there are two questions:
- Do you have a long enough market window for people to use it before the 'right' approach (or the next bridge) becomes viable?
- Is the experience you can do today good enough to be useful in its own right?
WAP had a window of 5 years or so, but the experience was so bad even on phones that were designed for it that no-one ever used it, and it was a fiasco. Conversely, i-mode had a window of a decade and was a great product despite not quite being ‘the web’, and at its peak well over half of the Japanese population was using it (and the clones from the other Japanese mobile operators). Meanwhile, the problem with the Firefox phone project was that even if you liked the experience proposition - 'almost as good as Android but works on much cheaper phones' - the window of time before low-end Android phones closed the price gap was too short.
Sometimes, though, it’s not clear which tech is the bridge. Sometimes the ‘right’ way to do it just doesn’t exist yet, but often it does exist and is simply very expensive. So the question is whether the ‘cheap, bad’ solution gets good faster than the ‘expensive, good’ solution gets cheap. In the broader tech industry (as described in the ‘disruption’ concept), the cheap product generally gets good: the way the PC grew and killed specialized professional hardware vendors like Sun and SGI is a good example. In mobile, however, it has tended to go the other way - the expensive good product gets cheap faster than the cheap bad product gets good. Multitouch smartphones running UNIX-based operating systems crushed featurephones, and short-range local wireless systems (DECT, CT2, PHS) lost out to cellular in public services.
The most obvious bridge technology today, I think, is the use of LIDAR for autonomous cars. Everyone agrees in principle that at some point in the future we’ll be able to make fully autonomous vehicles that drive with vision alone - after all, people drive without LIDAR. And LIDAR sensors today are very expensive (tens of thousands of dollars per unit, with several needed per car) and bulky. But the consensus amongst researchers in the field is that the science of computer vision is not good enough yet to do autonomy unaided, and will not be for perhaps 5 or 10 years, and so we use LIDAR (and sometimes radar) until then. LIDAR is a bridge - it’s not ideal, but it’s the sensor tech that works now. It’s the J2ME of autonomous cars. Meanwhile, autonomous driving itself doesn’t yet work even with LIDAR (for unrelated reasons), and LIDAR pricing and practicality are rapidly improving. So, will autonomy start working and vision get good enough before LIDAR gets small and cheap? If so, LIDAR might be forgotten (this is obviously Elon Musk’s bet at Tesla). Or will autonomy work and LIDAR get cheap before stand-alone vision is good enough? Or will we always use lots of different sensor types? We’ll find out.