Google and Tesla are the indisputable pioneers in the development of autonomous car technology. But while Google’s Self-Driving Car (SDC) has been in development for the better part of the past decade, Tesla’s Autopilot has advanced by leaps and bounds seemingly overnight and has effectively taken the lead in bringing (semi-autonomous) vehicles to market with its Model S and Model X.

Interestingly, however, Google and Tesla each set off on radically different paths to reach their common goal of fully autonomous vehicles by around 2020, legislation permitting. These paths reflect not only different technologies but, crucially, just as with Boeing and Airbus, fundamentally different philosophies of how fully autonomous cars should behave and, most tellingly, how drivers will — or will not — still be able to control them.

There are two primary ways that Google and Tesla differ in their approach towards building autonomous cars: (1) choice of computer vision technology; and (2) human car control — or lack thereof.

Computer vision: cameras vs. LIDAR

Necessary for any self-driving car, of course, is the ability to see the road ahead (and all around): street lanes, pedestrians, other cars, stoplights, stop signs, traffic cones, deer, possums, beavers, llamas, and all other sorts of worldly variables.

This is no easy task. In fact, it is incredibly, mind-bendingly complex. To wit, the world has relied upon self-flying, mostly-autonomous commercial aircraft autopilot systems for decades, due in no small part to the fact that (1) computers are simply better pilots than humans; and (2) humans cannot land in “zero-zero” (zero ceiling, zero visibility) conditions and still manage to keep the plane nice and shiny.

There are two schools of thought on endowing cars with vision. One approach uses a system of cameras; the other uses LIDAR, a radar-like technology that reads reflected laser light. Google decided to use LIDAR, while Tesla has embraced cameras. In the simplest terms, here’s the difference between them.

Is that a bicycle or a pedestrian; two motorcycles or a car?

LIDAR. Using bounced pinpoints of laser light, LIDAR effectively forms a 3D model of the world around the car, trivially determining the size and distance of everything around it, day or night, sunny or cloudy. Unfortunately, LIDAR units are relatively expensive; they’re complicated, with moving parts (except for so-called flash LIDAR, which is even more expensive); the 3D model of the world, though improving, is somewhat low-resolution — is that a bicycle or a pedestrian; two motorcycles or a car? — they don’t do well in heavy rain, snow, or fog, since lasers tend to get refracted through drops of water, liquid or frozen; and they have relatively modest ranges of only up to around 100 meters (328 feet), a distance that, at 70 mph (roughly 31 meters per second), you cover in about three seconds. Not exactly enough time to stop should a family of ducks tragically happen to cross your path.
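To make those “bounced pinpoints of laser light” a bit more concrete, here is a minimal sketch, in Python, of how a single LIDAR return (a measured range plus the beam’s azimuth and elevation angles) becomes a 3D point around the car. The function name and the sample numbers are purely illustrative, not any particular manufacturer’s sensor model.

```python
import math

def lidar_return_to_point(range_m, azimuth_deg, elevation_deg):
    """Convert one LIDAR return (range plus beam angles) into an (x, y, z)
    point in the car's frame of reference, in meters."""
    az = math.radians(azimuth_deg)    # angle around the car, 0 = straight ahead
    el = math.radians(elevation_deg)  # angle above (+) or below (-) horizontal
    x = range_m * math.cos(el) * math.cos(az)  # forward
    y = range_m * math.cos(el) * math.sin(az)  # sideways
    z = range_m * math.sin(el)                 # up/down
    return (x, y, z)

# A spinning unit produces a huge number of such returns per revolution;
# collected together they form the 3D "point cloud" model of the scene.
sample_returns = [(42.0, 0.0, -1.5), (12.3, 30.0, 0.0), (87.5, -10.0, 2.0)]
point_cloud = [lidar_return_to_point(r, az, el) for r, az, el in sample_returns]
```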

Cameras. Far more conventional than LIDAR, and thus orders of magnitude less expensive (and, with no moving parts, physically simpler as well), digital cameras try to interpolate a 3D world around the car from the otherwise 2D video images they capture, in some cases using stereo camera vision to sharpen the effect. Certain car manufacturers already make good use of this for tasks such as lane keeping and, impressively on Mercedes’ new S-Class, actually reading the road surface ahead so the suspension can best dampen an upcoming crater. Where cameras suffer, however, is that the data they capture is essentially a 2D (or stereoscopic, pseudo-3D) representation of the world, rather than the actual 3D data points one gets with LIDAR. Accordingly, the computer processing required to interpolate and actually understand the 3D world from 2D data is astronomically complex, and thus very expensive both financially and in terms of required CPU power.
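For the camera side, the core bit of geometry behind turning two 2D images into depth is disparity: the same object lands on slightly different pixel columns in the left and right images, and that shift shrinks as distance grows. A minimal sketch, with made-up calibration numbers rather than any production system’s:

```python
def stereo_depth_m(focal_length_px, baseline_m, disparity_px):
    """Estimate depth from a stereo pair: depth = focal_length * baseline / disparity."""
    if disparity_px <= 0:
        return float("inf")  # no measurable shift: effectively at infinity
    return focal_length_px * baseline_m / disparity_px

# Hypothetical rig: 1000 px focal length, cameras mounted 30 cm apart.
print(stereo_depth_m(1000, 0.30, 20))  # 15.0 m
print(stereo_depth_m(1000, 0.30, 2))   # 150.0 m; a single-pixel error here moves
                                       # the estimate by 50 m or more
```

That sensitivity to pixel-level noise at long range, multiplied across every pixel of every frame, is a big part of why recovering a trustworthy 3D world from flat images is so computationally demanding.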

While Google continues to bank on LIDAR, Tesla strongly believes that camera vision will ultimately prove superior. Interestingly, Mobileye, by far the dominant supplier, providing around 90% of all autonomous vehicle hardware and software to manufacturers around the world, also has its sights set on camera vision rather than LIDAR. Mobileye CEO Ziv Aviram’s opinion is unambiguous: he believes camera vision is simply more sophisticated than LIDAR, offering much higher-resolution data with which the car can “see” the world ahead. Indeed, Mobileye is already rumored to be preparing Tesla’s next-generation Autopilot hardware.

Car control: drivers vs. passengers

To start, let’s define the spectrum of autonomous vehicles with two different end points:

Fully autonomous driverless cars: Cars that disallow any direct human control whatsoever. Example: Google SDC.

Driver-managed autopilot-enabled cars: Cars that use an aviation-inspired autopilot paradigm where the driver is usually tasked with overseeing and managing the car, but can take manual control if and when needed or desired. Example: Tesla Model S with Autopilot.

Simply put, Google believes in a world entirely devoid of human drivers: fully autonomous driverless cars, then. In contrast, Tesla, as repeatedly and vigorously explained by Elon Musk, believes that self-driving cars should utilize an aviation-inspired autopilot system (or Autopilot, with a capital A, in Tesla Land), i.e., driver-managed autopilot-enabled cars:

“Autopilot is what we have in airplanes. For example we use the same term that is in airplanes where there is still an expectation that there will be a pilot. So the onus is on the pilot to make sure that the autopilot is doing the right thing.” -Elon Musk

To wit, while Google’s test cars have done away with every piece of interface between driver and vehicle, Teslas still have the familiar steering wheel, brake pedal, and gas (well, “go”) pedal.
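To put the two control philosophies side by side in the most reductive possible terms, here is a deliberately toy sketch of how steering commands might be arbitrated under each paradigm. It is entirely hypothetical and reflects neither company’s actual control logic; the point is simply that one design has no human input channel at all, while the other always lets human input win.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SteeringCommand:
    angle_deg: float
    source: str  # "computer" or "driver"

def driverless_control(computer_cmd: SteeringCommand) -> SteeringCommand:
    """Google-style: there is simply no driver input to consider."""
    return computer_cmd

def autopilot_control(computer_cmd: SteeringCommand,
                      driver_cmd: Optional[SteeringCommand]) -> SteeringCommand:
    """Tesla-style: the computer drives by default, but any driver input
    (torque on the wheel, a tap of the brake) immediately takes priority."""
    return driver_cmd if driver_cmd is not None else computer_cmd
```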

If we applied Google’s approach to commercial jets, it would clip pilots’ wings and relegate them to little more than idle passengers, albeit ones with vastly superior views.

Tesla’s approach, in contrast, still allows humans to drive, but echoes Airbus’ philosophy, namely, that a driver’s (pilot’s) job is to manage and oversee the car (airplane) to get from A to B, and not, as it were, to actually drive (fly) it from A to B, unless absolutely necessary.

Hence Airbus’ long-championed “fly-by-wire” flight controls, where the flight computers do most of the flying. Boeing, in contrast, believes that pilots should do most of the flying, with the flight computers and autopilot systems not nearly as hands-off as with Airbus.*

If there seem to be numerous comparisons and analogies made to commercial aircraft, it’s because there are, and there should be. Indeed, the burgeoning autonomous car revolution can — and should — learn a lot from the past four decades or so of nearly-autonomous aircraft design, philosophy, and implementation.

Arguably, a big reason passengers are comfortable boarding aircraft is precisely because we still have pilots at the wheel (or, in Airbus-speak, the “sidestick,” essentially a video-game-like joystick); the market would likely never embrace pilotless flights, if only because it’s nice to know humans are there to help manage the computers if anything goes wrong (never mind the multiple-redundant computer systems on board all commercial aircraft).

If the issue here were about driving and texting rather than driving and sleeping, we’d call it “distracted driving.”

On the other hand, Tesla’s aviation-inspired Autopilot may prove to be an imperfect interim solution: while Google’s system is designed, quite literally, to let a driver (a passenger, really) enter the car and fall asleep at the wheel, Tesla expects the driver to remain alert, attentive, and ready to spring into action should the computer request assistance.

To be blunt, then, Google is applying a binary solution to the problem, while Tesla’s is more analog: the driver is neither fully driving nor fully asleep. Which is weird: if the issue were about driving and texting rather than driving and sleeping, we’d call it “distracted driving.” This does not make for good PR.
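In practice, that “alert, attentive, and ready” expectation boils down to a takeover request with a deadline: the car asks for the human, waits, and must have a fallback if nobody responds. Here is a hypothetical sketch of that loop; the timing and the fallback behavior are my own placeholders, not Tesla’s actual implementation.

```python
import time

TAKEOVER_DEADLINE_S = 10.0  # placeholder grace period, not a real spec

def request_takeover(driver_has_responded) -> str:
    """Ask the driver to retake control; fall back to a safe stop if they never do.

    `driver_has_responded` is a callable that returns True once the driver's
    hands are detected back on the wheel (however the car senses that)."""
    deadline = time.monotonic() + TAKEOVER_DEADLINE_S
    while time.monotonic() < deadline:
        if driver_has_responded():
            return "driver in control"
        time.sleep(0.1)  # keep warning the driver in the meantime
    return "slowing to a safe stop"  # the car cannot simply shrug and give up
```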

To be fair, Google’s completely hands-off approach is likely a taste of a slightly more distant reality, even if it does intend to (try to) introduce such driverless cars around 2020; and in any event, Tesla has never explicitly ruled out full driverless capability one day, but is merely focusing on driver-managed Autopilot for what is likely a more realistic immediate future.

All cars will go fully autonomous in the long term, and it would be quite unusual to see cars that don’t have full autonomy within 15-20 years.

Indeed, during Tesla’s earnings call earlier this morning, Elon Musk stated in no uncertain terms that “all cars will go fully autonomous in the long term,” and that it would be “quite unusual to see cars that don’t have full autonomy” within 15-20 years. So it could well be that Tesla’s Autopilot is an interim step before fully adopting Google’s driverless future.

Whichever philosophy you subscribe to, the future is indisputable: semi- and even fully autonomous cars will be as common in the next five years as, if not regular cruise control, then at least so-called “active cruise control” systems are today. Provided, of course, that the lawyers can keep up.

And yes, you will love them.

Follow me on Twitter @MarcHoag
Follow me on @Quora

__________
For you aviation geeks out there, yes, I realize I’m grossly oversimplifying things here, but what I’m thinking of specifically are the differences between the two manufacturers’ approaches to their autothrottle systems, to executing flight level changes, and, frankly, Airbus’ vastly easier autoland functionality. No, I’m not a pilot, but it’s astonishing how much you can learn from sophisticated simulators like X-Plane (and, to be fair, MS Flight Simulator).

9 thoughts on “Google vs. Tesla: Two different philosophies on self-driving cars”

  1. Systems engineering (not to mention FMEA analysis) will probably drive car manufacturers to use a combination of sensors with independent failure modes: camera, radar, lidar & ultrasonics.

    An unexpectedly interesting question is this: where in the processing pipeline(s) should the point(s) of data fusion lie, and how should we perform safety engineering on the post-fusion data flow pipeline?


    1. Great points William, to which I have one comment and one question 🙂

      (1) I agree that manufacturers will indeed ultimately have multiple redundant systems, again, just as aircraft have. A great example is the last of all fail-safes, the comically simple ram air turbine, essentially a glorified wind generator that drops out of the underbody of Airbus (and, I believe, modern Boeing) aircraft in order to provide a bare minimum of power should all engines — and thus electrical systems — fail.

      (2) Could you elaborate further on your question? Are you asking when/where the various systems overlap, e.g., when do we rely upon, say, LIDAR, and when do we rely on camera vision?


  2. one difference between aircraft and cars is security. aircraft, way up in the sky, without a proper connection to the internet, are far less susceptible than autonomous cars, where i’m sure IoT proponents will find ways to make everything accessible on the internet. as automation increases, the potential upside to a malicious actor increases as well.

