So I’m currently in the middle of writing a ridiculously long three-part series on electric cars, self-driving cars, and the fledgling field of autonomous driving law. But until I get around to finishing it, the question of whether we’ll actually have self-driving cars any time soon keeps popping up whenever I talk with anybody about them.
The conversation always takes the same path. First, people’s eyes get that sort of incredulous look of “Really? You really like the idea of self-driving cars? You really think they’re going to be a thing?” (Maybe they just love to speak in italics?)
Then, once they realize I’m serious — or perhaps just slightly nuts — the conversation usually branches into one of three directions:
(a) They start to attack me at my automotive heart, claiming that I, of all people, should never (there go the italics again) want a self-driving car because I love cars and driving so much; or
(b) They sort of half-way concede that maybe, just maybe, self-driving cars could in fact become a thing, but so what, it’ll be decades before they’re an italicized thing.
(c) Regardless, people will never be okay being driven around by a computer, because computers make mistakes, they could crash, and the world would end in a Terminator-like takeover of malicious self-driving Google Cars, Apple iCars, and other ridiculously cute-looking cars. Yes, that image at the top of this post is indeed a Google Car. Can’t you imagine it taking over the world?
This is a quick explanation (I’ll go into more detail in my forthcoming never-ending three-part series) of why they’re all wrong.
Okay, so let’s dive into each of these points in turn.
Why I, a petrolhead / auto-enthusiast / Jeremy Clarkson-quoting / TopGear fanatic cannot wait for self-driving cars
Simple. It’s precisely because I love cars and driving that I want self-driving cars. Sitting in stop-and-go traffic is not driving. Neither is sitting on the 5 freeway between San Francisco and LA, doing a mind-numbing 70 mph, on a road designed by engineers who think hexagons are actually circles.
Besides, we already use cruise control on freeways. And many cruise control systems are now so-called “active” systems which can also brake your car to a complete stop if necessary, to avoid hitting another car, or worse, an elephant. And we already have lane-keep assists which keep your car from drifting out of your lane. And we already have blind-spot monitoring systems which keep you from sharing paint with the car next to you. So what’s the difference? The point is, the differences in self-driving capability are only in degree, and not in kind.
And what about the other drivers? Even if I didn’t want self-driving cars for myself, I certainly want them for everyone else. I don’t trust other drivers on the road. When my Dad taught me to drive, the resounding theme was always about being a defensive driver and never trusting other drivers.
To give an analogy, think about one of those times you were sitting in your car, parked in a parallel spot along the curb, and an absurdly large tank of an SUV tried to fit into the spot in front of you, a spot barely large enough for a turtle. Ten years ago, your heart would probably freeze up a bit, your nerves would tighten like a wind-up toy, and your fist would instinctively lay into the horn, all in a desperate attempt to keep your car from being flattened.
But today you can relax, at least if the SUV in front of you is relatively modern, because you know it almost certainly has parking sensors, incessantly beeping away to notify the driver when they’re about to give your car a nose job. And even if you aren’t sure, you can almost always see those little round parking sensor things that adorn the rear bumpers of almost all modern cars. The point is, you can breathe easy knowing that the car in front of you simply will not hit you.
Okay, this is getting long already, but you get the point. On to the next one:
Why it will not take decades, plural, for self-driving cars to become an italicized thing
Self-driving cars are not a binary thing, italicized or otherwise. Rather, “self-driving” occupies an entire spectrum of autonomous capability, from the lowly cruise control on one end, to full hands-off-take-a-nap automation on the other.
As we discussed above, self-driving cars are already here, albeit in limited form, somewhere along that self-driving continuum, and so to argue that it will take decades for them to arrive on our streets is simply wrong.
“Ok fine!” you’re thinking, “I mean fully autonomous self-driving cars, the hands-off-take-a-nap sort of self-driving cars!”
Okay, so let’s address that. First, let’s fix the nomenclature and call fully autonomous cars what they really are: autopilot-enabled cars. Why autopilot-enabled? A few reasons. One, it just sounds cooler. Two, airplanes, which have had fully autonomous controls for decades, call these systems autopilots. And three, Elon Musk said so. And given that he’s at the forefront of such technology, I think it makes sense to follow his lead.
Now that that’s out of the way, let’s see how close we are to fully autopilot-enabled cars. Apparently, not that far:
Thursday, in this case, means today. Although technically, as it turns out, Tesla started gradually rolling out the update last night. And what does Autopilot mean in Tesla’s case? This (you can click the image to make it life size):
As a practical matter, not least because the lawyers haven’t yet caught up with fully autonomous cars, Autopilot will certainly be limited. For example, it’s generally believed to work only on certain freeways, and only when lane markings are sufficiently visible. LA’s freeways, the infamous 405 in particular, were a big reason why Autopilot’s rollout was delayed so long: white lane markings on essentially white concrete, rather than the more ordinary black asphalt, aren’t exactly the easiest things for a computer to see. But then, they’re pretty hard for people to see at times too, especially in the rain (not that we ever get any in California, but that’s not the point).
Meanwhile, the new Mercedes S Class also has a “very autonomous” — if not fully autonomous — feature called Traffic Jam Assist. It’s similar to Tesla’s Autopilot mode, except that it works only at speeds up to 37 mph (why 37? Because it’s actually a nice round 60 km/h), and even then, only when there’s actually traffic around (apparently it relies more on other cars’ positions than on actually reading the road. A hack, then).
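If you want to sanity-check that oddly specific 37 mph figure yourself, it really is just the metric limit converted. A quick sketch (the conversion constant is the standard km-per-mile definition):

```python
# Sanity check: Traffic Jam Assist's 37 mph limit is just 60 km/h converted.
KM_PER_MILE = 1.609344  # exact definition of the international mile

def kmh_to_mph(kmh: float) -> float:
    """Convert kilometers per hour to miles per hour."""
    return kmh / KM_PER_MILE

print(round(kmh_to_mph(60), 1))  # → 37.3
```

So the spec sheet rounds 37.3 down to 37 mph, and the "weird" number turns out to be perfectly round after all.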
Suffice it to say, then, autonomous cars really are here today, and are not decades away. Yes, fully autonomous self-driving cars that can handle all freeways, surface streets, and weather conditions are still a few years out — perhaps as many as ten — but still, that’s not decades with an “s.”
But then, that’s ok: we still need the lawyers to catch up before 100% autonomous cars are even allowed on the roads in the first place.
Why autopilot-enabled cars will be safer than human-driven cars, just as autopilot-enabled aircraft are safer than human-flown aircraft, and why they will not overthrow humanity
Self-flying planes — autopilot-enabled planes — have been around since the 1960s, a development pioneered in the UK, a place not particularly known for its pleasant flying weather. Cloud, fog, rain, all that stuff. Not exactly pleasant to land a plane when you can’t see the runway. And today we have “zero-zero” autoland which is exactly what it sounds like: the ability for aircraft to land with zero visibility at zero altitude, i.e., on the ground. These are called CAT-III ILS (for Instrument Landing System) approaches, and they are awesome. Here’s an example of one, from the cockpit:
But it was more than woefully abysmal British weather that sped the development of aircraft autopilot systems. Even as far back as the early 20th century, the earliest autopilot systems were developed because it was discovered that, on flights longer than an hour or so, pilots would at best grow exhausted by the workload of maintaining level flight and navigation, and at worst simply fall asleep at the wheel.* (Yes, that’s an asterisk, and yes, it means there is a footnote here which I think you’ll find pretty interesting.)
If the latter example sounds particularly familiar, it should: falling asleep at the wheel is an all-too common cause of vehicular accidents. According to DrowsyDriving.org, the NHTSA reported that fully 100,000 accidents each year are caused by fatigued drivers, resulting in 1,550 deaths, 71,000 injuries, and $12.5 billion in costs.
Furthermore, a staggering 3,000 people die each month in the U.S. due to traffic accidents. That’s the equivalent of a 9/11 attack happening every. single. month.
And then there’s the statistical odds of being killed in a car: 1 in 5,000. Your odds of being killed in an airplane? 1 in 11 million. For comparison, you have a 1 in 12,000 chance of being struck by lightning at some point in your life, and a 1 in 500,000 chance of being killed by a gun.
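To make those odds a bit more visceral, here’s a back-of-envelope comparison. The “1 in N” figures are taken straight from the numbers above, not re-derived from any dataset:

```python
# Lifetime odds quoted above, expressed as "1 in N".
odds = {
    "car crash": 5_000,
    "plane crash": 11_000_000,
    "lightning strike": 12_000,
    "gun": 500_000,
}

# Relative risk: how many times likelier is a car death than a plane death?
relative = odds["plane crash"] / odds["car crash"]
print(f"Dying in a car is ~{relative:,.0f}x more likely than in a plane")
```

Run it and you get a factor of about 2,200 — which is the whole argument for handing the wheel to an autopilot, in one number.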
Here’s a sketch I drew by hand using the cool new iOS 9 Notes app on my iPhone. On this sophisticated-looking graph, the taller the vertical line, the more likely you are to die. So sometimes smaller is better, after all.
The point is, cars are dangerous. Very dangerous. Extremely, mind-numbingly dangerous. Not simply because cars are big, heavy, fast things with people trapped inside moving at incredible speeds, but because we don’t exactly have the most robust driver’s education and licensing requirements in the U.S. when compared with countries like, say, Germany. And people wonder why the U.S. doesn’t have autobahns like Germany does.
Seriously though, if any other activity in the U.S. were this dangerous, this lethal to human life, it would be outlawed.
So, as to whether autonomous, autopilot-enabled cars will become an italicized thing, or more to the point, whether they should, the answer is indisputably yes, provided they can reduce our chances of dying a miserable, agonizing death to something better than 1 in 5,000 odds. Once this happens, they will in fact be mandated by law, with humans simply not allowed to drive in highly trafficked areas anymore.
Considering that Google Cars drive about 10,000 miles a month — or about what a typical driver drives in a year — and have suffered only 11 minor crashes, none of which was the Google Car’s fault, it seems they’re off to a great start.
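For the curious, here’s a rough annualization of those numbers. The monthly mileage and crash count come from above; the six-year figure is my own assumption (Google’s program is widely reported to have begun around 2009), and treating today’s monthly pace as constant over that whole span is admittedly generous, so take the crash rate as a ballpark only:

```python
# Back-of-envelope Google Car mileage and crash rate.
MILES_PER_MONTH = 10_000   # figure quoted in the post
MINOR_CRASHES = 11         # figure quoted in the post
YEARS_DRIVING = 6          # assumption: program began around 2009

annual_miles = MILES_PER_MONTH * 12
total_miles = annual_miles * YEARS_DRIVING
miles_per_crash = total_miles / MINOR_CRASHES

print(f"~{annual_miles:,} miles/year")
print(f"roughly one minor crash per {miles_per_crash:,.0f} miles")
```

Even with those caveats, one not-their-fault fender-bender every sixty-odd thousand miles is a record most human drivers would envy.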
As far as autonomous cars taking over the world? Nah. They’re having too much fun driving.
So what do you think? Let us know in the comments!
(Ok, so this wasn’t such a short post after all. Oh well. Sometimes I just can’t help myself.)
* Beyond weather and fatigue, it turns out that flying a commercial jet at cruising altitude is a remarkably delicate act. In the stratospherically (literally) high, thin air, aircraft fly marginally above their stall speed, and marginally below their maximum airspeed. Fly either too slow or too fast, then, and the results could be catastrophic.
This delicate envelope in which planes fly is therefore known, appropriately enough, as “coffin corner.” It was by failing to remain within this delicate sliver of flight capability that Air France 447’s pilots lost the plane and everyone aboard, after their autopilot disengaged due to frozen, and thus inoperable, airspeed probes upon which the autopilot computer relied.
Little surprise then, that pilots engage the flight computer’s autopilot functionality shortly after takeoff and, weather permitting, disengage it only during final approach to landing. Simply put, computers are better than pilots at flying aircraft.
“But wait!” I can hear you crying out, “isn’t the point of a pilot to fly you from A to B?” No. The point of a pilot is to ensure the plane flies from A to B, and for the pilot to manage that flight; but the plane does almost all the flying, and in the future, could, admittedly, render pilots entirely obsolete. Similarly, with cars, it will soon be the cars that do all the driving; we will merely manage and instruct them where to drive us, until true 100% autonomy becomes the norm and most human-driven cars are removed from streets (in about 50-80 years).
“But wait!” I hear you cry again, “surely that will just make pilots even more incompetent, never mind the already useless drivers on the streets today!” It’s a valid point, and driving laws will simply need to borrow a page from aviation regulations and bolster driver education, just as pilots are now being pushed to redouble their hand-flying skills. Accidents like Air France 447 — to say nothing of the appalling Asiana 214 accident in San Francisco — underscore the need to enhance pilot instruction, ironically enough, precisely because of the increased automation. So it should be with drivers, as well.