By now you’ve probably read the recent article over at Autoblog about those reports detailing “thousands of failures in self-driving cars,” after which you probably threw your hands up in the air and said “see, obviously autonomous cars are crap, they’re dangerous, and they have no business being on the roads, at least not any time soon.”
And you’d be wrong. Very wrong indeed.
So apparently Google announced on Tuesday that its “Google Car” autonomous vehicles experienced technical failures 272 times between September 2014 and November 2015. While none of these incidents caused an accident or injury, they were worthy of mention because they required the drivers on board to manually override the autopilot computers and take control of the vehicles. Moreover, drivers “felt compelled” to take over of their own accord — rather than because of any mechanical issues — 69 times during that same period.
But that’s just Google. During the same timeframe, six other companies also reported 2,894 autopilot disengagements on public roads, per California statutory regulation.
The problem, however, is that none of these numbers is particularly meaningful without context, chiefly total miles driven, both per car and in the aggregate. The report also neglects to mention that, even with such flaws, the net result is still far safer than human-driven cars.
The notion that autonomous cars disengaging, whether automatically or by human intervention, is somehow undesirable misses the point entirely with respect to how an autonomous car should function: to wit, autonomous cars should disengage when necessary or allow humans to take over immediately. As discussed numerous times on this blog, especially here and here, this is exactly how autopilot systems work on aircraft, too.
The issue is not whether an autonomous car’s autopilot system is disengaged, whether automatically due to mechanical failure or by human intervention. The issue is how that disengagement functions: whether it provides ample warning and sufficient time for the human driver to take control, and whether, during the brief but crucially important transition from car to human, the autopilot functionality is gradually rather than abruptly terminated.
Granted, the reaction time between the autonomous system shutting off and the human driver snapping back to reality and taking control can be on the order of several seconds; a potential eternity, to be sure. A logical, if not easy, solution would be to implement a gradually phased disconnect of the autopilot systems: lane-keeping assist should remain active; emergency auto-braking should remain active; or, as Tesla does it, the car should navigate itself carefully to the shoulder of the road and come to a complete stop. After all, in the case where the human driver is for some reason incapacitated, the car will eventually need to come to a safe stop on its own.
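The phased disconnect described above can be sketched as a small state machine. Everything here, the state names, the transitions, and the alert timeout, is a hypothetical illustration of the idea, not any manufacturer’s actual handoff logic:

```python
from enum import Enum, auto

class HandoffState(Enum):
    FULL_AUTOPILOT = auto()
    DRIVER_ALERTED = auto()   # warning issued; all assists still active
    ASSISTED_MANUAL = auto()  # driver steering; lane-keep and auto-brake stay on
    SAFE_STOP = auto()        # driver unresponsive: pull to the shoulder and stop

def next_state(state: HandoffState, driver_has_control: bool,
               seconds_since_alert: float, timeout: float = 8.0) -> HandoffState:
    """One step of a hypothetical phased-disconnect policy (a sketch only).

    The key property: the system never jumps straight from full autopilot
    to zero assistance. It alerts first, keeps safety nets engaged after
    the driver takes over, and stops the car itself if nobody responds.
    """
    if state is HandoffState.FULL_AUTOPILOT:
        return HandoffState.DRIVER_ALERTED
    if state is HandoffState.DRIVER_ALERTED:
        if driver_has_control:
            return HandoffState.ASSISTED_MANUAL  # hand over, but keep assists on
        if seconds_since_alert > timeout:
            return HandoffState.SAFE_STOP        # incapacitated driver: stop safely
        return HandoffState.DRIVER_ALERTED       # keep driving, keep alerting
    return state
```

Note that `SAFE_STOP` is reachable only through `DRIVER_ALERTED`, which is the whole point: the several-second reaction gap is bridged by a state in which the car is still doing the work.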
These bumps in the road, as it were, are a necessary and foreseeable challenge on the path toward fully capable autonomous vehicles. And given the markedly safer results already provided by today’s nascent semi-autonomous systems, the end goal of fully autonomous vehicles, toward which we are barreling inexorably forward, cannot arrive soon enough.