By Mike Sheldrick
Last month, the Guardian published a prescient and widely read piece entitled, “Statistically, Self-Driving Cars Are About to Kill Someone.”
The Guardian argued that the roughly 100 million fatality-free miles racked up by Tesla’s Autopilot-activated cars was a streak that could not, statistically, continue as those millions turned to billions, and yes, even trillions. In the U.S., a fatal crash occurs roughly once every 100 million vehicle miles.
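To see the shape of that argument in rough numbers, one can treat fatal crashes as rare, independent events occurring at the U.S. background rate of about one per 100 million miles. The short sketch below is purely illustrative, not drawn from either the Guardian’s or Tesla’s analysis, and assumes a simple Poisson model with that rate; it shows how quickly the odds of a fatality-free streak collapse as fleet mileage grows.

```python
import math

# Assumed background rate: roughly one fatal crash per 100 million miles driven in the U.S.
FATALITY_RATE_PER_MILE = 1 / 100_000_000

def prob_no_fatality(miles: float) -> float:
    """Probability of zero fatal crashes over `miles`, under a simple Poisson model."""
    return math.exp(-FATALITY_RATE_PER_MILE * miles)

for miles in (100e6, 1e9, 10e9):
    print(f"{miles / 1e6:>8,.0f} million miles: "
          f"chance of no fatality ~ {prob_no_fatality(miles):.4f}")
```

Under those assumptions, the chance of going 100 million miles without a fatality is about 37 percent; by a billion miles it is well under one in ten thousand.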
Then, it happened. Specifically, it happened on May 7, on a divided highway in Florida. A man in a Tesla Model S was killed when his Autopilot-driven car drove beneath a tractor-trailer that was turning left across the highway in front of it.
It wasn’t clear whether Autopilot or the man was “driving” at the time; some reports suggest the driver was watching a video and received no warning that a crash was imminent. Tesla itself noted that the tractor-trailer was white and set against a bright sky, and thus effectively invisible to Autopilot as it crossed in front of the car.
Nine days later, Tesla Motors reported the incident to the National Highway Traffic Safety Administration (NHTSA), which, for some reason, sat on the information for nearly two months before saying, in response to press queries, that it would investigate.
Tesla also sat on the information, insisting that it wasn’t material to shareholders. Eleven days after the accident, Tesla and its CEO, Elon Musk, sold roughly $2 billion of stock at $215 per share. To be sure, the price has remained pretty much flat since then, buttressing Musk’s claim that the crash information was of little relevance to shareholders or to the sale.
Nevertheless, the incident has raised a number of concerns. In its blog post on the crash, Tesla noted that Autopilot is new technology in what the company calls “a public beta phase.” Before Autopilot can even be enabled, the buyer must acknowledge that drivers are required to keep their hands on the wheel while using it. If they don’t, Autopilot issues an audible warning to take back control.
With billions in potential liability costs at stake, mainstream auto companies rigorously test their cars off-road, on private tracks or simulated city streets. Tesla, by contrast, has followed the Silicon Valley tradition of continual improvement through bug-fix releases.
An authoritative account of the accident will have to await the NHTSA examination. In the meantime, however, the incident has amplified discussion about the wisdom of autonomous (or semi-autonomous) vehicles.
Some champions of autonomy insist that, overall, driverless vehicles are far safer when they do not depend on occasional human intervention. Until humans are removed from the loop entirely, they argue, true self-driving autonomy will remain elusive.
Apart from the Tesla accident itself, other driverless-car issues are now being widely bruited about. The “trolley problem,” a philosophical thought experiment in ethics, has suddenly become widely known. Last month, in a widely picked-up story, The New York Times asked, “Should Your Driverless Car Hit a Pedestrian to Save Your Life?”
This may be the extreme case, but there are numerous other issues still to be resolved, such as hacking. Once again, the Guardian summarized these issues thoroughly and succinctly.
But there’s more. Few of these popular pieces have touched on the knotty problem of apportioning fault and liability, or on the possibly larger question of whether large numbers of drivers will even want to give up control, because they enjoy driving, especially in a world where they have little control over other aspects of their lives.