US states are scrambling to regulate self-driving cars, but the focus on physical safety ignores the more complex issues of data privacy and security
States across the US are scrambling to figure out how to regulate self-driving cars, wearable technologies that track our health, smart homes that constantly monitor their infrastructure and the rest of the devices emerging from the so-called “internet of things” (IoT). The result is a smattering of incomplete and inconsistent law that could depress the upside of the technology without really addressing its risks.
What’s most notable about these early regulatory attempts is not that they are varied – that is to be expected. It’s that the regulations deal mostly with physical safety, leaving privacy and cybersecurity issues almost wholly unexamined. This seems to be a pattern now, true too of drone regulation, where regulatory bodies have jurisdiction over physical threats, not informational ones.
The regulatory apparatus is stuck in the atomic age as the regulated technology thrusts into the fully networked age.
Seven states and the District of Columbia have now enacted laws that address autonomous vehicles, and many more states have laws in the pipeline. The most obvious defect of these early attempts is that they don’t deal with the data flowing through connected cars. They typically define an autonomous vehicle, prescribe registration and notice requirements for putting one on the road, and require a manual override and a licensed driver in a position to control the vehicle.
Some deal with the allocation of liability, insurance and more detailed safety issues. Some impose special taxes on vehicle owners (hello DC, which has special taxing needs). There is the usual industry criticism that state regulation will result in a patchwork of conflicting rules that will depress automotive innovation. What is to be done, they ask, when one state requires a steering wheel and foot-applied brakes, while another state does not?
In the absence of federal action, what often happens is that California establishes the standard as an early mover with a huge market. This was the case with data breach legislation, where California’s stringent requirements became the industry standard. With revenge porn liability, California moved first and other states followed, so diversity of state action is not, in itself, a lasting problem.
What is most troublesome about the autonomous vehicle laws is not how they differ, but how they are alike. They all fit the new paradigm of self-driving vehicles into century-old licensing regimes, without really dealing with what makes autonomous cars so different.
If we think about self-driving cars along a spectrum of autonomy, as suggested by the National Highway Traffic Safety Administration (NHTSA), the state laws are aiming at the mid-spectrum “highly autonomous” vehicles. These are cars that usually drive themselves, but may require human intervention under extraordinary circumstances.
By contrast, “fully autonomous” vehicles – those that need no human driver and may not even have human-operable controls – are not yet permitted. At the other end of the spectrum are the “partially autonomous” cars already on the road. These surrender some of the functions of driving to automatic processes, but need a fully alert human ready to take over at any moment.
The new state laws, in addition to addressing only highly autonomous cars, are focused only on the driver-vehicle physical interface.
That would be fine and proper if the physical interface were the only one that mattered. If the public safety risks posed by autonomous vehicles were solely threats to life and limb, it would be good enough to address the risks as an extension of 20th-century motor vehicle regulation.
But the logical interface between driver and car is just as important. Self-driving cars implicate data-flow issues that are common to many IoT technologies, resulting from constant real-time communications between users and their environments, and then between users and data collectors.
This is data that can reveal intimate and commercially valuable personal details, including geolocation and driving habits. BMW’s sensors are supposedly so sophisticated that they can tell if a child is on board – data that brokers have sought in order to entice parents to pull off the road for kid-friendly offers.
As well as privacy issues, there are security threats. Researchers have shown that vehicle controls are vulnerable to hacking, raising the specter of bad actors taking over braking or steering functions, whether just for kicks or as a cyberwar tactic.
Although there is an industry agreement on information privacy best practices, state laws don’t incorporate them. So far, state regulations fail to address or even acknowledge the data privacy and security problems associated with the collection, use, storage and dissemination of data gathered from autonomous vehicle use. They don’t deal with the potential for unauthorized third-party access to the data, nor do they deal with routine public safety questions such as whether police should have “back door” control over suspects’ cars when in active pursuit.
California has draft regulations that do address the informational privacy issues, if only glancingly. These require notice and consent before any information beyond what is needed to operate the vehicle can be collected from operators.
A mandatory opt-in for data collection is only one of the best privacy practices. In 2014, the major automakers voluntarily adopted Fair Information Practice Principles. These include commitments to transparency, consumer choice, minimization of data collection and retention, and de-identification. The principles require heightened protection for personally identifiable information, such as geolocation, driver behavior and biometric data.
The voluntary best practices, quite apart from being voluntary, are not all that helpful. There is enough ambiguity in them to drive an autonomous fleet through. A manufacturer can promise to de-identify personal information (like what time you left home and where you went), but different manufacturers will do this to different standards, and some of those standards will allow re-identification. Manufacturers might let consumers opt in before collecting their data, but if the smartphone market is anything to go by, consumers who declined would get a much degraded experience.
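To see how much latitude the word “de-identify” leaves, consider a minimal, hypothetical sketch. The data fields and both functions below are invented for illustration and do not describe any carmaker’s actual pipeline; they simply show two standards a manufacturer could plausibly call de-identification, one of which still leaves the data trivially re-identifiable.

```python
# Hypothetical illustration of why "de-identification" is only as strong as
# the standard chosen; not based on any manufacturer's actual practice.
import hashlib
from dataclasses import dataclass

@dataclass
class TripRecord:
    vin: str          # vehicle identification number
    start_lat: float  # where the trip began (often the owner's home)
    start_lon: float
    start_hour: int   # hour of day, 0-23

def weak_deidentify(trip: TripRecord) -> dict:
    """Replaces the VIN with a stable hash but keeps precise coordinates.
    A persistent pseudonym plus an exact home location is easily re-identified."""
    return {
        "vehicle": hashlib.sha256(trip.vin.encode()).hexdigest()[:12],
        "start": (trip.start_lat, trip.start_lon),
        "hour": trip.start_hour,
    }

def stronger_deidentify(trip: TripRecord) -> dict:
    """Coarsens location to roughly 10 km grid cells and time to broad
    day-parts, trading analytic precision for lower re-identification risk."""
    return {
        "start_cell": (round(trip.start_lat, 1), round(trip.start_lon, 1)),
        "daypart": "morning" if 5 <= trip.start_hour < 12 else "other",
    }

trip = TripRecord(vin="1HGCM82633A004352",
                  start_lat=51.5312, start_lon=-0.1235, start_hour=8)
print(weak_deidentify(trip))      # a unique ID plus an exact home address in all but name
print(stronger_deidentify(trip))  # blends into everyone in the same grid cell
```

Both outputs would satisfy a loosely worded promise to strip personal identifiers, which is precisely the ambiguity the voluntary principles leave unresolved.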
At the federal level, a bill has been introduced that does a little less than nothing: it requires a government audit of the Department of Transportation to see if that agency is capable of enacting consumer protections. Anticipating that there will be federal policy at some point, tech and car companies (Uber, Google, Lyft, Ford) have formed a lobbying group to shape autonomous vehicle policy.
What is happening in the autonomous vehicle space recapitulates drone regulation. A spate of state laws has addressed drone flights over private property and critical infrastructure, government drone use and image capture. What federal regulatory policy there is comes out of the Federal Aviation Administration: just as the NHTSA sees to the safety of cars, the FAA sees to the safety of aircraft. The FAA’s draft drone regulations, as might be expected, address the licensing of pilots and the prevention of drone crashes and flight interference.
And again, these regulations don’t address information privacy and cybersecurity – matters way outside the FAA’s competence. Into this vacuum has stepped the Department of Commerce, whose advisers have convened various interested groups to discuss drone privacy, and in May came up with a voluntary guide to best practice. Yet some of those groups, including the Electronic Frontier Foundation, criticized the process for being too dominated by industry, and refused to sign on.
In the absence of any federal capacity to regulate for data privacy and cybersecurity, these issues are bound to fall through the cracks of state and federal rule making.
The physical side of self-driving cars, and drones, may be significant, but the informational side is revolutionary. And for now, we have to trust those industries to regulate themselves.