As technology changes rapidly, policies lag too far behind
Self-driving cars and remote-controlled drones promise unprecedented convenience and safety when they operate correctly. But what happens when something goes wrong and a collision results in injuries?
In today’s world, each driver is responsible, and there is a mature legal framework that dictates how insurance claims and court cases are handled. But the lines are not so clear with a self-driving car; in fact, the lines really haven’t even been drawn.
Is the “driver”—who isn’t even driving—still responsible? Or is it the owner of the car, who may or may not be behind the wheel? Perhaps legal responsibility will fall to the manufacturer of the vehicle, the company that makes the sensors or the firm that wrote the software for the vehicle’s collision-avoidance/traffic-management system.
These complications only cover what happens when something malfunctions within an ecosystem where everyone has an incentive to promote safety. Things get really tough when you factor in the potential for bad actors trying to undermine those safety systems.
For instance, how does liability change if it is determined that someone hacked into the software of one of the vehicle’s sensors or systems and caused the accident?
Cybersecurity has proven difficult to implement with 100% effectiveness, and it is often even more difficult to make a perpetrator pay an appropriate penalty. Tracing a sophisticated culprit through a maze of servers and false identities can be almost impossible, and even if the responsible party is found, jurisdictional limitations may prevent prosecution.
But a sophisticated cyberattack may not be the biggest threat to self-driving vehicles and remote-controlled drones, which are expected to operate safely via constant wireless communication with each other, making appropriate adjustments to avoid collisions.
What’s not clear is what happens when this wireless communication is disrupted, whether by a signaling problem, unintentional interference or someone intentionally jamming communications. Do the vehicles come to an immediate halt, or do they continue to move under the presumption that a lack of signal means there is no traffic? During a drone conference last year, panelists noted that addressing this scenario needs to be an industry priority.
I used driverless cars and remote-controlled drones as examples, but the same underlying questions apply to Internet of Things (IoT) sensors, smart grids and other critical technologies built on a foundation of software and wireless communication.
A potential jamming scenario is particularly disconcerting because jamming devices are reportedly easy to use and relatively cheap to buy on the Internet, and current laws make it extremely difficult to stop such activity.
It is illegal to use a jammer, but it is not illegal to own one, and they are readily available on the Internet, as Joe Rolli, business development manager for the Harris precision navigation and timing business unit, explained in this article that I wrote six months ago.
“It’s not illegal for them to possess it; it’s only illegal for them to buy it or to use it,” Rolli said. “So that makes it a challenge for law enforcement to do anything about catching somebody who was just trying to not be tracked. It becomes an issue.”