normal vehicles, they illustrate some of the potential risks that could arise with autonomous
vehicles, since many of the features and systems found in normal cars can be expected to
appear in autonomous vehicles as well. Furthermore, to the extent that autonomous cars do not
allow a passenger to take control of the vehicle and respond, the safety threat posed by these
vulnerabilities could be even more acute.
b. Bugs
The next source of risk related to personal safety with autonomous cars comes from the
technology itself. Karl Iagnemma, director of a start-up focused on the development of software
for self-driving cars, explained the risk posed by software bugs, stating: “[e]veryone knows
security is an issue and will at some point become an important issue. But the biggest threat to
an occupant of a self-driving car today isn’t any hack, it’s the bug in someone’s software
because we don’t have systems that we’re 100-percent sure are safe.”
Steven Shladover, a researcher at the University of California, Berkeley, stated that having
“safety-critical, fail-safe software for completely driverless cars would require reimagining how
software is designed.” Although software bugs in other devices, such as computers and smart
phones, are relatively common, software failure in an autonomous car could have far more
serious implications. This is a risk widely recognized by
American consumers, as 79 percent of consumers have cited fears that “equipment needed by
driverless vehicles—such as sensors or braking software—would fail at some point.”
c. Algorithms
The algorithms used in the autonomous vehicle’s decision-making process also present
potential risks to the safety of passengers and those in the vicinity of the vehicle:
• How should the car be programmed to act in the event of an unavoidable accident?
• Should it minimize the loss of life, even if it means sacrificing the occupants, or should it
protect the occupants at all costs?
• Should it choose between these extremes at random?
Unlike human drivers who make real-time decisions while driving, an automated vehicle’s
decision, although based on various inputs available from sensor data, is a result of logic
developed and coded by a programmer ahead of time.
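To make this concrete, the following is a minimal, hypothetical Python sketch of what such pre-programmed logic could look like. The maneuver names, risk estimates, and policy weights are illustrative assumptions, not values drawn from any real vehicle's software.

    # Hypothetical sketch only: maneuver names, risk numbers, and weights are
    # illustrative assumptions, not taken from any real autonomous-vehicle system.
    from dataclasses import dataclass

    @dataclass
    class Maneuver:
        name: str
        occupant_risk: float   # estimated chance of serious harm to occupants (0 to 1)
        bystander_risk: float  # estimated chance of serious harm to others (0 to 1)

    # The trade-off is chosen at design time, long before any particular crash.
    OCCUPANT_WEIGHT = 0.5    # weigh occupants and bystanders equally (a policy choice)
    BYSTANDER_WEIGHT = 0.5

    def choose_maneuver(options):
        """Return the pre-programmed 'least bad' option under the fixed weights."""
        def expected_harm(m):
            return OCCUPANT_WEIGHT * m.occupant_risk + BYSTANDER_WEIGHT * m.bystander_risk
        return min(options, key=expected_harm)

    if __name__ == "__main__":
        # Rough stand-ins loosely modeled on the bridge hypothetical below.
        options = [
            Maneuver("veer left off the bridge", occupant_risk=0.9, bystander_risk=0.0),
            Maneuver("brake hard and stay in lane", occupant_risk=0.6, bystander_risk=0.4),
        ]
        print(choose_maneuver(options).name)

The sketch's only point is that the weighting between occupants and bystanders, and therefore the answer to the questions above, is fixed by a programmer at design time rather than decided at the moment of the crash.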
The difficulty in making and coding the decision process is illustrated in the following
hypothetical:
An automated vehicle is traveling on a two-lane bridge when a bus that is traveling in the
opposite direction suddenly veers into its lane. The automated vehicle must decide how to react
with the use of whatever logic has been programmed in advance. The three alternatives are as
follows:
A. Veer left and off the bridge, which guarantees a severe, one-vehicle crash;