Interestingly,
a modern, and more realistic, version of the trolley quandary
has recently come up: how to program the
software of autonomous cars. The quandary: you are riding in such a
car and an unavoidable accident is about to occur. Should the car's
software be written in a utilitarian manner, that is, to choose the
course of action that harms or kills the fewest people? What if those
harmed might include you, the owner of the vehicle? Would you buy the
car with that program, or would you want the car's autonomous program
altered to protect you at all costs, regardless of who else might be
harmed or killed?
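To make the dilemma concrete, here is a minimal, purely illustrative sketch in Python of what a crudely "utilitarian" decision rule might look like. The function names, fields, and numbers (such as choose_maneuver and pedestrians_at_risk) are invented for this example; no real autonomous-vehicle software reduces the decision to a comparison like this. The point is only to show how a value judgment ends up encoded as an explicit line of code.

```python
# Purely hypothetical sketch of a "utilitarian" crash-maneuver chooser.
# All names and numbers are invented for illustration.

def expected_casualties(maneuver):
    """Toy estimate of total people harmed if this maneuver is taken."""
    return maneuver["pedestrians_at_risk"] + maneuver["occupants_at_risk"]

def choose_maneuver(maneuvers, protect_owner=False):
    """Pick a maneuver.

    protect_owner=False -> purely utilitarian: minimize total expected harm.
    protect_owner=True  -> first discard any option that endangers the occupant,
                           which is the alternative posed in the text above.
    """
    candidates = maneuvers
    if protect_owner:
        safe = [m for m in maneuvers if m["occupants_at_risk"] == 0]
        candidates = safe or maneuvers  # fall back if no occupant-safe option exists
    return min(candidates, key=expected_casualties)

options = [
    {"name": "swerve_left", "pedestrians_at_risk": 0, "occupants_at_risk": 1},
    {"name": "brake_straight", "pedestrians_at_risk": 2, "occupants_at_risk": 0},
]

print(choose_maneuver(options)["name"])                      # swerve_left (fewest harmed)
print(choose_maneuver(options, protect_owner=True)["name"])  # brake_straight (owner protected)
```

Even this toy example shows where the ethical choice hides: in a single flag, and in whose risk gets counted.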
This
is a problem that is currently causing a real ethical dilemma in the
autonomous car world. Recently, in late June, a Tesla Model S on
Autopilot failed to see a truck entering an intersection in Florida. The car
kept going, and the crash killed the Tesla's driver. Now this is a real
trolley problem. What should be done about the car's software to avoid
such accidents in the future? [Update: another Tesla car
crashed, in somewhat similar circumstances.]
The
issue for Tesla seems to be that they are releasing the car's
software for beta-testing by the public. Beta-testing is a common
practice among high-tech companies, one that relies on customers to flush out
software bugs, as happens with smartphones. It lets those
companies rush new technology into people's hands and then
improve and debug the product using customer feedback. These
companies admit that failure of their product is part of the game;
you don't progress at a fast pace without failure, they say. There is
a good argument that, while this practice may be acceptable for
smartphones, it can be dangerous for cars—where safety is a prime
issue. Major car companies traditionally thoroughly test safety items
before releasing them. Is Tesla playing with customers' lives?
Once
again, I find the autonomous car software problem not to be all that
likely. Sure, a death occurred, but was it a different problem from the
trolley car one, solvable in a different way? You may posit
a simple scenario for the autonomous car (such as which way to direct
the car in an impending crash), as in the case of the trolley car
problem, but in the end it's just a thought experiment. It's an
abstract situation that may never really occur. Furthermore, the
unfolding of the actual accident may not present just those two
contrasting alternatives. In a real accident, there may well be many
other options that cannot be foreseen, or tiny events that could
completely alter the situation. I find it impossible to imagine that
anyone could program the car's software to adequately cover all
possibilities.
However
realistic or unrealistic the trolley car quandary or the autonomous
car situations are, I see a more general issue that needs to be
addressed. We have had countless technical innovations introduced
into society, most of them sold to us on the advantages they
offer. They save time or money; they solve society's problems or
promise wondrous new capabilities. We have often rushed to make these
technical “solutions” a reality, only to experience a greater harm later.
For
example, DDT was once offered as a miracle solution to mosquito-borne
diseases; it then nearly wiped out several bird species. Oil and coal
offered humanity abundant sources of energy; now they threaten
to warm the climate to dangerous levels. The atom bomb was developed
to end World War Two; now nuclear proliferation
threatens to make many species extinct, perhaps including our own. And how
about the innocent intent of Dr. Frankenstein? He created a creature
who subsequently wreaked havoc.
In
our rush to introduce new technology, we usually don't pause to
ponder the potential downsides. We throw caution to the winds, in the
name of the advancement of science and an easier lifestyle. Science
and technology are often billed as amoral disciplines, unconcerned
with ethics or questions of right and wrong, as though we
could press forward with no concern for the downsides of their applications.
Their use, however, often leads to moral quandaries. We could
benefit from more caution, from pausing and considering the potential
moral ramifications of unbridled technology.