Saturday, September 15, 2018

AI's Outlook—Part 1

Artificial intelligence (AI) is upon us. From the primitive programming of computer decision-making just a few decades ago, we now have software that can beat humans at some of their most intellectually and creatively demanding tasks. Recent programs have shamed human champions at chess, Jeopardy!, and even the complex and ancient Eastern game of Go. Other recent spectacular feats of AI include sophisticated robots and self-driving vehicles.

These incredibly fast and impressive developments have some people ecstatic about the possibilities of future AI applications and others frightened by what these smart machines may do. What happens when an AI robot becomes far smarter and stronger than humans? Do we need to fear what they might do to us? So far, AI has proven superior to the human brain only at narrowly defined tasks (such as chess and Go), but what will the future bring?

I have been taking an online course from the Delft University of Technology in the Netherlands. One of their researchers did a fine job of scoping out the future of AI. The subject is complex and replete with differing interpretations, but here's my attempt at summarizing what he said.

When considering the future of AI, it helps to look at three different issues: (1) autonomy, (2) superintelligence, and (3) consciousness.

1. Autonomy is the ability of robots and AI to act on their own, without human oversight. That said, autonomy is not a crucial issue by itself; even the thermostat that regulates your household temperature operates without supervision. The key point is that it's not the autonomy that matters, but what the robot will do and what control we might have over its actions.
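
To see why autonomy alone is not the worry, here is a minimal sketch, in Python, of the thermostat case. It is purely illustrative and entirely hypothetical (the function name, temperatures, and threshold are made up), but it shows a device that acts with no human in the loop while following nothing more than a trivial fixed rule.

    # Purely illustrative sketch of the "autonomous but not intelligent" point:
    # a thermostat acts on its own, yet its behavior is a trivial fixed rule.
    # All names and numbers here are hypothetical, not from any real product.

    def thermostat_step(current_temp_c, target_temp_c, heater_on, hysteresis=0.5):
        """Decide whether the heater should be on for the next time step."""
        if current_temp_c < target_temp_c - hysteresis:
            return True            # too cold: turn (or keep) the heater on
        if current_temp_c > target_temp_c + hysteresis:
            return False           # too warm: turn the heater off
        return heater_on           # within the comfort band: leave it as it is

    # The device "decides" with no human in the loop, yet nothing here raises
    # any ethical question.
    print(thermostat_step(18.2, 20.0, heater_on=False))  # prints True

The decision is fully autonomous, but it is also fully predictable and morally trivial; that is exactly why autonomy by itself isn't the issue.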

Specifically, the concern about the autonomy of AI arises when it faces ethical dilemmas. Will the robot's actions be consistent with human moral values or not? This question comes into focus when we ponder the choice that a self-driving car would make in an emergency. Will it, for example, decide to prioritize the welfare of its passengers or that of pedestrians in the vehicle's path? How should the AI software be designed to appropriately reflect human values? Do we even know what that means?
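
To make that point concrete, here is a deliberately toy sketch, again in Python and again entirely hypothetical; it bears no resemblance to how real autonomous-vehicle software is built. Its only purpose is to show that whatever weighting designers give to passenger versus pedestrian risk ends up as an explicit, contestable choice in the program.

    # Hypothetical thought experiment, not how any real self-driving system works.
    # The point: the designers' value judgment becomes an explicit rule somewhere.

    def choose_emergency_maneuver(options):
        """Pick among candidate maneuvers, each scored for expected harm.

        Each option is a dict like:
            {"name": "swerve_left", "risk_to_passengers": 0.7, "risk_to_pedestrians": 0.1}
        """
        passenger_weight = 1.0    # made-up weighting, not a real policy
        pedestrian_weight = 1.0   # change these and you change whose welfare wins

        def expected_harm(option):
            return (passenger_weight * option["risk_to_passengers"]
                    + pedestrian_weight * option["risk_to_pedestrians"])

        return min(options, key=expected_harm)

    candidates = [
        {"name": "brake_straight", "risk_to_passengers": 0.2, "risk_to_pedestrians": 0.5},
        {"name": "swerve_left",    "risk_to_passengers": 0.7, "risk_to_pedestrians": 0.1},
    ]
    print(choose_emergency_maneuver(candidates)["name"])  # prints "brake_straight"

Change the two weights and the "right" maneuver changes with them; that is exactly the kind of value judgment the questions above are asking about.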

Self-driving vehicles will certainly and drastically reduce the many thousands of highway deaths caused each year by driver error or inattentiveness. There's no question about that benefit. But what about the rare accident in which an autonomous vehicle makes a “bad choice” in our eyes? We've already seen a couple of instances, such as the self-driving car that struck and killed a pedestrian in Arizona last March.

How do we program AI vehicles to make such choices? We're a long way from knowing how. And who is liable when a self-driving car causes a death? The vehicle manufacturer? The AI programmer?

This concern about autonomy is significantly greater for AI used in military robots and drones. Can an autonomous robot, sent into a dangerous situation (one in which the threat to the lives of military personnel might be avoided), reliably discern the difference between a crouching, armed enemy and a frightened family with several children?

More on AI next time...


