Friday, September 21, 2018
AI's Outlook—Part 2
Continuing with the second and third of the three concerns about artificial intelligence begun in the last post:
2. Super-intelligence becomes a concern when we develop robots that are far smarter than we are, not just in a narrow sense (such as in chess or Jeopardy!) but in a general sense. To date, AI's intelligence is narrow: a computer can exhibit superhuman ability, but only in one area. Soon, though, we may see machines that exhibit what is called artificial general intelligence (AGI). When that comes about, will AGI decide to control us? Would it even care about us?
Another huge question is what impact AGI will have on human employment. AI has already stepped into mundane jobs (think of assembly-line robots) and will soon displace many more people. What will we do with the millions of people who no longer have work because they've been replaced by robots? Before long, even rather sophisticated careers, such as those of doctors and lawyers, may be eliminated. We aren't prepared to deal with that problem.
Not a lot of effort is currently being applied to the issue of super-intelligence. Most researchers are rushing into the future, unable or unwilling to deal with this concern, and government barely understands the basics, let alone stands prepared to address the matter.
3. Consciousness raises another big question for which we currently have no good answers. Today's AI machines are most likely not conscious. But as ever more complex computers are built, will they become conscious, or even sentient, and if so, when? Scientists are still struggling to define consciousness in humans and other life forms, let alone machines.
Is consciousness a continuum, or is there a line that consciousness does not cross? Is a newborn baby conscious? A monkey? A tree? A rock? These questions are impossible to answer, since we don't yet know how to measure consciousness.
If AI machines do become conscious once they are complex enough, will they be able to have subjective experiences? Could they feel pleasure or pain? How would we know? And if they are conscious, would robots deserve rights?
There are many unanswered questions about the future of AI. These machines may be a threat, or they may offer humans some kind of salvation; nobody knows at this point. Unfortunately, too few people, especially those in the public policy arena, are seriously considering these issues. As with so many technological innovations of the past, we may soon be dealing with huge problems that we never examined before they became problems.
Saturday, September 15, 2018
AI's Outlook—Part 1
Recent developments in artificial intelligence have been incredibly fast and impressive, leaving some people ecstatic at the possibilities of future AI applications and others frightened at what these smart machines may do. What happens when an AI robot becomes far smarter and stronger than humans? Do we need to fear what they might do to us? So far, AI has proven superior to the human brain only at narrowly defined tasks (such as chess and Go), but what will the future bring?
I have been taking an online course from the Delft University of Technology in the Netherlands. One of their researchers did a fine job of scoping out the future of AI. It's pretty complex and replete with differing interpretations, but here's my attempt at summarizing what he said.
When considering the future of AI, it helps to look at three different issues: (1) autonomy, (2) super-intelligence, and (3) consciousness.
1. Autonomy is the ability of robots and AI to do things on their own, without human oversight. Autonomy by itself is not the crucial issue, though: even the thermostat that regulates your household temperature acts on its own. What matters is not the autonomy but what the robot will do and what control we might have over its actions.
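To make the thermostat comparison concrete, here is a minimal sketch in Python of a device that is fully autonomous yet not remotely intelligent. The set point, tolerance, and readings are all invented for illustration:

```python
# A toy thermostat: fully autonomous, yet not remotely intelligent.
# The set point, tolerance, and readings are invented for illustration.

SET_POINT = 68.0   # desired temperature, degrees Fahrenheit
TOLERANCE = 1.0    # allowed drift before the furnace switches

def furnace_should_run(current_temp, furnace_on):
    """Decide, with no human oversight, whether to run the furnace."""
    if current_temp < SET_POINT - TOLERANCE:
        return True           # too cold: turn (or keep) the heat on
    if current_temp > SET_POINT + TOLERANCE:
        return False          # too warm: shut the heat off
    return furnace_on         # within tolerance: leave things as they are

# Simulated readings over an evening
readings = [66.0, 66.8, 67.5, 68.4, 69.2, 68.9]
on = False
for temp in readings:
    on = furnace_should_run(temp, on)
    print(f"{temp:.1f} F -> furnace {'ON' if on else 'OFF'}")
```

The thermostat decides entirely on its own, yet nobody loses sleep over it, because what it can do is so narrowly bounded.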
Specifically, the concern about AI autonomy arises when it faces ethical dilemmas. Will the robot's actions be commensurate with human moral values or not? This question comes into focus when we ponder the choice a self-driving car would make in an emergency. Will it, for example, prioritize the welfare of its passengers or that of pedestrians in the vehicle's path? How should the AI software be designed to appropriately reflect human values? Do we even know what that means?
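Nobody knows how to write such a program today, but even a toy sketch shows how quickly the question becomes uncomfortable. Every number below is an invented assumption, and real systems do not work this way; the point is simply that someone has to choose the weights:

```python
# A deliberately crude sketch of an emergency "choice" by a self-driving
# car. Every number below is an invented assumption; real systems do not
# (and arguably should not) work this way.

# Whose welfare counts for more? Changing these two constants changes the
# car's "morals" -- and nothing in the code says what they should be.
PASSENGER_WEIGHT = 1.0
PEDESTRIAN_WEIGHT = 1.0

def choose_maneuver(options):
    """Pick the maneuver with the lowest weighted expected harm."""
    def expected_harm(option):
        return (option["p_passenger_injury"] * PASSENGER_WEIGHT +
                option["p_pedestrian_injury"] * PEDESTRIAN_WEIGHT)
    return min(options, key=expected_harm)

# Two hypothetical maneuvers with guessed injury probabilities
options = [
    {"name": "brake hard",  "p_passenger_injury": 0.3, "p_pedestrian_injury": 0.2},
    {"name": "swerve left", "p_passenger_injury": 0.6, "p_pedestrian_injury": 0.0},
]
print(choose_maneuver(options)["name"])  # -> "brake hard"
```

The uncomfortable part is not the code; it's that someone must pick those two weights, and no consensus exists on how.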
Self-driving vehicles could drastically reduce the many thousands of highway deaths each year caused by driver error or inattentiveness; that benefit seems clear. But what about the one accident in which an autonomous vehicle makes a “bad choice” in our eyes? We've already seen a couple of instances, such as when a self-driving car killed a pedestrian in Arizona last March.
How do we program AI vehicles to make such choices? We're a long way from knowing how. And who is liable when a self-driving car causes a death? The vehicle manufacturer? The AI programmer?
This concern about autonomy is significantly greater for AI used in military robots and drones. Can an autonomous robot, sent into a dangerous situation (one in which the threat to the lives of military personnel might thereby be avoided), appropriately discern the difference between a crouching enemy with guns and a frightened family with several children?
More on AI next time...
Thursday, September 6, 2018
Gnaughty Gnawer—Part 2
So yes, I confess: I had resorted to killing a sweet little mouse. Furthermore, I did it with premeditation, and even with a bit of celebration afterward. How could I justify myself to friends who might condemn me for such a barbaric act? Well, my first thought is to suggest that some accusers examine the log in their own eye. How many of them live in an urban environment in which they pay other people to kill on their behalf? They may call the exterminator to dispatch pests, buy meat that someone else has slaughtered, send an unwanted dog or cat to the local animal shelter (later to be euthanized), or buy high-tech products from companies that exploit the poor in China. (OK, so maybe that list is a little overstated, and maybe I'm guilty of rationalization.)
Years ago, in my youthful ignorance, I was inclined to use mouse poison, until a little research showed me that the poison kills by dehydration, slowly and likely painfully. So I decided that if I am going to kill, it's better to do it quickly. The old-fashioned spring-loaded mousetrap, for example, is fast and painless.
But a more persistent accuser might ask: did I really need to kill at all? There are traps that snare a mouse alive, which then lets you transport it elsewhere. I once accompanied a sweet, caring friend who had live-trapped a mouse; we drove in her car for a mile or so and freed the critter. That seemed a kind thing to do for the mouse, but what about the carbon footprint of that drive, and what about the possibility that the pest was simply being transferred to someone else's house? For that matter, was it really kind to move the mouse to a new location, where it knew nothing of the local predators or food supplies? Could a cat there have been delighted to have a tasty meal dropped off, a beneficiary of a sort of feline meals on wheels?
Or maybe I should have caught the mouse, befriended it, and spent many hours training it, either to stay outside my meditation hut (I'd probably have to bribe it with regular meals) or to do tricks, so we could both join the circus and become famous and wealthy. Or I could make some endearing mousy video, post it on YouTube, and watch it go viral; my mouse could then bathe in the limelight that all celebrities enjoy and subsequently retire to Cheeseville, Wisconsin. I've got better things to do, however. This cheeky mouse stepped over the line and is now departed. I'll try hard not to let his death haunt my conscience.