Tuesday, July 18, 2017

Smart Machines—Part 2

Something similar has happened in the field of embodied cognition (EC). People have built robots for a long time, and some of them—especially when they look like a human and sort of act like one—have been very impressive. But again, these robots could do only what their designers had programmed their “brains” to do.
What was frustrating and limiting was that every action the robot took required enormous computing power—yet those actions were quite simple. If the robot encountered something the scientists had not thought to include in its software, it would fail spectacularly. Maybe its designers had cleverly (and with great complexity) programmed the robot to walk upstairs, but if it stepped on a marble, it'd tumble over and lie incapacitated. And any robot that accomplished impressive feats required so much brainpower that it gulped large quantities of energy and drained its batteries quickly. (Our human brain is an energy hog.)
When researchers recognized the finesse, efficiency, and proficiency of EC and began to build robots based on this principle, those robots took the next quantum leap into the future. The EC principle allowed them to create robots that move uncannily like humans and other animals, without much computing power and without the need for the robot's “brain” to plan and execute every move.
To watch a video of a robot programmed with EC is rather astonishing—if not also a little eerie. (This link gives you a number of such videos: https://www.youtube.com/user/BostonDynamics). Boston Dynamics has built several such robots and has demonstrated how capable and autonomous they are. Take one of these robots outdoors, where it encounters terrain it has never negotiated before—uneven ground, snow and ice, deep mud—and it copes better than a human can. You watch one of the robot's feet slip or get bogged down, and it stumbles, pirouettes awkwardly, but quickly recovers. Another video shows a human researcher sneaking up behind a robot and giving it a violent shove with a stick. The robot stumbles forward, catches its balance, and continues its previous activity.
These amazing accomplishments have researchers very excited and furiously engaged in experiments to improve what these smart machines can do. The future in the fields of AI and EC is nearly upon us. What comes next? What novel accomplishments will we soon see? The promises are both thrilling and sobering—even rather ominous.
For example, what happens when the cognitive abilities of an AI computer exceed those of the human brain? It's only a matter of time. AI systems have already demonstrated the ability to best humans in a few kinds of intelligence tests, yet they still fall quite short of the human brain's overall flexibility. That threshold will soon be crossed, however. When it happens, will an AI computer then be able to reason, become self-aware, or even possess consciousness?
These questions interfere with the sleep of some scientists and philosophers and worry many other people. A few people even fear that computers might take over the world and force us feeble humans to be their slaves. This fear has spawned a few fascinating books and movies. Nobody knows what will happen. What is disconcerting, at the least, is that research is moving quickly forward, with little consideration of where we are going or what precautions should be taken. (That's an old story with human technology.)
Something similar might be said about EC robots. Most of the current research in this area is being sponsored by the military. What are its plans for these smart machines? Obviously, these robots will someday perform far more effectively on the battlefield than human soldiers—being stronger, faster, and much harder to destroy. The loss of a robot—no matter its price—is far less grievous than the death of a soldier. But what might happen when a platoon of EC robots invades what is believed to be a fortified bunker of enemy soldiers, and finds it instead occupied by a group of cowering women and children? Will these smart machines also have the moral sense to halt their invasion?
Both AI and EC robots promise some wonderful benefits. But as with so much technology of the past, what is first seen as a blessing sometimes turns out to have a dark side. Are we being careful enough?
