Evolution trains robot teams

By Kimberly Patch, Technology Research News
May 19/26, 2004

Evolution has worked pretty well for biological systems, so why not apply it to the systems that control robots?

Evolutionary computing has been tapped to produce coherent robot behavior in simulation, and real robots have been used to evolve simple behavior like moving toward light sources and avoiding objects.

Researchers from North Carolina State University and the University of Utah have advanced the field by combining artificial neural networks and teams of real mobile robots to demonstrate that the behavior necessary to play Capture the Flag can be evolved in a simulation.

"The original idea... came from the desire to find a way to automatically program robots to perform tasks that humans don't know how to do, or tasks which humans don't know how to do well," said Andrew Nelson, now a visiting researcher at the University of South Florida.

The method could eventually be used to develop components of control systems used in autonomous robots, said Nelson. "Any task that can be formulated into a competitive game -- like clearing a minefield or searching for heat sources in a collapsed building -- could potentially be learned by a neural network or other evolvable [system] without requiring a human to specify the details of the task," he said.

Further off, the method could be applied to robots that must learn to operate in environments that humans don't understand well, said Nelson. "Currently autonomous robot control requires a human designer to carefully analyze the robot's environment and to have a very good understanding of exactly what the robot must do in order to achieve its task," he said.

The capture-the-flag behavior was evolved in a computer simulation. The researchers randomly generated a large population of neural networks, then organized individual networks into teams of simulated robots that played tournaments of games against each other, said Nelson.

After each tournament, the losing networks were deleted from the population, and the winning neural networks were duplicated, altered slightly, and returned to the population.
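
In outline, this kind of tournament selection with mutation is simple to sketch. The Python below is only an illustration of the scheme the researchers describe, not their code; the team size, mutation scale, the flat-weight-vector representation of a network, and the play_game stand-in are assumptions made for the example.

    import random
    import numpy as np

    POP_SIZE = 30        # the researchers found 30 individuals evolved players as good as 100
    TEAM_SIZE = 3        # illustrative team size
    MUTATION_STD = 0.05  # scale of the "slight alteration" applied to winners (illustrative)
    N_WEIGHTS = 2000     # stand-in for a network with thousands of connections
    GENERATIONS = 500    # "several hundred generations"

    def random_network():
        # A controller is represented here simply as a flat vector of weights.
        return np.random.uniform(-1.0, 1.0, N_WEIGHTS)

    def mutate(weights):
        # Duplicate a winner and alter it slightly with small random perturbations.
        return weights + np.random.normal(0.0, MUTATION_STD, weights.shape)

    def play_game(team_a, team_b):
        # Stand-in so the sketch runs end to end: the real system plays a full
        # simulated capture-the-flag match here and returns True if team_a wins.
        return random.random() < 0.5

    population = [random_network() for _ in range(POP_SIZE)]

    for generation in range(GENERATIONS):
        random.shuffle(population)
        teams = [population[i:i + TEAM_SIZE] for i in range(0, POP_SIZE, TEAM_SIZE)]
        winners = []
        # Tournament: teams play in pairs; only the winning teams' networks survive.
        for team_a, team_b in zip(teams[::2], teams[1::2]):
            winners.extend(team_a if play_game(team_a, team_b) else team_b)
        # Losing networks are deleted; winners are duplicated with slight alterations.
        population = winners + [mutate(w) for w in winners]

The only selection pressure in such a loop is winning; nothing in it encodes how to play, which is the property the researchers were after.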

"When they first start learning, [the networks] are unable to drive the robots correctly or even avoid objects or one another," said Nelson. "However, some of the networks are bound to be slightly better than others and this [is] enough to get the artificial evolution process started," he said. "After that, competition will drive the process to evolve better and better networks." During the course of their evolution, the neural networks learned basic navigation, the ability to distinguish between different types of objects, and the ability to tend the goal, according to Nelson.

After several hundred generations, the neural networks had evolved well enough to play the game competently and were transferred into real robots for testing in a real environment. "The trained neural networks were copied directly onto the real robots' onboard computers," said Nelson.
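
Deploying the result is conceptually simple: the evolved weights define a fixed mapping from sensor readings to motor commands that the onboard computer evaluates on every control cycle. The toy feed-forward controller below sketches that idea; the layer sizes, tanh activations and two-wheel output are assumptions for illustration, not the architecture reported in the paper.

    import numpy as np

    class FeedForwardController:
        # Tiny two-layer network: sensor vector in, left/right wheel commands out.
        def __init__(self, n_inputs=32, n_hidden=20, n_outputs=2, rng=np.random):
            self.w1 = rng.uniform(-1.0, 1.0, (n_hidden, n_inputs))
            self.w2 = rng.uniform(-1.0, 1.0, (n_outputs, n_hidden))

        def act(self, sensors):
            hidden = np.tanh(self.w1 @ sensors)
            return np.tanh(self.w2 @ hidden)  # e.g. two wheel velocities in [-1, 1]

Copying a trained network onto a robot then amounts to loading the evolved weight matrices in place of the random ones above.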

One of the main challenges in carrying out the process was making sure the simulated environment was similar enough to the real environment so that the networks could function in the same way in both, said Nelson. The robots used color video signals to sense their environment. In order to support color video signals, which carry a lot of information, the researchers had to use relatively large neural networks containing thousands of connections. "We had to find a way of processing video signals that would allow for simulation but still provide enough information [to] operate the robots," he said.
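
The article does not give the exact encoding, but the general trick of compressing a color frame into a few dozen network inputs can be sketched as follows; the band layout, thresholds and the two color classes are purely illustrative assumptions.

    import numpy as np

    def frame_to_inputs(frame, n_bands=16):
        # Reduce an H x W x 3 RGB frame to a short vector a controller can digest.
        # Each vertical band is summarized by the fraction of pixels matching two
        # illustrative color classes (say, opponent markers vs. goal markers).
        _h, w, _ = frame.shape
        band_width = w // n_bands
        inputs = []
        for b in range(n_bands):
            band = frame[:, b * band_width:(b + 1) * band_width, :].astype(float)
            r, g, bl = band[..., 0], band[..., 1], band[..., 2]
            red_like = (r > 150) & (g < 100) & (bl < 100)    # e.g. opponent color
            green_like = (g > 150) & (r < 100) & (bl < 100)  # e.g. goal color
            inputs.append(red_like.mean())
            inputs.append(green_like.mean())
        return np.array(inputs)  # 2 * n_bands values instead of H * W * 3 pixels

A 32-value vector of this kind could feed a controller like the one sketched above while remaining cheap enough to compute in a simulation that must run thousands of matches.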

Another challenge was formulating an evolutionary training method that fostered competition both among populations of new, very poorly performing networks and among well-trained, highly evolved networks, said Nelson. "We wanted the networks to be selected for reproduction based only on their ability to win, but not on any of our own personal human ideas about how to go about winning," he said.

There were several surprising results, said Nelson. In many neural network applications, the larger and more complicated a network is, the more difficult it is to train, he said. "In contrast... we found that the larger the network was, the easier it was to train. This could potentially be attributed to the use of artificial evolution to train the networks," he said.

The researchers also found that after a certain level, increasing the size of the evolving population did not result in evolving better networks. "With the form of artificial evolution we used, a population of 100 networks did not evolve better players than a population of 30 individuals," said Nelson.

The researchers are working to improve the quality and speed of the simulations in order to apply the research to more sophisticated problems. "One possible approach is to apply very fast high-fidelity computer gaming engines to develop robot simulation environments," said Nelson.

The method is also likely to shed light on the question of how well artificial systems can learn complex behavior, said Nelson. "Is there a plateau beyond which blank-slate systems cannot be trained using interaction with the environment alone?"

Evolving entire control systems or control-system components for today's robots is possible, but not yet practical, because human-designed controllers are still more efficient than evolved controllers for most of the simple tasks autonomous robots perform, said Nelson.

The method could be used to automatically tune well-defined components of robot control systems, said Nelson. "For example, a robot might retune its object avoidance mechanisms upon entering a new environment -- outdoors vs. inside," he said. This could be used practically in 3 to 6 years, he said.

The long-term benefit of evolutionary robotics research is that it may lead to controllers for robots that can automatically adapt to unknown environments, said Nelson. This ability is many years off, however -- more than 10, and perhaps as many as 50 years, he said.

Nelson's research colleagues were Edward Grant of North Carolina State University and T. C. Henderson of the University of Utah. The work appeared in the March 31, 2004 issue of Robotics and Autonomous Systems. The research was funded by the Defense Advanced Research Projects Agency (DARPA) and the University of North Carolina.

Timeline:   3-6 years; 10-50 years
Funding:   Government; University
TRN Categories:  Robotics; Artificial Life and Evolutionary Computing; Neural Networks
Story Type:   News
Related Elements:  Technical paper, "Evolution of Neural Controllers for Competitive Game Playing with Teams of Mobile Robots," Robotics and Autonomous Systems, March 31, 2004



