Software agents ask for help

By Kimberly Patch, Technology Research News

If you're good at something, people naturally ask your advice about it. Researchers from the University of Porto in Portugal are tapping this learning strategy by programming tiny bits of software, called agents, to ask other agents for help as the group figures out how to control the timing of traffic lights.

The researchers' simulation consists of lights, lanes and cars arranged in a Manhattan-like grid. Each agent is in charge of one intersection. Each intersection has particular characteristics, but the basic task of controlling a traffic light in order to provide smooth traffic flow is the same for each agent.

The agents first attempt to solve the traffic flow problem independently using various learning strategies, then take a look at how their colleagues are doing. "Agents start by solving the problem autonomously, [then], at given points in time, they advertise their quality of service to others," said Luís Nunes, a researcher at the University of Porto and an assistant teacher at the Institute for Business and Labour Studies (ISCTE) in Portugal.

Once the agents have checked out their colleagues' performances, the agents that are doing much worse than the agent with the top score select the top scorer as an adviser. Whenever their performance levels drop below a certain threshold, the advisees solicit advice, and the adviser proposes an action, said Nunes.
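The adviser-selection step described above can be sketched as follows. The agent records, score values, and the 20 percent performance gap are illustrative assumptions, not details from the paper:

```python
# Sketch of adviser selection: agents scoring well below the current top
# performer adopt the top scorer as their adviser. The "gap" threshold
# below is a hypothetical tuning parameter, not the researchers' value.

def select_advisers(agents, gap=0.2):
    """Assign the top scorer as adviser to agents doing much worse."""
    top = max(agents, key=lambda a: a["score"])
    for agent in agents:
        if agent is not top and agent["score"] < top["score"] * (1 - gap):
            agent["adviser"] = top
        else:
            agent["adviser"] = None
    return top

agents = [
    {"name": "A", "score": 0.90, "adviser": None},
    {"name": "B", "score": 0.50, "adviser": None},
    {"name": "C", "score": 0.85, "adviser": None},
]
top = select_advisers(agents)
# B falls more than 20% below A's score, so B takes A as adviser;
# C is close enough to the top to keep working on its own.
```

Because selection is repeated periodically, a different agent can become the top scorer in a later round, as the article notes.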

After a certain amount of time, they repeat the whole process, and at different times different agents come out on top. "One of the important aspects of this technique is that individual agents keep their autonomy in the decision process, and at the same time benefit from the knowledge acquired by other agents," said Nunes.

The process showed that exchanging advice can, indeed, speed the rate of learning. The method could eventually be used to route traffic on the Internet, balance tasks among networked computers, and help robots cooperate, said Nunes.

The agents learn to control the traffic lights by setting the percentage of green time allowed for each lane. To determine a good green-time setting, an agent considers two variables: the number of cars in each lane and the time the first car in each queue has been waiting at the red light.
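A minimal heuristic built from the two variables the article names might look like the following. The equal weighting of queue length and waiting time is an assumption for illustration, not the researchers' learned policy:

```python
# Illustrative green-time heuristic using the two inputs described above:
# cars queued per lane, and how long the first car in each queue has waited.
# The 50/50 weighting is a placeholder, not the paper's formula.

def green_time_shares(queue_lengths, first_wait_times, wait_weight=0.5):
    """Return the fraction of green time to assign to each lane."""
    total_cars = sum(queue_lengths) or 1
    total_wait = sum(first_wait_times) or 1
    shares = []
    for cars, wait in zip(queue_lengths, first_wait_times):
        # Blend relative queue length with relative waiting time.
        demand = (1 - wait_weight) * (cars / total_cars) \
                 + wait_weight * (wait / total_wait)
        shares.append(demand)
    norm = sum(shares)
    # Normalize so the shares cover 100% of the green time.
    return [s / norm for s in shares]

shares = green_time_shares([10, 2], [30.0, 5.0])
# The busier lane, whose first car has also waited longest,
# receives the larger share of green time.
```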

When an agent seeks advice, it sends these variables to another agent, which then pretends it is facing the situation itself and produces the percentage of green time it would assign under the conditions the advisee is observing. The advisee takes in the advice the same way it learns from its own experience, then generates its own response to the problem, Nunes said.
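The exchange above can be sketched with a toy learner. The `TableLearner` class, the state labels, and the rule for when to ask are stand-ins, not the learning algorithms used in the paper:

```python
# Sketch of the advice exchange: the advisee sends its observed state,
# the adviser answers as if it faced that state itself, and the advisee
# absorbs the answer like its own experience before acting.

class TableLearner:
    """Toy learner: remembers the last action seen for each state."""
    def __init__(self):
        self.table = {}

    def predict(self, state, default=0.5):
        return self.table.get(state, default)

    def learn(self, state, action):
        self.table[state] = action

class Agent:
    def __init__(self):
        self.learner = TableLearner()
        self.adviser = None

    def act(self, state):
        if self.adviser is not None and state not in self.learner.table:
            # The adviser produces the action it would take in
            # the advisee's situation...
            advice = self.adviser.learner.predict(state)
            # ...and the advisee learns from it as if it were
            # its own experience.
            self.learner.learn(state, advice)
        # The advisee still generates its own response.
        return self.learner.predict(state)

adviser, advisee = Agent(), Agent()
adviser.learner.learn("rush_hour", 0.8)  # the adviser knows this case
advisee.adviser = adviser
result = advisee.act("rush_hour")  # advice absorbed, then acted on
```

Note that the advisee keeps its autonomy: the advice only updates its own learner, which remains the sole source of its decisions.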

There is no best way to solve the traffic light task, said Nunes. "We were specifically looking for a problem in which there was no sure winner so that any of the agents can have the advantage at any time and thus be chosen as adviser."

As it turned out, the agents that came out on top in different tests did so by using a variety of learning strategies, he said.

The research also showed that agents learned more quickly and efficiently by exchanging advice, and that the group learning process produced much more reliable solutions across the board. In advice-exchanging simulations, all of the agents ended up performing well, said Nunes. In contrast, "when there was no exchange of advice, some agents simply [failed] to learn to deal with the problem, or they [took] much more time to learn," he said.

Too much of a good thing, however, can also cause problems. The researchers had to tune the program after the agents exchanged too much advice in their simulations, said Nunes. "An excess of communication... can severely hinder the learning process," he said. Controlling the amount of communication produced much better results, he said.
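One simple way to cap communication, illustrating the kind of tuning the article mentions, is to gate advice requests on both poor performance and a cooldown period. This cooldown mechanism is an assumption for illustration, not the researchers' method:

```python
# Hypothetical throttle on advice requests: ask only when performance is
# poor AND enough simulation steps have passed since the last request.

def should_ask(step, last_ask_step, performance,
               threshold=0.5, cooldown=10):
    """Return True if the agent should request advice this step."""
    performing_poorly = performance < threshold
    cooled_down = step - last_ask_step >= cooldown
    return performing_poorly and cooled_down
```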

One lingering problem is that the simulations take a lot of time, said Nunes. This is simply because it takes the agents many cycles to learn. "Even using a simulator that runs several times faster than real-time, the number of cycles it takes to learn is still quite large," he said.

The researchers' next step is extending the range of problems agents can handle using this cooperative technique, said Nunes. They're also working on ways to assess the quality of the advice the agents exchange, and on ways to combine advice from several sources, he said. "The long-term goal is to have societies of agents that can exchange advice in dealing with similar problems which require them to adapt constantly to new situations," he said.

Take Web search engines, for example. "Currently each search engine has many users and uses a particular, fixed technique to look for the Web pages you require," said Nunes. Using agent interfaces for each user, a search engine could exchange advice concerning pages that were most interesting given the preferences of individual users, he said. "This might produce significantly more intelligent search engines."

One general advantage of the advice-exchanging approach is that agents using different strategies can work together, said Nunes. "One of the major differences between this and other related work is that each agent is using different learning approaches," he said. This eliminates the common quandary of whether to choose just one learning technique to deal with a problem or to take the time to test several techniques separately to find the one that performs best, he said.

There are several communications problems to be solved in expanding the cooperation to other problems, said Nunes. "The use of heterogeneous agents demands that they share a common language," he said. At the same time, extending the technique to a wider range of learning techniques may require that the communication skills of the agents be more elaborate, he said.

The researchers are aiming to have a formal outline of the agents' learning behavior and the advantages of its application within three years, said Nunes. It will take a few more years before these artificial intelligence techniques could find practical use, he said. "Maybe in five to ten years time some techniques related to exchanging knowledge during the learning process may be inserted into commercial products," he said.

Nunes' research colleague was Eugénio Oliveira of the University of Porto and the Artificial Intelligence and Computer Science Laboratory (LIACC) in Portugal. They presented the research at the Artificial Intelligence and the Simulation of Behavior (AISB) Convention, in London, in April 2002. The research was funded by the Portuguese Ministries of Science and Education, the University of Porto and the Institute for Business and Labour Studies (ISCTE).

Timeline:   5-10 years
Funding:   Government; University
TRN Categories:   Computer Science; Multiagent Systems
Story Type:   News
Related Elements:  Technical paper, "On Learning by Exchanging Advice," presented at the Artificial Intelligence and the Simulation of Behavior (AISB) Convention in London, April 3-5, 2002.




September 18/25, 2002

© Copyright Technology Research News, LLC 2000-2006. All rights reserved.