Environment may dictate intelligence

By Kimberly Patch, Technology Research News
December 5, 2001

Just how much does intelligence depend on context?

A pair of researchers has made the somewhat surprising finding that context makes a big difference -- at least in a simulation where tiny artificial neural networks must choose which of two groups to join.

This fundamental behavior could underlie much more complicated systems like the human brain, and may point to better methods of creating artificial intelligence systems, said Joseph Wakeling, a graduate student at the University of Fribourg in Switzerland.

In the researchers' simulations, 251 neural network agents evolved over many rounds of the simple task. In each round the agents congregated into two groups, and those that ended up in the smaller group won. The agents decided which group to join based on the context of past results.
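As a rough sketch of the task (illustrative code, not the researchers' own), one round can be written in a few lines: every agent picks one of two groups, and the agents in the smaller group win. The odd number of agents guarantees that one group is always strictly smaller.

    import random

    N_AGENTS = 251  # an odd count means there is always a strict minority

    def winning_group(choices):
        """choices holds one 0-or-1 pick per agent; the minority group wins."""
        ones = sum(choices)
        return 1 if ones < len(choices) - ones else 0

    # Illustration with agents choosing at random:
    choices = [random.randint(0, 1) for _ in range(N_AGENTS)]
    group = winning_group(choices)
    winners = sum(1 for c in choices if c == group)
    print(winners, "agents joined the smaller group and won this round")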

The agents were designed to mimic biological brains, which all consist of a set of inputs and outputs, a mechanism for making decisions, and a system for determining whether a given output has been successful and for providing appropriate feedback, according to Wakeling.

In the exceedingly simple E. coli bacterium, the inputs test for glucose, and the decision is to move toward glucose if the bacterium senses it, and otherwise to move in a random direction.
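That rule is simple enough to state directly in code. The sketch below only restates the article's description; it is not a biological model, and the random-heading detail is an assumption.

    import random

    def e_coli_move(senses_glucose, glucose_direction):
        """Head toward glucose if it is sensed; otherwise pick a random heading in degrees."""
        if senses_glucose:
            return glucose_direction
        return random.uniform(0.0, 360.0)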

The agents used sets of input neurons to remember a certain number of past turns. Forty-eight intermediary neurons were each connected to every input neuron via synapses of different strengths. These neurons provided the agents' analytic capability, making decisions about what to do in the current round based on the summed strengths of the synapses connecting them to the inputs that recorded past results.
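A minimal sketch of that decision step might look like the following. Apart from the 48 intermediary neurons, the details here (the memory length, the random initial strengths, and the way past results activate the inputs) are illustrative assumptions rather than details from the paper.

    import random

    N_INTERMEDIARY = 48

    class Agent:
        def __init__(self, memory=4):
            self.memory = memory  # how many past rounds the input neurons remember
            # synapses[j][i]: strength of the connection from input neuron i
            # to intermediary neuron j
            self.synapses = [[random.random() for _ in range(memory)]
                             for _ in range(N_INTERMEDIARY)]
            # the group each intermediary neuron votes for when it is selected
            self.preferred_group = [random.randint(0, 1)
                                    for _ in range(N_INTERMEDIARY)]
            self.last_active = None  # intermediary neuron used in the latest round

        def decide(self, history):
            """history: the last `memory` winning groups, each 0 or 1."""
            # Each intermediary neuron sums the strengths of its synapses from
            # the input neurons that are active this round.
            activations = [sum(w for w, h in zip(row, history) if h == 1)
                           for row in self.synapses]
            self.last_active = max(range(N_INTERMEDIARY),
                                   key=lambda j: activations[j])
            return self.preferred_group[self.last_active]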

The intermediary neurons responsible for winning decisions to join the smaller group continued into the subsequent round unscathed, while those that chose what turned out to be the larger, losing group were changed. "Those agents who lose [reduced the strength] of the intermediary synapses that were activated that turn. The idea [is] that these synapses were responsible for a bad decision and should therefore not be used again," said Wakeling. The process is essentially a "Darwinian evolution of good behavioral patterns," he said.
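Continuing the sketch above, that negative feedback could be applied as follows; the penalty size and the floor at zero strength are assumptions made for illustration.

    def punish(synapses, active_neuron, history, penalty=0.1):
        """Weaken the synapses that drove a losing decision.

        synapses[j][i] is the strength from input neuron i to intermediary
        neuron j, active_neuron is the intermediary neuron that made the
        choice, and history lists the past winning groups (0 or 1).
        """
        for i, outcome in enumerate(history):
            if outcome == 1:  # only the synapses that were activated this turn
                synapses[active_neuron][i] = max(
                    0.0, synapses[active_neuron][i] - penalty)

    # After a losing round: punish(agent.synapses, agent.last_active, history)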

The point of the experiment was to find the fundamental mechanics behind all naturally occurring neural systems, from the bacterium to the octopus to the human, Wakeling said. "What are the basic mechanisms at work that [are common to] the human brain, a mouse's brain, a lizard's brain?" he said. There might be some universal, simple mechanism that allows a large number of neurons to connect in a way that helps the organism to survive, he said.

The simulation used a specific task to try to tease out a universal intelligence mechanism rather than relying on specific characteristics like the number of intermediary neurons the brain has, said Wakeling. "The only truly objective measure we [could] think of was success rate at solving some problem -- in other words, success in the context of the surrounding environment," he said.

As it turned out, the success rates of the agents were highly dependent on the exact nature of the competition, leading the researchers to conclude that intelligence is all about context, he said.

In the researchers' simulations, the agents that evolved together tended to think alike, and therefore did not do very well at choosing the less-crowded group. In fact, no agent achieved even a 50-percent success rate, not even when some were granted more memory than others. In this context, they would all have been better off flipping coins to make their decisions, according to Wakeling.

When the researchers introduced a single, rogue agent that had more memory than the others, however, the rogue agent was stunningly successful, choosing the right group 99.8 percent of the time. "The key is that the rogue is unique in having this extra memory," said Wakeling.

The researchers concluded that an agent can only be truly successful if there are other agents around whose weaknesses it can exploit. If the behavior of the others is highly unpredictable, or they are capable of biting back, the agent's chances of success are vastly reduced.

Many real-world strategies, like making financial investments, also fail because they're based on common knowledge, and most people's approaches will be similar, according to Wakeling.

Because of this mechanism, the researchers could not predict if a given agent would be good or bad at choosing the right group without knowing about the other agents it would be competing against. This shows that the question of how intelligent a system is can only be answered by examining how good it is at coping with its surrounding environment, said Wakeling.

Human beings, after all, are more intelligent than other animals because we are more successful at manipulating our environment to our own benefit, he said.

The conclusion may also point the way to better methods of making artificial intelligence. Designing an intelligence to operate within a certain environment may prove more useful than creating a consciousness in a box and then giving it a purpose, Wakeling said.

The research "takes the work on modeling competition a bit further in an interesting way," said Frank Ritter, an associate professor of information science and technology and psychology at Pennsylvania State University.

"Showing how different memory leads to different behavior, and how context is important for problem solving," is interesting work, he said. The conclusions make sense particularly in an exercise like choosing the smaller of two groups, where the problem is about the context, he added.

It's an open question whether the effect can be generalized, however, said Ritter. "In other architectures, and perhaps with other tasks, this effect might not be seen."

Wakeling's research colleague was Per Bak. They published the research in the November 2001 issue of the journal Physical Review E.

Timeline:   5 years
Funding:   University
TRN Categories:  Applied Computing; Artificial Intelligence; Neural Networks;
Story Type:   News
Related Elements:  Technical papers, "Intelligent Systems in the Context of Surrounding Environment," Physical Review E, November 2001; "Adaptive Learning by Extremal Dynamics and Negative Feedback," Physical Review E, March 2001.



