Probabilities ease genetic logic
By Kimberly Patch, Technology Research News
July 14/21, 2004
Genetic algorithms are used to find optimum
designs or answers to problems by mimicking the process of evolution:
starting with a population of random solutions or designs, mixing traits,
advancing the best new solutions or designs to the next generation, and
repeating the process through thousands or millions of generations.
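That cycle is short enough to sketch in code. Here is a minimal Python illustration, with a toy count-the-ones fitness function standing in for a real design problem (all names and parameters are illustrative, not taken from the research):

    import random

    BITS, POP_SIZE, GENERATIONS = 20, 50, 200

    def fitness(individual):
        # Illustrative objective: count the ones in the bit string.
        return sum(individual)

    def random_individual():
        return [random.randint(0, 1) for _ in range(BITS)]

    def crossover(a, b):
        # Mix traits: take each bit from one parent or the other.
        return [random.choice(pair) for pair in zip(a, b)]

    def mutate(individual, rate=0.01):
        # Occasionally flip a bit to introduce new variation.
        return [bit ^ 1 if random.random() < rate else bit for bit in individual]

    population = [random_individual() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Advance the best solutions: keep the fitter half as parents.
        parents = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
        # Build the next generation by mixing traits of random parent pairs.
        population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                      for _ in range(POP_SIZE)]

    print(max(fitness(ind) for ind in population))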
One challenge in using genetic algorithms is being able to efficiently
handle the large amount of information generated during the process.
A common solution is to use several or even many parallel computer
processors. This brings another challenge: finding an efficient way for
parallel processors to pass information about the population back and
forth.
Researchers from the University of Algarve in Portugal have devised
a compact genetic algorithm that speeds the process by allowing a representation
of the population as a whole to be passed back and forth rather than more
voluminous information about individuals.
The algorithm could be used to leverage many networked processors to solve large problems, much like the SETI@home project, said Fernando Lobo, an assistant professor of computer science at the University of Algarve. SETI@home allows individuals to contribute home computer power to the effort to monitor radio waves from space. "People around the world could contribute... some of their idle computer time to run a compact genetic algorithm... to solve difficult optimization problems," he said.
The system runs a separate compact genetic algorithm on each of several processors. Each processor exchanges information with a central coordinator processor every once in a while. This allows each processor to turn on or off at any given time, so the system can scale to more or fewer processors. It also makes the system fault-tolerant: it can keep going if one of the processors fails.
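A bare-bones sketch of that coordination pattern, assuming the information exchanged is the probability vector described later in the article. The local compact genetic algorithm is reduced to a placeholder, the coordinator's averaging rule is purely illustrative rather than the combination scheme used in the paper, and workers that would run concurrently on separate machines run one after another here:

    import random

    N_BITS = 8

    def local_cga_update(model):
        # Placeholder for the compact genetic algorithm update described
        # later in the article; here it merely drifts the probabilities a
        # little so the sketch runs end to end.
        return [min(1.0, max(0.0, p + random.uniform(-0.01, 0.01))) for p in model]

    class Coordinator:
        # Stands in for the central coordinator processor; in the real
        # system it would run on its own machine and talk to workers over
        # a network.
        def __init__(self, n_bits):
            self.model = [0.5] * n_bits

        def exchange(self, worker_model):
            # Illustrative merge rule: average the worker's model into the
            # global one and send the combined model back.
            self.model = [(g + w) / 2 for g, w in zip(self.model, worker_model)]
            return list(self.model)

    def run_worker(coordinator, rounds=10, steps_between_exchanges=100):
        model = [0.5] * N_BITS
        for _ in range(rounds):
            for _ in range(steps_between_exchanges):
                model = local_cga_update(model)
            # Exchanges are small and occasional, so a worker can start,
            # stop, or fail between them without halting the others.
            model = coordinator.exchange(model)
        return model

    coordinator = Coordinator(N_BITS)
    for _ in range(3):  # three workers; in the real system they run concurrently
        run_worker(coordinator)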
The algorithm belongs to a class of genetic algorithms known as probabilistic model-building genetic algorithms, estimation of distribution algorithms, or iterated density-estimation evolutionary algorithms. This type of algorithm replaces the traditional methods of introducing variation into a population (mixing traits and introducing mutations) with a probabilistic model of the population, from which samples are taken to obtain a new population of individuals.
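In its simplest, univariate form, that model is just a vector of per-bit probabilities: build it from the better individuals of one generation, then sample it to produce the next. A rough Python sketch of the general idea (not the specific algorithm from the paper), again with a count-the-ones objective standing in for a real problem:

    import random

    BITS, POP_SIZE, GENERATIONS = 20, 50, 100

    def fitness(individual):
        # Illustrative objective: count the ones in the bit string.
        return sum(individual)

    def sample(model):
        # Draw one individual from the probabilistic model.
        return [1 if random.random() < p else 0 for p in model]

    model = [0.5] * BITS  # start with no preference for 0 or 1 at any position
    for _ in range(GENERATIONS):
        # Instead of crossing over and mutating individuals, sample a whole
        # population from the model...
        population = [sample(model) for _ in range(POP_SIZE)]
        # ...then rebuild the model from the better half: each entry becomes
        # the fraction of selected individuals with a 1 in that position.
        selected = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
        model = [sum(ind[i] for ind in selected) / len(selected) for i in range(BITS)]

    print(max(fitness(sample(model)) for _ in range(10)))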
The researchers' algorithm allows the processors to exchange the
probabilistic model rather than a population of individuals, which cuts
communication time because the model is much smaller than the population,
according to Lobo.
"In traditional parallel genetic algorithms, population members
need to be sent back and forth between the different processors," said
Fernando Lobo, an assistant professor of computer science at the University
of Algarve. "With the compact genetic algorithm... we only need to send
[a] probability vector, which can be transmitted much faster than the
whole population," he said.
A probability vector is a sequence of probabilities, one per bit position, each representing the chance that the corresponding bit in a bit string is a 1 rather than a 0. The vector is determined by looking across all the bit strings that represent individuals and counting that, for instance, six out of ten first bits are one, four out of ten second bits are one, and so on.
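That calculation is a per-position average over the population. A quick sketch with a made-up ten-member population of three-bit strings, chosen so the counts match the example above:

    # Ten illustrative individuals, each a string of three bits.
    population = [
        [1, 0, 1], [1, 1, 0], [0, 0, 1], [1, 1, 1], [1, 0, 0],
        [0, 1, 1], [1, 0, 0], [0, 0, 1], [1, 1, 0], [0, 0, 1],
    ]

    # Each entry of the probability vector is the fraction of individuals
    # with a 1 in the corresponding position.
    probability_vector = [
        sum(ind[i] for ind in population) / len(population)
        for i in range(len(population[0]))
    ]
    print(probability_vector)  # [0.6, 0.4, 0.6]: six of ten first bits are 1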
The communications savings grow as the population grows, according to Lobo. A population of one million individuals on a 1,000-bit problem represents one gigabit of information; the compact genetic algorithm cuts this to 20,000 bits.
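The arithmetic behind those figures, with one assumption made explicit: because the compact genetic algorithm keeps each probability as a multiple of 1/n for a population of size n, each of the 1,000 entries needs roughly log2(1,000,000), or about 20, bits:

    import math

    population_size = 1_000_000
    bits_per_individual = 1_000

    # Sending the whole population: one bit per position per individual.
    full_population_bits = population_size * bits_per_individual
    print(full_population_bits)        # 1,000,000,000 bits, i.e. one gigabit

    # Sending the probability vector instead: one entry per position, each
    # stored to a precision of 1/population_size (an assumption consistent
    # with how the compact genetic algorithm adjusts its probabilities).
    bits_per_entry = math.ceil(math.log2(population_size))  # about 20
    probability_vector_bits = bits_per_individual * bits_per_entry
    print(probability_vector_bits)     # 20,000 bits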
The drawback to this type of approach is that the compact genetic algorithm does not capture dependencies, or relationships among variables, which means difficult problems must be solved in a brute-force way by testing every possible solution. This is also true of other parallel genetic algorithm architectures, said Lobo.
The work advances both theory and practice in a useful way by
taking advantage of the memory compression possible in the compact genetic
algorithm, said David Goldberg, an entrepreneurial engineering professor
and director of the genetic algorithms laboratory at the University of
Illinois at Urbana-Champaign.
Parallelism is a key technique for making genetic algorithms more
efficient, said Goldberg. The researchers have extended the reach of parallelism by allowing compressed population information to be exchanged rather than individuals, he said. "Down the road, the exchange of compressed
population models using higher-order linkage information should extend
the technique for use in solving harder problems," he said.
The ultimate goal is to make genetic algorithms easier to use,
said Lobo. "I'd like to see people using them without the need of understanding
their internal mechanisms... in the same way as they use other types of
artifacts," he said. For instance, anyone "can drive a car without knowing
how a car engine works."
Lobo's research colleagues were Cláudio F. Lima and Hugo Mártires.
The research was funded by the Portuguese Science Foundation.
Timeline: > two years
Funding: Government
TRN Categories: Artificial Life and Evolutionary Computing
Story Type: News
Related Elements: Technical paper, "An Architecture for Massive Parallelization of the Compact Genetic Algorithm," posted on the Computing Research Repository (CoRR) at