Software spins tales into animations
By Chhavi Sachdev, Technology Research News
September 5, 2001
When people describe or dispute accidents,
verbal testimony is often paired with a series of sketches or animations
depicting the events. Illustrations add more depth and detail to reports,
but manually producing them can be time-consuming work.
A group of researchers in France has developed software that could eventually
give forensic experts and insurance investigators a dynamic picture of
what happened by automatically turning the words into a computerized simulation.
“We have designed and implemented a system that takes written car accident
reports as an input and produces three-dimensional worlds where the entities
involved in the accidents are recreated and animated,” said Pierre Nugues,
now a computer science professor at the Lund Institute of Technology in
Sweden.
The accident-reconstruction system, dubbed CarSim by its creators, translates
a written accident report into a symbolic template, or formalism, and uses
that template to generate a three-dimensional animation of the scene.
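As a rough illustration, the intermediate formalism can be pictured as a structured record of the scene. The Python sketch below is only an assumption about its shape -- the field names and event labels are invented for illustration and are not the published CarSim template.

```python
# A minimal sketch of what a text-to-scene "formalism" might look like.
# Structure and field names are illustrative assumptions, not CarSim's.
from dataclasses import dataclass, field
from typing import List

@dataclass
class StaticObject:          # e.g. a tree or a road mentioned in the report
    kind: str                # "tree", "road", ...

@dataclass
class Vehicle:               # a moving entity extracted from the text
    name: str                # "vehicle A", "the truck", ...
    initial_direction: str   # e.g. "north", inferred from the narrative
    events: List[str] = field(default_factory=list)  # e.g. ["overtake", "turn_left"]

@dataclass
class AccidentTemplate:      # the symbolic description handed to the visualizer
    vehicles: List[Vehicle]
    static_objects: List[StaticObject]
    collisions: List[tuple]  # pairs of entity names that collide

# The visualizer would read such a template and animate each vehicle
# along a trajectory consistent with its direction and event list.
```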
Beyond courtroom and insurance analyses, “CarSim could be used by media
companies, advanced Web sites, or education institutes to recreate dynamic
descriptions from texts,” Nugues said.
Another potential use is in training drivers. The U.S. National Transportation
Safety Board maintains thorough summaries of accident reports that are
difficult to visualize or understand, said Nugues. “Imagine a system that
would read a text and generate…a virtual world where you could be the
driver. So a possible application would be to use CarSim to train drivers
[to avoid] accidents in dangerous situations,” he said.
However, there is not much precedent to build on in the fields of visualization
and language, according to Nugues. Systems that try to automatically convert
words into images generally produce two-dimensional sketches.
CarSim's visualizer creates and animates a three-dimensional world from the
words, drawing images and motions from a database, said Nugues. A textual
accident report contains several entities designated by nouns and pronouns;
it may refer, for example, to two cars, one tree, and one or more roads.
The program classifies each entity as either moving or static.
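A minimal sketch of that moving-versus-static sorting step, with a small invented lexicon standing in for whatever knowledge the actual system uses:

```python
# Toy illustration (not the CarSim code) of splitting report entities into
# moving and static categories with a hand-built lexicon of assumed examples.
MOVING_KINDS = {"car", "vehicle", "truck"}
STATIC_KINDS = {"tree", "road", "sign", "wall"}

def classify_entities(entity_kinds):
    """Split a list of entity kinds (nouns from the report) into moving/static."""
    moving = [e for e in entity_kinds if e in MOVING_KINDS]
    static = [e for e in entity_kinds if e in STATIC_KINDS]
    return moving, static

# e.g. a report mentioning two cars, a tree and a road:
moving, static = classify_entities(["car", "car", "tree", "road"])
print(moving)  # ['car', 'car']
print(static)  # ['tree', 'road']
```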
The program works out the initial directions of the vehicles involved in
the accident and describes the events that led up to it using parameters
such as overtaking and turning. It then processes these parameters to
generate a virtual scene.
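The sketch below shows one way such event parameters could be turned into headings for an animation; the parameter names and angle values are assumptions made for illustration, not CarSim's actual planner.

```python
# Illustrative sketch: map an initial direction plus a list of event
# parameters to a rough heading sequence the animation could follow.
HEADINGS = {"north": 90.0, "east": 0.0, "south": 270.0, "west": 180.0}

# how each (assumed) event parameter might change the heading, in degrees
EVENT_TURNS = {"turn_left": 90.0, "turn_right": -90.0,
               "overtake": 0.0, "drive_forward": 0.0}

def heading_sequence(initial_direction, events):
    """Return the vehicle's heading (degrees) after each event in order."""
    heading = HEADINGS[initial_direction]
    seq = [heading]
    for event in events:
        heading = (heading + EVENT_TURNS.get(event, 0.0)) % 360.0
        seq.append(heading)
    return seq

print(heading_sequence("north", ["overtake", "turn_left"]))  # [90.0, 90.0, 180.0]
```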
It is hard to convert text into motion automatically, Nugues said. This
is evident from the results of the CarSim experiments. When the system
was first given texts from real accident reports, it generated a satisfactory
animated scene only 17 percent of the time.
The researchers increased the accuracy to 35 percent after improving the
simulator's detection of initial directions and adding a coreference
resolution mechanism, Nugues said. “Coreference resolution is a step that
determines sets of nouns or pronouns that refer to the same entity,” said
Nugues. This prevents an entity from being counted more than once.
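A toy example of what that resolution step buys when counting entities; the clustering itself is taken as given here, and this is not the researchers' code:

```python
# Illustrative only: with coreference chains, "a car", "the car" and "it"
# collapse into one entity instead of being counted as separate objects.
def count_entities(mentions, coreference_chains):
    """
    mentions: noun/pronoun mentions in reading order,
              e.g. ["a car", "a truck", "the car", "it"]
    coreference_chains: groups of mention indices referring to the same entity,
              e.g. [[0, 2, 3], [1]] -- mentions 0, 2 and 3 are one car
    """
    naive_count = len(mentions)             # one object per mention
    resolved_count = len(coreference_chains)  # one object per chain
    return naive_count, resolved_count

print(count_entities(["a car", "a truck", "the car", "it"], [[0, 2, 3], [1]]))
# (4, 2): two vehicles in the scene instead of four
```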
The cause of the low accuracy lies both in transforming the symbolic formalism
into visuals and in automatically converting the language used in the
accident report into the formalism, according to Nugues. When the researchers
manually wrote a formalism from the natural text, accident simulation
accuracy rose to almost 60 percent, he said.
The researchers hope to improve the system to at least 50 percent accuracy
in recreating accidents, said Nugues. The current system works with French
text and has a limited number of movements in its database.
CarSim is an interesting combination of natural language research and
traffic simulation, said Jan Allbeck, Technical Manager of the Center
for Human Modeling and Simulation at the University of Pennsylvania. But
the system lacks some useful attributes, she said.
“Their representation of accidents is very limited. They can represent
only cars -- no motorcycles [or] trucks. They cannot represent speed or
degrees of turns or road conditions,” Allbeck said. Therefore, the simulation
is likewise limited, she said.
The researchers plan to study more motion verbs in order to reproduce them
more accurately, said Nugues. They also plan to label real motions more
precisely, naming them in ways that reflect subtle variations, he said.
One way to do this would be by using motion-capture devices to observe
people and name what they are doing, he said. “We already had a small
experiment with dancers that recorded arabesques. We would like to design
a system that would analyze the captured data and [recognize] that they
are doing arabesques,” Nugues said.
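A hypothetical sketch of that kind of motion labeling, using nearest-prototype matching on invented feature vectors; the system Nugues envisions is not specified in this detail, so everything below is an assumption.

```python
# Hypothetical: compare a captured motion's feature vector against labeled
# prototypes and return the closest label (nearest-centroid matching).
import numpy as np

PROTOTYPES = {
    "arabesque": np.array([0.9, 0.1, 0.8]),  # made-up feature vectors
    "walk":      np.array([0.2, 0.9, 0.1]),
}

def label_motion(features):
    """Return the prototype label closest to the captured feature vector."""
    return min(PROTOTYPES, key=lambda name: np.linalg.norm(features - PROTOTYPES[name]))

print(label_motion(np.array([0.85, 0.2, 0.7])))  # 'arabesque'
```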
Nugues's research colleagues were Sylvain Dupuy and Vincent Legendre of
the Institute of Science of Matter and Radiation (ISMRA) in France and
Arjan Egges of the University of Twente in the Netherlands. The research
was funded by the institute. The method could be in practical use in more
than five years, said Nugues.
Timeline: > 5 years
Funding: Institutional
TRN Categories: Natural Language Processing; Graphics
Story Type: News
Related Elements: Technical paper, “Generating a 3D Simulation of a Car
Accident from a Written Description in Natural Language: The CarSim System,”
presented at the Association for Computational Linguistics Conference
(ACL 2001) in Toulouse, France, July 7, 2001