Virtual people look around realistically

By Kimberly Patch, Technology Research News

The first time you enter a room, you probably look around quite a bit to see what's there. The second time you enter the room, you'll probably look around a little less.

Researchers from Trinity College in Ireland have added memory to a neurobiological model of visual attention in order to generate more realistic animation for virtual reality characters.

The idea is to endow characters with internal characteristics like memory and attention that can guide their movements, according to Christopher Peters, a computer science researcher at Trinity College.

The key to providing a character with an internal representation of its environment is memory, said Peters. "The memory system provides a means of storage for information about what the character has previously perceived."

The researchers gave the characters synthetic vision modules that provided the sensory input to a memory model. The memory model used the classic stage theory of psychology, which divides memory into short-term and long-term stores, to determine which information should be kept for a longer period of time.

The setup allowed a character to determine whether it had seen an object before, said Peters.
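The paper does not include code, but a minimal Python sketch of such a two-stage memory, with hypothetical names and parameter values, might look like the following: objects reported by the synthetic vision module enter a small, decaying short-term store, repeated exposure promotes them to long-term storage, and the character can ask whether it has seen an object before.

import time

class StageMemory:
    """Minimal sketch of a stage-theory memory for a virtual character
    (hypothetical structure; the researchers' model is more detailed)."""

    def __init__(self, short_term_capacity=7, short_term_span=10.0):
        self.short_term = {}          # object_id -> time last perceived
        self.long_term = set()        # object_ids retained indefinitely
        self.capacity = short_term_capacity
        self.span = short_term_span   # seconds before short-term items decay

    def perceive(self, object_id, now=None):
        """Called by the synthetic vision module for each visible object."""
        now = time.time() if now is None else now
        if object_id in self.short_term:
            # Repeated exposure promotes the object to long-term storage.
            self.long_term.add(object_id)
        self.short_term[object_id] = now
        # Enforce the limited capacity of short-term memory.
        while len(self.short_term) > self.capacity:
            oldest = min(self.short_term, key=self.short_term.get)
            del self.short_term[oldest]

    def decay(self, now=None):
        """Forget short-term entries older than the retention span."""
        now = time.time() if now is None else now
        self.short_term = {o: t for o, t in self.short_term.items()
                           if now - t <= self.span}

    def has_seen(self, object_id):
        """Has the character perceived this object before?"""
        return object_id in self.short_term or object_id in self.long_term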

The researchers modeled gaze behavior because it is related to visual perception and memory, and it also forms the basis of many higher-level behaviors, said Peters. "For example, if something in our environment provokes our interest, we may orient our senses toward that stimulus in order to enhance its processing," he said. If the stimulus proves dangerous, "we may behave so as to avoid it or leave the area."

The challenge was figuring out, given an internal representation, or memory, of an environment, what parts of the representation would take precedence in attracting a character's interest, said Peters. "If you're walking down the street, what determines the priority [of what you] look at?" he said.

Attention has to do with allocating processing resources in systems -- like living beings -- that are only capable of limited processing, said Peters. "In terms of gaze, we may decide to elaborate our processing of certain stimuli by looking directly at them," he said. The researchers found that a gaze model recently developed by University of Southern California researchers fit the bill.

The USC model simulates the early visual processing areas of the primate brain; it shows that very basic neural feature detectors in three areas of the brain probably explain much of how attention is directed to particular objects in a scene. Feature detectors respond to simple features like edges and color blobs. The model uses maps of feature detectors, discounting maps that contain too little or too much activity and amplifying regions whose activity level differs significantly from that of other regions. Each feature map thus highlights one or a few regions that stand out from the rest.
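The USC model itself is considerably more involved, but a rough Python/NumPy sketch of the map-normalization idea described here, with illustrative thresholds rather than the published ones, could look like this:

import numpy as np

def normalize_map(feature_map):
    """Rough stand-in for the normalization step described above: scale a
    feature map to a fixed range, then weight it by how much its global
    maximum stands out from its other strong responses (illustrative only)."""
    fmap = feature_map - feature_map.min()
    if fmap.max() > 0:
        fmap = fmap / fmap.max()                 # scale to [0, 1]
    global_max = fmap.max()
    # Crude proxy for "the average of the other local maxima": the mean of
    # all responses above a threshold, excluding the global maximum itself.
    others = fmap[(fmap > 0.1) & (fmap < global_max)]
    mean_other = others.mean() if others.size else 0.0
    # A map with one distinct peak keeps its weight; a uniformly busy or
    # uniformly quiet map is discounted.
    return fmap * (global_max - mean_other) ** 2

def saliency(feature_maps):
    """Combine normalized feature maps (e.g. intensity, color, orientation)
    into a single saliency map whose hot spots attract attention."""
    return sum(normalize_map(m) for m in feature_maps) / len(feature_maps)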

The Trinity researchers combined the scene-based attention metrics from the USC attention module with object-based information from their memory module to find the objects in a scene that attract a character's attention, taking into account temporal changes in the scene such as character or object movement, said Peters.
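As a rough illustration of that combination (the weights, helper names, and interface below are hypothetical, not the paper's scheme), one could score each visible object by the scene saliency at its location and boost the score when the object is novel or has moved since it was last perceived:

def choose_gaze_target(visible_objects, saliency_at, memory,
                       novelty_boost=2.0, motion_boost=1.5):
    """Sketch of combining scene-based saliency with object-based memory.

    visible_objects: iterable of (object_id, position) pairs
    saliency_at:     function mapping a position to its saliency value
    memory:          object exposing has_seen() and last_position()"""
    best_id, best_score = None, float("-inf")
    for obj_id, position in visible_objects:
        score = saliency_at(position)
        if not memory.has_seen(obj_id):
            score *= novelty_boost      # unfamiliar objects draw more attention
        elif memory.last_position(obj_id) != position:
            score *= motion_boost       # seen before, but it has moved
        if score > best_score:
            best_id, best_score = obj_id, score
    return best_id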

The researchers' gaze generator module produces appropriate gaze and blinking motions based on factors derived from the psychology literature to provide the final animation for the virtual human, said Peters.

One of these factors is a head-move attribute, which defines how likely the character is to turn its head to look at an object. Another concerns the relationship between blinking and gaze shifts. It is common to blink at the start of a head or eye movement, and such blinking becomes more probable as the size of the gaze shift increases, according to Peters.
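A minimal sketch of those two decisions, with made-up parameter values rather than the figures the researchers drew from the psychology literature, might be:

import random

def plan_gaze_shift(shift_angle_deg, head_move_tendency=0.5,
                    base_blink_prob=0.2, blink_prob_per_deg=0.01):
    """Sketch of the head-turn and blink decisions described above,
    using illustrative parameter values."""
    # Head-move attribute: a per-character tendency to recruit the head
    # rather than shifting only the eyes; larger shifts make it more likely.
    p_head = head_move_tendency * min(shift_angle_deg / 30.0, 1.0)
    turn_head = random.random() < p_head
    # Blinks at movement onset become more probable as the shift grows.
    p_blink = min(base_blink_prob + blink_prob_per_deg * shift_angle_deg, 1.0)
    blink = random.random() < p_blink
    return turn_head, blink

# Example: a large 60-degree gaze shift by a character with a strong
# head-move attribute will usually include both a head turn and a blink.
turn_head, blink = plan_gaze_shift(60.0, head_move_tendency=0.8)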

The researchers are currently refining their models to add task-driven attention requirements. They are also looking into adding an auditory sense, according to Peters.

The ultimate goal is to provide virtual humans whose gaze behaviors are indistinguishable from those of real humans, said Peters.

A real-time virtual human performance that involves a full attention system will be practical in three to six years, said Peters. Peters' research colleague was Carol O'Sullivan. They presented the results at the Association for Computing Machinery (ACM) Special Interest Group on Computer Graphics (Siggraph) 2003 conference in San Diego, July 27 to 31. The research was funded by the Higher Education Authority of Ireland (HEA).

Timeline:   3-6 years
Funding:   Government
TRN Categories:  Human-Computer Interaction; Data Representation and Simulation
Story Type:   News
Related Elements:   Technical paper, "Attention-Driven Eye Gaze and Blinking for Virtual Humans," Association for Computing Machinery (ACM) Special Interest Group on Computer Graphics (Siggraph), San Diego, July 30, 2003.



