VR tool keeps line of sight in hand
By Ted Smalley Bowen, Technology Research News, January 9, 2002
One of the challenges of navigating computer-generated
three-dimensional environments is figuring out how to avoid bumbling into
things or losing sight of what you're interested in.
Three-dimensional simulations could make it easier to see and work with
many types of information, from architectural designs to more abstract
quantities like economic data.
Researchers at the University of North Carolina have written algorithms
designed to keep users' lines of sight clear as they move through and
manipulate these virtual worlds using touch-sensitive, or haptic, controls.
These haptic controls translate the force and torque of a person's movements
-- usually hand movements -- into changes within the virtual environment.
These changes can be anything from the movement of a cursor-like probe
to the shaping of a virtual object, assembly of virtual parts, or operations
on symbolically represented data sets.
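As a rough illustration of the kind of mapping involved -- not the UNC group's code -- the Python sketch below reads a hand-held device's pose and scales it into a virtual probe's position and orientation. The device values and the workspace scale factor are hypothetical stand-ins for a real device interface.

    # A minimal sketch (not the researchers' code) of mapping a hand-held
    # device's pose onto a virtual probe. The pose values and workspace
    # scale factor are assumptions, not details from the paper.
    import numpy as np

    WORKSPACE_SCALE = 5.0  # assumed ratio of scene units to physical workspace units

    class VirtualProbe:
        def __init__(self):
            self.position = np.zeros(3)   # probe tip in scene coordinates
            self.orientation = np.eye(3)  # probe orientation as a rotation matrix

        def update_from_device(self, device_position, device_rotation):
            # Scale the physical hand position into the scene and copy the rotation.
            self.position = WORKSPACE_SCALE * np.asarray(device_position, dtype=float)
            self.orientation = np.asarray(device_rotation, dtype=float)

    probe = VirtualProbe()
    probe.update_from_device([0.01, 0.02, 0.0], np.eye(3))
    print(probe.position)  # [0.05 0.1  0.  ]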
In order for a person to follow the virtual action visually, his field
of vision must be tracked and oriented by virtual cameras. This tracking
and orientation, however, can easily fall out of synch with the movements
of the haptic controls.
Some existing tracking schemes use head and eye movements to gauge what
area of a virtual world a person is interested in at any given moment.
The UNC researchers sought to gauge interest by instead tracking the way
a person uses a three-dimensional haptic control arm, which is an elaborate
type of joystick. They also experimented with virtual painting, which
translates users' strokes with a haptic brush device into marks in a computer-generated
picture.
The researchers wrote software algorithms that repositioned the virtual
camera based on the previous few movements of the haptic control device
and the way objects are situated in the virtual space. The software also
accounts for the virtual camera's field of view and focus distance.
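The paper itself spells out the camera computation; the Python sketch below only illustrates the general idea described above, under assumed simplifications: the last few probe positions are averaged into a focus point, the recent direction of motion sets the viewing direction, and the focus distance is chosen from an assumed field of view and object size.

    # A hedged sketch of the general idea, not the paper's actual algorithm:
    # average the last few probe positions into a focus point, follow the
    # recent direction of motion, and back the camera off far enough that an
    # object of a given size fits inside an assumed field of view.
    from collections import deque
    import numpy as np

    class ViewpointController:
        def __init__(self, history=5, fov_degrees=60.0):
            self.recent = deque(maxlen=history)  # the previous few probe positions
            self.fov = np.radians(fov_degrees)   # assumed camera field of view

        def update(self, probe_position, object_radius):
            self.recent.append(np.asarray(probe_position, dtype=float))
            focus = np.mean(self.recent, axis=0)           # smoothed point of interest
            motion = (self.recent[-1] - self.recent[0]
                      if len(self.recent) > 1 else np.array([0.0, 0.0, 1.0]))
            if np.linalg.norm(motion) < 1e-9:              # probe is standing still
                motion = np.array([0.0, 0.0, 1.0])
            view_dir = motion / np.linalg.norm(motion)
            # Focus distance chosen so the object of interest fills the view.
            distance = object_radius / np.tan(self.fov / 2.0)
            camera_position = focus - distance * view_dir  # camera trails the motion
            return camera_position, focus

    controller = ViewpointController()
    camera, focus = controller.update([1.0, 0.0, 0.0], object_radius=0.5)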
The algorithms react directly to the control device's movements, said
Ming Lin, associate professor of computer science at the University of
North Carolina. "We... infer users' intentions from the motion of the
haptic probe, [and] contact regions are used to determine regions of interest
during the manipulation."
At this point in their research the scientists have not tapped into the
force of the user's movements, she added. "We haven't explored the concept
of using the amount of force to deduce users' intentions."
Using the motion information, however, "we can implicitly define the objects
of interest as [those] that the user is grabbing [or] interacting with,"
said Lin.
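For illustration only, picking out such objects of interest might look like the sketch below; the sphere geometry, object format and contact threshold are assumptions, not details from the paper.

    # A rough illustration of the stated idea: treat whatever objects the
    # probe is touching (or nearly touching) as the objects of interest.
    import numpy as np

    def objects_of_interest(probe_position, objects, contact_threshold=0.05):
        # `objects` is a list of dicts with a 'center' and 'radius'; spheres
        # stand in for whatever geometry the real system uses.
        probe = np.asarray(probe_position, dtype=float)
        touching = []
        for obj in objects:
            gap = np.linalg.norm(probe - np.asarray(obj["center"], dtype=float)) - obj["radius"]
            if gap <= contact_threshold:    # probe is on or very near the surface
                touching.append(obj)
        return touching

    scene = [{"name": "teapot", "center": [0, 0, 0], "radius": 1.0},
             {"name": "cube", "center": [5, 0, 0], "radius": 1.0}]
    print([o["name"] for o in objects_of_interest([1.02, 0, 0], scene)])  # ['teapot']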
The algorithms can be pre-set to address specific virtual worlds and data,
or can adapt as a virtual space is navigated and manipulated, said Lin.
"One of the key issues is the relative size of objects and the virtual
probe. [The algorithms] could potentially be self-adaptable," she said.
The researchers' algorithms include a method for adding a second camera
for a second view into the virtual space to handle crowded areas where
the main camera's view could become obstructed.
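The sketch below shows one plausible form of that fallback, again under assumptions not taken from the paper: a basic segment-sphere test decides whether the main camera's line to the focus point is blocked, and if so a second camera offset to one side is added.

    # A simplified illustration of the two-camera idea: if any scene object
    # blocks the line from the main camera to the focus point, add a second
    # camera offset to the side. The occlusion test is a basic segment-sphere
    # check and only an assumption about how such a check might look.
    import numpy as np

    def segment_blocked(start, end, objects):
        # True if the segment from start to end passes through any sphere.
        start, end = np.asarray(start, float), np.asarray(end, float)
        direction = end - start
        length = np.linalg.norm(direction)
        if length < 1e-9:
            return False
        direction /= length
        for obj in objects:
            to_center = np.asarray(obj["center"], float) - start
            t = np.clip(np.dot(to_center, direction), 0.0, length)  # closest point on segment
            closest = start + t * direction
            if np.linalg.norm(closest - np.asarray(obj["center"], float)) < obj["radius"]:
                return True
        return False

    def choose_cameras(main_camera, focus, objects, side_offset=np.array([2.0, 0.0, 0.0])):
        # Use only the main camera when its view is clear; otherwise add a second view.
        if not segment_blocked(main_camera, focus, objects):
            return [np.asarray(main_camera, float)]
        return [np.asarray(main_camera, float), np.asarray(main_camera, float) + side_offset]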
They tested the camera adjustment techniques with 10 computer science
students in virtual environments for rendering polygonal models, for painting
with a haptic brush interface, and for navigating three-dimensional models.
The algorithms developed for the tests would require only minor adjustments
to work with current virtual reality and three-dimensional display systems,
Lin said. "The techniques... are quite easy to implement and are also
independent from each other. We do not anticipate too much effort to be
required to port these techniques to commercial systems," she said.
Further research would put the algorithms through their paces with more
complex models, said Lin. "Our current system is interactive with medium
size models [of] several thousand polygons, but it might not be fast enough
for haptic rendering of massive models consisting of several millions
of polygons [like] datasets from scientific computation and medical visualization
of human organs," she said.
The methods could improve virtual navigation with haptic input devices
like joysticks that have a single point of contact, but would not do much
for more complex input devices like gloves, said Grigore Burdea, associate
professor of electrical and computer engineering at Rutgers University.
The algorithms "will not... work, in my view, for dexterous haptic interactions
using gloves, where multiple contact points exist simultaneously," he
said.
More extensive user testing would be needed to ultimately gauge the techniques'
usefulness and ergonomic impact, Burdea said. One crucial aspect of the
work that looks good on the researchers' video is smooth viewpoint transition,
he added. One problem with virtual environments is that they can
lead to "simulation sickness if there's no filtering -- for example if
the probe is in contact with a bumpy surface, leading to rapid changes
in the detailed camera view," said Burdea.
Lin's research colleague was Miguel A. Otaduy. They presented the research
at the October IEEE Visualization 2001 conference in San Diego. The work
was funded by the National Science Foundation (NSF), the Office of Naval
Research (ONR), the Army Research Office (ARO), the Department of Energy
(DOE), the Government of the Basque Country in Spain, and Intel Corp.
Timeline: Now
Funding: Government, Corporate
TRN Categories: Applied Computing, Software Design and Engineering
Story Type: News
Related Elements: Technical paper, "User-Centric Viewpoint Computation for Haptic Exploration and Manipulation," IEEE Visualization 2001 conference, San Diego, October 21 to 26, 2001.