Conference system makes shared
space
By
Kimberly Patch,
Technology Research News
Videoconferencing
has been possible for a couple of decades, and the technology has matured
to the point where it is relatively easy to provide a high-quality, real-time,
full-motion video channel.
Videoconferencing is relatively uncommon, however, probably because its cost is fairly high and it is still not a very satisfying experience.
It may be difficult to pinpoint the exact mechanisms at work, but it is
clear that videoconferencing is not as easy as meeting with people face-to-face:
studies in turn-taking have shown that people in a video conference take
25 percent fewer turns than when they are face-to-face.
Researchers from the University of California at Berkeley have developed
a videoconferencing system that supports stronger collaboration between
remote groups of people.
The system, dubbed MultiView, gives users a sense of shared space
and supports the way people use nonverbal communication in group situations,
said David Nguyen, a researcher at the University of California at Berkeley.
At the same time, the materials used to construct the system are relatively
inexpensive.
With modifications the system could also be used for first-person
multiplayer three-dimensional games, said Nguyen.
Existing multi-party videoconferencing systems use multiple cameras,
but they provide only one participant at a time with appropriate, undistorted
views, said Nguyen.
There are two key components of the Berkeley system. The first is
that it uses multiple video streams, each one displayed at a different location.
"You and I can be sitting in different positions at a conference table,
looking at the same physical display, but the video stream that I see can
be completely different than the video stream that you see," said Nguyen.
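The routing idea can be sketched in a few lines of Python. The sketch below is purely illustrative, not code from the MultiView system: each local seat is paired with its own projector and fed a different remote camera's stream, even though every viewer is looking at the same physical screen.

```python
# Hypothetical sketch of the per-seat routing idea; the seat names and
# mapping are illustrative, not taken from the MultiView implementation.
# One physical screen, but each local seat has its own projector, so each
# viewer can be shown a different remote camera's stream.

SEAT_TO_REMOTE_CAMERA = {
    "left": "remote_camera_1",
    "center": "remote_camera_2",
    "right": "remote_camera_3",
}

def stream_for_viewer(seat: str) -> str:
    """Return the remote video stream routed to this viewer's projector."""
    return SEAT_TO_REMOTE_CAMERA[seat]

for seat in ("left", "center", "right"):
    print(f"viewer in the {seat} seat sees {stream_for_viewer(seat)}")
```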
The researchers' multi-viewpoint screen uses retroreflective materials
to control the direction of light. Retroreflective materials bounce light
back to the light source, unlike mirrors, which reflect light at the opposite
angle. Retroreflective materials are commonly used on street signs and reflective
safety gear.
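The difference between the two kinds of reflection comes down to simple vector math. The short Python sketch below illustrates the general optical principle rather than anything specific to the project: a mirror reflects an incoming ray about the surface normal, while an ideal retroreflector sends the ray straight back toward its source.

```python
# Minimal sketch contrasting an ordinary mirror with an ideal retroreflector.
import numpy as np

def mirror_reflect(d: np.ndarray, n: np.ndarray) -> np.ndarray:
    """Reflect incoming direction d about the unit surface normal n."""
    return d - 2 * np.dot(d, n) * n

def retroreflect(d: np.ndarray) -> np.ndarray:
    """An ideal retroreflector returns light along the reverse direction."""
    return -d

incoming = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)  # ray hitting the screen at 45 degrees
normal = np.array([0.0, 1.0, 0.0])                  # screen surface normal

print("mirror:        ", mirror_reflect(incoming, normal))  # bounces away at the opposite angle
print("retroreflector:", retroreflect(incoming))            # heads back toward the projector
```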
The material allows light from the projector in front of each person
to be reflected toward the person. "Couple this with the appropriate diffusers
and projectors, and you have a multi-viewpoint screen," said Nguyen. "The
trickiest part of this design was putting together the right retroreflector
with the right diffusers in the right configuration [and] controlling for
specular and ghosting effects," he said.
The second key component is that the cameras are positioned to maintain
the geometric relationships when a group is teleconferencing with another
group, said Nguyen. This prevents the Mona Lisa effect in group settings,
where either everyone or no one feels like they are being looked at, he
said.
In the researchers' system, each person in a group has a camera
placed directly above the image of the person seated opposite in the remote
group. "If there are three individuals in [a] group, there are three cameras,
one on top of each image of each person," he said.
Participants get unique perspectives that correspond to the camera
locations. "As a result, when the remote person looks at your image, [that
person] is also looking directly into the camera which corresponds to you,"
said Nguyen.
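As a rough illustration of that placement rule, the following sketch computes horizontal camera offsets for a three-person group, assuming life-size images and evenly spaced seats at the virtual table. The seat spacing is an assumed value for illustration, not a figure from the paper.

```python
# Hypothetical geometry sketch: with life-size images, each remote person's
# image appears on the screen at roughly the same horizontal offset as their
# seat at the virtual table, so a camera is mounted directly above that image.

SEAT_SPACING_M = 0.9        # assumed distance between neighboring seats (illustrative)
NUM_PARTICIPANTS = 3

def camera_positions(num_people: int, spacing: float) -> list[float]:
    """Horizontal camera offsets (meters) from the screen center, one camera
    per remote participant, centered on the middle of the display."""
    mid = (num_people - 1) / 2
    return [(i - mid) * spacing for i in range(num_people)]

print(camera_positions(NUM_PARTICIPANTS, SEAT_SPACING_M))
# -> [-0.9, 0.0, 0.9]: three cameras, one above each life-size image
```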
Because this camera's view is displayed only to you via the multi-viewpoint
screen, you will get a very strong impression that the person is looking
directly at you, said Nguyen. At the same time, while that person is looking
at you, she is looking away from the other cameras and so the other people
see images of the person looking away from them.
"In our testing, we have been able to show that not only are members
able to tell when a remote person is looking or not looking at [him, the
person is] able to tell with a high degree of accuracy who they are looking
at specifically," said Nguyen.
The system provides a virtual conference room that contains a large
conference table; two groups of people sit on opposite sides of the virtual
table. The members of the group on one side view the members on the other
side as if the glass pane of the monitor were not there, said Nguyen. The
system allows visual communication to occur naturally, with the requisite
stereo vision, life-size images and a perspective that depends on the viewer's position.
This life-like environment, in turn, supports nonverbal cues, Nguyen
said. In addition to gaze awareness, posture, gesture, proximity and the
direction a person is facing can contribute to effective communications,
he said.
The system is designed to address shortcomings of existing videoconferencing
systems, said Nguyen. One concern in group videoconferencing is the mechanism
for turn taking. In addition to showing that people take fewer turns
in videoconferencing than in face-to-face situations, studies show that
if a person wants a particular person to respond or speak next, looking
at that person increases the probability that he will, said Nguyen.
"What I find most interesting are methods of building rapport with
a person," said Nguyen. One of the quickest ways to build rapport, according
the interpersonal communication literature, is to subtly mirror physical
actions like leg crossing or hand and arm behavior, he said.
The researchers also kept in mind ease-of-use when designing the
system, said Nguyen. "We designed the system so that someone can walk up
to it, sit down, be given minimal instructions and be up and running."
Sites store files containing the camera angles for particular remote
sites, and use the information to align their projectors before beginning
a videoconference.
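The article does not describe the file format, but a stored calibration might look something like the hypothetical JSON below, loaded before the call so the local projectors can be aligned for that particular remote site. The field names and angle values are assumptions for illustration only.

```python
# Hedged sketch of the kind of per-site calibration file described above;
# the format and fields are invented for illustration, not from the paper.
import json

EXAMPLE_CALIBRATION = """
{
  "remote_site": "site-b",
  "cameras": [
    {"seat": "left",   "pan_deg": -12.0, "tilt_deg": 2.5},
    {"seat": "center", "pan_deg":   0.0, "tilt_deg": 2.5},
    {"seat": "right",  "pan_deg":  12.0, "tilt_deg": 2.5}
  ]
}
"""

def load_calibration(text: str) -> dict:
    """Parse a stored calibration so projectors can be aligned before the call."""
    return json.loads(text)

calibration = load_calibration(EXAMPLE_CALIBRATION)
for cam in calibration["cameras"]:
    print(f"align projector for {cam['seat']} seat: "
          f"pan {cam['pan_deg']} deg, tilt {cam['tilt_deg']} deg")
```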
The system is relatively inexpensive compared to current systems,
which can cost $10,000 per site, said Nguyen. The researchers' prototype
cost about $4,000 and supports as many as three people per site, he said.
The system doesn't currently use computer vision techniques although
they would be highly useful, said Nguyen. "In our current implementation,
everyone in the group has to sit in predetermined locations to correspond
with the position of the cameras on the remote side," he said.
There are computer vision techniques under development that, given
streams from multiple cameras, can synthesize a view from a position lacking
a camera. "This would allow the participants to enjoy the added freedom
of moving about the conference table with the appropriate perspective being
synthesized," Nguyen said.
The researchers are studying how the system affects group collaboration
after long-term use. The system is technically ready for practical use,
according to Nguyen.
Nguyen's research colleague was John Canny. They presented the work
at the Computer-Human Interaction conference (CHI 2005) held in Portland
April 2 through 7, 2005. The research was funded by the Corporation for
Education Network Initiatives in California (CENIC), an organization formed
by a coalition of California universities.
Timeline: Now
Funding: University
TRN Categories: Computer Vision and Image Processing; Applied
Technology
Story Type: News
Related Elements: Technical paper, "Multiview: Spatially Faithful
Group Videoconferencing," presented at the Computer-Human Interaction
conference (CHI 2005), Portland, April 2-7, 2005