Eavesdropping gets people talking

By Kimberly Patch, Technology Research News

Although electronics have increased our communications options, devices like headphones can also be isolating. Sharing is difficult, and if you can't hear what the other person is experiencing, it's impossible to time a graceful interruption.

Researchers from Xerox's Palo Alto Research Center (PARC) have made using audio devices a much more social activity by enabling eavesdropping among companions who are wearing headphones in close proximity, like people taking audio museum tours together.

"The first factor we considered... was the ability of shared audio to facilitate social interaction," said Paul Aoki, a research scientist at PARC. "When you and your friends hear something at the same time, you can have much more rich social interaction."

Ultimately, the researchers are aiming for a "conversationally-compatible" computer interface that will make the information retrieval abilities of computers available in distinctly human social contexts, according to Aoki.

In a previous study, the researchers used electronic guidebooks at Filoli, a historic house in Northern California. The guidebooks were handheld personal digital assistant computers; when a user entered a room, the guidebook displayed a picture of that room with certain artifacts outlined. Tapping a highlighted artifact played a 30-second description of the object.
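
Conceptually, each guidebook page is just a mapping from the outlined, tappable artifacts in a room to short audio clips. The sketch below is only an illustration of that tap-to-play idea, not the PARC guidebook's actual code; the names and file names are invented:

    # Minimal sketch of the tap-to-play guidebook behavior; all names are invented.
    ROOM_PAGES = {
        "dining_room": {
            "image": "dining_room.png",
            "artifacts": {                          # outlined regions a visitor can tap
                "grandfather_clock": "clock.wav",   # roughly 30-second description
                "portrait": "portrait.wav",
            },
        },
    }

    def on_tap(room, artifact):
        """Return the audio clip to play for a tapped artifact, if one exists."""
        return ROOM_PAGES[room]["artifacts"].get(artifact)

    print(on_tap("dining_room", "portrait"))    # -> portrait.wav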

In that study, the researchers offered pairs of people visiting the historic house together the ability to play descriptive audio through a speaker or a headset. Because they were taking the tours together, the participants preferred the speaker. "Each participant had their own guidebook, so using the speakers meant that they could hear their chosen audio content, overhear their friend's audio content, and listen for their friend's comments," said Aoki.

The researchers' observations of couples using the guidebooks this way showed how important sharing information is to basic patterns of social interaction. Video recordings taken during the study showed that "couples often treat the guidebook in the same way they would treat an additional person in their conversation -- they give the guidebooks turns in the conversation, they let the guidebook introduce topics of conversation, and they follow many of the same patterns of conversation with the guidebook that they would follow with a human storyteller," said Aoki.

The patterns of conversation worked out well because the guidebook descriptions are about the length of a person's average conversational turn, said Aoki. "And because they're hearing a description at the same time, they can have a shared moment of response to look at each other and say 'oh,' or laugh, or tell a related story, and then move on with their conversation," he said.

In addition, being able to overhear a friend's audio lets you know when it is okay to talk, said Aoki.

The open-air speakers that were key to this type of interaction, however, are impractical in many real-world settings. "You just can't have 50 museum visitors in Rome, all listening to audio through speakers," said Aoki.

At the same time, the researchers realized that it was important for each person to have her own guidebook. There is a "tension between wanting to share and wanting control. Participants compared the guidebook to a TV remote control -- sharing content is good, but having only one person in control is bad," said Aoki.

The researchers allowed for audio eavesdropping through headsets in order to let users share audio without introducing a cacophony of differently timed museum descriptions into the traditionally quiet museum atmosphere.

The eavesdropping-enabled guidebooks, dubbed Sotto Voce, Italian for "in a low voice," have only one earphone, leaving one ear free for human conversation. The main hardware components are off-the-shelf: a $10 headset and a $60 networking card. The devices communicate over a wireless network, and the eavesdropping happens automatically. If a person is not listening to a description on his own headphones, and if he is standing near enough to his companion, he will hear the description his companion is listening to. "You never hear more than one audio track playing at a time, and you always hear your guidebook in priority" over your companion's, said Aoki.

And to help a person distinguish the descriptions she has tapped on from those she is tapping into, she will hear her companion's descriptions at a lower volume and with reverberation, which suggests distance. "As a result, you always have an ear open for conversation; you always hear descriptions that you want to hear; and if you're not otherwise occupied, you get a sense of what I'm hearing, which [also] lets you know when I'm busy," said Aoki.
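
The behavior Aoki describes boils down to a simple arbitration rule: your own selection always wins, and a nearby companion's audio fills the gaps, rendered quieter and more reverberant. The Python sketch below is purely illustrative, not PARC's implementation; the proximity threshold, gain, and reverb values are invented:

    # Illustrative sketch only -- not the Sotto Voce code.
    EAVESDROP_RANGE_M = 3.0   # hypothetical "standing near enough" distance
    EAVESDROP_GAIN = 0.5      # companion audio played at lower volume
    EAVESDROP_REVERB = 0.3    # reverberation suggests distance

    def select_audio(own_clip, companion_clip, distance_to_companion):
        """Return (clip, gain, reverb) for the single track to play, or None."""
        if own_clip is not None:
            # Your own guidebook always takes priority.
            return (own_clip, 1.0, 0.0)
        if companion_clip is not None and distance_to_companion <= EAVESDROP_RANGE_M:
            # Idle and close enough: eavesdrop, attenuated and reverberant
            # so the description sounds farther away than your own would.
            return (companion_clip, EAVESDROP_GAIN, EAVESDROP_REVERB)
        return None   # silence -- an ear stays open for conversation

    print(select_audio(None, "clock.wav", 2.0))   # -> ('clock.wav', 0.5, 0.3)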

Nobody but the pair hears the audio descriptions that they are playing, Aoki added.

A surprising part of the study was how well people adapted to the eavesdropping model, said Aoki.

"As it turned out, the couples who used eavesdropping with each other were actually more... engaged than the speaker-audio couples in the first study," said Aoki. "They were even less likely to wander away from each other, and they were less likely to spend a lot of time negotiating over whose turn it was to tap. Their interactions were even more like a really extended conversation," he said.

The researchers also tested the Sotto Voce guidebooks with 47 random visitors to the Filoli house, and found that they followed the same interaction patterns as the original couples, according to Aoki.

The researchers are currently working on improving the guidebook hardware, said Aoki. "We're playing with... audio equipment like bone conduction headsets," he said. These would allow people to hear audio through the bones of the skull rather than through the ear canal, leaving both ears free for human conversation.

They are also working on expanding the system's applications. There are "many situations outside of museums where people would want to share audio content," said Aoki. "Nearly any kind of tourist activity could benefit from this -- city tours, parks, anything where people would want to be provided with [audio] information."

The audio-sharing guidebook is a great idea, said Robert Jacob, an associate professor of electrical engineering and computer science at Tufts University. "Personal electronic devices tend to isolate you from the world around you and from other people. This work is an interesting twist," he said.

Isolation is an issue for computer interfaces in general, said Jacob. "Most existing -- especially desktop -- systems assume the user's full attention. But now we're seeing computer use more closely integrated into people's lives and jobs, especially with PDAs and other notepad-like devices. The user interface needs to be designed to coexist more gracefully with the rest of the user's life and, especially, with interactions with other people," he said.

The project is innovative, said Luigina Ciolfi, a European Union research officer at the University of Limerick in Ireland. "The social nature of the museum visit experience is a crucial aspect that many existing technologies fail to support," she said.

Other proposed electronic guidebook designs have emphasized shared note-taking, instant messaging, or viewing a companion's location. "Sotto Voce allows for the first time social interaction to happen through the priority channel of audio," said Ciolfi.

Aoki's research colleagues were Rebecca E. Grinter, Amy Hurst, Margaret H. Szymanski, James D. Thornton and Allison Woodruff. They presented the research at the Association for Computing Machinery (ACM) Conference on Human Factors in Computing Systems in Minneapolis, Minnesota, in April 2002. The research was funded by Xerox.

Timeline:   Now
Funding:   Corporate
TRN Categories:   Computer Science; Human-Computer Interaction
Story Type:   News
Related Elements:  Technical paper, "Sotto Voce: Exploring the Interplay of Conversation and Mobile Audio Spaces," presented at the Association for Computing Machinery (ACM) Conference on Human Factors in Computing Systems, Minneapolis, Minnesota, April 2002, and posted at xxx.lanl.gov/abs/cs.HC/0205053.



