Conversations control computers
By Eric Smalley, Technology Research News
January 12/19, 2005
Because
information from spoken conversations is fleeting, people tend to record
schedules and assignments as they discuss them. Entering notes into a computer,
however, can be tedious -- especially when the act interrupts a conversation.
Researchers from the Georgia Institute of Technology are aiming
to decrease day-to-day data entry and to augment users' memories with a
method that allows handheld computers to harvest keywords from conversations
and make use of relevant information without interrupting the personal interactions.
The researchers have built three prototype handheld computer applications
that tap keywords from conversations.
The Calendar Navigator Agent monitors the user's conversation for
keywords that have to do with scheduling, and acts on those keywords to,
for instance, pull up a handheld computer's calendar application and open
an appropriate page when the conversation turns to days and times. DialogTabs
collects keywords that can be used as an aid for the user's short-term memory.
And Speech Courier collects keywords in order to relate portions of a conversation
to a third party who is not present.
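In outline, each prototype is driven by the words it spots in the user's transcribed speech. The following Python sketch shows one way such keyword routing could look; the keyword sets and handler names are assumptions for illustration, not taken from the Georgia Tech prototypes.

    # A keyword-spotting sketch, not the researchers' code: the keyword sets and
    # handler names below are invented for illustration.
    SCHEDULING_KEYWORDS = {"meet", "meeting", "appointment", "monday", "tuesday",
                           "wednesday", "thursday", "friday", "tomorrow"}
    TASK_KEYWORDS = {"remind", "assign", "send", "task"}

    def route(transcript: str) -> str:
        """Return which prototype a transcribed utterance is relevant to."""
        words = set(transcript.lower().split())
        if words & SCHEDULING_KEYWORDS:
            return "calendar_navigator_agent"   # days and times: open the calendar
        if words & TASK_KEYWORDS:
            return "speech_courier"             # a task for someone else: email it
        return "dialog_tabs"                    # otherwise keep it as a memory aid

    print(route("can we meet tuesday at three"))   # -> calendar_navigator_agent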
"The real application is in reducing otherwise redundant manual
input with your computer when you're talking with someone else," said Kent
Lyons, a researcher at the Georgia Institute of Technology. "This technique
cannot be applied everywhere, but... our three prototypes show some different
potentially practical applications," he said.
To initiate any of the applications, the user holds down a button
on his handheld computer to signal that the system should record and transcribe
his words.
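A rough sketch of that push-to-talk step, written in Python with the third-party speech_recognition package as a stand-in recognizer; the hold-to-record button is hardware-specific and is only approximated by listen() here.

    import speech_recognition as sr   # third-party package, used as a stand-in

    recognizer = sr.Recognizer()

    def capture_push_to_talk(source):
        """Record the user's words while the button is held, then transcribe them.
        listen() approximates hold-to-record; the real button is hardware-specific."""
        audio = recognizer.listen(source)
        try:
            return recognizer.recognize_google(audio)   # any recognizer would do
        except sr.UnknownValueError:
            return ""                                   # nothing recognized

    with sr.Microphone() as mic:        # a microphone that hears only the user
        transcript = capture_push_to_talk(mic)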
Calendar Navigator Agent recognizes dates and times and uses them
to navigate and mark a graphical scheduling program. This frees the user
from having to manually navigate and mark the scheduler while he verbally
makes an appointment with another person. Pressing an undo button reverses
erroneous entries.
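One way the date handling and the undo button might be modeled, as a Python sketch; dateutil's fuzzy parser stands in for whatever date and time recognizer the prototype uses, and the calendar is reduced to a dictionary.

    # Sketch of the date handling and undo, with dateutil's fuzzy parser standing
    # in for the prototype's date/time recognizer; the calendar is a plain dict.
    from dateutil import parser

    undo_stack = []    # one entry per calendar action, so entries can be reversed

    def handle_utterance(transcript, calendar):
        try:
            when = parser.parse(transcript, fuzzy=True)   # e.g. "Tuesday at 3 pm"
        except (ValueError, OverflowError):
            return                                        # no date or time heard
        undo_stack.append((when, calendar.get(when)))     # remember the prior state
        calendar[when] = transcript                       # mark the appointment

    def undo(calendar):
        """Reverse the last entry, as with the prototype's undo button."""
        if undo_stack:
            when, previous = undo_stack.pop()
            if previous is None:
                calendar.pop(when, None)
            else:
                calendar[when] = previous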
DialogTabs allows a user to capture segments of a conversation as
a memory aid. The software produces a tab on the side of the computer's
screen to mark a captured segment, and stacks the tabs vertically with the
most recent segment on top. The system displays the transcribed text of
a segment when the user hovers the mouse over a tab. Clicking on a tab brings
up a dialog box containing the text and gives the user the option to replay
portions of the recorded speech.
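A minimal Python sketch of the bookkeeping such a tab stack needs, assuming each captured segment carries a transcript and a path to its recorded audio; drawing the tabs themselves is left out.

    # Bookkeeping sketch for DialogTabs: captured segments are assumed to carry a
    # transcript and a path to the recorded audio; on-screen rendering is omitted.
    from dataclasses import dataclass, field

    @dataclass
    class Segment:
        transcript: str      # shown when the user hovers over the tab
        audio_path: str      # recorded speech, kept so portions can be replayed

    @dataclass
    class DialogTabs:
        tabs: list = field(default_factory=list)    # index 0 is the newest tab

        def capture(self, transcript, audio_path):
            self.tabs.insert(0, Segment(transcript, audio_path))   # newest on top

        def hover(self, index):
            return self.tabs[index].transcript      # tooltip with transcribed text

        def click(self, index):
            return self.tabs[index]                 # dialog box: text plus replay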
Speech Courier sends recorded audio and transcribed text to a designated email address. This allows a user to capture the portion of a conversation that calls for a task to be done, and to use the captured speech to assign the task to someone else.
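A minimal sketch of that delivery step using Python's standard email modules; the addresses, attachment name, and local mail relay are placeholders rather than details from the paper.

    # Delivery-step sketch with Python's standard email and smtplib modules; the
    # addresses, filename, and local SMTP relay are placeholders, not from the paper.
    import smtplib
    from email.message import EmailMessage

    def send_task(transcript, audio_path, to_addr):
        msg = EmailMessage()
        msg["Subject"] = "Task from conversation"
        msg["From"] = "handheld@example.com"
        msg["To"] = to_addr
        msg.set_content(transcript)                 # transcribed text in the body
        with open(audio_path, "rb") as f:           # attach the recorded speech
            msg.add_attachment(f.read(), maintype="audio",
                               subtype="wav", filename="segment.wav")
        with smtplib.SMTP("localhost") as server:
            server.send_message(msg)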
The researchers' system protects privacy by only using speech from
the user's side of the conversation, said Lyons. "We have intentionally
used microphones that only capture the user's voice and [have] designed
our interactions knowing we only have the information from the single side
of the conversation," he said.
The researchers' next steps are to identify other scenarios where
this method could be used, and to quantify the usefulness of the technique,
including how speaking to a conversational partner and a computer simultaneously
impacts the flow of the conversation, said Lyons.
The technique will also become more useful as speech recognition
improves, said Lyons. "The big technology challenge is speech recognition,"
he said. "Current automatic speech recognition systems are not trained to
deal with conversational speech nor all of the other effects that can occur
while mobile."
For limited domains like calendaring, the technique could be used
in practical applications in two to five years, said Lyons. "Given the way
speech recognition research has progressed in the past decade I'd suspect
more general applications like Speech Courier are 10 plus years away," he
said.
Lyons' research colleagues were Christopher Skeels, Thad Starner, Cornelius M. Snoeck, Benjamin A. Wong, and Daniel Ashbrook. They presented the work at the User Interface Software and Technology 2004 (UIST '04) conference, held October 24 to 27 in Santa Fe, New Mexico. The research was funded by the U.S. Department of Education's National Institute on Disability and Rehabilitation Research and the National Science Foundation (NSF).
Timeline: 2-5 years
Funding: Government
TRN Categories: Human-Computer Interaction
Story Type: News
Related Elements: Technical paper, "Augmenting Conversations
Using Dual-Purpose Speech," User Interface Software and Technology 2004
(UIST '04) conference, October 24 to 27, Santa Fe, New Mexico