Software orchestrates Web presentations
By Ted Smalley Bowen, Technology Research News
April 3/10, 2002
Like television before it, the Internet
was supposed to bring learning to anyone near the right screen at the
right time. While the Web holds more promise as an educational vehicle,
timing issues still make it difficult to coordinate the kind of multimedia
presentations that can convey lessons in living color.
Although Web
technologies are evolving to better handle streaming files that have
strict timing requirements, pulling together different types of files
drawn from multiple sources into a single, coherent presentation still
takes a lot of work.
With the classroom in mind, a research team from the University of Applied
Sciences in Germany has developed a scheme that allows teachers to organize
digital text,
audio and video into databases,
then draw from their own and other teachers' databases to compose multimedia
lessons.
The scheme allows teachers to pull together digital teaching material
without having to rely on programmers or having to become programmers
themselves, according to Thomas Schmidt, director of the computer center
at the University of Applied Sciences.
The researchers' Media Object Model software is a framework for composing
multimedia lessons for a classroom or the Web. The framework uses metadata
within the files, like information about formatting, authorship, and access
privileges, to smooth the sharing process.
The framework includes an object model, database, lesson planning toolkit
and interface that a teacher can use to assemble master documents, or
presentations, according to Schmidt.
Each database includes a reference list of its constituent parts and a
set of active references, or actions that can be performed on other teachers'
databases specified in the reference list. The framework also accommodates
annotations.
To pull together a presentation from disparate databases, teachers can
specify actions based on the type of metadata contained in each database.
They can also draw information from different databases based on the active
references contained within the data; the software keeps everything synchronized
and spatially coordinated, according to Schmidt.
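As a rough illustration of the structure described above (not the researchers' actual code, and with all class and field names invented for the example), a lesson database holding a reference list of media objects plus active references into other teachers' databases might be sketched in Java like this:

// Hypothetical sketch of the kind of structure the article describes:
// media objects carrying metadata, plus a database that holds a
// reference list and "active references" into other teachers' databases.
// All names here are illustrative assumptions, not the researchers' API.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

class MediaObject {
    String id;
    String format;             // e.g. "text", "audio", "video"
    String author;
    String accessPrivileges;   // metadata used to control sharing
    Map<String, String> metadata;
}

class ActiveReference {
    String remoteDatabase;     // another teacher's database
    String objectId;           // the object referenced in that database
    String action;             // e.g. "include", "annotate", "synchronize"
}

class LessonDatabase {
    List<MediaObject> referenceList = new ArrayList<>();      // constituent parts
    List<ActiveReference> activeReferences = new ArrayList<>();

    // Select local objects for a presentation by filtering on metadata;
    // active references would be resolved against the remote databases.
    List<MediaObject> selectByFormat(String formatWanted) {
        List<MediaObject> selection = new ArrayList<>();
        for (MediaObject obj : referenceList) {
            if (formatWanted.equals(obj.format)) {
                selection.add(obj);
            }
        }
        return selection;
    }
}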
There are already document models for assembling and presenting teaching
materials via the Web, and there are also several standards initiatives
aimed at adding time-sensitivity to the Web to allow animation, video
and other multimedia files to be streamed efficiently to users’ browsers.
The researchers' framework, however, allows teachers to combine and reuse
these types of media, and coordinates complicated interactions, said Schmidt.
"The database allows for a context-sensitive file system view. An author
will experience objects in the specific [presentation] context, even though
the same object may appear in a completely different context, as well.
This eases the authoring process of complex structured presentation enormously,"
he said.
The Media Object Model includes the Media Information Repository database
that stores information and keeps track of how the data is organized using
a modified form of structured query language (SQL).
The model's Web authoring tool uses a screenplay motif and shows a graphical
view of each presentation’s spatial arrangement and playback sequence.
The tool includes a set of methods and a programming interface that allows
extensions to be added so it can access other applications, according
to Schmidt.
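The article does not spell out that programming interface, but an extension point of the sort described might, purely as a hypothetical sketch, look like the following Java interface (reusing the LessonDatabase type sketched earlier; all names are invented):

// Illustrative only: one way an authoring tool could expose an extension
// point for reaching other applications. The interface and method names
// are assumptions, not the tool's documented API.
interface AuthoringExtension {
    String name();

    // Pull material from an external application into a lesson database.
    void importFrom(String externalSource, LessonDatabase target);

    // Push an assembled presentation out to an external application.
    void exportTo(String externalTarget, LessonDatabase source);
}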
Although the database, object model and toolkit are specific to the researchers'
scheme, the scheme also uses standard Web technologies, such as Extensible Markup
Language (XML) and streaming media protocols, according to Schmidt. XML
is a widely used markup language for structuring and exchanging data on the Web.
Streaming media refers to time-sensitive materials like video and audio.
A key component of the scheme is reordering media files from their semantic
data storage grouping into the playback sequence on the viewer’s end,
a task handled by a flow generator, said Schmidt.
"Our data structures on the storage layer are organized in a semantic
tree. Time, however, in our lives, is linear, so there has to be a resolver,
which requests the right data in time and reorganizes temporal instructions
in a linear fashion," he said.
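Schmidt's description suggests a traversal that walks the semantic tree and lays its media objects out on a single timeline. The following Java sketch shows one simple way such a resolver could work; the types and the back-to-back timing rule are assumptions for illustration, not the project's actual flow generator:

// Minimal sketch of a "resolver" in the spirit of Schmidt's description:
// media objects are stored in a semantic tree, and playback needs a
// linear, time-ordered schedule. The depth-first, sequential layout used
// here is an assumption for illustration.
import java.util.ArrayList;
import java.util.List;

class SemanticNode {
    String mediaId;
    double durationSeconds;                  // 0 for purely structural nodes
    List<SemanticNode> children = new ArrayList<>();
}

class ScheduledItem {
    String mediaId;
    double startSeconds;
    double durationSeconds;
    ScheduledItem(String id, double start, double duration) {
        mediaId = id; startSeconds = start; durationSeconds = duration;
    }
}

class FlowGenerator {
    // Walk the semantic tree depth-first and place items one after
    // another on a single playback timeline.
    List<ScheduledItem> resolve(SemanticNode root) {
        List<ScheduledItem> schedule = new ArrayList<>();
        flatten(root, 0.0, schedule);
        return schedule;
    }

    private double flatten(SemanticNode node, double clock, List<ScheduledItem> out) {
        if (node.durationSeconds > 0) {
            out.add(new ScheduledItem(node.mediaId, clock, node.durationSeconds));
            clock += node.durationSeconds;
        }
        for (SemanticNode child : node.children) {
            clock = flatten(child, clock, out);
        }
        return clock;
    }
}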
To view sequential presentations, users must have Java virtual machine
software installed on their computers.
While the researchers tested their model using custom-written data objects,
adapters could be written to allow the software to handle existing files,
Schmidt said. "The information scheme we use is encoded in XML. So there
is no principal difficulty in providing in/out filters for [other] content,"
he said.
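Such in/out filters would presumably translate between an existing file format and the scheme's XML encoding. A minimal, hypothetical adapter interface in Java (the names and method signatures are assumptions, not the project's API) might look like this:

// Illustrative adapter ("in/out filter") for translating between an
// existing content format and the scheme's XML encoding. The interface
// is an assumption based on Schmidt's comment, not the project's code.
interface ContentFilter {
    // Which external format this filter understands, e.g. "html" or "ppt".
    String handlesFormat();

    // Convert an existing file into the XML-encoded information scheme.
    String toSchemeXml(byte[] externalContent);

    // Convert XML-encoded material back into the external format.
    byte[] fromSchemeXml(String schemeXml);
}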
The researchers are working on adding graphical tools that will allow
users to edit the XML code, arrange presentation views and timing, and
edit the interactions between objects, Schmidt said.
Parts of the scheme are ready for classroom use, while others are still
prototypes. The model will be completed in 9 to 12 months, he said.
Schmidt's research colleagues were Bjoern Feustel, Andreas Karpati and Torsten
Rack. The research was funded by the University of Applied Sciences.
Timeline: < 1 year
Funding: Government
TRN Categories:
Story Type: News
Related Elements: Technical paper, "An Environment for
Processing Compound Media Streams," initially presented at the 7th International
Conference of European University Information Systems at Humboldt University
in Berlin, March 28-30, 2001