This case study highlights three aspects of work underway in the Distance Education and Learning Technology Applications (DELTA) division at North Carolina State University: (1) the Design of SMIL Templates; (2) Captioning of Streaming Content; and (3) a Multimedia Implementation for assisting faculty with building Online Courses.
Designing SMIL Templates:
At North Carolina State University (NCSU), work is underway on a streaming media architecture that will aid faculty and students in online instructional activities. The architecture includes orchestrated support for RealMedia and RealServer as part of NCSU's centralized resources, with continued ad hoc support for the QuickTime media format. In addition to RealMedia files, current versions of RealServer (8.0 and higher) also support QuickTime and MP3 files.
An element of the centralized support of RealMedia is the development of NCSU-tailored Synchronized Multimedia Integration Language (SMIL) templates for campus-wide use. These templates are intended to guide faculty in the production of streaming media content; they are NOT intended to be required web design elements. A preliminary focus group that introduced faculty to SMIL templates was held in Summer 2001. Working with faculty in NCSU's College of Design, we have evolved initial designs of the SMIL templates.
Figure #1 shows our preliminary design. This template design incorporates streaming video content, streaming or static digital slides with links to various chapters in the online course, and captioned text for the hearing impaired.
Figure #1: Tailored SMIL template for use at North Carolina State University. Template design by Tony Brock, College of Design - North Carolina State University. Image shown courtesy of Tony Brock.
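To illustrate the general structure of such a template, the following is a minimal SMIL 1.0 sketch. The region names, dimensions, and file names are illustrative only, not those of the actual NCSU template:

```xml
<smil>
  <head>
    <layout>
      <!-- overall presentation window -->
      <root-layout width="640" height="420" background-color="black"/>
      <!-- regions for video, slides, and captions (illustrative geometry) -->
      <region id="video_region"   left="0"   top="0"   width="320" height="240"/>
      <region id="slides_region"  left="320" top="0"   width="320" height="240"/>
      <region id="caption_region" left="0"   top="240" width="640" height="180"/>
    </layout>
  </head>
  <body>
    <par>
      <!-- video, slides, and caption text play in parallel -->
      <video      src="lecture.rm"  region="video_region"/>
      <img        src="slide01.gif" region="slides_region" dur="30s"/>
      <textstream src="captions.rt" region="caption_region"/>
    </par>
  </body>
</smil>
```

The `<par>` element is what synchronizes the three media elements so that they render simultaneously in their assigned regions.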
Captioning of Streaming Content:
As mentioned in the SMIL template discussion, there is a need to provide hearing-impaired individuals access to the audio components of Web-based streaming and multimedia content. We provide an overview of various approaches to producing captioned content for the leading streaming media players.
Captioning is the process of rendering speech and other audible language into written text that is synchronized with the delivery of the audio. This is not the same as subtitling of video. The assumed audience for subtitling is hearing people who do not understand the language of the dialogue. Captions, however, are intended for deaf and hard-of-hearing audiences. Captions notate sound effects and other dramatically significant audio, while subtitles assume a viewer can hear sounds such as a phone ringing. Captions are in the same language as the audio, while subtitles are a translation. More information on the concepts of captioning is available on the Web at: (http://www.robson.org/capfaq/).
There are six basic steps associated with creating captioned content for the Web.
Step #2 is the most time- and labor-intensive step, since it requires listening to the audio and creating a text transcription of its content. Although there are technologies designed to automatically convert speech to text files, current approaches achieve only approximately 30% accuracy. We specifically explored Virage Inc.'s VideoLogger tool, which analyzes analog or digital video content as it is played. For more information on VideoLogger, see: (http://www.virage.com/products/videologger.html).
As a reminder, the leading streaming media players are RealOne Player from RealNetworks, QuickTime from Apple Computer, and Windows Media from Microsoft. Microsoft has developed the Synchronized Accessible Media Interchange (SAMI) file format to facilitate captioning of Windows Media content. Details of the SAMI format are available on Microsoft's web site at: (http://www.microsoft.com/enable/sami/details.htm).
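For reference, a SAMI caption file is an HTML-like document in which `<SYNC>` elements give caption start times in milliseconds. A minimal sketch (the class name, timings, and caption text below are illustrative) looks like:

```html
<SAMI>
<HEAD>
<TITLE>Lecture Captions</TITLE>
<STYLE TYPE="text/css"><!--
  P { font-family: Arial; font-size: 12pt; color: white; background-color: black; }
  .ENUSCC { Name: "English Captions"; lang: en-US; }
--></STYLE>
</HEAD>
<BODY>
<!-- Start values are offsets in milliseconds from the beginning of the media -->
<SYNC Start=0>
  <P Class=ENUSCC>Welcome to today's lecture.
<SYNC Start=4000>
  <P Class=ENUSCC>We begin with an overview of streaming media.
</BODY>
</SAMI>
```

Windows Media Player reads the SAMI file alongside the media file and displays each caption when playback reaches its Start time.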
Detailed instructions on creating text tracks, starting with transcription, for QuickTime are provided on Apple Computer's Web site at: (http://www.apple.com/quicktime/products/tutorials/texttracks.html). Following these instructions synchronizes text tracks with audio and video; this accomplishes Step #3 of the six steps for creating captioned content outlined above. For Step #4 – combining text, audio, and video into one multimedia file – use the "Import" option under the "File" menu to import the appropriately prepared text file into a QuickTime movie. The resulting movie contains a single "text" track. Next, add this track to the ALREADY COMPRESSED movie. Since the text track is minimal in size, there is no need to recompress the QuickTime content.
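As a sketch of what such a text track source looks like before import, QuickTime's text descriptor format places style descriptors in braces and timestamps in brackets ahead of each caption line. The descriptors and caption text below are illustrative:

```text
{QTtext}{font:Geneva}{size:12}{textColor:65535,65535,65535}
{backColor:0,0,0}{width:320}{height:48}{timeScale:30}
[00:00:00.00]
Welcome to today's lecture.
[00:00:04.00]
We begin with an overview of streaming media.
```

When this file is imported, QuickTime converts each bracketed timestamp into the start time of the corresponding text sample in the new text track.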
For RealMedia content, we suggest the use of MAGpie to achieve Step #3 and SMIL to achieve Step #4. The Media Access Generator (MAGpie) tool is designed to aid Web developers in creating text tracks for streaming content on the Web. Assuming Step #2: Text Transcription is completed, MAGpie builds captions or text tracks and stores them in file formats for access by the leading streaming media players. MAGpie is freely available at the following Web site: (http://ncam.wgbh.org/webaccess/magpie/). Once the text track or caption is created with MAGpie (Step #3), it should be stored in SMIL streaming text format. The next task is to create a SMIL file that combines continuous text, audio and video into one multimedia file (Step #4). The RealOne Player plug-in plays SMIL files.
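For reference, the streaming text that MAGpie produces for RealPlayer is a RealText (.rt) file, which the SMIL file from Step #4 then references via a `<textstream>` element. A minimal RealText sketch (the timings, dimensions, and caption text are illustrative) is:

```xml
<!-- captions.rt: RealText caption stream; begin times are in seconds -->
<window type="generic" duration="0:10" width="320" height="48"
        bgcolor="black" wordwrap="true">
  <time begin="0"/><font color="white">Welcome to today's lecture.</font>
  <time begin="4"/><clear/><font color="white">We begin with an overview
  of streaming media.</font>
</window>
```

Each `<time>` element releases the text that follows it at the given offset, and `<clear/>` erases the previous caption so captions replace rather than accumulate.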
Who's Online/ONLINE: A faculty guide for building online courses
Who's Online/ONLINE is a Web site produced by Dr. Sarah Stein at North Carolina State University to aid faculty in building online courses (http://lts.ncsu.edu/whosonline). Using QuickTime, video interview segments are linked to live Internet Web sites. The content highlighted on this Web site includes discussion & chat, virtual offices, plagiarism, and Americans with Disabilities Act (ADA) accessibility. For this specific Web site implementation, an online transcript of the audio discussion is provided. Embedding Web links into the video and audio content was a complex challenge; QuickTime provided us with the most reliable results for addressing this need.
Figure #2: Snapshot of the introductory web page to the NCSU's DELTA Who's Online/ONLINE project. This site was produced by Dr. Sarah Stein with the assistance of a grant from the Division of Distance Education and Learning Technology Applications (DELTA) at North Carolina State University. Image shown courtesy of Dr. Sarah Stein, (http://lts.ncsu.edu/whosonline).