Enhanced Realities Projects
   

HoloWall: Interactive Digital Surfaces

Jun Rekimoto
Sony Computer Science Laboratory, Inc.
3-14-13 Higashigotanda, Shinagawa-ku
Tokyo 141-0022 Japan
rekimoto@csl.sony.co.jp
www.csl.sony.co.jp/person/rekimoto/holowall/

HoloWall is an interactive wall system that allows visitors to interact with digital information displayed on the wall surface without using any special pointing devices. It demonstrates several interactive environments, including a world of autonomous digital insects that respond to body movements and an interactive sound environment that reactively creates music sequences based on the user's actions.

Swamped! Using Plush Toys to Direct Autonomous Animated Characters

Bruce M. Blumberg
Synthetic Characters Group
Media Lab
Massachusetts Institute of Technology
E15-311, 20 Ames Street
Cambridge, Massachusetts 02139 USA
bruce@media.mit.edu
characters.www.media.mit.edu/groups/characters/

Swamped! is a multi-user interactive environment in which instrumented plush toys are used as an iconic and tangible interface to influence autonomous animated characters. Each character has a distinct personality and decides in real time what it should do based on its perception of its environment, its motivational and emotional state, and input from its "conscience," the guest. A guest can influence how a given character acts and feels by manipulating a stuffed animal corresponding to the character. For example, the guest could direct her character's attention by moving the stuffed animal's head, comfort it by stroking its belly, or have it wave at another character by waving its arm.
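
As a concrete illustration of this kind of decision-making, the sketch below scores each candidate behavior from perceived stimuli, internal motivation, and a bias from the guest's toy input, then picks the highest-scoring one. It is a minimal, hypothetical sketch of utility-based action selection, not the Synthetic Characters Group's actual architecture; all names and weights are invented for illustration.

    # Hypothetical sketch of utility-based action selection, in the spirit
    # of the description above; not the actual Swamped! architecture.

    def select_action(behaviors, percepts, motivation, guest_bias):
        """Pick the behavior with the highest combined score.

        behaviors  -- dict mapping a behavior name to a relevance function
        percepts   -- perceived stimuli strengths, e.g. {"food": 0.8}
        motivation -- internal drives, e.g. {"hunger": 0.6}
        guest_bias -- suggestions from the plush-toy "conscience"
        """
        def score(name):
            # The guest influences the decision but does not dictate it.
            return behaviors[name](percepts, motivation) + guest_bias.get(name, 0.0)
        return max(behaviors, key=score)

    # Example: a character choosing between eating and greeting.
    behaviors = {
        "eat":   lambda p, m: p.get("food", 0) * m.get("hunger", 0),
        "greet": lambda p, m: p.get("other", 0) * m.get("sociability", 0),
    }
    print(select_action(behaviors,
                        {"food": 0.8, "other": 0.5},
                        {"hunger": 0.6, "sociability": 0.4},
                        {"greet": 0.3}))   # guest waved the toy's arm -> "greet"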

AR2 Hockey

Toshikazu Ohshima
Mixed Reality Systems Laboratory, Inc.
6-145 Hanasaki-cho
Nishi-ku, Yokohama 220-0022 Japan
ohshima@mr-system.com
www.mr-system.com/

In AR2 Hockey (Augmented Reality AiR Hockey), players share a physical game field, mallets, and a virtual puck to play air hockey in simultaneously shared physical and virtual space. They can also communicate with each other through the mixed space. Since real-time, accurate registration between both spaces and players is crucial to playing the game, a video-rate registration algorithm is implemented with commercial head-trackers and video cameras attached to optical see-through head-mounted displays.
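
Registration schemes of this kind commonly fuse head-tracker readings with landmarks seen by the head-mounted cameras. The sketch below is an assumed, simplified illustration of that idea, not the published AR2 Hockey algorithm: the reprojection error of known landmarks nudges the tracker's pose estimate each frame.

    import numpy as np

    # Hedged sketch: correcting a head-tracker pose with visual landmarks.
    # The simple translational correction and all constants are assumptions
    # for illustration only.

    def project(K, pose, X):
        """Pinhole projection of world point X under a 3x4 pose [R|t]."""
        p = pose[:, :3] @ X + pose[:, 3]
        uvw = K @ p
        return uvw[:2] / uvw[2]

    def correct_pose(K, pose, landmarks_3d, landmarks_2d, gain=0.5, depth=2.0):
        """Shift the pose translation to reduce mean reprojection error.
        `depth` is an assumed nominal scene depth in meters."""
        err = np.mean([x - project(K, pose, X)
                       for X, x in zip(landmarks_3d, landmarks_2d)], axis=0)
        shift = np.array([err[0] / K[0, 0], err[1] / K[1, 1], 0.0]) * depth
        corrected = pose.copy()
        corrected[:, 3] += gain * shift          # damped, video-rate update
        return corrected

    # Example: a landmark appears 20 pixels right of where the tracker predicts.
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    pose = np.hstack([np.eye(3), np.zeros((3, 1))])
    print(correct_pose(K, pose, [np.array([0.0, 0.0, 2.0])],
                       [np.array([340.0, 240.0])])[:, 3])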

PingPongPlus

Craig Wisneski
Media Lab
Massachusetts Institute of Technology
E15-452, 20 Ames Street
Cambridge, Massachusetts 02139 USA
wiz@media.mit.edu
tangible.media.mit.edu/projects/pingpongplus.html

The goal of this project is to explore systems for collaborative play that push the physical world back into the forefront of design, without relying on simple GUI controllers such as a mouse, keyboard, or joystick. Various audio and visual augmentations have been added to a conventional ping-pong table with a non-invasive, sound-based ball-tracking system. The "reactive table" displays patterns of light and shadow as a game is played, and the rhythm and style of play drive the accompanying sound. At times, the game is subtly enhanced, and sometimes it is powerfully changed. In one mode, the table appears to be covered with water, so that playing on it creates patterns of subtle ripples. In another mode, images that race around the table change the entire scoring system and method of play.
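
A plausible form for such a sound-based tracker (an assumption here; the project's actual method is not detailed in this description) is to timestamp the impact sound at microphones mounted at the table's corners and search the table surface for the point whose predicted time differences of arrival best match the measurements:

    import numpy as np

    # Hedged sketch: locating a ball bounce from microphone arrival-time
    # differences (TDOA). Four corner mics and a brute-force grid search
    # are illustrative assumptions, not the project's actual solver.

    C = 343.0              # speed of sound in air, m/s (sound through the
                           # table itself travels faster; treated as airborne)
    TABLE = (2.74, 1.525)  # standard table-tennis table size in meters
    MICS = np.array([[0, 0], [TABLE[0], 0], [0, TABLE[1]], [TABLE[0], TABLE[1]]])

    def locate_bounce(arrival_times, resolution=0.01):
        """Grid-search the table for the point whose predicted arrival-time
        differences best match the measured ones."""
        t0 = arrival_times - arrival_times[0]        # differences vs. mic 0
        best, best_err = None, np.inf
        for x in np.arange(0, TABLE[0], resolution):
            for y in np.arange(0, TABLE[1], resolution):
                d = np.hypot(MICS[:, 0] - x, MICS[:, 1] - y)   # mic distances
                pred = (d - d[0]) / C                          # predicted TDOAs
                err = np.sum((pred - t0) ** 2)
                if err < best_err:
                    best, best_err = (x, y), err
        return best

    # Example: synthesize arrival times for a hit at (1.0, 0.8), then recover it.
    hit = np.array([1.0, 0.8])
    times = np.hypot(MICS[:, 0] - hit[0], MICS[:, 1] - hit[1]) / C
    print(locate_bounce(times))   # approximately (1.0, 0.8)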

Object-Oriented Displays

Naoki Kawakami, Masahiko Inami,
Yasuyuki Yanagida, and Susumu Tachi

Tachi Laboratory
MEIP, The Faculty of Engineering
The University of Tokyo
7-3-1 Hongo Bunkyo-ku
Tokyo 113-8656 Japan
kawakami@star.t.u-tokyo.ac.jp

In Object-Oriented Displays, users perceive and operate a virtual object as if it were real. Three types of object-oriented displays were designed, implemented, and demonstrated: MEDIA-Ace, a liquid crystal display (LCD) with a position sensor; MEDIA-Cube, a position sensor with four LCDs arranged to form a cube; and MEDIA-Crystal, which uses optical projection.

Mass Hallucination

Trevor Darrell
Interval Research Corporation
1801 Page Mill Road, Building C
Palo Alto, California 94304 USA
trevor@interval.com

This imaging display changes according to the number of people watching it, their behaviors, and whether they've watched the device before. It is reflexive: the displayed image is a function of the people watching the display. It encourages crowds of people to collectively manipulate the display with their bodies or faces. Yet it is also personal, in that it can recognize the appearance of a user for short-to-medium periods of time and tailor the display accordingly. As in Magic Morphin' Mirror, a SIGGRAPH 97 Electric Garden project by the same group, this display captures video along the same optical axis as video is displayed, so images of observers can be directly manipulated, composited, or distorted on the display. In contrast to the previous work, which considered only a single user at a time and had no persistence after that user left, this display is designed to visually track a crowd of people and provide a shared graphical experience.

Foot Interface: Fantastic Phantom Slipper

Yuichiro Kume
Tokyo Institute of Polytechnics
1583 Iiyama
Atsugi, Kanagawa 243-0297 Japan
kume@photo.t-kougei.ac.jp
laplace.photo.t-kougei.ac.jp/

People should be able to use their feet just as freely in a virtual environment as they do in the real world, and wearable interfaces should not cause psychological or physical discomfort. This slipper-like, multi-modal interface to cyberworlds is based on those two assumptions. Each foot's movement is measured in real time with an optical motion-capture system, and feedback signals are transmitted to the soles. Phantom sensations elicited by multiple tactile stimuli allow transmission of complicated feedback information, such as objects moving around the feet. Optical markers for motion capture and vibrators for tactile stimulation are installed in the slippers. Players interact with virtual objects projected onto a floor screen, sense them, and use them to play games.
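
The phantom sensation lends itself to a very small sketch: driving two vibrators with amplitudes weighted by the virtual contact point's position makes the stimulus feel located between them. The linear weighting law below is an illustrative assumption; real devices typically need perceptually tuned curves.

    # Hedged sketch of a phantom tactile sensation via amplitude weighting
    # between two vibrators in the slipper's sole. The linear interpolation
    # law is an illustrative assumption.

    def phantom_amplitudes(position, max_amp=1.0):
        """Amplitudes for two vibrators (heel at 0.0, toe at 1.0) that place
        a phantom stimulus at `position` along the sole."""
        position = min(max(position, 0.0), 1.0)   # clamp to the segment
        amp_heel = max_amp * (1.0 - position)
        amp_toe = max_amp * position
        return amp_heel, amp_toe

    # A virtual object passing under the foot sweeps the phantom point forward.
    for step in range(5):
        print(phantom_amplitudes(step / 4))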

inTouch

Scott Brave
Media Lab
Massachusetts Institute of Technology
E15-468C, 20 Ames Street
Cambridge, Massachusetts 02139 USA
brave@media.mit.edu
tangible.media.mit.edu/projects/intouch.html

Touch is a fundamental aspect of interpersonal communication. Yet while many traditional technologies allow communication through sound or image, none is designed for expression through touch. The goal of inTouch is to bridge this gap by creating a physical link between users separated by distance. inTouch consists of two identical objects, each composed of three cylindrical rollers mounted on a base. The two objects behave as if corresponding rollers were physically connected, but in reality, the objects are only virtually linked. Sensors monitor the states of the rollers, and computer-controlled motors synchronize those states, creating the illusion that distant users are interacting through a single, shared physical object.
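
A minimal sketch of one way to realize such a virtual link (assuming a standard virtual-coupling controller, which may differ from inTouch's actual implementation) binds each pair of corresponding rollers with a simulated spring and damper:

    # Hedged sketch: coupling two remote rollers with a simulated spring
    # and damper. Angles are in radians; all gains are illustrative.

    K_SPRING = 5.0    # stiffness of the virtual link
    K_DAMP = 0.5      # damping to keep the coupled pair stable

    def coupling_torques(angle_a, vel_a, angle_b, vel_b):
        """Equal and opposite motor torques that pull both rollers toward
        a shared angle, creating the illusion of one physical object."""
        torque = K_SPRING * (angle_b - angle_a) + K_DAMP * (vel_b - vel_a)
        return torque, -torque   # applied to roller A and roller B

    # Example: a user twists roller A to 1.0 rad while remote roller B rests.
    t_a, t_b = coupling_torques(1.0, 0.0, 0.0, 0.0)
    print(t_a, t_b)   # A is pulled back toward B; B is pulled toward A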

Virtual FishTank

Stacy Koumbis
147 Sherman Street
Cambridge, Massachusetts 02140 USA
stacy@nearlife.com
www.nearlife.com

The Virtual FishTank is a simulated aquatic environment featuring a 400-square-foot tank populated by whimsical and dynamic fish. Participants can:

  • Create their own fish
  • Design behaviors for their fish
  • Observe their fish interacting with other fish
  • Manipulate behavioral rules for a group of fish
  • Discover how these behaviors can emulate schooling
  • Analyze emerging patterns
Through real-time 3D graphics, visitors are introduced to ideas from the sciences of complexity: ideas that explain not only ecosystems, but also economic markets, immune systems, and traffic jams. In particular, visitors learn how complex patterns arise from simple rules.
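
The "simple rules" idea can be made concrete with the classic boids model (Reynolds, 1987), sketched below; the FishTank's actual behavioral rules are not reproduced here, and all gains are illustrative. Three local rules per fish (cohesion, alignment, separation) are enough to make a school emerge with no global controller.

    import numpy as np

    # Hedged sketch of schooling from simple local rules (classic boids),
    # not the Virtual FishTank's actual rule set.

    N = 50
    pos = np.random.rand(N, 2) * 10.0     # fish positions in a 10 x 10 tank
    vel = np.random.randn(N, 2) * 0.1     # initial velocities

    def step(pos, vel, dt=0.1):
        center = pos.mean(axis=0)
        new_vel = vel.copy()
        for i in range(N):
            cohesion = (center - pos[i]) * 0.01              # steer toward group
            alignment = (vel.mean(axis=0) - vel[i]) * 0.05   # match headings
            offsets = pos[i] - pos                           # avoid crowding
            dists = np.linalg.norm(offsets, axis=1)
            close = (dists < 0.5) & (dists > 0)
            separation = offsets[close].sum(axis=0) * 0.05 if close.any() else 0.0
            new_vel[i] += cohesion + alignment + separation
        return pos + new_vel * dt, new_vel

    for _ in range(100):
        pos, vel = step(pos, vel)   # the school coheres without a leader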

Haptic Screen

Hiroo Iwata
Institute of Engineering Mechanics
University of Tsukuba
Tsukuba, 305 Japan
iwata@kz.tsukuba.ac.jp
intron.kz.tsukuba.ac.jp

Haptic Screen is a new force-feedback device that deforms itself to present shapes of virtual objects. Typical force-feedback devices use a grip or thimble, but users of Haptic Screen can touch the virtual object without wearing anything. Haptic Screen employs an elastic surface made of rubber. A 6 × 6 array of 36 actuators deforms the surface and controls its hardness according to the force applied by the user. An image of the virtual object is projected onto the elastic surface so that the user can directly touch the image and feel its rigidity.
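
Per-actuator rigidity control of this kind might follow a simple spring law, as in the hedged sketch below: each cell pushes back in proportion to how far the user has pressed past the virtual object's surface. The grid size matches the text; the control law itself is an assumption.

    # Hedged sketch: rendering a virtual object's rigidity on a 6 x 6
    # actuator array with a per-actuator spring law (an assumed model).

    GRID = 6

    def actuator_forces(depths, stiffness):
        """Reaction force for each actuator.

        depths    -- 6x6 list of how far the user pressed each cell past the
                     virtual surface (meters; 0 if not touching)
        stiffness -- virtual object's stiffness in N/m (higher feels harder)
        """
        return [[stiffness * max(d, 0.0) for d in row] for row in depths]

    # Example: a fingertip pressing 5 mm into the middle of a soft object.
    depths = [[0.0] * GRID for _ in range(GRID)]
    depths[2][3] = 0.005
    for row in actuator_forces(depths, stiffness=400.0):
        print(row)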

Natural 3D Display System Using Holographic Optical Element

Koji Yamasaki
Laboratories of Image Information
Science and Technology
1-1-8-3F, Shinsenri-Nishi, Toyonaka,
Osaka 565-0083 Japan
yamasaki@senri.image-lab.or.jp

In this natural 3D display system, a holographic optical element (HOE) overcomes the conflict between convergence and accommodation. Users experience clear, glasses-free stereoscopic vision across a broad field of view. With its multiple-focus HOE, the system offers two pairs of viewpoints, arranged either front-to-back or side-by-side.

Direct Watch & Touch

Takahisa Ando
Laboratories of Image Information Science
and Technology
Daiichi-Kasai Senri-Chuo Bldg. 3F, 1-1-8,
Shinsenri-Nishimachi, Toyonaka,
Osaka 565 Japan
ando@image-lab.or.jp
www.image-lab.or.jp/

This 3D display offers access to a virtual stereoscopic world without special glasses. When users "touch" the world with real tools (for example, a hammer, a surgical knife, a wrench, or tweezers), directly and interactively, they hear and feel contact and transform virtual objects. This binocular-parallax display combines virtual and real environments in full, high-resolution (XGA) color. It is a new approach to virtual reality that handles virtual objects with "real" tactile feedback.

Media & Mythology

Kimberly Abel Parsons
Visual Systems Laboratory
Institute for Simulation & Training
3280 Progress Drive
Orlando, Florida 32826 USA
kparsons@ist.ucf.edu

In ancient times, mythology was the high-tech method for storing data on a society's history, rituals, and ethical systems. The paradigm in use for these early information systems was storytelling. Media & Mythology explores the link between traditional mythologies from several cultures and new technology/new media. Man and Minotaur gives visitors a chance to portray the two ancient combatants, and the gods that taunt them, within a fully immersive, synthetic version of Daedalus' Labyrinth in ancient Crete. In Video Totem, expressionistic visitors create and view their own mythologies on a large digital totem pole. Dear Oracle integrates contemporary media into traditional soothsaying. The result is a new form of oracle: digital divination.

Natural Pointing Techniques Using a Finger-Mounted Direct Pointing Device

John Sibert
Department of Electrical Engineering and Computer Science
The George Washington University
Washington, D.C. 20052 USA
sibert@seas.gwu.edu

Pointing with the index finger is a natural way to select an object, and incorporating it into human-computer interaction technology could significantly benefit certain applications. This demonstration presents a prototype solution. Based on an infrared signal power-density weighting principle, a small infrared emitter on the user's finger and multiple receivers placed around the laptop screen generate data for a low-cost microprocessor system. The microprocessor sends its output to a laptop computer, where it is used to determine coordinates for the cursor location. The prototype is not only a proof of concept but also a tool for further research on human performance in pointing and further development of interaction techniques.
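
The power-density weighting principle suggests a computation like the following sketch: place the cursor at the centroid of the receiver positions, weighted by the infrared power each receiver measures. The receiver layout and the simple normalization are assumptions made for illustration.

    # Hedged sketch of power-density-weighted pointing: the cursor goes to
    # the centroid of receiver positions, weighted by received IR power.

    RECEIVERS = [(0, 0), (1024, 0), (0, 768), (1024, 768)]  # screen corners

    def cursor_position(powers):
        """Normalized power-weighted centroid of the receiver positions."""
        total = sum(powers)
        if total == 0:
            return None                       # emitter out of range
        x = sum(p * rx for p, (rx, _) in zip(powers, RECEIVERS)) / total
        y = sum(p * ry for p, (_, ry) in zip(powers, RECEIVERS)) / total
        return x, y

    # Example: the finger points toward the upper-left receiver.
    print(cursor_position([0.9, 0.3, 0.3, 0.1]))   # -> (256.0, 192.0)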

Virtual Head

Thom Brenner
Echtzeit GmbH
Kanstrasse 165
10623 Berlin, Germany
tbrenner@echtzeit.de

Virtual Head is a new approach that enhances communication in virtual environments and telepresence. It tackles one of the key problems in the field of innovative telecommunication technology: how to represent oneself in virtual environments in such a way that an emotional and natural way of communicating with others is possible.

The Virtual Head conferencing prototype renders three-dimensional images of every communication partner in real time. It establishes eye-to-eye contact among the communication partners by projecting live-video textures onto the 3D geometry of a head. The application transfers each user's head movements to the model, so that the video images reproduce the original motion. Compressed video and audio information is exchanged via a high-bandwidth network to establish a remote conferencing scenario. Video and audio are decompressed on both sides, and the images are projected onto a screen.
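
Structurally, the loop described above might look like the sketch below. Every name in it is a placeholder rather than Echtzeit's actual API, and compression is stubbed out so the sketch stays self-contained.

    from dataclasses import dataclass

    # Hedged structural sketch of the Virtual Head loop; all names below
    # are placeholders. Compression is stubbed as identity.

    def compress(frame):   return frame   # stand-in for a real video codec
    def decompress(data):  return data

    @dataclass
    class Packet:
        video: object
        pose: tuple   # (yaw, pitch, roll) of the remote head

    def conference_step(local_frame, local_pose, link_send, link_receive, head):
        """One iteration: send our view and pose, render the partner's head."""
        link_send(Packet(compress(local_frame), local_pose))
        packet = link_receive()
        head["pose"] = packet.pose                  # reproduce original movement
        head["texture"] = decompress(packet.video)  # live-video texture on 3D mesh
        return head                                 # ready for the renderer

    # Example with a loopback "network": we see our own head as a partner would.
    outbox = []
    head = conference_step("frame-0", (0.1, 0.0, 0.0),
                           outbox.append, lambda: outbox.pop(0), {})
    print(head)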

Stretchable Music with Laser Range Finder

Pete Rice and Joshua Strickon
Massachusetts Institute of Technology
E15-495, 20 Ames Street
Cambridge, Massachusetts 02139 USA
strickon@media.mit.edu
brainop.media.mit.edu/~strickon/siggraph.html

Stretchable Music with Laser Range Finder combines an innovative, graphical, interactive music system with a state-of-the-art laser tracking device. An abstract graphical representation of a musical piece is projected onto a large vertical display surface. Users are invited to shape musical layers by pulling and stretching animated objects with natural, unencumbered hand movements. Each of the graphical objects is specifically designed to represent and control a particular bit of musical content. Objects incorporate simple behaviors and simulated physical properties to generate unique sonic personalities that contribute to their overall musical aesthetic.
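
One way a graphical object could couple simulated physics to musical control (a hedged guess at the mechanism; the piece's actual mappings are not documented here) is a damped spring whose stretch drives a MIDI-style parameter and relaxes when the hand releases it:

    # Hedged sketch of one "stretchable" musical object: a damped spring
    # whose stretch maps to a MIDI-style control value. The mapping and
    # constants are illustrative assumptions, not the piece's design.

    class StretchableObject:
        def __init__(self, stiffness=20.0, damping=4.0):
            self.stretch = 0.0       # current displacement from rest
            self.velocity = 0.0
            self.stiffness = stiffness
            self.damping = damping

        def grab(self, displacement):
            """The laser-tracked hand pulls the object to a given stretch."""
            self.stretch = displacement

        def step(self, dt=0.02):
            """A released object springs back, with damping."""
            accel = -self.stiffness * self.stretch - self.damping * self.velocity
            self.velocity += accel * dt
            self.stretch += self.velocity * dt

        def control_value(self):
            """Map stretch in [0, 1] to a 0-127 controller (e.g. filter cutoff)."""
            return int(127 * max(0.0, min(self.stretch, 1.0)))

    obj = StretchableObject()
    obj.grab(0.8)                      # hand pulls the object
    for _ in range(10):
        obj.step()
        print(obj.control_value())     # parameter relaxes back toward 0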

Shall We Dance?

Kazuyuki Ebihara
ATR Media Integration & Communication
Research Lab
2-2 Hikaridai Seika-cho
Soraku-gun
Kyoto 631 Japan
ebihara@mic.atr.co.jp

Real-time 3D computer vision gives users control over both the movement and facial expression of a virtual puppet and the music to which the puppet "dances." Multiple cameras observe a person, and human silhouette analysis achieves real-time 3D estimation of human postures. Facial expressions are estimated from images acquired by a viewing-direction controllable camera, so that the face can be tracked. From the facial images, deformations of each facial component are estimated. The estimated body postures and facial expressions are reproduced in the puppet model by deforming the model according to the estimated data. All the estimation and rendering processes run in real time on PC-based systems. Attendees can see themselves dancing in a virtual scene as virtual puppets.
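
The silhouette-analysis step alone can be sketched very simply, as below: subtract a stored empty-scene image from each camera frame and threshold the difference. The full multi-view 3D posture estimation is far more involved; the threshold here is an illustrative assumption.

    import numpy as np

    # Hedged sketch of silhouette extraction by background subtraction;
    # only the first step of the pipeline described above.

    def silhouette(frame, background, threshold=30):
        """Boolean mask of pixels that differ enough from the empty scene.

        frame, background -- grayscale images as uint8 numpy arrays
        """
        diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
        return diff > threshold

    # Example with synthetic 4x4 images: one bright "person" pixel appears.
    background = np.full((4, 4), 100, dtype=np.uint8)
    frame = background.copy()
    frame[1, 2] = 200
    print(silhouette(frame, background).astype(int))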

 

