TransCAIP: Live Transmission of Light Field from a Camera Array to an Integral Photography Display


TransCAIP provides a real-time 3D visual experience by using an array of 64 cameras and an integral photography display with 60 viewing directions. The live 3D scene in front of the camera array is reproduced by the full-color, full-parallax auto-stereoscopic display with interactive control of viewing parameters.

Enhanced Life
This project demonstrates the potential of live 3D TV systems in a prototype system. The core technology is a fast and flexible data-conversion method from the multi-camera images to the integral photography format. Because the conversion method is applicable to general combinations of camera arrays and integral photography (and multi-view 3D) displays, it could be an essential technology for future 3D TV systems.

The overall goal is to develop a practical live 3D TV system that reproduces a full-color 3D video of a scene with both horizontal and vertical parallax in real time. The system gives users the perception of observing the 3D scene through a window, without requiring them to wear special glasses. The main technical goal is a fast and flexible data-conversion method between asymmetric input and output devices that runs in real time (more than five frames per second) on a single PC with GPGPU techniques and lets users interactively control the viewing parameters of the displayed 3D images, enhancing the 3D visual experience.

1. Live transmission of 3D scenes. TransCAIP transmits light fields [Levoy and Hanrahan 1996; Gortler et al. 1996] from an array of 64 cameras to an integral photography display with 60 viewing directions in real time. It enables users to observe a live 3D video of the scene with both horizontal and vertical parallax.

2. Real-time light-field conversion. To connect the asymmetric input and output devices, TransCAIP performs real-time light-field conversion between 64 input views of 320 x 240 pixels captured with the camera array and an integral photography image consisting of 60 views of 256 x 192 pixels. Using the 64 input views, it first renders 60 novel views corresponding to the viewing directions of the display by using an image-based rendering method [Taguchi et al. 2008] and then arranges the rendered pixels to produce an integral photography image. For generating high-quality novel views, this method estimates a view-dependent per-pixel depth map at each rendering camera viewpoint based on a layered representation. For real-time processing on a single PC, the conversion algorithm is fully implemented on a GPU with GPGPU techniques.
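The final arrangement step above can be illustrated with a small sketch. The interleaving pattern below is an assumption for illustration (the paper does not specify the lenslet layout): the 60 viewing directions are taken as a hypothetical 10 x 6 grid behind each lenslet, and each rendered view is 256 x 192 pixels.

```python
import numpy as np

# Hypothetical layout: 60 viewing directions arranged as a 10 x 6 grid
# behind each lenslet; each rendered view is 256 x 192 pixels.
DIRS_X, DIRS_Y = 10, 6          # 10 * 6 = 60 viewing directions
VIEW_W, VIEW_H = 256, 192       # resolution of each rendered view

def views_to_integral_photo(views):
    """Interleave per-direction views into one integral photography image.

    views: array of shape (DIRS_Y, DIRS_X, VIEW_H, VIEW_W, 3), i.e. one
    rendered RGB view per viewing direction.
    Returns an image of shape (VIEW_H * DIRS_Y, VIEW_W * DIRS_X, 3) in
    which the DIRS_Y x DIRS_X block under each lenslet position holds
    that lenslet's pixel for every viewing direction.
    """
    dy, dx, h, w, c = views.shape
    # (dy, dx, h, w, c) -> (h, dy, w, dx, c): lenslet-major, direction-minor
    ip = views.transpose(2, 0, 3, 1, 4)
    return ip.reshape(h * dy, w * dx, c)
```

In the actual system this rearrangement runs on the GPU alongside the view rendering, but the index mapping is the same: pixel (x, y) of direction (i, j) lands inside the block for lenslet (x, y).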

3. Interactive control of 3D viewing parameters. TransCAIP enhances users' 3D visual experience by allowing them to interactively control viewing parameters of the displayed 3D images. In the light-field conversion method, the rendering cameras are placed at regular intervals such that their viewing directions converge at a single point. The plane at the depth of this point is called the convergence plane, and it corresponds to the display plane of the integral photography display. Since objects near the display plane are reproduced with higher resolution than those farther from it [Hoshino et al. 1998; Zwicker et al. 2007], the system lets users set the convergence plane at a desired position in the target scene. The convergence plane likewise determines whether an object is reproduced in front of or behind the display plane. Moreover, users can control the amount of depth reproduced on the display by changing the interval of the rendering cameras, and can control which part of the scene is reproduced by changing the positions and view angles of the rendering cameras. All of this viewing-parameter control is performed interactively in software, without reconfiguring the hardware system.
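The geometry behind this control can be sketched as follows. This is a simplified, hypothetical helper, not the system's actual code: it places a one-dimensional row of rendering cameras (the real system uses a 2D arrangement) and aims them all at a single convergence point, so that changing `interval` scales the disparity between adjacent views (the reproduced depth range) and changing `convergence_depth` selects which scene plane lands on the display plane.

```python
import numpy as np

def converging_cameras(n, interval, convergence_depth, center=(0.0, 0.0)):
    """Place n rendering cameras on a line at z = 0, spaced by `interval`,
    all aimed at one convergence point at z = convergence_depth.

    Returns (positions, directions), each of shape (n, 3), with unit-length
    viewing directions. Hypothetical sketch for illustration only.
    """
    # Camera x-coordinates, centered on the convergence axis.
    xs = (np.arange(n) - (n - 1) / 2.0) * interval
    positions = np.stack([xs + center[0],
                          np.full(n, center[1]),
                          np.zeros(n)], axis=1)
    # Every camera looks at the same convergence point.
    target = np.array([center[0], center[1], convergence_depth])
    directions = target - positions
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    return positions, directions
```

Because the cameras exist only in software, adjusting `interval`, `convergence_depth`, or `center` re-aims the rendered views immediately; no physical camera in the array moves.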

Three-dimensional TV is a promising technology for providing a more natural and intuitive perception of 3D scenes than existing two-dimensional TV. In particular, live 3D TV systems, which transmit 3D visual information in real time, could have a significant impact on many applications in communication, broadcasting, and entertainment in the near future.

Yuichi Taguchi
The University of Tokyo

Takafumi Koike
The University of Tokyo, Hitachi Ltd.

Keita Takahashi
The University of Tokyo

Takeshi Naemura
The University of Tokyo