Grimage: Markerless 3D Interactions

Grimage combines multi-camera 3D modeling, physical simulation, and parallel execution for a new immersive experience. Put any object into the interaction space, and it is instantaneously modeled in 3D and injected into a virtual world populated with solid and soft objects. Push them, catch them, and squeeze them.

Enhanced Life

Various applications could benefit from this technology. One example is telepresence, which enables people and objects to be present in shared virtual spaces. Real-time 3D modeling of people and objects yields high-quality virtual clones, and it eases interaction between people in different physical locations who meet in a common virtual space (a social community, a workplace, a game, a learning and training environment, etc.).

By combining 3D modeling with real-time physical simulation, the system injects a complete model of the user into the simulation. Potential applications range from surgery planning to training for assembly tasks and critical situations such as earthquakes, fires, accidents, etc.

Goal

To combine computer vision, physical simulation, and parallelism to move one step toward the next generation of virtual reality applications.

This approach can also be combined with other systems. For example, Grimage could easily be complemented with a stereoscopic display or a haptic device such as the Spidar to further improve the sense of immersion.

Innovations

The core innovations are the following:
  • Integration of complex software components developed by various people. This has only been possible because the system relies on component-oriented, data-flow-oriented middleware (the FlowVR library) that enforces modular software development.

  • 3D modeling based on the exact polyhedral visual-hull algorithm for obtaining high-quality textured 3D models.

  • Physical simulations using Sofa software, which relies on a scene graph structure to organize and process the various models in a simulation. These models represent simulation components (deformable models, collision models, instruments, etc.).

  • Various parallelization levels: pipeline-level parallelization supported by the FlowVR library, a parallel 3D modeling algorithm also implemented with FlowVR, multi-display rendering implemented with FlowVR Render (a FlowVR extension defining a transport protocol for graphics primitives), and a GPU parallelization used by Sofa to accelerate some of its computations.
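The visual-hull idea behind the 3D modeling step can be illustrated with a much simpler discrete variant. Grimage uses the exact polyhedral visual-hull algorithm, which intersects silhouette cones analytically; the sketch below is instead a voxel-carving approximation, with hypothetical names and two toy orthographic cameras, that only demonstrates the underlying silhouette-intersection principle: a point belongs to the hull if its projection falls inside every camera's silhouette.

```python
import numpy as np

def voxel_visual_hull(silhouettes, projections, voxels):
    """Keep the voxels whose projection lies inside every silhouette.

    This is a discrete stand-in for the exact polyhedral visual hull:
    the hull is the intersection of the silhouette cones of all cameras.
    """
    hull = []
    for v in voxels:
        if all(sil[proj(v)] for sil, proj in zip(silhouettes, projections)):
            hull.append(v)
    return hull

# Two toy orthographic "cameras": one looking along z (it sees the x-y
# plane) and one looking along y (it sees the x-z plane).
top = np.zeros((4, 4), dtype=bool)
top[1:3, 1:3] = True      # silhouette observed in the x-y view
front = np.zeros((4, 4), dtype=bool)
front[1:3, 1:3] = True    # silhouette observed in the x-z view

voxels = [(x, y, z) for x in range(4) for y in range(4) for z in range(4)]
hull = voxel_visual_hull(
    [top, front],
    [lambda p: (p[0], p[1]), lambda p: (p[0], p[2])],
    voxels,
)
# hull is the 2x2x2 block of voxels consistent with both silhouettes
```

Adding more cameras only tightens the intersection, which is why multi-camera setups like Grimage's recover models accurate enough for physical interaction.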
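The scene-graph organization used by Sofa can be sketched in a few lines. The names below (`Node`, `Component`, `step`) are illustrative only, not Sofa's real C++ API; the sketch just shows the design choice the bullet above describes: simulation components (deformable models, collision models, instruments) hang off tree nodes, and each time step is a traversal that visits every component.

```python
class Component:
    """Placeholder for a simulation component (deformable model,
    collision model, instrument, ...)."""
    def __init__(self, name):
        self.name = name
        self.steps = 0

    def step(self, dt):
        self.steps += 1  # stand-in for integration / collision work


class Node:
    """A scene-graph node holding components and child nodes."""
    def __init__(self, name):
        self.name = name
        self.components = []
        self.children = []

    def add(self, item):
        target = self.children if isinstance(item, Node) else self.components
        target.append(item)
        return item

    def step(self, dt):
        # Depth-first traversal: every component is visited once per step.
        for c in self.components:
            c.step(dt)
        for child in self.children:
            child.step(dt)


root = Node("scene")
hand = root.add(Node("hand"))
hand.add(Component("deformable model"))
hand.add(Component("collision model"))

for _ in range(3):
    root.step(0.01)  # three simulation steps traverse the whole graph
```

Keeping all models in one tree makes it easy to add or remove objects at run time, which is exactly what virtual cloning requires when a freshly captured 3D model is injected into the running simulation.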
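The pipeline-level parallelization can also be sketched. FlowVR itself distributes modules across a cluster; the single-process sketch below, with hypothetical stage functions, only shows the shape: each stage (e.g. acquisition, 3D modeling, simulation) runs concurrently in its own thread, and stages exchange messages over FIFO channels, so different frames are processed in different stages at the same time.

```python
import queue
import threading

def stage(fn, inbox, outbox):
    """Run one pipeline stage: apply fn to each incoming message and
    forward the result. A None message shuts the stage down and is
    propagated so downstream stages stop too."""
    while True:
        msg = inbox.get()
        if msg is None:
            outbox.put(None)
            return
        outbox.put(fn(msg))

# Channels between the stages (FIFO queues stand in for FlowVR links).
cams, models, frames = queue.Queue(), queue.Queue(), queue.Queue()

threads = [
    # "3D modeling" stage: turns a captured image into a hull model.
    threading.Thread(target=stage, args=(lambda img: ("hull", img), cams, models)),
    # "simulation" stage: consumes the model of the previous stage.
    threading.Thread(target=stage, args=(lambda m: ("simulated",) + m, models, frames)),
]
for t in threads:
    t.start()

for i in range(5):          # feed five captured "images"
    cams.put(i)
cams.put(None)              # end of stream

out = []
while (f := frames.get()) is not None:
    out.append(f)
for t in threads:
    t.join()
# out == [("simulated", "hull", 0), ..., ("simulated", "hull", 4)]
```

Because each stage only waits on its input channel, frame n can be captured while frame n-1 is being modeled and frame n-2 is being simulated, which is what keeps the whole loop interactive.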

Vision

Users experience a sequence of three manipulations:
  • Interaction with virtual solid objects. Using their hands or real objects such as balls, users knock over a pyramid of virtual cubes.

  • Interaction with soft objects that users can try to catch and squeeze.

  • Virtual cloning: at a given moment, the real elements in the interaction space are captured in a 3D snapshot. This model is then turned into a soft element that users can play with. For example, users can virtualize their hands and then squeeze the resulting virtual hands.

Contact

Bruno Raffin
INRIA
bruno.raffin (at) imag.fr

Contributors

Jérémie Allard
Sim Group - MGH/CIMIT

Clement Ménier
INPG

Edmond Boyer
François Faure
UJF