SIGGRAPH 2007 - Liming Zhao, report from Univ. Penn Student Chapter
Although I had been told about San Diego's magnificent weather and ocean views, I was still enchanted by its cool ocean breeze while enjoying a wonderful Italian dinner in Little Italy. I knew many more surprises were in store when SIGGRAPH 2007 opened on Sunday, August 5th, at the San Diego Convention Center.
The real excitement began with my first visit to the Emerging Technologies exhibition. Talking about HDTV may be old news by now, but how about HDTV with 3D images? Better yet, HDTV with 3D images and no polarized glasses? This DLP technology offers a 3D-ready HD solution: two million hinge-mounted microscopic mirrors, each about one-fifth the thickness of a human hair, alternate between left-eye and right-eye images at 120 Hz to produce stereo images for a viewer standing in front of the monitor.

If that does not surprise you, consider Microsoft's intelligent table, Surface. It is a table with an LCD display that offers a multi-touch screen and can recognize objects placed on its surface, enabling multi-user interaction. Imagine this: you want to demonstrate your product at your favorite bar in front of potential clients. With intuitive finger motions on Surface, you can easily rotate, scale, and orbit a 3D model in front of them. If a client is unclear on the details, he can join the interaction or take over control himself, since Surface supports multiple simultaneous users. Satisfied with the product demo, you decide to order some food and drinks; doing so takes just a few taps on the corresponding menu descriptions. In the end, you simply place your credit card on Surface and select a tip percentage to pay the bill, as Surface is network-ready for transactions.
After previewing the technologies that might change our lifestyle, let's now take a closer look at the products already making changes on the Exhibition Floor. As in previous years, SIGGRAPH once again hosted the year's largest, most comprehensive exhibition of products and services for the computer graphics and interactive techniques marketplace. This year's exhibition focused on a variety of areas, each of which is explored below.
Motion Capture: One of this year's highlights was the diversity of motion capture systems. Optical approaches, like Vicon's solution, use calibrated high-speed cameras to capture reflective markers on the performer's suit. Each camera photographs the motion from a different angle (to minimize occlusion problems), and these images are registered to compute the markers' 3D positions through a stereo vision process. Given the markers' 3D positions, joint angles are computed and used in real time to drive a virtual character, such as a soldier in a game or a monster in movie special effects. Variations of this approach were also demonstrated that reduce cost, fit a particular skeleton structure, or eliminate the need to wear markers.

Organic Motion's system is one example of markerless motion capture. Instead of tracking markers, it works directly on images of the performer. Using stereo vision techniques, it reconstructs a full-body 3D model and maps skin and clothing textures onto that model, hence providing 3D motion capture without markers. However, as this system is only their first public prototype, it still suffered from problems with occlusion, the level of detail of the human model, and interaction with the environment.

Unlike the optical approaches that Vicon and Organic Motion presented, Xsens and Animazoo presented their inertial motion capture suits. The technology behind these suits is a set of inertial sensor cubes, one on each body limb, each offering 3-axis orientation and 3-axis acceleration measurements. Given the orientation of each joint of the body and information about contact joints (e.g., feet, hands, or hips), the system can reconstruct the body pose in real time and map it to a virtual character just as the optical approaches do. In this year's motion capture exhibition, full-body capture systems were not the only ones attracting audience attention.
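The stereo-vision step described above, recovering a marker's 3D position from two calibrated camera views, can be sketched as classic linear (DLT) triangulation. The cameras, intrinsics, and marker below are synthetic values invented for illustration; this is a minimal sketch of the general technique, not any vendor's actual pipeline.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a marker's 3D position from two calibrated views via
    linear (DLT) triangulation. P1, P2 are 3x4 camera projection
    matrices; x1, x2 are the marker's 2D pixel observations."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous least-squares solution: the right singular vector of A
    # with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # de-homogenize

# Two synthetic cameras: identical intrinsics, the second shifted one
# unit along x (a made-up rig, purely for illustration).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

def project(P, X):
    """Project a 3D point through camera P to pixel coordinates."""
    h = P @ np.append(X, 1.0)
    return h[:2] / h[2]

marker = np.array([1.0, 2.0, 10.0])   # ground-truth marker position
recovered = triangulate(P1, P2, project(P1, marker), project(P2, marker))
```

With noiseless synthetic observations the recovered position matches the ground truth exactly; a real system averages many cameras and noisy detections, but the same registered-views principle applies.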
Facial capture systems also held the spotlight. Companies like Mova take advantage of reflective makeup powder on the model's face and calibrated high-definition cameras to reconstruct face models and map motion and facial textures in real time. Their solution has been successfully applied to games and movies.
Graphics Cards and Parallel Computing: One of the hottest topics in recent graphics research is harnessing the power of the Graphics Processing Unit (GPU) for parallel computing. AMD/ATI and NVIDIA have been frequent guests at SIGGRAPH exhibitions, and this year both brought their latest cutting-edge graphics cards, which they claim unleash the ultimate power of rendering, physical simulation, and general-purpose parallel computing. Companies like Mercury Computer Systems and RapidMind Inc. presented their proprietary parallel computing solutions, providing customers with powerful, easy-to-use interfaces to multicore systems that greatly increase their computing power.
Apart from the newly emerged technologies presented by the companies above, there were also a number of interesting technical talks and product presentations given by 3D modeling companies like Autodesk and Softimage, and by FX and animation studios like ILM, Imageworks and Pixar.
Another major SIGGRAPH component is the research paper presentations. As a member of the UPenn Student Chapter, whose goal is to provide its members with education, tools, and exposure to professional graphics and animation, I focused my attendance on character-animation-related paper presentations. Adrien Treuille et al. from the University of Washington presented the paper "Optimal Character Animation with Continuous User Control". This is a kinematic approach that takes advantage of carefully organized motion capture data to produce smooth, near-optimal, real-time character animation driven by user controls under environmental constraints such as collision avoidance. Such an approach can be applied to synthesizing realistic human motions for computer games. Alla Safonova et al. from Carnegie Mellon University presented the paper "Construction and Optimal Search of Interpolated Motion Graphs". This physically based optimization approach solves for a sequence of poses over a time window that satisfies user constraints like path following, obstacle avoidance, and speed/direction requirements, while preserving the naturalness and physical realism of the motion. Such an approach is important for character simulation, virtual training, and possibly animated features. Other interesting topics included automatic rigging and animation of a given character mesh, and database technologies for motion capture data.
In summary, this year’s SIGGRAPH conference was a success in terms of inspiring new research ideas, presenting the latest industrial technologies, and providing magnificent scenery and dining experiences in San Diego.