Fast-Forward Papers Session
Imagine that you have 50 seconds to tell eager listeners about your latest discoveries. What methods would you use? Would you sing a song? Would you perform a magic trick? Would you go so far as to take your shirt off? In the Fast-Forward Papers Preview, this sort of behavior is perfectly normal and even expected. Presenters were given no more than 50 seconds to present their papers. Anything more, and Markus Gross, the Papers program chair, promised to personally intervene. The result: fast-paced, in-your-face, off-the-cuff, in-the-buff, hilarious, and sometimes shocking presentations.
Traveling alone, in pairs, and even in triplets, the presenters casually walked, jogged, or ran up to the podium in the hope of impressing the audience by (nearly) any means necessary. Ever mindful of the time constraint, presenters relied on a vast assortment of methods to effectively portray their recent developments. Many tried to condense their 30-minute paper presentation into the allotted 50 seconds, while others used their wits to grab the audience's attention.
Eftychios Sifakis (Automatic Determination of Facial Muscle Activations From Sparse Motion Capture Marker Data), a self-diagnosed public-speaking phobic, walked on stage with only a bag of pretzels. He munched away while letting his 50 seconds of slides tell us how he wrote software to get around his little phobia. Given a 3D scan of his face, his software can accurately animate the facial model to portray human emotion and speech.
A few presenters ventured to the extreme of attention-getting techniques. Bo Sun's (A Practical Analytic Single Scattering Model for Real-Time Rendering) presentation consisted solely of sample images from his real-time rendering technique. The images were quite stunning; in fact, they were real-time renderings of bikini models. There were promises of further experimental results at the actual presentation.
The two gentlemen presenting Style Translation for Human Motion gave a hint of what their papers session would entail by first pantomiming a basic human gait, then quickly changing costumes to imitate the stealthy prowl of a ninja, and finally changing costume again to become a purple-haired supermodel walking the catwalk.
The man presenting Interactive Collision Detection Between Deformable Models surprised everyone when he swiftly removed his shirt in order to demonstrate the differences between collisions of surfaces on his shirt and those on his skin. Bare-chested, he left us with the questionably promising statement, “Come if you want to see more.”
The most common, yet most varied, method of making a presentation creative was making the piece musical in some way. (Please note that the term musical is used in the broadest sense possible.) The lyrical performances this year included an Ode for a Mesh, a rap about Line Drawing from Volume Data, a Dr. Seuss-like rhyme on texture optimization, and a groovy homage to Ken Perlin, by Rob Cook, on Wavelet Noise. A few other clever gimmicks included a couple of magicians who could convert 2D images into 3D models and see through cards, and a timely phone call from mom ("Yes mom, it's so easy to use even you can use it.").
Many of the presentations had memorable quotes, either scripted or extemporaneous. Those who were listening attentively enough were privy to some great ones. The presenter of Large Mesh Deformation Using the Volumetric Graph Laplacian extended the promise, “Trust me, it works!” The most humorous phrases, though, were unplanned presenter utterances, such as the well-timed "Oh no!" and "Pardon my French..." My personal favorite was from the presenter of Dynamic Responses for Motion Capture Animation who, after displaying animated reactions to various physical attacks (such as being punched in the face or kicked in the butt), told us that if we don't come, it will “be a real slap in the face.”
With everyone trying hard to have their paper noticed, it is easy to forget that this session was a Papers program overview and not live theater. Either way, there were some papers that really caught my interest. I was highly impressed with the introduction of the Mesh-Based Inverse Kinematics system (which describes the ability to manipulate a mesh in a realistic manner without the use of a bone system or any bone-related rigging controls). My appetite was also whetted with the promise of seafood inverse kinematics and chocolate inverse kinematics. The presenters of Floral Diagrams and Inflorescences: Interactive Flower Modeling Using Botanical Structural Constraints effectively showed just how easily one could create a flowering plant in 3D with a few simple tools; I'm sure even Mother Nature would be impressed. Fourier Slice Photography offers the unique and useful ability to adjust the focus of a snapshot after it has already been taken (made possible by a special camera that captures a light field). Also worth mentioning is Dual Photography, as presented by “Billy the Great”, which enables a camera to display an image that is hidden from its direct view. Finally, As-Rigid-As-Possible Shape Manipulation allows the user to cleanly manipulate a two-dimensional image by moving a few points on the screen. This was promised to be very helpful when playing with one's teddy bear.
Overall, the Fast-Forward Papers Preview was an incredibly interesting event, extremely entertaining with some peak moments of unquenchable hilarity. As expected, humor was the most successful way of commanding the audience's attention. However, success also rested heavily on the ingenuity of the project itself, along with the clarity of its explanation. When all is said and sung, the presenters who balanced clever presentation methods with useful information about their research were the ones most likely to find themselves with a full room during their full-length papers presentation.
David Knott