Spotlight on SIGGRAPH: S3 Provides the Tools for Success

By Deja Collins, in collaboration with S3 Chair Corinne Price

Editor's note: Spotlight on SIGGRAPH is a recurring feature, established to shed light on the various committees and avenues of volunteering within ACM SIGGRAPH and its annual conferences.

Established in 2007, SIGGRAPH Student Services, colloquially known as S3, is an ACM SIGGRAPH committee whose primary mission is to provide additional value to the student members of ACM SIGGRAPH. The Student Services Committee also maintains continuity and institutional memory for the student volunteer and intern programs at SIGGRAPH and SIGGRAPH Asia, and collaborates with other SIGGRAPH entities (conferences, committees, etc.) on issues that affect student members. S3 currently offers (and is in the process of expanding) three key programs: resume and reel reviews, mentoring for ACM SIGGRAPH student members (known as "Mentor Me"), and a series of webinars and talks.

S3’s flagship program is a resume and reel review service known as "S3R3." S3 understands that resumes and reels are a constant source of confusion and frustration for students and new graduates, and aims to remove some of the guesswork by providing professional advice and opinions to ACM SIGGRAPH student members. Students are matched with industry professionals based on their desired field of work and job skills, and submit a resume and/or reel for review and expert advice. S3R3 is offered on-site during both the SIGGRAPH and SIGGRAPH Asia conferences, as well as virtually throughout the year. Below are the profiles of three prominent S3R3 reviewers: Patrick Coan, Murad Currawalla and Vince De Quattro.

Patrick Coan: Coan holds a Bachelor of Science in IT-Multimedia Digital Entertainment and Game Design from ITT Technical Institute and works as a production artist in the field of interactive entertainment. He has freelanced as a web designer, game character designer and animator for film. Some of his credits include lead animation in "Madstreak," character and environment art direction for "Infinite Shooter," and design work for the Cascade chapter of ACM SIGGRAPH.

Murad Currawalla: Motion graphics artist Murad Currawalla holds a BFA in Digital Media from the Otis College of Art and Design, and a Bachelor of Commerce from HR College of Commerce and Economics. Currawalla has professional experience in animation, compositing and rotoscoping. Additionally, he owns a VFX (visual effects) venture in Mumbai, India. Known as The CG Lab, the business has worked on music videos, commercials, and graphic design (just to name a few).

Vince De Quattro: For the majority of his career, De Quattro was a Technical Director for Industrial Light and Magic, working on five Academy-Award-nominated films ("Star Wars" Episodes I & II, "Pirates of the Caribbean," "Mighty Joe Young" and "Pearl Harbor"). Prior to ILM, De Quattro was a technical director and animator for Warner Brothers Digital Studios, Robert Greenberg Studios LA and Sony High Definition Television Center in Culver City. He holds a Bachelor's degree in Fine Arts and a Master's degree in Computer Animation in Film from the University of Southern California.

S3 is in the process of expanding its one-on-one mentoring program, Mentor Me. Mentor Me matches students with professionals who have agreed to offer guidance and insight into their respective fields. The mentor sessions are conducted by email, phone and, in some cases, face-to-face chats, typically during the SIGGRAPH or SIGGRAPH Asia conferences when the community comes together. S3's hope is that these mentorships turn into long-lasting professional connections for the members. To date, S3 has been able to match Student Volunteers (SVs) from the SIGGRAPH conferences, but is planning to expand to all interested ACM SIGGRAPH student members during the year.

S3’s final offering is a series of informative webinars and talks that offer advice, tips and insight on everything from careers to technical knowledge. The webinars are conducted throughout the year, which allows many to benefit from the wisdom of the high-profile speakers and mentors. Previous sessions have covered topics such as making the most of attending the SIGGRAPH conference, and unlocking information about iOS game development, animation, and digital media.

Student volunteers at SIGGRAPH 2014

Student Volunteers hard at work at SIGGRAPH 2014. Photo by John M. Fujii, Copyright © 2014 ACM SIGGRAPH.

Beyond its student programs, S3 also maintains the body of knowledge for the Student Volunteer program in the form of the Student Volunteers Chair Handbook. As ACM SIGGRAPH continues to expand and provide more conferences abroad, such as SIGGRAPH Asia, there is occasionally the need to create Student Volunteer committees like the one at the main SIGGRAPH conference. When this need arises, S3's year-round committee provides support and institutional knowledge on how to get set up. This year, S3 provided on-site support to the SIGGRAPH Asia Student Volunteer program, and headed up SIGGRAPH Asia versions of S3R3 and Mentor Me for student members at the conference.

SIGGRAPH Student Services operates year-round, and strives to provide professional assistance for all ACM SIGGRAPH student members. S3 Chair Corinne Price welcomes suggestions from the community on potential additional benefits. "As S3 continues to grow and expand our offerings," she said, "we are always seeking input from student members and professionals throughout the community to know what services will most benefit you. Please do not hesitate to reach out to myself or one of the committee members if you have a great idea or something you would like to see.”

Through the programs S3 offers, student members receive valuable insight and critiques from industry experts, which equips them with many necessary tools for success in their occupational pursuits.

If you would like to express interest in volunteering on the S3 committee, please fill out the ACM SIGGRAPH volunteer form. If you would like to volunteer to be a mentor, reel reviewer or assist the committee in any other capacity, please contact the Student Services Chair directly.

Spotlight on Research: Moving Beyond MoCap

On December 6, 2014, researchers from the Max Planck Institute for Intelligent Systems in Tübingen, Germany, presented a study entitled MoSh: Motion and Shape Capture from Sparse Markers. Their research, published in the SIGGRAPH Asia 2014 edition of ACM Transactions on Graphics, details a new method of motion capture that allows animators to record the three-dimensional motion of human subjects and digitally "retarget” that movement onto other body shapes.

As opposed to current motion capture (mocap) technology, MoSh doesn't require extensive hand animation to complete the mocap process. Instead, it uses a sophisticated mathematical model of human shape and pose to compute body shape and motion directly from 3D marker positions. This innovative technique allows mocap data to be transferred to a new body shape, no matter the variation in size or build. To illustrate the capabilities of the system, its inventors captured the motion of a sleek female salsa dancer and swapped her body shape with that of a hulking ogre. The motion transferred smoothly between the two, resulting in an unexpectedly twinkle-toed block of a beast.
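The core fitting idea can be sketched in a few lines. The example below is a toy stand-in, not the actual MoSh system: the real method fits a learned, nonlinear body model (optimizing shape and pose jointly), whereas here a hypothetical linear marker model stands in so the parameter-recovery step can be shown with ordinary least squares.

```python
import numpy as np

# Toy illustration (not the actual MoSh model): MoSh-style methods fit the
# parameters of a body model so that its predicted marker positions match
# the observed 3D markers. We mimic that with a tiny linear model:
#   predicted_markers(beta) = mean_markers + basis @ beta
rng = np.random.default_rng(0)

n_markers = 47                                     # MoSh works from a sparse marker set
mean_markers = rng.normal(size=(n_markers * 3,))   # "average body" marker layout (hypothetical)
basis = rng.normal(size=(n_markers * 3, 10))       # 10 hypothetical shape directions

true_beta = rng.normal(size=10)                    # unknown body-shape parameters
observed = mean_markers + basis @ true_beta        # simulated, noiseless mocap observation

# Recover the shape parameters by least squares, as a stand-in for the
# nonlinear optimization the real system performs over shape AND pose.
beta_hat, *_ = np.linalg.lstsq(basis, observed - mean_markers, rcond=None)
```

With noiseless synthetic markers and a well-conditioned basis, the recovered `beta_hat` matches `true_beta`; the paper's contribution is making this work with a realistic body model and real, noisy markers.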

Devised by a team of researchers under the direction of Dr. Michael J. Black, MoSh is intended to simplify the creation of realistic virtual humans for games, movies, and other applications while reducing animation costs. According to Naureen Mahmood, one of the co-authors of the study, MoSh throws the gates to motion capture wide open for projects of all sizes and budgets. “Realistically rigging and animating a 3D body requires expertise,” said Mahmood. “MoSh will let anyone use motion capture data to achieve results approaching professional animation quality.”

Because MoSh does not rely on a skeletal approach to animation, the details of body shape — such as breathing, muscle flexing, fat jiggling — are retained from the mocap marker data. Current methods discard such details and rely on manual animation techniques to recreate them after the fact.

“Everybody jiggles,” said Black. “We were surprised by how much information is present in so few markers. This means that existing motion capture data may contain a treasure trove of realistic performances that MoSh can bring to life.”

For more details on MoSh, download the full paper from the ACM Digital Library. Until November 2015, MoSh: Motion and Shape Capture from Sparse Markers is available to the public at no cost through open access (link below).

Open access to MoSh: Motion and Shape Capture via ACM SIGGRAPH

Direct link to MoSh: Motion and Shape Capture from Sparse Markers in the Digital Library (requires ACM SIGGRAPH membership, ACM membership or a one-time download fee to access)

Open access to additional conference content from SIGGRAPH 2014 and SIGGRAPH Asia 2014

President Obama Lands the First 3D Presidential Portrait

By Kristy Barkan

In his 2013 State of the Union address, President Obama made room in his speech to talk about a subject that some may have found an odd choice for a presidential address: 3D printing. His comments on 3D printing were positive, noting that the technology "has the potential to revolutionize the way we make almost everything." A little over a year later, President Obama's prediction was proven true — and in a rather personal way.

This past June, computer graphics experts from the Smithsonian and the USC Institute for Creative Technologies landed at Pennsylvania Avenue to create a 3D scan of President Obama. Not just any 3D scan, but the highest-resolution digital model ever created of a head of state.

Günter Waibel, Director of the Smithsonian 3D Digitization Program Office, led the team that conducted the scan. In the White House's behind-the-scenes video of the project (below), Waibel is seen brandishing a life-sized plaster mask of Abraham Lincoln's face. "The inspiration for creating [Obama's] portrait," Waibel explained in the video, "comes from the Lincoln life mask in our National Portrait Gallery. Seeing the [Lincoln mask] made us think — what would happen if we could do that with a sitting president, using modern technologies and tools?"

President Obama's 3D scans

The resulting 3D models of President Obama's head. Image: White House

"This isn't an artistic likeness of the president," said Adam Metallo, 3D Digitization Program Officer at Smithsonian, as he sat before a computer displaying a seemingly perfect digital copy of President Obama's head. "This is actually millions upon millions of measurements that create a 3D likeness that we can now 3D print — and make something that's never been done before."

The models of President Obama were used to 3D print a life mask and a presidential bust that was unveiled at the first-ever White House Maker Faire in June. The data and printed models will be added to the Smithsonian’s National Portrait Gallery collection. 

Tom Kalil, Deputy Director for Policy at the White House Office of Science and Technology Policy, shared his response to the scan in the White House making-of video. Kalil was enthusiastic about the project, especially how it served to highlight the value of 3D scanning and printing technologies. "The President getting his likeness scanned, as cool as that is, is also about a broader trend that's going on," he said. "The third industrial revolution … the combination of the digital world and the physical that is allowing students and entrepreneurs to be able to go from idea to prototype in the blink of an eye."

Obama in the mobile Light Stage

President Obama sits in front of the mobile Light Stage. Image: White House

Paul Debevec, Associate Director of Graphics Research at USC ICT, and former ACM SIGGRAPH Vice President, was part of the team that conducted the landmark presidential scan. "Ten years ago, it was barely possible to think this could be done," said Debevec. Since 2001, Debevec has led the Light Stage project at USC ICT, a series of increasingly advanced scanning and lighting rigs that allow users to collect a tremendous amount of geometry and illumination data from human subjects. The data gathered with the Light Stages is used to create believable digital representations of the subjects scanned.

"We used similar Light Stage technology to the polarized gradient illumination scanning process used for the Digital Emily project shown at SIGGRAPH 2008, and the Digital Ira project shown at SIGGRAPH 2013," said Debevec. "But we had to quickly create a mobile rig which could ship to Washington, squeeze through doorways, and scan very quickly. So we put 50 of our custom light sources and fourteen cameras from our lab’s Light Stage X system onto a rolling gantry, and framed the cameras so tightly that if the subject was in frame, the subject was in focus."

Debevec was impressed with the results produced by the mobile Light Stage, but at the time, he may have been too busy enjoying the experience of spending an afternoon with the President of the United States to notice. "It was a great honor to be invited to the team by Günter Waibel," said Debevec, "and President Obama was not only a great subject — but he was genuinely interested in the technology." 

More about the scanning and modeling process from Paul Debevec:

"Jay Busch and Xueming Yu from USC ICT wired and programmed the light sources. Graduate student Paul Graham programmed the scanning sequence, and Graham Fyffe designed a simplified set of lighting patterns which allowed us to perform a scan in just over a second. The Smithsonian’s Vince Rossi and Adam Metallo then used hand-held structured light scanners to record the rest of President Obama’s head and shoulders for the 3D printed bust. Back at the Smithsonian offices, Graham Fyffe solved for the 3D shape of the face using techniques from our Ghosh, et al. SIGGRAPH Asia 2011 paper “Multiview Face Capture using Polarized Spherical Gradient Illumination” at better than a tenth of a millimeter resolution, along with diffuse, specular, and surface normal maps from the polarized lighting. 

"The Smithsonian team integrated the Light Stage model of the face with their model from the structured light scans and, with generous support from Autodesk and 3D Systems, printed the model life-size in 3D using Selective Laser Sintering to create the 3D Presidential Portrait.

"Being part of the SIGGRAPH community inspires us to do our very best work and to push our techniques further every day, and without that — I don’t think we could have played the role we did in this project. "
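The gradient-illumination normal recovery Debevec describes can be illustrated with a heavily simplified, idealized sketch. This is not the team's pipeline: the published method handles polarization, specular/diffuse separation, and multiview geometry. The toy below assumes only an idealized diffuse surface lit by a full-on pattern and by per-axis gradient patterns of the form (1 + w_k)/2, in which case each pixel's normal falls out of simple image ratios.

```python
import numpy as np

# Toy, idealized sketch of gradient-illumination normal decoding (in the
# spirit of the polarized spherical gradient technique cited above, but
# ignoring polarization and specular reflection entirely).
def normals_from_gradients(i_full, i_x, i_y, i_z, eps=1e-8):
    """i_full, i_x, i_y, i_z: HxW images under full-on and X/Y/Z gradient lighting."""
    # If the gradient pattern along axis k has intensity (1 + w_k) / 2 over
    # the sphere of incoming directions w, then for an ideal diffuse pixel
    # I_k / I_full = (1 + n_k) / 2, so n_k = 2 * I_k / I_full - 1.
    n = np.stack([2.0 * i_x / (i_full + eps) - 1.0,
                  2.0 * i_y / (i_full + eps) - 1.0,
                  2.0 * i_z / (i_full + eps) - 1.0], axis=-1)
    # Normalize per pixel; any uniform scale factor cancels out here.
    return n / (np.linalg.norm(n, axis=-1, keepdims=True) + eps)

# Synthetic check: a flat 4x4 patch whose true normal is (0, 0, 1).
true_n = np.array([0.0, 0.0, 1.0])
albedo = np.full((4, 4), 0.8)
i_full = albedo
i_x = albedo * (1 + true_n[0]) / 2
i_y = albedo * (1 + true_n[1]) / 2
i_z = albedo * (1 + true_n[2]) / 2
est = normals_from_gradients(i_full, i_x, i_y, i_z)
```

Note how the albedo divides out in the ratio, which is part of why so few lighting conditions carry so much per-pixel shape information.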

Visual Effects Society Honors Ridley Scott with Award

Today, the Visual Effects Society (VES) announced director-producer Ridley Scott as the recipient of this year's VES Lifetime Achievement Award. The award, which will be presented at the 13th Annual VES Awards in early 2015, is given to individuals who have amassed "an outstanding body of work that has significantly contributed to the art and/or science of the visual effects industry." The roster of past recipients of the award is a who's who of FX filmmaking, and includes George Lucas, Dennis Muren, Steven Spielberg, James Cameron and John Dykstra (among others).

Ridley Scott may not be a VFX artist, but his films have raised the bar for visual effects on the silver screen. "Ridley's impact upon the visual effects and technical form is unparalleled," said Jeffrey Okun, VES Board Chair. "He has given us a body of groundbreaking work to aspire to."

Scott began his directorial career in 1977 with "The Duellists." This fledgling effort landed him the Best First Film Award at the Cannes Film Festival. On the heels of his first film, Scott directed the sci-fi thriller "Alien," an absolute blockbuster. Shortly after, in 1982, he helmed the cult classic "Blade Runner." Scott's most recent directorial credits include "Prometheus" and "The Counselor." His much-anticipated film "Exodus: Gods and Kings," starring Christian Bale, will be released this December.

“The best filmmaking has always been the result of collaboration between artists, craftspeople and technicians, both in front and behind the camera,” said Ridley Scott. “Over the years I have been very fortunate to work on films that are visual at their core and thus I have always been immensely reliant on the expertise of our visual effects teams. To be honored by the Visual Effects Society with this Lifetime Achievement Award is indeed extremely gratifying.”

Perfecting Destruction: Racing Game Focuses on the Crash

By Cody Welsh

Ask any serious gamer about some of the coolest developments in the field, and you’re bound to run across Bugbear Entertainment’s “Next Car Game.” Bugbear is best known for its “FlatOut” series, which is much in the same vein as their latest creation: a series that fills the memories of gamers with scenes you might think resulted from placing a derby car in a very large blender. Now, the spiritual successor to “FlatOut” continues the tradition by implementing various new technologies to create a conglomeration of beautiful carnage.

The main component behind the graphical finesse in “Next Car Game” is the deformation of vehicles placed in the game engine. Plenty of other games have had this idea, but not to this degree; unlike projects such as BeamNG, which focus more on the realism of collisions, Bugbear has decided to make everything look a little “prettier.” The engine used to meld cars into the shape of whatever they crash into is called ROMU, the same technology Bugbear has used since 2000, though it now likely bears little resemblance to the original product. The name ROMU, a developer explained, is Finnish for “scrap, junk, or wreck.”

Soft-body simulation is utilized for most of the vehicle destruction, with additional “plates” added to the outermost regions to more closely simulate what would happen in a real collision. The bending and buckling of the chassis is handled by this component of ROMU, while the objects on the car detach more easily (though they are also subject to deformation in the meantime). With the advent of faster processors, it is that much easier to provide a higher level of detail, at a faster rate, than in times past, and the result is astonishing.
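The "meld cars into the shape of whatever they crash into" idea can be sketched as plastic (permanent) deformation. The snippet below is a hypothetical toy, not Bugbear's ROMU engine: vertices that penetrate a colliding surface are pushed back to it, and the displacement is kept rather than sprung back, which is what leaves a lasting dent.

```python
import numpy as np

# Toy plastic-deformation sketch (illustrative only, not the ROMU engine):
# mesh vertices that pass through a collision plane are pushed back toward
# it and stay displaced, so the body "takes the shape" of what it hit.
def crush(vertices, plane_point, plane_normal, stiffness=1.0):
    """Plastically deform vertices that penetrate a collision plane.

    stiffness in (0, 1]: 1.0 pushes penetrating vertices fully back to the
    plane; smaller values leave a deeper permanent dent.
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    depth = (vertices - plane_point) @ n      # signed distance; negative = inside obstacle
    penetrating = depth < 0
    # Displace offenders back along the surface normal and keep the result
    # (plastic, not elastic), so the dent persists after the collision.
    vertices[penetrating] += (-stiffness * depth[penetrating])[:, None] * n
    return vertices

# Two vertices hitting a wall at x = 0 facing +x: one has poked 0.2 units
# through the wall, the other never touched it.
verts = np.array([[-0.2, 0.0, 0.0],
                  [ 0.5, 0.0, 0.0]])
crush(verts, plane_point=np.zeros(3), plane_normal=np.array([1.0, 0.0, 0.0]))
```

A real engine layers springs, breakable attachments (the detachable "plates" mentioned above), and chassis constraints on top of this basic correct-and-keep step.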

If Bugbear were to stop at exactly the point it had reached before, this new (albeit “spiritual”) successor to “FlatOut” would be flat and predictable. Realizing this, the development team continues to implement additional features, never overlooking graphical fidelity.

There is much to explore in “Next Car Game”: the car is handled differently depending on whether it is treated as a “plane on a plane” or whether all the tires are simulated individually, for example. The game includes numerous graphical touches that enhance its realism, such as smoke rising from the tires when the vehicle drifts around an opposing car, and the rear bumper rattling and springing loose if it’s partially damaged.

New additions are announced on the Next Car Game Blog fairly regularly. This month, developers spoke about upcoming additions to the game: “First and foremost of future additions is reworked tire physics,” one Bugbear developer wrote. “It’s no small thing, for the tire physics affect every single thing in the gameplay – the way your car handles is tied to the tires, the crashes, slides, swerves… everything.”

At the same time, physics can only explain so much about a game that looks equally impressive in a still image. The product contains much of what might be expected from a commercially produced game: anti-aliasing, dynamic lighting, particle systems, tactile use of bump maps, and photographic source material as often as possible. The full roster of technologies used might be hard to keep up with, but nobody who plays games regularly (and has a sufficient graphics card) is likely to complain.

The development of “Next Car Game” was launched by a Kickstarter campaign, and the game is clearly devoted to its fans. Which is a good thing: the Kickstarter development model is not new, and the consequences of failing to deliver on one’s promises can be catastrophic. Luckily, Bugbear seems up to the challenge, delivering a stunning product despite the fact that it isn’t even complete yet.