President Obama Lands the First 3D Presidential Portrait

By Kristy Barkan

In his 2013 State of the Union address, President Obama made room in his speech to talk about a subject that some may have found an odd choice for a presidential address: 3D printing. His comments on 3D printing were positive, noting that the technology "has the potential to revolutionize the way we make almost everything." A little over a year later, President Obama's prediction was proven true — and in a rather personal way.

This past June, computer graphics experts from the Smithsonian and the USC Institute for Creative Technologies landed at Pennsylvania Avenue to create a 3D scan of President Obama. Not just any 3D scan, but the highest-resolution digital model ever created of a head of state.

Günter Waibel, Director of the Smithsonian 3D Digitization Program Office, led the team that conducted the scan. In the White House's behind-the-scenes video of the project (below), Waibel is seen brandishing a life-sized plaster mask of Abraham Lincoln's face. "The inspiration for creating [Obama's] portrait," Waibel explained in the video, "comes from the Lincoln life mask in our National Portrait Gallery. Seeing the [Lincoln mask] made us think — what would happen if we could do that with a sitting president, using modern technologies and tools?"

President Obama's 3D scans

The resulting 3D models of President Obama's head. Image: White House

"This isn't an artistic likeness of the president," said Adam Metallo, 3D Digitization Program Officer at Smithsonian, as he sat before a computer displaying a seemingly perfect digital copy of President Obama's head. "This is actually millions upon millions of measurements that create a 3D likeness that we can now 3D print — and make something that's never been done before."

The models of President Obama were used to 3D print a life mask and a presidential bust that was unveiled at the first-ever White House Maker Faire in June. The data and printed models will be added to the Smithsonian’s National Portrait Gallery collection. 

Tom Kalil, Deputy Director for Policy at the White House Office of Science and Technology Policy, shared his response to the scan in the White House making-of video. Kalil was enthusiastic about the project, especially how it served to highlight the value of 3D scanning and printing technologies. "The President getting his likeness scanned, as cool as that is, is also about a broader trend that's going on," he said. "The third industrial revolution … the combination of the digital world and the physical that is allowing students and entrepreneurs to be able to go from idea to prototype in the blink of an eye."

Obama in the mobile Light Stage

President Obama sits in front of the mobile Light Stage. Image: White House

Paul Debevec, Associate Director of Graphics Research at USC ICT and former ACM SIGGRAPH Vice President, was part of the team that conducted the landmark presidential scan. "Ten years ago, it was barely possible to think this could be done," said Debevec. Since 2001, Debevec has led the Light Stage project at USC ICT, a series of increasingly advanced scanning and lighting rigs that collect a tremendous amount of geometry and illumination data from human subjects. The data gathered with the Light Stages is used to create believable digital representations of the people scanned.

"We used Light Stage technology similar to the polarized gradient illumination scanning process used for the Digital Emily project shown at SIGGRAPH 2008, and the Digital Ira project shown at SIGGRAPH 2013," said Debevec. "But we had to quickly create a mobile rig which could ship to Washington, squeeze through doorways, and scan very quickly. So we put 50 of our custom light sources and 14 cameras from our lab’s Light Stage X system onto a rolling gantry, and framed the cameras so tightly that if the subject was in frame, the subject was in focus."

Debevec was impressed with the results produced by the mobile Light Stage, though at the time he may have been too busy enjoying an afternoon with the President of the United States to dwell on them. "It was a great honor to be invited to the team by Günter Waibel," said Debevec, "and President Obama was not only a great subject — but he was genuinely interested in the technology."

More about the scanning and modeling process from Paul Debevec:

"Jay Busch and Xueming Yu from USC ICT wired and programmed the light sources. Graduate student Paul Graham programmed the scanning sequence, and Graham Fyffe designed a simplified set of lighting patterns which allowed us to perform a scan in just over a second. The Smithsonian’s Vince Rossi and Adam Metallo then used hand-held structured light scanners to record the rest of President Obama’s head and shoulders for the 3D printed bust. Back at the Smithsonian offices, Graham Fyffe solved for the 3D shape of the face using techniques from our Ghosh et al. SIGGRAPH Asia 2011 paper “Multiview Face Capture using Polarized Spherical Gradient Illumination” at better than a tenth of a millimeter resolution, along with diffuse, specular, and surface normal maps from the polarized lighting.

"The Smithsonian team integrated the Light Stage model of the face with their model from the structured light scans and, with generous support from Autodesk and 3D Systems, printed the model life-size in 3D using Selective Laser Sintering to create the 3D Presidential Portrait.

"Being part of the SIGGRAPH community inspires us to do our very best work and to push our techniques further every day, and without that — I don’t think we could have played the role we did in this project. "
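The scanning process Debevec describes rests on a classical idea: a surface's brightness under known lighting directions constrains its orientation. The sketch below illustrates that principle with plain Lambertian photometric stereo, a simplified stand-in for USC ICT's polarized spherical gradient method; all data and names here are invented for the example, not taken from the actual pipeline.

```python
import numpy as np

def estimate_normals(intensities, light_dirs):
    """Classic Lambertian photometric stereo.

    intensities: (num_lights, num_pixels) observed brightness per pixel.
    light_dirs:  (num_lights, 3) unit lighting directions.
    Returns (num_pixels, 3) unit surface normals.
    """
    # Lambertian model: I = L @ (albedo * n). Solve for g = albedo * n
    # per pixel in the least-squares sense, then normalize.
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)
    g = g.T                                   # (num_pixels, 3)
    albedo = np.linalg.norm(g, axis=1, keepdims=True)
    return g / np.clip(albedo, 1e-8, None)

# Synthetic check: one pixel with a known normal, lit from four directions.
true_n = np.array([0.0, 0.6, 0.8])            # ground-truth unit normal
L = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.577, 0.577, 0.577]])
I = (L @ true_n).clip(0)[:, None]             # Lambertian shading, albedo 1
n = estimate_normals(I, L)
print(np.round(n[0], 3))                      # recovers [0, 0.6, 0.8]
```

Real systems such as Light Stage X go far beyond this: polarization separates diffuse from specular reflection, and the recovered normals are fused with multi-view geometry to reach sub-millimeter detail.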

Visual Effects Society Honors Ridley Scott with Award

Today, the Visual Effects Society (VES) announced director-producer Ridley Scott as the recipient of this year's VES Lifetime Achievement Award. The award, which will be presented at the 13th Annual VES Awards in early 2015, is given to individuals who have amassed "an outstanding body of work that has significantly contributed to the art and/or science of the visual effects industry." The roster of past recipients of the award is a who's who of FX filmmaking, and includes George Lucas, Dennis Muren, Steven Spielberg, James Cameron and John Dykstra (among others).

Ridley Scott may not be a VFX artist, but his films have raised the bar for visual effects on the silver screen. "Ridley's impact upon the visual effects and technical form is unparalleled," said Jeffrey Okun, VES Board Chair. "He has given us a body of groundbreaking work to aspire to."

Scott began his directorial career in 1977 with "The Duellists," a fledgling effort that landed him the Best First Film Award at the Cannes Film Festival. On the heels of his debut, Scott directed the sci-fi thriller "Alien," an absolute blockbuster. Shortly after, in 1982, he helmed the cult classic "Blade Runner." Scott's most recent directorial credits include "Prometheus" and "The Counselor." His much-anticipated film "Exodus: Gods and Kings," starring Christian Bale, will be released this December.

“The best filmmaking has always been the result of collaboration between artists, craftspeople and technicians, both in front and behind the camera,” said Ridley Scott. “Over the years I have been very fortunate to work on films that are visual at their core and thus I have always been immensely reliant on the expertise of our visual effects teams. To be honored by the Visual Effects Society with this Lifetime Achievement Award is indeed extremely gratifying.”

Perfecting Destruction: Racing Game Focuses on the Crash

By Cody Welsh

Ask any serious gamer about the coolest recent developments in the field, and you’re bound to hear about Bugbear Entertainment’s “Next Car Game.” Bugbear is best known for its “FlatOut” series, which is much in the same vein as its latest creation — a series that fills the memories of gamers with scenes you might think resulted from placing a derby car in a very large blender. Now, the spiritual successor to “FlatOut” continues the tradition, implementing a range of new technologies to create a conglomeration of beautiful carnage.

The main component behind the graphical finesse (or lack thereof) in “Next Car Game” is the deformation of vehicles within the game engine. Plenty of other games have explored this idea — but not to this degree. In contrast to projects such as BeamNG, which focus on the realism of collisions, Bugbear has decided to make everything look a little “prettier.” The engine that melds cars into the shape of whatever they crash into is called ROMU — the same engine Bugbear has used since 2000, though it likely bears little resemblance to the original by now. The name ROMU, a developer explained, is Finnish for “scrap, junk, or wreck.”

Soft-body simulation handles most of the vehicle destruction, with additional “plates” attached to the outermost regions to more closely mimic what happens in a real collision. This component of ROMU governs the bending and buckling of the chassis, while objects mounted on the car detach more readily (though they, too, are subject to deformation). With the advent of faster processors, it is far easier to provide a higher level of detail — at a faster rate — than in times past, and the result is astonishing.
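The soft-body-with-plasticity idea described above can be illustrated with a toy mass-spring model: springs resist deformation elastically, and strain beyond a threshold permanently shifts their rest lengths, so a crumpled panel stays crumpled. This is a minimal sketch with invented constants, not ROMU's actual implementation.

```python
import numpy as np

def step(pos, rest, vel, edges, k=50.0, damping=0.9, plastic=0.2, dt=0.01):
    """One explicit integration step over spring `edges` (pairs of node ids).

    `rest` holds per-spring rest lengths and is mutated in place to model
    plastic (permanent) deformation. Nodes have unit mass.
    """
    force = np.zeros_like(pos)
    for e, (i, j) in enumerate(edges):
        d = pos[j] - pos[i]
        length = np.linalg.norm(d)
        stretch = length - rest[e]
        # Plasticity: strain beyond the limit permanently shifts rest length,
        # so the spring "remembers" the crumple instead of bouncing back.
        limit = plastic * rest[e]
        if abs(stretch) > limit:
            rest[e] += stretch - np.sign(stretch) * limit
            stretch = np.sign(stretch) * limit
        f = k * stretch * d / max(length, 1e-9)   # Hooke's law along the spring
        force[i] += f
        force[j] -= f
    vel = damping * (vel + dt * force)
    return pos + dt * vel, vel

# A bar of three nodes whose right end has been shoved inward by an "impact".
pos = np.array([[0.0, 0.0], [1.0, 0.0], [1.6, 0.0]])  # node 2 pushed left 0.4
rest = np.array([1.0, 1.0])                           # both springs rest at 1.0
vel = np.zeros_like(pos)
edges = [(0, 1), (1, 2)]
for _ in range(200):
    pos, vel = step(pos, rest, vel, edges)
print(rest)   # the crushed spring's rest length has permanently shortened
```

A production engine would do this over thousands of nodes with constraint solvers and collision detection, but the elastic-then-plastic response is the core of why a wrecked fender keeps its new shape.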

If Bugbear were to stop exactly where it has been before, this new — albeit “spiritual” — successor to “FlatOut” would be flat and predictable. Realizing this, the development team continues to implement additional features, never overlooking graphical fidelity. There is much to explore in “Next Car Game” — the car is handled differently depending on whether it is simulated as a single plane on a plane or each tire is simulated individually, for example. The game includes numerous graphical touches that enhance its realism, such as smoke rising from the tires when the vehicle drifts around an opposing car, and the rear bumper rattling and springing loose when it is partially damaged.

New additions are announced on the Next Car Game Blog fairly regularly. This month, developers spoke about upcoming additions to the game: “First and foremost of future additions is reworked tire physics,” one Bugbear developer wrote. “It’s no small thing, for the tire physics affect every single thing in the gameplay – the way your car handles is tied to the tires, the crashes, slides, swerves… everything.”

At the same time, physics can only explain so much about a game that looks equally impressive in a still image. The product contains much of what might be expected from a commercially produced game: anti-aliasing, dynamic lighting, particle systems, tactile use of bump maps, and photographic source material wherever possible. The full roster of technologies might be hard to keep up with, but nobody who plays games regularly (and has a sufficient graphics card) is likely to complain.

The development of “Next Car Game” was launched by a Kickstarter campaign, and the game is clearly devoted to its fans. Which is a good thing: the Kickstarter development model is not new, and the consequences of failing to deliver on one’s promises can be catastrophic. Luckily, Bugbear seems up to the challenge, delivering a stunning product even though it isn’t yet complete.

USC Student Chapter to Attend SIGGRAPH 2015 in Force

Members of the newly-launched USC ACM SIGGRAPH Student Chapter are particularly well positioned to volunteer for SIGGRAPH 2015. Literally. The University of Southern California is just two miles from the Los Angeles Convention Center, which will host SIGGRAPH 2015 — the world's largest international conference on computer graphics and interactive techniques — from August 9 to 13, 2015. 

Given the university's proximity to the convention center, USC ACM SIGGRAPH was eager to inform its members of the opportunity to serve as student volunteers at SIGGRAPH 2015 — and, consequently, to attend the conference for free. To drive the message home, chapter leadership invited Christine Holmes, chair of the SIGGRAPH 2015 student volunteer program, to their November meeting to speak to members in person.

In addition to providing an overview of volunteering at SIGGRAPH, Holmes offered advice about completing the student volunteer application and gave eye-opening examples of how serving as a student volunteer can become a pivotal point in a student's career trajectory. According to USC ACM SIGGRAPH Student Chapter Chair, Shannon Kraemer, the crowd was abuzz with excitement. "Christine gave us many reasons to get excited [about SIGGRAPH]," said Kraemer. "We are most excited to arrive at SIGGRAPH 2015 with a strong Trojan (USC's mascot) presence, and to hopefully form a community that will keep our chapter strong for many years to come."

Other events put on by the USC ACM SIGGRAPH Student Chapter have included a screening of the 2014 Computer Animation Festival, a 3D printing demo and a lecture about the history of computer graphics by USC Professor Saty Raghavachary. In February, the chapter will visit the USC Institute for Creative Technologies, where members will be provided with a behind-the-scenes look at some of the groundbreaking computer graphics research underway at the facility.

For more information on the USC Student Chapter, visit the USC ACM SIGGRAPH Chapter Facebook page.

For details on how to start your own professional or student chapter, check out the Start an ACM SIGGRAPH Chapter page of the ACM SIGGRAPH website.

SIGGRAPH Asia Previews Accessible to Wider Audience

Both the North American SIGGRAPH conference and SIGGRAPH Asia are well attended by international audiences, and though most attendees understand English, there is a difference between grasping a language and being fully comfortable digesting complex research in that language.

With that in mind, the ACM SIGGRAPH International Resources Committee has created closed captioning in multiple languages for the SIGGRAPH Asia 2014 Emerging Technologies and Technical Papers preview videos. Subtitles for the Technical Papers video are available in Arabic, Chinese (Simplified), French, Italian, Japanese, Norwegian, Polish, Portuguese, Russian, Shona and Spanish. Subtitles for the Emerging Technologies video are available in Arabic, Chinese (Simplified), French, Italian, Japanese, Polish, Russian, Shona and Spanish.

To activate closed captioning on the videos (below), click the "CC" button in the lower right of the YouTube player. To change the language, click the gear icon.

SIGGRAPH Asia 2014 Technical Papers Preview

SIGGRAPH Asia 2014 Emerging Technologies Preview