From Paper to VR: 360-degree Animation of 2D Illustrations

By Cody Welsh

In the modern day, there are a number of software options for artists who wish to animate their illustrations without a lot of the tedious work that might otherwise be involved in the process. One company that produces this kind of software is Live2D, whose products focus on dynamic and configurable 2D models of illustrations. Cubism is their flagship product at the moment, boasting many of the tools that a regular 3D modeling program might possess. Recently, Live2D announced the development of a new software program: Euclid.

Euclid is intended to be the next-generation replacement for Cubism, and is notable because Live2D asserts that the software can render a 2D illustration from any orientation — that is, in a full "orbit" around a given illustration. Cubism, by comparison, can only render an illustration from roughly 40 degrees to the left or right before the effect breaks down. A prototype of the unreleased product has been shown, with a demonstrator using an Oculus Rift to view a rendered illustration from various angles. So far, it's impressive: all of the animated features seem to retain their definition regardless of the orientation from which they're viewed. That is, of course, the goal of Euclid.

An emphasis has been placed on more natural rendering of difficult characteristics, such as hair and other facial features. This is achieved mostly by means of layering in 3D space, to preserve the illusion of two-dimensional viewing. In addition, one may combine multiple illustrations to enhance the effect. Selective use of features contained within the product is also possible: traditional 3D modeling techniques may be applied to some parts of a character design, while Live2D animation may be applied to others. An example was given of a character that had its main body modeled normally, while the head and facial features were given the Live2D treatment.

Euclid 3D Comparison

Considering that Cubism already offers a separate SDK for game developers, it is possible that a similar extension will exist for the new product, taking advantage of the existing architecture. In a recent press release, Live2D noted that Euclid models can be rendered in real time within a 3D environment, allowing them to be manipulated by game engines as well as computer graphics applications. The Oculus Rift has already been demonstrated as a potential platform for this approach, which means the two technologies could be combined to create a more immersive game (or CG movie) world.

Although competing technologies exist, the software Live2D has come up with is compelling. Only time will tell its full capabilities.

ACM Names Fellows for Achievements in Computing

The Association for Computing Machinery (ACM) has announced the names of 47 members who will be recognized as ACM Fellows for their exceptional contributions to computing. Fellowship is ACM's most prestigious member grade, and is held by only 1% of ACM members. The 2014 ACM Fellows hail from some of the world’s leading universities, corporations and research labs. They have achieved advances in computing research and development that are driving innovation and sustaining economic development around the world.

One of the newly-named ACM Fellows, Adam Finkelstein of Princeton University, is a dedicated ACM SIGGRAPH volunteer who chaired the SIGGRAPH 2014 Technical Papers program. Finkelstein was selected for his contributions to non-photorealistic rendering, multi-resolution representations and the field of computer graphics in general.

The 2014 ACM Fellows have been cited for contributions to key computing fields including database mining and design, artificial intelligence and machine learning, cryptography and verification, Internet security and privacy, computer vision and medical imaging, electronic design automation and human-computer interaction. According to ACM President Alexander L. Wolf, the honors extended to these new fellows are well-deserved. “Our world has been immeasurably improved by the impact of their innovations," he said. "We recognize their contributions to the dynamic computing technologies that are making a difference to the study of computer science, the community of computing professionals, and the countless consumers and citizens who are benefiting from their creativity and commitment.”

Heartfelt congratulations to this year's ACM Fellows, who will be formally recognized at the ACM Awards Banquet in June in San Francisco. Additional information about the ACM 2014 Fellows, as well as previous ACM Fellows and award winners, is available on the ACM Awards site.

2014 ACM Fellows

Samson Abramsky
University of Oxford
For contributions to domains in logical form, game semantics, categorical quantum mechanics and contextual semantics

Leslie Lamport
Microsoft Research
For contributions to the theory and practice of distributed and concurrent systems

Vikram Adve
University of Illinois at Urbana-Champaign
For developing the LLVM compiler and for contributions to parallel computing and software security

Sharad Malik
Princeton University
For contributions to efficient and capable SAT solvers, and accurate embedded software models

Foto Afrati
National Technical University of Athens
For contributions to the theory of database systems

Yishay Mansour
Tel-Aviv University
For contributions to machine learning, algorithmic game theory, distributed computing, and communication networks

Charles Bachman
Retired
For contributions to database technology, notably the integrated data store

Subhasish Mitra
Stanford University
For contributions to the design and testing of robust computing systems

Allan Borodin
University of Toronto
For contributions to theoretical computer science, in complexity, on-line algorithms, resource tradeoffs, and models of algorithmic paradigms

Michael Mitzenmacher
Harvard University
For contributions to coding theory, hashing algorithms and data structures, and networking algorithms

Alan Bundy
University of Edinburgh
For contributions to artificial intelligence, automated reasoning, and the formation and evolution of representations

Robert Morris
Massachusetts Institute of Technology
For contributions to computer networking, distributed systems, and operating systems

Lorrie Cranor
Carnegie Mellon University
For contributions to research and education in usable privacy and security

Vijaykrishnan Narayanan
Pennsylvania State University
For contributions to power estimation and optimization in the design of power-aware systems

Timothy A. Davis
Texas A&M University
For contributions to sparse matrix algorithms and software

Shamkant B. Navathe
Georgia Institute of Technology
For contributions to data modeling, database design, and database education

Srinivas Devadas
Massachusetts Institute of Technology
For contributions to secure and energy-efficient hardware

Jignesh M. Patel
University of Wisconsin, Madison
For contributions to high-performance database query processing methods, in particular on spatial data

Inderjit Dhillon
University of Texas at Austin
For contributions to large-scale data analysis, machine learning and computational mathematics

Parthasarathy Ranganathan
Google Inc.
For contributions to the areas of energy efficiency and server architectures

Nikil D. Dutt
University of California, Irvine
For contributions to embedded architecture exploration and service to electronic design automation and embedded systems

Omer Reingold
Weizmann Institute of Science/Stanford University
For contributions to the study of pseudorandomness, derandomization and cryptography

Faith Ellen
University of Toronto
For contributions to data structures, and the theory of distributed and parallel computing

Tom Rodden
University of Nottingham
For contributions to ubiquitous computing and computer supported cooperative work

Michael D. Ernst
University of Washington
For contributions to software analysis, testing, and verification

Ronitt Rubinfeld
Massachusetts Institute of Technology
For contributions to delegated computation, sublinear time algorithms and property testing

Adam Finkelstein
Princeton University
For contributions to non-photorealistic rendering, multi-resolution representations, and computer graphics

Daniela Rus
Massachusetts Institute of Technology
For contributions to robotics and sensor networks

Juliana Freire
New York University
For contributions to provenance management research and technology, and computational reproducibility

Alberto Sangiovanni-Vincentelli
University of California, Berkeley
For contributions to electronic design automation

Johannes Gehrke
Cornell University
For contributions to data mining and data stream query processing

Henning Schulzrinne
Columbia University
For contributions to the design of protocols, applications, and algorithms for Internet multimedia

Eric Grimson
Massachusetts Institute of Technology
For contributions to computer vision and medical image computing

Stuart Shieber
Harvard University
For contributions to natural-language processing, and to open-access systems and policy

Mark Guzdial
Georgia Institute of Technology
For contributions to computing education, and broadening participation

Ramakrishnan Srikant
Google Inc.
For contributions to knowledge discovery and data mining

Gernot Heiser
University of New South Wales/National Information and Communications Technology Australia (NICTA) Research Centre of Excellence
For contributions demonstrating that provably correct operating systems are feasible and suitable for real-world use

Aravind Srinivasan
University of Maryland, College Park
For contributions to algorithms, probabilistic methods, and networks

Eric Horvitz
Microsoft Research
For contributions to artificial intelligence and human-computer interaction

S. Sudarshan
Indian Institute of Technology Bombay
For contributions to database education, query processing, query optimization and keyword queries

Thorsten Joachims
Cornell University
For contributions to the theory and practice of machine learning and information retrieval

Paul Syverson
Naval Research Lab
For contributions to and leadership in the theory and practice of privacy and security

Michael Kearns
University of Pennsylvania
For contributions to machine learning, artificial intelligence, and algorithmic game theory and computational social science

Gene Tsudik
University of California, Irvine
For contributions to Internet security and privacy

Valerie King
University of Victoria
For contributions to randomized algorithms, especially dynamic graph algorithms and fault tolerant distributed computing

Steve Whittaker
University of California, Santa Cruz
For contributions to human computer interaction

Sarit Kraus
Bar Ilan University
For contributions to artificial intelligence, including multi-agent systems, human-agent interaction and non-monotonic reasoning

Visual Effects: Are Computer Graphics Always the Answer?

By Kristy Barkan

Up until the mid ‘90s, effects artists were people who made things with their hands. They wore paint-spattered clothing and rifled through tool boxes. They created aliens out of latex, turned foam into granite and constructed nine-foot spaceships from odds and ends found on dusty workshop shelves. The Millennium Falcon was a product of such physical craftsmanship, as was the creepy-yet-loveable E.T., the immaculate Discovery spacecraft in "2001: A Space Odyssey" and Major Toht’s grisly melting head in "Raiders of the Lost Ark."

Today, people with the knowledge and skill to create such models and effects are surprisingly rare. The vast majority of modern visual effects are created on computers, and artists-in-training learn software, not soldering. Computer graphics (CG) technology has advanced to the point where models and effects can now be rendered with near-perfect photorealism, and countless directors have stopped using physical effects altogether. Though the science and artistry involved in the creation of movie-quality computer graphics are tremendous, should that necessarily mean the complete extinction of practical effects — a field that only hit its stride a few decades ago?

When computer graphics first came on the scene in the early '80s, the technology was a novelty. It was a party trick to admire and wonder at, but nothing that came close to the realism of physical models. Visual effects artist Lorne Peterson ("Star Wars" episodes I, II, III, IV, V and VI, "Jurassic Park," "Raiders of the Lost Ark," "E.T.") distinctly remembers the moment when computer graphics entered his world. It was the early ‘80s and Peterson was sitting at his desk at Industrial Light & Magic, working on a model for an upcoming film. Visual effects supervisor Dennis Muren popped his head in the door and waved to Peterson from across the room. “You’ve got to see this!” he said.

Peterson followed Muren to another part of the facility, where a fellow ILM employee was sitting before a 12-inch, state-of-the-art Macintosh computer. Peterson recalls peering over the man’s shoulder to see what he was working on. It was a shot from "Star Wars: Episode VI," which ILM had just finished, with a computer-generated shadow rendered over top of it. The shadow cascaded across the Death Star with surprising realism. Peterson was impressed, but far from dazzled by the new technology. “I thought…wow, that’s pretty neat. That will be handy for shadows…but probably not much else.”

Effects artists (from left) John Dykstra, Al Miller, Grant McCune, Steve Gawley, Jon Erland, Lorne Peterson and Bill Shourt oversee filming of the Millennium Falcon for "Star Wars: Episode IV – A New Hope." Credit: ILM

Peterson is not the only one who was taken by surprise by the rapid adoption of computer graphics. Model maker Fon Davis has worked in the industry for more than two decades, and created models for all three of the Star Wars prequels. According to Davis, both computer graphics and practical effects have their place, but physical models possess certain inimitable qualities. “There is a randomness that occurs with a real object’s texture, aging and shadows,” he said. “A lot of people can spot CG even though they can’t put their finger on why. It’s really hard to fool the human brain.”

Scott Squires, an Academy Award-nominated visual effects supervisor and 35-year industry veteran, is a big fan of computer graphics — but admits that digital has its downsides. “One of the problems with computer-generated models is that they’re perfect. An example is mirror windows on a skyscraper. By default, a CG version represents each window as perfectly flat and aligned. But in reality, skyscraper windows are not perfectly flat — nor are they aligned. This takes both time and effort in CG to consider and to accomplish.”
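Squires' point can be sketched in a few lines of code. The toy example below (entirely hypothetical, not from any production pipeline) starts with a perfectly uniform grid of window normals — the "default CG" facade he describes — and gives each pane a tiny random tilt, the kind of deliberate imperfection that scatters reflections the way real glass does:

```python
import numpy as np

rng = np.random.default_rng(42)

# A perfect CG facade: every one of the 20x10 window panes faces
# exactly the same direction (straight out along +z).
rows, cols = 20, 10
perfect_normals = np.tile(np.array([0.0, 0.0, 1.0]), (rows * cols, 1))

# Break the perfection: nudge each pane by a random fraction of a
# degree in pitch and yaw — invisible up close, but enough to make
# reflections shimmer unevenly across the building.
tilt = rng.normal(scale=np.radians(0.5), size=(rows * cols, 2))

imperfect = perfect_normals.copy()
imperfect[:, 0] += tilt[:, 1]  # small-angle approximation: yaw shifts x
imperfect[:, 1] += tilt[:, 0]  # pitch shifts y
imperfect /= np.linalg.norm(imperfect, axis=1, keepdims=True)  # re-unit
```

The same trick — injecting controlled randomness into what the renderer would otherwise make mathematically exact — generalizes to surface textures, weathering, and alignment, which is exactly the extra "time and effort in CG" Squires refers to.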

Imperfect as they may be, digital effects have come a long way since "Tron." Modern computer graphics are highly sophisticated, and afford directors a greater degree of flexibility than practical effects. With digital effects, there are no restrictions on how many times a director can change his mind (aside from time and budget), and no production constraints (such as having a single take to blow up a $2 million model).

Scott Ross, who served as the general manager of ILM in the ‘80s and co-founded the post-production studio Digital Domain, explains. “There are limitations to physical models,” he said. “But CG can do just about anything… set extensions, fire, water, snow, pyro, creature animation, matte paintings — virtually anything you can imagine. In fact, nowadays, anything you can imagine.”

A few years back, director Peter Jackson made the decision to digitally re-imagine some of the creatures of Middle-earth for "The Hobbit" trilogy. The same creatures (orcs and goblins) appeared in the Lord of the Rings trilogy, but as actors in foam latex prosthetics, courtesy of Weta Workshop. In a November 2012 interview with the LA Times, Jackson explained that he decided to switch to computer graphics so he could make the beasts appear less human. “Prosthetic makeup is always frustrating,” he said. “At the end of the day, if you want the character to talk, which a lot of the orcs and goblins do, you can design the most incredible prosthetics — but you’ve still got eyes where the eyes have to be and the mouth where the mouth has to be.” The resulting computer-generated orcs and goblins were skillfully crafted, but to some fans, lacked the grit and realism of their prosthetic-clad predecessors.

Azog the orc from The Hobbit: Desolation of Smaug

Left: A Weta Workshop employee paints an Uruk-hai with special effects makeup for "Lord of the Rings." Courtesy Weta Workshop. Right: Azog from "The Hobbit: Desolation of Smaug."
"The Lord of the Rings: The Fellowship of the Ring," "The Hobbit: The Desolation of Smaug" and the names of the characters, items, events and places therein are trademarks of The Saul Zaentz Company d/b/a Middle-earth Enterprises under license to New Line Productions, Inc.

Many industry veterans believe the best approach to creating realistic environments and creatures is to use a combination of physical and digital effects, selecting the approach that works best for each shot. Model maker Fon Davis is one such veteran. “I’ve learned from people like Dennis Muren that it really helps your CG artists if you shoot something practical to be used as reference,” he said. “Then, there’s no question what a real object looks like on camera in that environment and lighting.”

Unfortunately, not everyone making the decisions in Hollywood feels the same way. According to Scott Squires, “Model making is becoming a lost art of sorts. Many of the new visual effects companies do not have practical model shops.”

In fact, physical model making is a skill that’s rarely taught these days. Ben Fox, an FX technical director at Framestore, NY, grew up with a love for the traditional arts and hands-on creating. If he’d been born 20 years earlier, he might have been drawn to model making. As it was, practical effects work wasn’t even on his radar. “I never had the opportunity to take any model or miniature-making courses in college,” he said. “Though, come to think of it, that would have been great.”

While the demand for physical models may be waning, the appreciation for them is not. When Lorne Peterson and the rest of the ILM model shop got a chance to revisit the Star Wars universe in 1998 (creating models for the prequels), he was surprised to find the workshop regularly besieged by visitors from the computer graphics department. “They were mostly young guys,” Peterson said. “Star Wars was what made them fall in love with visual effects in the first place.” The CG staff visited so frequently to watch the model makers work that ILM had to institute a sign-up policy to limit the number of visitors in the workshop at any given time.

Though practical effects may have been eclipsed by computer graphics, they haven’t gone the way of the dinosaur just yet. In an August 2014 interview with The Hollywood Reporter, Rian Johnson (the future director of "Star Wars: Episode VIII") complained about the lack of practical effects in modern movies — and pointed out that "Star Wars: Episode VII" will be different from the Star Wars prequels in that respect. “They’re doing so much practical building for this one,” Johnson said. “It’s awesome.”

The question remains: do films like "Star Wars: Episode VII" represent the last gasp of practical effects? Or could Episode VII's release mark the beginning of a new era: one where practical and digital effects are not in competition, but acknowledged as equally important parts of the filmmaking toolset, used together to create the most realistic environments, characters and effects possible?

When asked what he thought the future might hold for model makers and other practical effects artists, Lorne Peterson laughed, recalling his first introduction to computer graphics. “I’m terrible at predicting the future,” he said. “Don’t ask me.”

Spotlight on SIGGRAPH: S3 Provides the Tools for Success

By Deja Collins, in collaboration with S3 Chair Corinne Price

Editor's note: Spotlight on SIGGRAPH is a recurring feature, established to shed light on the various committees and avenues of volunteering within ACM SIGGRAPH and its annual conferences.

Established in 2007, SIGGRAPH Student Services, colloquially known as S3, is an ACM SIGGRAPH committee whose primary mission is to provide additional value to the student members of ACM SIGGRAPH. The Student Services Committee also maintains continuity and institutional memory for the student volunteer and intern programs at SIGGRAPH and SIGGRAPH Asia, and collaborates with other SIGGRAPH entities (conferences, committees, etc.) on issues that affect student members. S3 currently offers (and is in the process of expanding) three key programs: resume and reel reviews, mentoring for ACM SIGGRAPH student members (known as "Mentor Me"), and a series of webinars and talks.

S3’s flagship program is a resume and reel review service known as "S3R3." S3 understands that resumes and reels are a constant source of confusion and frustration for students and new graduates, and aims to remove some of the guesswork by providing professional advice and opinions to ACM SIGGRAPH student members. Students are matched with industry professionals based on their desired field and job skills, and submit a resume and/or reel for review and expert advice. S3R3 is offered on-site during both the SIGGRAPH and SIGGRAPH Asia conferences, and virtually throughout the year. Below are the profiles of three prominent S3R3 reviewers: Patrick Coan, Murad Currawalla and Vince De Quattro.

Patrick Coan: Coan holds a Bachelor of Science in IT-Multimedia Digital Entertainment and Game Design from ITT Technical Institute and works as a production artist in the field of interactive entertainment. He has freelanced as a web designer, game character designer and animator for film. Some of his credits include lead animation in "Madstreak," character and environment art direction for "Infinite Shooter," and design work for the Cascade chapter of ACM SIGGRAPH.

Murad Currawalla: Motion graphics artist Murad Currawalla holds a BFA in Digital Media from the Otis College of Art and Design, and a Bachelor of Commerce from HR College of Commerce and Economics. Currawalla has professional experience in animation, compositing and rotoscoping. Additionally, he owns a VFX (visual effects) venture in Mumbai, India. Known as The CG Lab, the business has worked on music videos, commercials and graphic design projects, to name a few.

Vince De Quattro: For the majority of his career, De Quattro was a Technical Director for Industrial Light and Magic, working on five Academy-Award-nominated films ("Star Wars" Episodes I & II, "Pirates of the Caribbean," "Mighty Joe Young" and "Pearl Harbor"). Prior to ILM, De Quattro was a technical director and animator for Warner Brothers Digital Studios, Robert Greenberg Studios LA and Sony High Definition Television Center in Culver City. He has a Bachelor's Degree in Fine Arts and a Master's degree in Computer Animation in Film from the University of Southern California.

S3 is in the process of expanding its one-on-one mentoring program, Mentor Me. Mentor Me matches students with professionals who have agreed to offer guidance and insight into their respective fields. The mentor sessions are conducted by email, by phone and, in some cases, face-to-face — typically during the SIGGRAPH or SIGGRAPH Asia conferences, when the community comes together. S3's hope is that these mentorships turn into long-lasting professional connections for the members. To date, S3 has been able to match Student Volunteers (SVs) from the SIGGRAPH conferences, but is planning to expand to all interested ACM SIGGRAPH student members during the year.

S3’s final offering is a series of informative webinars and talks that offer advice, tips and insight on everything from careers to technical knowledge. The webinars are conducted throughout the year, which allows many to benefit from the wisdom of the high-profile speakers and mentors. Previous sessions have covered topics such as making the most of attending the SIGGRAPH conference, and unlocking information about iOS game development, animation, and digital media.

Student volunteers at SIGGRAPH 2014

Student Volunteers hard at work at SIGGRAPH 2014. Photo by John M. Fujii, Copyright © 2014 ACM SIGGRAPH.

Beyond its student programs, S3 also maintains the body of knowledge for the Student Volunteer program in the form of the Student Volunteers Chair Handbook. As ACM SIGGRAPH continues to expand and provide more conferences abroad, such as SIGGRAPH Asia, there is occasionally a need to create Student Volunteer committees like those at the main SIGGRAPH conference. When this need arises, S3's year-round committee provides support and institutional knowledge on how to get set up. This year, S3 provided on-site support to the SIGGRAPH Asia Student Volunteer program, and headed up SIGGRAPH Asia versions of S3R3 and Mentor Me for student members at the conference.

SIGGRAPH Student Services operates year-round, and strives to provide professional assistance for all ACM SIGGRAPH student members. S3 Chair Corinne Price welcomes suggestions from the community on potential additional benefits. “As S3 continues to grow and expand our offerings,” she said, “we are always seeking input from student members and professionals throughout the community to know what services will most benefit you. Please do not hesitate to reach out to myself or one of the committee members if you have a great idea or something you would like to see.”

Through the programs S3 offers, student members receive valuable insight and critiques from industry experts, which equips them with many necessary tools for success in their occupational pursuits.

If you would like to express interest in volunteering on the S3 committee, please fill out the ACM SIGGRAPH volunteer form. If you would like to volunteer to be a mentor, reel reviewer or assist the committee in any other capacity, please contact the Student Services Chair directly.

Spotlight on Research: Moving Beyond MoCap

On December 6, 2014, researchers from the Max Planck Institute for Intelligent Systems in Tübingen, Germany, presented a study entitled MoSh: Motion and Shape Capture from Sparse Markers. Their research, published in the SIGGRAPH Asia 2014 edition of ACM Transactions on Graphics, details a new method of motion capture that allows animators to record the three-dimensional motion of human subjects and digitally "retarget” that movement onto other body shapes.

Unlike current motion capture (mocap) technology, MoSh doesn't require extensive hand animation to complete the capture process. Instead, it uses a sophisticated mathematical model of human shape and pose to compute body shape and motion directly from 3D marker positions. This technique allows mocap data to be transferred to a new body shape, no matter the variation in size or build. To illustrate the capabilities of the system, its inventors captured the motion of a sleek, female salsa dancer and swapped her body shape with that of a hulking ogre. The motion transferred smoothly between the two, resulting in an unexpectedly twinkle-toed block of a beast.
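MoSh itself fits a detailed statistical body model to the markers; as a very loose illustration of the underlying idea only, the toy sketch below recovers linear "shape" coefficients from observed marker positions by least squares, then reapplies the captured per-marker offsets to a different body. Every name, and the linear model itself, is invented for this example — the real system is far more sophisticated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: m markers (x, y, z each), k shape coefficients.
m, k = 12, 3
template = rng.normal(size=(3 * m,))       # average-body marker positions
shape_basis = rng.normal(size=(3 * m, k))  # marker displacement per unit shape

# A "subject": markers observed with a little measurement noise.
beta_true = np.array([0.5, -1.2, 0.3])
observed = template + shape_basis @ beta_true \
    + rng.normal(scale=1e-3, size=3 * m)

# Core idea: recover body shape directly from the sparse markers,
# with no skeleton and no hand animation — here, plain least squares.
beta_hat, *_ = np.linalg.lstsq(shape_basis, observed - template, rcond=None)

# Retargeting: keep the captured motion residual, swap in a new shape
# (say, the "ogre" coefficients) to drive a different body.
beta_ogre = np.array([3.0, 2.0, -1.0])
residual = observed - (template + shape_basis @ beta_hat)
retargeted = template + shape_basis @ beta_ogre + residual
```

Because the residual rides on top of whatever shape is plugged in, the same captured motion lands on the dancer's body or the ogre's; MoSh applies this idea per frame, with pose and soft-tissue deformation handled by its full body model rather than a linear basis.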

Devised by a team of researchers under the direction of Dr. Michael J. Black, MoSh is intended to simplify the creation of realistic virtual humans for games, movies, and other applications — while at the same time, reducing animation costs. According to Naureen Mahmood, one of the co-authors of the study, MoSh throws the gates to motion capture wide open for projects of all sizes and budgets. “Realistically rigging and animating a 3D body requires expertise,” said Mahmood. “MoSh will let anyone use motion capture data to achieve results approaching professional animation quality.”

Because MoSh does not rely on a skeletal approach to animation, the details of body shape — such as breathing, muscle flexing, fat jiggling — are retained from the mocap marker data. Current methods discard such details and rely on manual animation techniques to recreate them after the fact.

“Everybody jiggles,” said Black. “We were surprised by how much information is present in so few markers. This means that existing motion capture data may contain a treasure trove of realistic performances that MoSh can bring to life.”

For more details on MoSh, download the full paper from the ACM Digital Library. Until November 2015, MoSh: Motion and Shape Capture from Sparse Markers is available to the public at no cost through open access (link below).

Open access to MoSh: Motion and Shape Capture via ACM SIGGRAPH

Direct link to MoSh: Motion and Shape Capture from Sparse Markers in the Digital Library (requires ACM SIGGRAPH membership, ACM membership or a one-time download fee to access)

Open access to additional conference content from SIGGRAPH 2014 and SIGGRAPH Asia 2014