Interview by John Davison, Chief Correspondent for ACM Computers in Entertainment.
This excerpt of an interview with award-winning visual effects artist Luca Fascione is provided courtesy of ACM Computers in Entertainment (CiE), a website that features peer-reviewed articles and non-refereed content covering all aspects of entertainment technology and its applications.
Luca Fascione is a multifaceted visual effects artist and the Head of Technology and Research at Weta Digital. His trailblazing achievements were honored earlier this year with a Scientific and Engineering Academy Award. CiE's John Davison recently had a chance to interview Luca to discuss apes, motion, and emotion.
Q: When did it first become clear that an unmet need existed and what was your development process like?
A: Weta Digital is very active in research and innovation around movie-making technology. Our Senior VFX Supervisor Joe Letteri likes to keep a rolling focus on areas where we can improve the quality of the movies we contribute to, especially in the space of creatures. We have many research disciplines at Weta: physical simulation (fluid simulation for things like explosions and water, or rigid body dynamics for destruction scenes), physically based rendering (light transport and material simulation, so that our pictures can closely match the footage they need to integrate with), and virtual cinematography (performance capture and virtual stage workflows). Every few years we identify a project that demands a larger scope, often inspired by the upcoming productions slated for the studio, and we put significant time and resources into making a true step advancement.
FACETS was one such project.
FACETS (the system we use to capture facial performance, as opposed to the body) was built as part of the Research and Development preparation ahead of Avatar, because we wanted to improve the process for capturing faces. The old process, used on films like King Kong, was closer to an ADR (automated dialogue replacement) session: Andy Serkis would do Kong's body one day, and then on a different day he would work through Kong's facial performance. At that time, the face capture process was a "normal" 3D capture session, the only difference being that the markers were much smaller and glued directly to the actor’s skin, instead of velcro-strapped to his capture suit as they are for body capture. Because the markers were much smaller, the volume in which a performance could be recorded was correspondingly smaller, which meant Andy effectively had to sit in a chair and try to keep his head relatively still while acting. This made it extremely difficult for the system to provide valuable data for Andy’s more extreme movements, and it introduced many practical problems in terms of timing and consistency. Further, once the capture sessions ended, the work to extract motion and animation curves from the data was extremely labor- and compute-intensive, requiring a very skilled operator and many iterations.
Although Kong had a substantial amount of visual effects work, especially for its time, there weren't that many facial-driven shots, and the process was focused on a single digital character. When Avatar came along, it was immediately clear our existing workflow would never be practical at the scale required for dozens of Na’vi characters on screen at any given time. The “capture body, then capture face” idea was just too hard, and besides, a large portion of the shots in the movie required capturing multiple characters at once: doing it all in separate body/face sessions would have been a logistical nightmare. We also knew that a combined body/face session would be much stronger, based on what we’d seen as well as the feedback gathered during face capture sessions and in discussions with the performers. Facial and body movements are synchronized in many unexpected ways that are not immediately apparent unless you study them. Post-processing the data for hand-off to animation and support for advanced motion editing were also clear requirements. It quickly became apparent that the work for Avatar would increase well over tenfold in this segment, and demanding a corresponding increase from our existing system was just not possible…
Read the rest of the interview with Luca Fascione on ACM CiE.