SIGGRAPH '18 - ACM SIGGRAPH 2018 Talks

SESSION: I can see clearly now

Confocal non-line-of-sight imaging

Non-line-of-sight (NLOS) imaging aims to recover the shape of objects hidden outside the direct line of sight of a camera. In this work, we report on a new approach for acquiring time-resolved measurements that are suitable for NLOS imaging. The system uses a confocalized single-photon detector and pulsed laser. As opposed to previously proposed NLOS imaging systems, our setup is very similar to the LIDAR systems used for autonomous vehicles, and it facilitates a closed-form solution of the associated inverse problem, which we derive in this work. This algorithm, dubbed the Light Cone Transform, is three orders of magnitude faster and more memory efficient than existing methods. We demonstrate experimental results for indoor and outdoor scenes captured and reconstructed with the proposed confocal NLOS imaging system.
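
For context, the confocal measurement model that the Light Cone Transform inverts can be written as follows (an illustrative formulation in our own notation; see the accompanying paper for the exact derivation):

```latex
\[
\tau(x', y', t) \;=\; \iiint_{\Omega} \frac{\rho(x, y, z)}{r^{4}}\,
  \delta\!\left( 2\sqrt{(x'-x)^{2} + (y'-y)^{2} + z^{2}} \;-\; t c \right) dx\, dy\, dz,
\qquad r = \sqrt{(x'-x)^{2} + (y'-y)^{2} + z^{2}},
\]
```

where \tau is the time-resolved measurement at the confocal wall point (x', y'), \rho is the albedo of the hidden scene, and c is the speed of light. A change of variables in t and z turns this integral into a 3D convolution, which is what enables the closed-form inversion described above.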

Real-time muography simulator for ScanPyramids mission

In October 2015, the ScanPyramids (SP) mission started looking for unknown structures inside Egyptian pyramids with non-invasive technologies. Possibly the most successful imaging technology was muography, which is similar to X-ray radiography but uses muons. Muons are naturally occurring, weakly interacting elementary particles that travel freely through space and are attenuated by dense matter. In 2017, ScanPyramids reported its finding of a large void in the pyramid of Khufu, located above the grand gallery. The work, first published in the scientific journal Nature [Morishima et al. 2017], entailed the collaboration of three scientific teams using three separate muography techniques. Sensors from each team acquired muon detection data over several months inside and outside the pyramid before analysis revealed the above-mentioned large void. Interpretation of muography analysis results can be ambiguous. It is therefore common practice to assist the interpretation with a numerical simulation of the muons' interaction with matter inside the expected object of interest. For this purpose, muography experts traditionally rely on GEANT4 [Agostinelli et al. 2003], a Monte Carlo simulator. This simulator is verified to be accurate; however, due to its computational complexity it cannot deliver live simulations, nor is it optimized for handling complex 3D geometry.

In response to this limitation, the author has designed and developed a Real-Time Muography Simulator (RTMS) for the ScanPyramids mission. By leveraging existing 3D rendering engines, the new bespoke simulator significantly reduces computational loads, enabling live simulation on a conventional laptop. Live simulation in RTMS makes it possible to understand raw detector outputs in context, assist the analysis of results, form live interactive hypotheses during meetings, and facilitate the process of deciding optimal detector positioning.

In this talk, the author will present muography results, cover the basics of muon physics with parallels drawn to photons, explain how classical 3D render engines inspired the RTMS design, and describe the simulator's approach.

Divergence projection with electrostatics

The pressure component of the Navier-Stokes equations can be solved by projecting out the divergent component of the velocity field. The Poisson equation used matches the electrostatic field equation, allowing a re-interpretation of the projection operation as a solution of electrostatic potential. Using a hierarchical dipole approximation, we achieve an efficient evaluation of the projection operator across sparse domains in a single pass. The update of each voxel is fully decoupled, allowing full parallelism and distribution.
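
For reference, the projection and its electrostatic analogue can be summarized as follows (standard formulation; the hierarchical dipole evaluation itself is the talk's contribution and is not shown):

```latex
\[
\nabla^{2}\phi = \nabla\cdot\mathbf{u}^{*}, \qquad
\mathbf{u} = \mathbf{u}^{*} - \nabla\phi, \qquad
\nabla\cdot\mathbf{u} = 0,
\]
```

which has the same form as the electrostatic potential equation \nabla^{2} V = -\rho/\varepsilon_{0}, with the divergence of the intermediate velocity field \mathbf{u}^{*} playing the role of a charge density.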

DeepFocus: learned image synthesis for computational display

Reproducing accurate retinal defocus blur is important to correctly drive accommodation and address vergence-accommodation conflict in head-mounted displays (HMDs). Numerous accommodation-supporting HMDs have been proposed. Three architectures have received particular attention: varifocal, multifocal, and light field displays. These designs all extend depth of focus, but rely on computationally expensive rendering and optimization algorithms to reproduce accurate retinal blur (often limiting content complexity and interactive applications). To date, no unified computational framework has been proposed to support driving these emerging HMDs using commodity content. In this paper, we introduce DeepFocus, a generic, end-to-end trainable convolutional neural network designed to efficiently solve the full range of computational tasks for accommodation-supporting HMDs. This network is demonstrated to accurately synthesize defocus blur, focal stacks, multilayer decompositions, and multiview imagery using commonly available RGB-D images. Leveraging recent advances in GPU hardware and best practices for image synthesis networks, DeepFocus enables real-time, near-correct depictions of retinal blur with a broad set of accommodation-supporting HMDs.
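
As a rough illustration of the kind of end-to-end image-synthesis network described above, the sketch below maps an RGB-D input to an RGB output with a few convolutional layers. It is not the DeepFocus architecture; layer counts, losses, and the additional outputs (focal stacks, multilayer decompositions, multiview imagery) are omitted.

```python
# Minimal RGB-D -> RGB convolutional network sketch (PyTorch). This is NOT the DeepFocus
# architecture; it only illustrates the input/output structure of such a network.
import torch
import torch.nn as nn

class TinyDefocusNet(nn.Module):
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, width, 3, padding=1), nn.ReLU(inplace=True),   # RGB + depth in
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 3, 3, padding=1),                          # blurred RGB out
        )

    def forward(self, rgbd):                # rgbd: (N, 4, H, W)
        return self.net(rgbd)

if __name__ == "__main__":
    net = TinyDefocusNet()
    rgbd = torch.rand(1, 4, 128, 128)       # random RGB-D frame as a stand-in
    blurred = net(rgbd)                     # training against reference retinal blur omitted
    print(blurred.shape)                    # torch.Size([1, 3, 128, 128])
```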

SESSION: Well worn

Collaborative costume design and construction on Incredibles 2

When Incredibles 2 moved into production, we knew it would look bigger and better than the original film, with the improvements in technology since 2004. Costumes are important in this stylish world, and it was a challenge finding consensus on the look and deciding how to apply our technical advances. Director Brad Bird was driven by 2D animation and interested in graphic character shapes. Costume Designer Bryn Imagire preferred a natural cloth look, knowing that the world would be rendered realistically. The character tailoring and shading teams needed to resolve the dichotomy of stylized yet realistic form, shading, and movement, dress a large number of distinctive characters with clothing that enhances their story arcs, and make the costumes perform well in shots. This goal necessitated an extremely collaborative workflow and greater trust between people and departments, also empowering technical artists to have more ownership over garment design and look in the film.

Dressed for saving the day: finer details for garment shading on Incredibles 2

On Incredibles 2, the character shading team was tasked with creating the look for both reality-based clothes reminiscent of the 1950s era and creatively styled superhero suits. A major design goal was to add details in whatever ways possible to help give this sequel a more sophisticated look and feel compared to the original film and appropriately supplement other visual advances since then in lighting and FX.

Two methods that helped us achieve this were Bump-To-Roughness, which preserves fine details in the clothing, and curve procedurals, or fuzz, which add realism to the garment shading. Bump-To-Roughness (BtoR) helps preserve specular roughness variation details while shaping the specular distribution in a natural way. The procedural fuzz provides a tunable specular response near the surface that breaks up the edge silhouette and complements the imperfect nature of realistic objects without additional modeling or grooming. In this talk we describe ways in which these two methods helped us achieve a variety of impressive looks for our garments.
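
One publicly documented way to fold filtered bump detail into specular roughness is a Toksvig-style conversion, sketched below; it illustrates the general idea only and is not necessarily the BtoR implementation used on the show.

```python
# Toksvig-style bump-to-roughness sketch (NumPy). Normals averaged over a filter footprint
# shorten; that shortening is converted into extra specular roughness so filtered bump
# detail still shapes the highlight. Illustrative only.
import numpy as np

def effective_roughness(avg_normal, base_roughness):
    """avg_normal: (..., 3) mean of unit normals over the filter footprint."""
    n_len = np.clip(np.linalg.norm(avg_normal, axis=-1), 1e-4, 1.0)
    # Variance of the normal distribution implied by the shortened average normal.
    variance = (1.0 - n_len) / n_len
    # Add that variance to the squared base roughness (GGX-style alpha).
    return np.sqrt(np.clip(base_roughness ** 2 + variance, 0.0, 1.0))

# Example: strongly varying normals raise a 0.1 base roughness toward a rougher lobe.
normals = np.array([[0.0, 0.3, 0.95], [0.0, -0.3, 0.95], [0.3, 0.0, 0.95]])
print(effective_roughness(normals.mean(axis=0), base_roughness=0.1))
```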

Coco animsim: increasing quality and efficiency

Coco's clothing design and story presented challenges for the Simulation and Animation departments, requiring a new approach to create appealing clothed silhouettes and believable motion on characters. Many of the characters are skeletons, whose silhouettes are significantly altered by clothing, increasing the influence of simulated elements on acting. The main living character in the film, Miguel, uses clothing to disguise himself, so seeing interaction with his cloth was also essential for acting. To achieve these requirements we created a new process, AnimSim, consisting of fast, stable clothing simulations, simple cloth interaction tools, and a refined partnership between the simulation artists and character animators. Our collaboration increased quality and efficiency for both departments by providing animators with context early in the process and cleaner animation for simulation artists.

Better collisions and faster cloth for Pixar's Coco

Among the many technical challenges of Pixar's Coco was the need to handle cloth simulation for a densely populated city of skeleton characters. Skeletons posed new challenges to the collision algorithms of our in-house cloth system, Fizt. Continuous collision detection and response is an obvious solution to handling fast motion of thin geometry, but it presents us with a serious problem. In our production pipeline, geometry often starts in intersection. Animation also frequently causes kinematic surfaces to pinch the cloth between them and drive the cloth through itself. We present a solution for robustly allowing intersection recovery while employing standard continuous detection techniques.

Coco also demanded more cloth than any previous Pixar film. To keep up with demand, Fizt needed to run much faster. We share our techniques for gaining performance in linear system assembly and solution, which should be applicable to most implicit solvers.

SESSION: Augmenting your reality

Augmented reality, art, and public space

An overview of how AR art has evolved and, in particular, how Heavy Projects' work has transitioned from guerrilla-style AR street art interventions to large scale, interactive, public space murals working with such clients as the University of Geneva, Qualcomm, SXSW, Google I/O, and San Francisco Design Week. This talk provides workflow examples of "Digital Neuron" AR mural [Geneva, 2016], "Parabola" AR mural [San Francisco, 2017], and "Evolution of an Idea," the largest AR mural in the world [185'x25', San Diego, 2015]. In illuminating these projects, this talk outlines best practices for other digital artists to create outdoor AR artworks. In short, this talk will provide general insight into the progression of AR art, discuss prominent digital artists currently working in this space, and deliver a practical workflow of how to create works in the new medium of interactive AR art.

Augmented reality for virtual set extension

We introduce an intuitive workflow where Augmented Reality can be applied on the fly to extend a real set with virtual extensions. The work on intuitive Virtual Production technology at Filmakademie Baden-Württemberg has focused on an open platform tied to existing film creation pipelines. The Virtual Production Editing Tools (VPET) are published and constantly updated on the open source software development platform GitHub as a result of a former project on Virtual Production funded by the European Union.

SESSION: Hares & hairs

Hair today, cloth tomorrow: automating character FX on Peter Rabbit

In "Peter Rabbit" (2018) the characters in the film needed to evolve from their 2D watercolour illustrated past into modern photo-real and engaging performances of a live-action world. The direction of the film necessitated high quality dynamic fur, cloth and feathers in a wide variety of shots, covering nuanced performances to frantic action sequences in a number of environmental conditions. Developing a better mechanism for creating character FX on such a diverse range of characters in over 1,100 shots was a major and necessary challenge.

To achieve the scope and scale of the work, we created a number of workflows using complex hair and cloth tools embedded in multiprocess FX rigs for not only the artists, but also for automatic processes in each and every animation review. We termed this "AnimCFX" and the goal was to leverage our large farm processing capability to produce a significant portion of the work and enable more focused iteration time on bespoke hero shots with our small character FX team.

Simulating woven fabrics with weave

In Peter Rabbit, modeling and surfacing artists needed to create photorealistic clothing for CGI characters. Existing techniques such as using repeating texture and displacement maps do not hold up for close-up shots. Peter's iconic blue denim jacket also needed to seamlessly match a real, hand-stitched and worn reference used in the live action shoot. Furthermore, we needed to support many different types of fabrics for dozens of characters. We had initially developed a system called Weave for simple capes and flags in The LEGO Movie and The LEGO Batman Movie, but to support higher levels of detail and flexibility, we extended it to procedurally generate highly customizable patterns of woven fabrics. The novelty of our system lies in its capability to generate realistic weaving and stitching patterns, fuzz, wear and tear in a simple and artist-driven framework.

Hierarchical controls for art-directed hair at Disney

Creating appealing shapes and silhouettes of a character's hair while maintaining the organic motion produced by physical simulation is a challenge in Disney's very stylized animated worlds. This talk describes the introduction of hierarchical sculpting controls into our hair pipeline and presents a set of tools for creating and manipulating this consistent structure to achieve art-directed hair motion. From grooming through animation, simulation and technical animation, hierarchy is leveraged both for efficiency and for preservation of the hairstyle's structure. To date this hierarchical workflow has been used on two feature productions, allowing for the efficient art-direction of a wide variety of hair types and styles.

Engineering full-fidelity hair for Incredibles 2

For Incredibles 2, we achieved interactive full-fidelity procedurally-generated hair, yielding artist freedom and productivity surpassing our previous hair tools. We implemented highly parallel algorithms for hair growth and deformation and used fast, modern techniques for graph evaluation, subdivision surfaces, Poisson disk sampling, and scattered data interpolation. Working with full-fidelity hair presented a data scale challenge to our shot pipeline, but we overcame it by allowing a trade-off of control flexibility for speed, maintaining geometric complexity.

SESSION: It's a material world

Plausible iris caustics and limbal arc rendering

In this paper, we apply anterior segment tomography measurements from contact lens research to photorealistic eye rendering. We improve on existing analytic rendering models by including a conical extension to the usual ellipsoidal corneal surface and we demonstrate the advantage of using a more accurate iris depth. We also introduce a practical method for automatically rendering the limbal arc as an intrinsic part of sclerotic scattering.
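
For background, anterior segment models in the contact lens literature commonly describe the corneal surface as a conicoid, whose standard form is

```latex
\[
x^{2} + y^{2} + (1 + Q)\,z^{2} - 2 R z = 0,
\]
```

where R is the apical radius of curvature and Q the asphericity (Q = 0 gives a sphere, -1 < Q < 0 a prolate ellipsoid, Q = -1 a paraboloid, and Q < -1 a hyperboloid). This is shown for context only; the talk's specific conical extension and iris-depth measurements may differ.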

A compact representation for multiple scattering in participating media using neural networks

Many materials, such as milk or wax, exhibit scattering effects; incoming light enters the material and is scattered inside, giving a translucent aspect. These effects are computationally intensive as they require simulating a large number of scattering events. Full computations are expensive, even with accelerating methods such as Virtual Ray Lights. The dipole approximation [Jensen et al. 2001] is fast, but a strong approximation. Precomputing the material response for multiple scattering [Moon et al. 2007; Wang and Holzschuch 2017] integrates well with existing rendering algorithms, allowing separate computation for single and double scattering, and fast computation for multiple scattering. The main issue with these methods is efficient storage of the precomputed multiple scattering data.

Perceptually validated analytical BRDF parameter remapping

The need to manually match the appearance of a material in two or more different rendering tools is common in digital 3D product design, due to the wide range of tools and material models commonly used, and a lack of standards to exchange materials data. Since the effect of BRDF parameters on rendered images is non-uniform, visually matching to a reference is time consuming and error-prone. We present an automatic BRDF remapping technique to match the appearance of a source material model to a target, providing a mapping between their parameter spaces. Our framework, based on Genetic Algorithm optimization and an image space similarity metric, provides a faithful mapping among analytical BRDFs, even when the BRDF models are deeply different. Objective and perceptual evaluations confirm the efficacy of the framework.
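
A minimal sketch of such a remapping loop is given below: a genetic-algorithm search over the target model's parameters, scored by an image-space difference against a render of the source material. The `render_source` and `render_target` functions are hypothetical stand-ins for the two rendering tools, and the loss is a placeholder for the talk's perceptually validated metric.

```python
# Genetic-algorithm BRDF remapping sketch (NumPy). Finds target-model parameters whose
# render best matches a render of the source material. Operators and metric are simplified.
import numpy as np

rng = np.random.default_rng(0)

def image_loss(img_a, img_b):
    return float(np.mean((img_a - img_b) ** 2))          # placeholder similarity metric

def remap(render_source, render_target, src_params, n_params,
          pop=32, generations=50, sigma=0.1):
    reference = render_source(src_params)
    population = rng.random((pop, n_params))              # target parameters in [0, 1]
    for _ in range(generations):
        scores = np.array([image_loss(render_target(p), reference) for p in population])
        parents = population[np.argsort(scores)[:pop // 4]]                    # selection
        children = parents[rng.integers(len(parents), size=pop - len(parents))]
        children = np.clip(children + rng.normal(0, sigma, children.shape), 0, 1)  # mutation
        population = np.vstack([parents, children])
    best = population[np.argmin([image_loss(render_target(p), reference)
                                 for p in population])]
    return best              # target-model parameters that visually match src_params
```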

Prelit materials: light transport for live-action elements in production rendering

We introduce a new conceptual model for including live-action footage in the light transport simulation of production renderers. By explicitly declaring which elements of the scene were present during photography, our pathtracer can generate realistic bidirectional lighting and shadowing between live-action and synthetic elements in a single pass. Using the film Peter Rabbit as a case study, we show how this can be used for automatic integration of synthetic elements into plates throughout the visual effects review process, from layout to interactive lighting. Auxiliary channels allow a compositor to perform several post-render adjustments, including rebalancing lighting on live-action elements.

SESSION: En masse

Other worldly crowds in Coco

Coco, Pixar's largest human-based crowds film to date, was ambitious both visually and technically. Over a third of the film contains crowd scenes, ranging from a mansion-filled dance party to thousands of skeleton families journeying across a bridge, to a colossal cheering stadium. This complexity required vast amounts of both animation specificity and look variation in our characters.

Asset management, animation directability, and rendering would have been extremely difficult with our previous pipeline for human crowds at this scale. An array of techniques were developed to tackle these challenges, including crowd asset and workflow improvements; a new skeletal rigging and posing system to procedurally control animation; more automated, aggressive shading and geometric level of detail; and optimized geometry unrolling in Katana to significantly reduce scene processing time and file IO.

Up close with simulated crowds

We discuss advancing the fine detail of deforming hero-quality simulated crowd agents in animated feature film production. To support character animation that is suitably framed arbitrarily close to camera, our approach uses a novel deformation system that combines simulation and hero-quality custom animation. Level-of-detail optimizations are handled at render time, and artists are only tasked with the design of a single high-quality resolution for each character asset. Key optimizations in our rig structures are outlined as they are fundamental to scalability, permitting our crowds to look good while numbering in the millions.

Automating the handmade: shading thousands of garments for Coco

Coco presented a challenge for the garment shading team. Firstly, the scale of the movie is significant with both the human and skeleton worlds filled with primary, secondary, and background characters. Secondly, the garments speak to a specific culture and our shading needed to be very detailed to convey both modern Mexico and Mexican culture through time. We had to employ new techniques for shading seams and embroidery, optimizing render cost, and handling variation in crowd scenes.

Taming the swarm: rippers on Pacific Rim Uprising

When constructing shots of non-human crowds that exhibit complex behaviors, the standard approach based on the well-established rules of boid simulation is likely to fall short when used for a group of characters with "intent". In Pacific Rim Uprising, Double Negative VFX tackled the challenge of producing a large crowd of highly articulated robotic creatures performing the complex and coordinated task of "assembling" a mega-Kaiju. This task required a number of innovative approaches to both crowd authoring and rendering, and close collaboration between R&D and artists.

SESSION: Olaf's image capture adventure!

DIY absolute tele-colorimeter using a camera-projector system

Image-based reflectance measurement setups lower costs and increase the speed of reflectance acquisition. Unfortunately, consumer camera sensors are designed to produce aesthetically pleasing images, rather than faithfully capture the colors of a scene. We present a novel approach for colorimetric camera characterization, which exploits a commonly available projector as controllable light source, and accurately relates the camera sensor response to the known reflected radiance. The characterized camera can be effectively used as a 2D tele-colorimeter, suitable for image-based reflectance measurements, spectral prefiltering and spectral up-sampling for rendering, and to improve color accuracy in HDR imaging. We demonstrate our method in the context of radiometric compensation. Coupled with a gamut-mapping technique, it allows images to be seamlessly projected onto almost any surface, including non-flat, colored, or even textured ones.
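
The fitting step at the heart of a colorimetric characterization can be illustrated with a simple least-squares solve for a linear camera-RGB-to-XYZ map, as sketched below. The calibration data are hypothetical, and the talk's projector-driven procedure and radiance relation are more involved than this.

```python
# Simplified colorimetric characterization sketch: fit a 3x3 linear map from linear camera
# RGB to CIE XYZ from corresponding calibration measurements (hypothetical values below).
import numpy as np

camera_rgb = np.array([[0.82, 0.11, 0.05],    # linear (demosaiced, dark-subtracted) RGB
                       [0.21, 0.64, 0.09],
                       [0.07, 0.12, 0.78],
                       [0.45, 0.47, 0.42]])
measured_xyz = np.array([[0.39, 0.21, 0.02],  # reference colorimeter / known radiance
                         [0.32, 0.58, 0.10],
                         [0.18, 0.09, 0.85],
                         [0.40, 0.43, 0.45]])

# Least-squares 3x3 matrix M such that camera_rgb @ M ~= measured_xyz.
M, *_ = np.linalg.lstsq(camera_rgb, measured_xyz, rcond=None)
predicted_xyz = camera_rgb @ M
print(np.abs(predicted_xyz - measured_xyz).max())   # residual of the linear fit
```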

adidas TAPE: 3-D footwear concept design

3-D tools have been successfully established in several areas of the footwear design process. Yet, 3-D tools often find little adoption during initial concept creation for various reasons. These tools are often slow, difficult to use, and limit creativity in ways unacceptable to most designers. The lack of 3-D content creation at the beginning makes it inherently difficult to implement ideal production pipelines that enrich and reuse assets during all steps of content creation. At adidas, we have successfully established a simple, sketch-based 3-D tool which feeds into our 3-D design pipeline and finds astonishing acceptance within the design community. Our team presented a digital 3-D footwear design process and pipeline at SIGGRAPH 2017 [Suessmuth et al. 2017]. Tape, the first tool in this pipeline, allows our designers to create meaningful 3-D assets in the early stages of design. In this talk, we explain the origin of Tape and walk the audience through all key features, their purpose in terms of footwear design, and their implementation.

Sword tracer: visualization of sword trajectories in fencing

This paper describes a system for visualizing sword trajectories in fencing. Fencing swords are very thin and move so fast that it is difficult for audiences to follow their movements even in slow-motion video replays. The system thus tracks the tips of the swords in the image coordinates and visualizes their movements with computer graphics (CG). We call it "Sword Tracer."

The handiwork behind "Olaf's frozen adventure"

This talk presents a behind-the-scenes look into some of the visual and technical challenges creating the roughly 22-minute featurette "Olaf's Frozen Adventure," and specifically how we often incorporate 2D animation techniques into our process. Though it may not be easily apparent on screen, 2D animation is still an important part of our legacy. We often utilize 2D techniques to solve unique visual production challenges, as well as during the design phase to provide clarity to art direction leadership for approvals, which in turn provides artistic direction to artists.

SESSION: Be there or be square

Animation to games, virtual department of games at Tokyo University of the Arts

In recent years the borderline between games and animation has been blurring on the technical front, and in terms of "expression" there are many commonalities between games and animation when "creating the world" where the characters live and the story unfolds. So then, why not create video games from already existing animation titles?

The Animation to Games project, or AtoG, is a collaboration between Square Enix's Business Division 2 and Tokyo University of the Arts, in which seven animated works by the students were chosen and then directed by the students, with BD2 game creators participating as mentors over a nine-month period of game development. In this session, we would like to talk about the AtoG project and the many insights we gained about art expression and games, as well as the future of game development and education.

Making of "out of the cradle"

"Out of the Cradle" is a new documentary TV program that employs the latest cutting-edge computer graphics, and traces mankind's footsteps back to Africa, as based on the most recent academic theories. Without question, most expert studies on this topic in writing, are far too esoteric for the average person. And though the injection of an entertainment element into such areas of academia may seem the ideal solution for the purposes of education, it's often easier said than done.

With this new documentary, we believe that we were able to attain this objective by utilizing our technical skills in visual expression honed through years of making video games and full-length CG movies. In this session, we would like to introduce you to all the activities and work that went into this program up until its completion.

SESSION: Clean up your room!

Denoising at scale for massive animated series

In the modern era of physically-based shading, removing the substantial amount of high frequency noise produced by Monte Carlo rendering techniques is a key challenge for production renderers. Beyond the recent advances in sample-based and feature-based denoising, production constraints and scale introduce additional mandatory features for candidate denoisers. In this talk, we discuss how denoising is deployed in Shining, the production renderer developed by Ubisoft Motion Pictures for the Rabbids Invasion animated TV series. The scale of the show, as well as the required control for artists, led us to the integration of a sample-based denoiser, which enables per-AOV denoising control, with a minimum overhead regarding engine integration and production workflow. As a result, all-effects denoising is made possible for the new TV series season and proved useful in numerous lighting and material scenarios. At the core of the denoising pipeline, our BCD algorithm, recently made open source, provides a robust and fast mechanism to filter out Monte Carlo noise while retaining features, for complex lighting and viewing conditions, with trivial per-AOV setup.

Practical denoising for VFX production using temporal blur

We present a simple, efficient, and reliable approach to denoising final ray traced renders during VFX production. Rather than seeking to remove all noise, we combine several simple steps that reduce noise dramatically. Our method has performed well on a wide variety of shows in Image Engine's recent portfolio, including Game of Thrones Season 7, Lost in Space, and Thor: Ragnarok.

Achieving and maintaining real-time rigs

Our characters have a lot of moving parts. This complexity makes achieving and maintaining real-time performance a challenge. Our journey of bringing our rigs to 24 fps consisted of many different milestones. We aggressively adopted cutting edge technology during active production and developed a system to continuously monitor asset "health" performance metrics. New applications were created for production to monitor asset health using Blue Sky's next generation pipeline, Conduit.

Page array data structures for flexibility and performance

Visual effects impose demanding requirements for data structures and algorithms. They are expected to be flexible enough to support any idea an artist or TD could think of, while being as fast as a custom implementation developed for one purpose. Our solutions are built on page array data structures. Our arrays can represent a wide variety of geometry data, including polygons, and support reference counted page sharing and constant-value page compression for memory efficiency. Our method permits reasonably fast reading and writing in serial or parallel. We can also process data in a page-aware manner for even better performance.
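
A rough sketch of such a page array, assembled from the description above (and not the studio's actual implementation), might look like this:

```python
# Paged array sketch with constant-value page compression and page sharing.
PAGE_SIZE = 1024

class PagedArray:
    def __init__(self, length, default=0.0):
        self.length = length
        n_pages = (length + PAGE_SIZE - 1) // PAGE_SIZE
        # A constant page is stored as a single value; an expanded page is a Python list.
        self.pages = [default] * n_pages

    def __getitem__(self, i):
        page = self.pages[i // PAGE_SIZE]
        return page[i % PAGE_SIZE] if isinstance(page, list) else page

    def __setitem__(self, i, value):
        p, offset = divmod(i, PAGE_SIZE)
        page = self.pages[p]
        if isinstance(page, list):
            # Copy before writing so pages shared with other arrays are never mutated
            # (a production version would check a page reference count to skip the copy).
            page = list(page)
        else:
            # Expand a constant page on first write.
            page = [page] * min(PAGE_SIZE, self.length - p * PAGE_SIZE)
        page[offset] = value
        self.pages[p] = page

    def share_pages_from(self, other):
        """Reference another array's pages; copy-on-write happens in __setitem__."""
        self.length, self.pages = other.length, list(other.pages)

a = PagedArray(10000)                        # one constant-page slot per 1024 elements
a[5000] = 3.5                                # only the touched page is expanded
b = PagedArray(0); b.share_pages_from(a)     # shares pages until either array writes
```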

SESSION: Effects blender

The robots of LAIKA

LAIKA, one of the world's largest stop-motion studios, is also one of the industry leaders in robotic technology for film production. We present a talk explaining some of the ways we use our robots to move cameras, bounce sets, and generally work hand in actuator with our busy animators.

'The greatest showman': crafting a period New York city with scaled miniatures and painterly backgrounds

Unraveling Purl: continuing Pixar's experimental story initiative

Last year, SIGGRAPH '17 featured Smash and Grab: Off the Rails Filmmaking at Pixar [Serritella et al. 2017], kicking off the results of a new, experimental storytelling initiative at Pixar Animation Studios. This initiative enables new creative voices and explores new storytelling techniques, pipelines, and workflows in production. The program's second short film, Purl, directed by Kristen Lester, continues to raise the bar for the program, and creates another example of what's possible when filmmakers are granted this type of creative freedom within a fully functioning studio. Purl is an eight-minute short film that explores the use of "digital backlot" management, an evolution of Smash and Grab's motion capture application, shareable animation libraries, our real-time shading software "Flow," and production practices that allow a busy feature animation studio to continue telling even more stories. This talk looks at the team's process in creating Purl, as well as the impact of maintaining the initiative at Pixar.

Pacific Rim Uprising: developing the mega kaiju transformation

This talk centers on an analysis of the evolution of creative and technical workflows in developing the Mega Kaiju transformation sequence. At its essence, the challenge of this sequence is combining three creatures into a single enormous Kaiju. As the sequence was underdeveloped at the previs level, DNEG was tasked with developing the look and narrative feel of the transformation during post-production. A very compressed schedule, coupled with the technical complexities of merging creatures in a believable way, meant the team needed to lock down the vision of each individual shot while simultaneously developing pragmatic inter-departmental workflows.

SESSION: Potpourri

A holistic approach to asset quality and efficiency

ILM is known for the aesthetic quality of its assets, as well as the great volume and diversity of its VFX work. We describe here our holistic work on asset efficiency, which ensures that our thousands of assets are constructed efficiently and pass across disciplines in an optimized fashion. Our approach comprises several major components that have been used together in a novel way: an application-agnostic sanity check framework, extensive asset analytics, and a QC movie and review system.

Lighting pipeline for one: or how to keep sane in a discworld

Troll Bridge is a crowdfunded live-action short film more than 15 years in the making and involving the work of more than 400 volunteers across the world. In order to light and render over 500 shots it was necessary to build a workflow driven pipeline with minimal development overhead. In this talk we discuss how we brought the full CG environment of the bridge and the talking horse character to life by designing a lighting and rendering pipeline tailored to the needs of an independent working artist. We give insights into the challenges we encountered and how we kept our render budget low while still delivering feature film quality.

Fast, high precision ray/fiber intersection using tight, disjoint bounding volumes

We improve the performance of subdivision-based ray/fiber intersection for fibers along Bézier curves by pruning with tight, disjoint bounding volumes in ray-centric coordinate systems. The resulting method calculates precise intersections on the surface of a fiber with accurate normals, and performs significantly faster for a high number of subdivisions than state-of-the-art methods pruning sub-regions with axis-aligned bounding boxes.

Efficient hybrid volume and texture based clouds

We developed workflows for Cars 3 and Incredibles 2 that maintain most benefits of volumetric clouds while leveraging the render speed and artistic control of traditional texture-based approaches. For Cars 3, we rendered volumetric clouds using Houdini and RIS to create layered reference plates that matte painters used to quickly paint sky texture maps. The Houdini-based workflow let us intuitively scout sequences and set up paintings. We detail artist-friendly tools in Photoshop that allow for intuitive composition adjustments. On Incredibles 2, we replaced the Photoshop portion of the pipeline with Nuke and rendered three passes: key, fill, and world position. Standardizing the workflow around world-aligned, high-resolution, hemispherical textures greatly streamlined the process and provided the downstream lighting department with greater control when integrating the sky with the foreground.

SESSION: Production junction

DNEG at 20: creative milestones

DNEG has grown from a small studio with 30 employees in 1998 to a world-leading giant with over 5,000 employees 20 years later. This talk will celebrate some of the major creative milestones that established its prowess, led to significant award wins, and solidified its relationships with many regular key creative collaborators.

SESSION: Gouging the surface

Making space for cloth simulations using energy minimization

Geometry interpenetrations are a common issue in creature effects workflows, particularly in cases which require simulations, for example hair and clothing. Production rigs often introduce self intersections in regions like armpits and elbows, which can cause ugly instabilities and undesirable behavior when they interfere with simulated objects such as garments. To achieve visually acceptable results, these simulations often require a small gap to allow sliding between opposing surfaces, and the process of making these modifications can often be quite manual. Here we present a production proven creature effects tool for resolving these issues automatically.

Clean cloth inputs: removing character self-intersections with volume simulation

Simulation artists frequently work with characters that self-intersect. When these characters are sent as inputs to a cloth simulator, the results can often contain terrible artifacts that must be addressed by tediously sculpting either the input characters or the output cloth. In this talk, we apply volume simulation to character meshes and remove self-intersections before they are sent to the cloth simulator. The technique has successfully dealt with very challenging animation scenarios in a production setting, and was applied to all the characters on the short film Bao.

Patch-based surface relaxation

From rigging to post-simulation cleanups, surface relaxation is a widely used procedure in feature animation. Over the years, Pixar has experimented with several techniques for this task, mostly based on variants of Laplacian smoothing. Notably, none of the existing approaches is suited to reproduce the patch layout of a baseline mesh. This is of particular interest for modeling the span of edge flows, or for restoring the rest configuration of a mesh under large deformations. To achieve this goal, we developed a new patch-aware relaxation method for general polygonal meshes. Our approach encompasses three main contributions. We first introduce a weighting scheme that uses local decal maps to encode the structure of edge flows formed by the desirable patch layout. We then propose an update rule that transfers a reference patch arrangement to a deformed mesh. To control volume preservation, we also present a surface-constrained regime that exploits decal maps to slide points within the surface. We demonstrate the effectiveness and versatility of our tool with a series of examples from Pixar's short Bao and feature film Incredibles 2.
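
For background, the Laplacian smoothing update that these relaxation methods build on has the standard form

```latex
\[
\mathbf{p}_{i} \;\leftarrow\; \mathbf{p}_{i} + \lambda \left( \sum_{j \in N(i)} w_{ij}\, \mathbf{p}_{j} - \mathbf{p}_{i} \right),
\qquad \sum_{j \in N(i)} w_{ij} = 1, \quad 0 < \lambda \le 1,
\]
```

where N(i) is the set of vertices adjacent to vertex i and w_{ij} are the smoothing weights. The patch-aware weighting, decal maps, and surface-constrained regime described above are the talk's own contributions and are not captured by this formula.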

Regularization of voxel art

Voxel-based modeling is an attractive way to represent complex multi-material objects. Multi-labeled voxel models are also ubiquitous in material sciences, medical imaging, and numerical simulations. We present here a variational method to regularize interfaces in multi-labeled digital images. It builds piecewise smooth quadrangulated surfaces efficiently, with theoretical guarantees of stability. Complex topological events when several materials are considered are handled naturally. We show its usefulness for digital surface approximation, for voxel art regularization by transferring colorimetric information, and for upscaling voxel models to speed up coarse-to-fine modeling.

Procedural fluid textures

We present an efficient system for synthesizing textures over fluid surfaces in a solid texturing context. The technique is simple and intuitive for artists using modern, commercially available fluid simulators. Instead of working with 2D surface maps like other fluid texture synthesis approaches, we advect 3D reference space transforms with the fluid simulation. The reference transforms are then projected onto the final surface mesh with a radius of influence control, and used for solid texturing lookup. Ray intersections of the fluid surface interpolate any transforms with overlapping control radii to determine the reference lookup of the solid texture. The final texture exhibits excellent spatial and temporal coherence with none of the artifacts that plagued previous map based approaches.
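
The lookup described above might be sketched as follows; the falloff, the transform bookkeeping, and the solid-texture function are illustrative placeholders rather than the talk's exact formulation.

```python
# Sketch of a fluid-texture lookup: advected reference-space transforms are blended by
# radius-of-influence weights at a surface point, and the blended reference coordinates
# feed an ordinary solid-texture function.
import numpy as np

def solid_texture(p):                        # any procedural solid texture; cheap stand-in
    return 0.5 + 0.5 * np.sin(p[0] * 12.9898 + p[1] * 78.233 + p[2] * 37.719)

def fluid_texture(surface_point, transforms, radii):
    """transforms: 4x4 world-to-reference matrices advected with the fluid simulation."""
    value, total_w = 0.0, 0.0
    hom = np.append(surface_point, 1.0)
    for M, radius in zip(transforms, radii):
        # inv(M) maps reference to world, so its translation is the transform's advected
        # world-space position; distance to it drives the radius-of-influence weight.
        origin = np.linalg.inv(M)[:3, 3]
        d = np.linalg.norm(np.asarray(surface_point) - origin)
        if d < radius:
            w = 1.0 - d / radius                       # simple linear falloff
            ref_point = (M @ hom)[:3]                  # look up in the reference space
            value += w * solid_texture(ref_point)
            total_w += w
    return value / total_w if total_w > 0 else solid_texture(np.asarray(surface_point))
```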

SESSION: For the love of tech art

The technical art of Sea of Thieves

Sea of Thieves posed a unique challenge - developing a stylised, open world game in Unreal Engine 4, a demanding and contemporary game engine focused on photo-realistic rendering. Our game contains a large number of dynamic elements and is designed to run on hardware ranging from integrated GPUs on a laptop, to the most powerful modern gaming PCs. Over the course of development, we have come up with a number of innovative techniques focused both on keeping an open world game like ours performant and visually appealing.

We introduced several techniques that we used to stylise and supplement the look of our FFT water implementation for the game's oceans. We also created a new cloud rendering and simulation system for this game, allowing for fast rendering of three-dimensional, art-directed cloudscapes without using expensive raymarching techniques.

To bring the world to life, we also developed other graphical features, including a physically based system for rendering rope-and-pulley systems, the baking of simulation data to textures, and real-time surface fluid simulations on the GPU to model incidental water behaviour.

Reinterpreting memorable characters in Incredibles 2

Unlike other Pixar sequels in which characters must be resurrected, on Incredibles 2, we were encouraged to delve into the original archived character designs and deliver on qualities that could not be achieved prior to 2004 when the first Incredibles was made. More than any other film, we leaned on 2D drawing and design techniques to drive the way we modeled and rigged. We share our methods on how we both redesigned and stayed true to the essence of these legacy characters.

Making Coco's Pepita

Capturing the authenticity of Mexican culture was a major focus on Coco and was exemplified in the making of "Pepita". As one of the film's most memorable characters, she was inspired by Mexican folk-art creatures known as "alebrijes". Balancing the performance needs of the character while retaining the cultural details presented several challenges. We needed to evoke the chiseled look of wooden alebrijes while maintaining the range of expressivity desired in animation. We had to keep the graphic patterns on the wings and fur from distorting by avoiding feather-with-feather intersections and orienting the design to work with simulated hair motion. Finally, we had to integrate these brightly colored and fantastic creatures into believable environments.

SESSION: Skinny & flexible

Making Mrs. Incredible more flexible

In Pixar's Incredibles 2, Mrs. Incredible was once again tasked with using her power of elasticity to help those in need. Advances in technology since the first film came out nearly 15 years ago, coupled with increased audience sophistication and finer scrutiny of detail, demanded that the stretching of the model, shading, and garments look more believable than in the previous movie. Helen's wide range of actions takes a more prominent role in this film and is oftentimes close to camera. To address these potential issues, new techniques had to be developed and new processes added to our Cloth, Shading, and Rigging pipelines.

Robust skin simulation in Incredibles 2

Robustly simulating the dynamics of skin sliding over a character's body is an ongoing challenge. Skin can become non-physically "snagged" in curved or creased regions, such as armpits, and create unusable results. These problems usually arise when it becomes ambiguous which kinematic surface the skin should be sliding along. We have found that many of these problems can be addressed by performing 2D ray-tracing over the surface of the mesh. The approach is fast and robust, and has been used successfully in Incredibles 2.

Mobilizing mocap, motion blending, and mayhem: rig interoperability for crowd simulation on Incredibles 2

The stylized world of Incredibles 2 features large urban crowds both in everyday situations and in scenes of panicked mayhem. While Pixar's now Academy Award-winning animation software, Presto, has allowed us to create expressive and nuanced rigs for our crowd characters, our proprietary approach has made it difficult to utilize animation from external sources, such as crowd simulations or motion capture. In this talk, we discuss how we can automatically approximate our complex rigs with skinned skeletons, as well as how this has opened up our crowd pipeline to procedural look-ats, motion blending, ragdoll physics, and motion capture. In particular, the use of motion capture is novel for Pixar, and finding a way to integrate this workflow into our animator-centric pipeline and culture has been an ongoing effort. The system we designed allows us to capture motion data for multiple characters in the context of complex shots in Presto, and it facilitates choreography of nuanced and specifically timed crowd motions. Together with traditional hand-animated motion cycles, our crowd choreography tools in Presto [Arumugam et al. 2013], and skeletal agent-based simulation in SideFX's Houdini [SideFX n.d.] via our MURE tools [Gustafson et al. 2016], the crowds team on Incredibles 2 produced rich scenes of busy streets and urban panic.

Bringing skeletons to life for Coco

What is a Pixar skeleton? Answering this question presented a significant challenge to Coco's character team as we balanced our stylized production aesthetic with the desire to make realistic and believable skeletons. We discuss our challenges and decisions through character modeling, rigging, and shading.

SESSION: USD certified lean, eh?

Zero to USD in 80 days: transitioning feature production to Universal Scene Description at DreamWorks

Productions at DreamWorks starting with How To Train Your Dragon 3 will use Universal Scene Description (USD) [Pixar Animation Studios 2016] as the primary asset and shot representation across the production pipeline, from modeling to compositing. In this talk, we discuss the motivation for adopting USD at DreamWorks and our strategies for adoption on a highly constrained timeline - 80 working days from the initial discussion to having the first production-ready USD scenes. We review our methodology for organizing and planning an extensive USD integration, present details of our implementation, and discuss the successes and challenges encountered in the adoption process.

Forging a new animation pipeline with USD

The Peter Rabbit movie features five hero characters and dozens of secondary characters animated across more than 1,100 shots.

We introduce some practical and production-proven solutions for integrating Pixar's Universal Scene Description (USD) into Autodesk Maya® based on our now open-sourced AL_USDMaya plugin, and describe how they were used to create a high-performance and intuitive animation platform.

Using USD shading to provide the "extra" touch on Incredibles 2

We present a shading pipeline that provides immediate basic shading for background human skin and hair at creation time, while adapting and evolving to the show's needs and artistic direction. Along with our new hair system in Presto, new USD applications such as usdview were widely used, from simple asset context viewing through all attribute and parameter inspection and debugging processes. For the first time, groom, model, rig, and simulation artists were working on the same platform as our animators. And while shading was not in Presto, Universal Scene Description (USD) wove everything together in a highly accessible and inspectable format. This approach allowed us to create significant numbers of background characters within a short time frame.

Walter: an open source VFX framework for USD and alembic

In a VFX studio, sharing 3D content between departments can be very challenging. Each department produces artistic work using data coming from previous ones. Since different software packages are used, such a pipeline is cluttered with many file formats, leading to a lot of data duplication and incompatibility. Finally, as scene complexity increases every year, interacting with such data can be extremely difficult for artists.

SESSION: Visual visage

Digital Albert Einstein, a case study

We present the production process for a series of short films featuring a digital actor with the likeness of Albert Einstein. The results are an artistic interpretation of Albert Einstein reappearing in a contemporary context, citing some of his famous quotes. This homage to the physicist and humanist further investigates how documentary film formats can extend their horizon by meaningful inclusion of digital actors. The creation process relied on a set of specialized tools which reduced the labor effort significantly. Digital assets have been released under Creative Commons to support the ongoing effort in creating convincing digital characters.

Avengers: capturing Thanos's complex face

In Marvel's Avengers: Infinity War, Thanos (played by actor Josh Brolin) is entirely CG and is one of the main characters in this live-action movie. The plot depends on the emotional performances of this digital creature, and it was imperative that Thanos's facial performance convey the actor's performances faithfully. Digital Domain's performance capture process, Direct Drive, is a major departure from traditional blendshape solver techniques and was used to create Thanos's performances. We will present an overview of our updated multistage facial retargeting process. We have removed the reliance on high-resolution, per-shot facial capture and refined the process of training the system. This system is faster to set up, needs far less artist input, and preserves elements of the performance that were previously lost using traditional facial capture techniques.

High-quality, cost-effective facial motion capture pipeline with 3D regression

We present our improved marker-based facial motion capture pipeline that leverages 3D regression from head-mounted camera (HMC) images to speed up and reduce the cost of high-quality 3D marker tracking. We use machine learning to boost productivity by training regressors on traditionally tracked performances and applying those models to the remaining performances. Our specialized regressor for HMC marker-based tracking shows improvements in quality and robustness for marker tracks. The regressor results are automatically refined by a simple blob detection tool and then imported back into the tracking tool, such that manual correction can be applied as needed and subsequently included as additional training data. This iterative approach reduces by 70% the artist time required for traditional tracking methods and does not add much setup or planning time compared with alternative techniques.

It: how to build a terrifying clown

SESSION: Creating the unreal

Rampage: a product of evolution

Over the span of more than twenty years, Weta Digital has brought numerous memorable digital characters to the screen. For the feature film Rampage, we leveraged those two decades of experience and improved upon existing internal methods to bring to life contemporary versions of monsters inspired by the 1986 Midway arcade game. The destruction wrought by these creatures also prompted the creation of a highly-detailed debris field in the middle of present day Chicago, where the finale of the story takes place.

Accelerating film environment creation using game development tools

For Justice League, MPC faced the challenge of creating an abandoned city nearly twelve square kilometers in size, and for A Wrinkle in Time, a fast-moving forest sequence that required rapid iterations with closely managed art direction. To deliver the high quality that viewers expect with the control that filmmakers require, we needed procedural, extensible tools that still allow for detailed artistic control. In this paper we look at how MPC leveraged real-time game engines such as Unreal Engine and Unity to produce massive environments and speed up the iteration cycle of complex VFX.

Creating the unreal: speculative visions for future living structures

Science fiction films such as Blade Runner have taken us to worlds where we can experience and share future visions of cities, structures, and lifestyles conceived by prominent creators. It is an exciting moment not only for experts in leading production studios but also for many digital designers, as new game engine technologies allow us to create and walk through imaginary virtual environments almost on the fly. However, how feasible and credible those spectacular visions really are---based on speculative yet thoroughly rigorous (not sci-fi) scientific studies---is a good question.

SESSION: Tripping the light VR

The making of welcome to light fields VR

Light fields can provide transportive immersive experiences with a level of realism unsurpassed by any other imaging technology. Within a limited viewing volume, light fields accurately reproduce stereo parallax, motion parallax, reflections, refractions, and volumetric effects for real-world scenes. While light fields have been explored in computer graphics since the mid-1990s [Gortler et al. 1996; Levoy and Hanrahan 1996], practical systems for recording, processing, and delivering high quality light field experiences have remained out of reach.

In this talk, we describe the hardware and software developed to produce the Welcome to Light Fields virtual reality experience, the first immersive light field experience that can be downloaded and viewed on VR-ready computers with HTC Vive, Oculus Rift, and Windows Mixed Reality HMDs. This piece enables the user to step into a collection of panoramic light field still photographs and be guided through a number of light field environments, including the flight deck of Space Shuttle Discovery. Welcome to Light Fields is a free VR app on the Steam Store that was launched in March 2018 [Google VR 2018].

This piece is the culmination of a multi-year effort to develop consumer-ready light field technology. We built new camera rigs for light field capture and new algorithms for light field rendering. In this talk, we delve into the details from capture to rendering and the steps in between.

We emphasize that our system generates stereoscopic views of a light field for use in current generation consumer VR HMDs. As future work, it could be of interest to display such datasets on near eye light field displays [Lanman and Luebke 2013] to render effects of visual accommodation.

Fractal multiverses in VR

Fractals are complex mathematical structures that have interested the graphics community since their inception. We present our design decisions for an interactive fractal explorer and a novel approach for rendering fractals on VR headsets at high frame rates, through the use of stereo reprojection techniques and conemarching for distance estimation.

VR story production on Disney animation's "cycles"

In this talk, we will explore the design, challenges, collaborative workflows, and technological execution of "Cycles", a VR short film created within Walt Disney Animation Studios' internal "Short Circuit" program for professional development.

Of particular note will be the incorporation of various VR techniques and technologies into the production pipeline as a means of facilitating our team's ambitious creative goals.

SESSION: Light it up

GafFour and sequence-based lighting

Sequence-based lighting has become increasingly popular to further improve efficiency at Imageworks as we are producing thousands of full CG shots each year. However, Katana slows down dramatically as the lighting setup becomes more complicated to accommodate the growing number of nodes which may make up the different shots. We analyzed a large number of sequence-based lighting scene files that were identified as slow and found that these setups usually had tens of gaffer nodes, which each contained thousands of nodes for light creation and manipulation. To solve this problem we implemented a custom gaffer node for Katana, GafFour, which greatly reduced the total number of nodes per scene file.

KatanaForFX: intertwine FX and lighting

On many shows we work on at MPC, we have to deal with shots containing a high number of FX elements of various types (particles, volumes, animated geometry). A large majority of these effects are eventually rendered by the Lighting department in Katana and RenderMan. However, the FX elements are crafted in either Houdini or Maya, where the FX artists also do their own renders using RenderMan or Mantra. The FX artists often take great care with the shaders and materials they use to present their work, as these can have an important impact on the perceived shape and behaviour of their simulation, especially for volumes and particles. The use of different software and renderers to produce the renders in the FX and Lighting departments leads to significant differences between the dailies presented by FX and the renders done in Lighting, requiring more time for the Lighting artists to match the look approved in FX.

The KatanaForFX initiative put in place a new workflow that makes it easy for FX artists to generate their final renders in Katana and RenderMan, save their setup as a released asset, and hand it to the Lighting artists without requiring any prior knowledge of Katana or interrupting their usual workflows. KatanaForFX enables the FX artists to focus on the design of their simulation itself while presenting them with the look developed by the Lighting department. The Lighting artists can in turn receive exactly the settings defined by the FX department to reproduce their renders, as well as develop the materials and shaders for the FX elements simultaneously.

SESSION: Sampling the product

Adaptive environment sampling on CPU and GPU

We present a production-ready approach for efficient environment light sampling which takes visibility into account. During a brief learning phase we cache visibility information in the camera space. The cache is then used to adapt the environment sampling strategy during the final rendering. Unlike existing approaches that account for visibility, our algorithm uses a small amount of memory, provides a lightweight sampling procedure that benefits even unoccluded scenes and, importantly, requires no additional artist care, such as manual setting of portals or other scene-specific adjustments. The technique is unbiased, simple to implement and integrate into a render engine. Its modest memory requirements and simplicity enable efficient CPU and GPU implementations that significantly improve the render times, especially in complex production scenes.
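
A schematic sketch of visibility-aware environment sampling in this spirit is shown below. The cell and stratum granularity, the learning update, and the weighting are placeholders, not the talk's method; the floor on the visibility term keeps every stratum samplable so the estimator stays unbiased.

```python
# Visibility-aware environment sampling sketch: cache per-cell visibility of coarse
# environment strata during a learning pass, then sample strata proportionally to
# luminance times cached visibility at render time.
import numpy as np

class EnvCache:
    def __init__(self, n_cells, n_strata, env_luminance):
        self.visibility = np.ones((n_cells, n_strata))   # start fully unoccluded
        self.luminance = np.asarray(env_luminance)       # per-stratum average luminance

    def learn(self, cell, stratum, visible):
        # Exponential moving average of observed visibility during the learning phase.
        self.visibility[cell, stratum] = 0.9 * self.visibility[cell, stratum] + 0.1 * visible

    def sample_stratum(self, cell, rng):
        w = self.luminance * np.maximum(self.visibility[cell], 1e-3)  # keep pdf > 0
        pdf = w / w.sum()
        stratum = rng.choice(len(pdf), p=pdf)
        return stratum, pdf[stratum]          # caller divides the contribution by this pdf

cache = EnvCache(n_cells=64, n_strata=8, env_luminance=np.linspace(0.1, 2.0, 8))
cache.learn(cell=3, stratum=7, visible=0.0)   # observed occlusion during learning
print(cache.sample_stratum(cell=3, rng=np.random.default_rng(1)))
```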

Fast product importance sampling of environment maps

Environment maps have been used for decades in production path-tracers to recreate ambient lighting conditions captured from real-world scenes. Stochastic sampling of the radiance integral can be very challenging, however, as both the BSDF and the environment can have strong peaks that are not aligned with each other. Multiple importance sampling (MIS) between the environment and the BSDF is a common way to reduce variance by re-weighting each estimator, but can still result in wasted samples. Product importance sampling is an effective way to reduce the variance by drawing samples using a probability distribution built from the product of the BSDF and the environment map. To our knowledge, the most practical product sampling technique [Clarberg and Akenine-Möller 2008] is still relatively costly for production rendering because it approximates the BSDF by a sparse quad-tree built on the fly from a few hundred BSDF samples. Due to the high complexity of the multi-lobed models used in film rendering, this cost can be prohibitive.
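
For reference, the standard formulas behind this discussion (not the talk's contribution) are the balance-heuristic MIS estimator and the ideal product density:

```latex
\[
\langle L \rangle =
  \frac{w_{b}(\omega_{b})\, f_{r}(\omega_{b})\, L_{\mathrm{env}}(\omega_{b})\, \cos\theta_{b}}{p_{b}(\omega_{b})}
+ \frac{w_{e}(\omega_{e})\, f_{r}(\omega_{e})\, L_{\mathrm{env}}(\omega_{e})\, \cos\theta_{e}}{p_{e}(\omega_{e})},
\qquad
w_{k}(\omega) = \frac{p_{k}(\omega)}{p_{b}(\omega) + p_{e}(\omega)},
\]
```

where p_{b} and p_{e} are the BSDF and environment sampling densities. Product importance sampling instead draws \omega from a single density p(\omega) \propto f_{r}(\omega)\, L_{\mathrm{env}}(\omega) \cos\theta, which is what the quad-tree approximation of Clarberg and Akenine-Möller makes tractable.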

Bidirectional path tracing using backward stochastic light culling

Bidirectional path tracing (BPT) produces noticeable variance for specular-diffuse-specular reflections even if they are not perfectly specular. This is because sampling of the connection between a light vertex and an eye vertex does not take bidirectional reflectance distribution functions (BRDFs) into account. This paper presents a novel unbiased sampling method, referred to as backward stochastic light culling, which addresses the problem of specular-diffuse-glossy reflections. Our method efficiently performs Russian roulette for many light vertices according to the glossy BRDF at a given eye vertex using a hierarchical culling algorithm. We combine our method with light vertex cache-based BPT using multiple importance sampling to significantly reduce variance when rendering caustics reflected on highly glossy surfaces.
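
The core ingredient, Russian roulette of light vertices driven by the glossy BRDF at the eye vertex, can be sketched as follows. This toy version loops over all light vertices with a Phong-like lobe as the importance estimate and keeps the estimator unbiased by re-weighting accepted connections with 1/p; the paper's hierarchical culling over many vertices is not reproduced here, and all names and parameters are illustrative.

```python
import math, random

def russian_roulette_connections(eye_vertex, light_vertices, glossy_lobe_dir,
                                 exponent, budget, rng=random):
    """Accept each light vertex with probability proportional to a cheap glossy-BRDF estimate."""

    def importance(lv):
        # Glossy lobe value towards the light vertex, attenuated by squared distance.
        d = [lv[i] - eye_vertex[i] for i in range(3)]
        dist2 = sum(x * x for x in d) or 1e-8
        inv_len = 1.0 / math.sqrt(dist2)
        cos_lobe = max(0.0, sum((d[i] * inv_len) * glossy_lobe_dir[i] for i in range(3)))
        return (cos_lobe ** exponent) / dist2

    imps = [importance(lv) for lv in light_vertices]
    total = sum(imps) or 1e-8
    connections = []
    for lv, imp in zip(light_vertices, imps):
        p = min(1.0, budget * imp / total)        # expected acceptances stay below `budget`
        if p > 0.0 and rng.random() < p:
            connections.append((lv, 1.0 / p))     # (vertex, unbiased re-weighting factor)
    return connections

# Usage: vertices roughly aligned with the glossy lobe survive far more often.
eye, lobe = (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)
lights = [(0.1, 0.0, 2.0), (3.0, 0.0, 0.1), (0.0, 0.2, 1.5)]
print(russian_roulette_connections(eye, lights, lobe, exponent=50.0, budget=2.0))
```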

Fast path space filtering by jittered spatial hashing

Restricting path tracing to a small number of paths per pixel for performance reasons rarely achieves satisfactory image quality for scenes of interest. However, path space filtering can dramatically improve visual quality by sharing information across vertices of paths classified as "nearby". While contributions can thus be filtered in path space, even beyond the first intersection, searching for "nearby" paths is more expensive than filtering in screen space. We greatly reduce this performance penalty by storing and looking up the required information in a hash map, using hash keys constructed from jittered and quantized vertex information, so that a single query replaces costly neighborhood searches.
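
A minimal sketch of the hashing idea follows: the vertex position is jittered by up to one cell before quantization, so the hard cell boundaries become a stochastic blur and averaging within a hash cell approximates a neighborhood filter with a single query. The key construction, cell size, and normal bucketing below are illustrative assumptions, not the authors' exact scheme.

```python
import random
from collections import defaultdict

def jittered_hash_key(position, normal, cell_size, rng=random):
    """Hash key from jittered, quantized vertex data (illustrative sketch)."""
    jittered = tuple(p + (rng.random() - 0.5) * cell_size for p in position)
    qpos = tuple(int(p // cell_size) for p in jittered)   # quantized, jittered position
    qnrm = tuple(int(round(n * 2)) for n in normal)       # coarse normal bucket
    return hash((qpos, qnrm))

accum = defaultdict(lambda: [0.0, 0])    # key -> [sum of contributions, count]

def splat(position, normal, contribution, cell_size=0.1):
    # Accumulate a path vertex's contribution into its (jittered) hash cell.
    key = jittered_hash_key(position, normal, cell_size)
    accum[key][0] += contribution
    accum[key][1] += 1
    return key

def filtered_value(key, fallback):
    # A single hash lookup replaces a costly neighborhood search.
    s, n = accum[key]
    return s / n if n > 0 else fallback

# Usage: two nearby vertices with similar normals usually land in the same cell
# and therefore share their averaged contribution.
k1 = splat((0.51, 0.20, 1.00), (0.0, 0.0, 1.0), contribution=0.8)
k2 = splat((0.52, 0.21, 1.01), (0.0, 0.0, 1.0), contribution=0.4)
print(filtered_value(k1, fallback=0.8))
```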

SESSION: Ohooo shiny!

Automatic photo-from-panorama for Google Maps

We introduce a technique for extracting interesting photographs from 360° panoramas. We build on the success of convolutional neural networks for classification to train a model that scores a given view, and use this score to find the best view. Training data for this classification model is generated automatically from landmark detections within Street View panoramas. We validate that our selected views are often preferred over manually chosen ones, and we have observed an increase in user interaction when automatically selected views are shown on Google Maps.
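
The selection step reduces to scoring candidate views and taking the best one, as in the small sketch below. The candidate grid of headings, pitches, and fields of view is an assumption for illustration, and `score_fn` stands in for the trained classification model, which is not reproduced here.

```python
from itertools import product

def best_view(panorama, score_fn, headings=range(0, 360, 30),
              pitches=(-10, 0, 10), fovs=(60, 90)):
    """Pick the (heading, pitch, fov) whose rendered view gets the highest model score."""
    candidates = product(headings, pitches, fovs)
    return max(candidates, key=lambda c: score_fn(panorama, *c))

# Usage with a dummy scorer that prefers a heading of 90 degrees, no tilt, and 90 degree FOV.
def dummy_score(_, heading, pitch, fov):
    return -abs(heading - 90) - abs(pitch) - abs(fov - 90)

print(best_view(panorama=None, score_fn=dummy_score))   # -> (90, 0, 90)
```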

Classified texture resizing for mobile devices

Power consumption is one of the most important factors in mobile computing. For high-quality games in particular, rendering visual effects takes a lot of computing power. In order to reduce this, some rendering techniques (e.g., Samsung Game Tuner) adjust rendering parameters (screen resolution, frame rates, and texture sizes) to improve power efficiency or performance. Among these, texture resizing reduces power consumption in some cases, but it sometimes results in poor rendering quality or no energy savings.

To improve on plain texture resizing, we present the classified texture resizing technique. Our main idea is to classify textures into certain types and to apply a different approach to each type. As a result, our approach minimizes degradation of rendering quality and can be applied to a wider range of applications. Our experimental results show up to 16% power reduction for the GPU and DRAM.
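
As a rough illustration of per-type resizing, the sketch below classifies a texture by a crude gradient-energy heuristic and picks a downscale factor per class. The classes, thresholds, and factors are invented for illustration and are not the classification or parameters used in the talk.

```python
import numpy as np

def classify_texture(tex):
    """Toy classifier: near-constant, ordinary detail, or very high-frequency (e.g. text/UI)."""
    gy, gx = np.gradient(tex.astype(np.float32))
    energy = float(np.mean(np.abs(gx)) + np.mean(np.abs(gy)))
    if energy < 1.0:
        return "smooth"
    if energy < 10.0:
        return "detail"
    return "high_frequency"

# Illustrative per-class scale factors: shrink smooth textures aggressively,
# keep high-frequency textures at full resolution to avoid visible degradation.
RESIZE_FACTOR = {"smooth": 0.25, "detail": 0.5, "high_frequency": 1.0}

def resized_shape(tex):
    factor = RESIZE_FACTOR[classify_texture(tex)]
    h, w = tex.shape[:2]
    return max(1, int(h * factor)), max(1, int(w * factor))

# Usage: a flat texture shrinks to a quarter of its resolution, random noise does not.
flat = np.full((256, 256), 128, dtype=np.uint8)
noise = np.random.default_rng(0).integers(0, 256, (256, 256)).astype(np.uint8)
print(resized_shape(flat), resized_shape(noise))
```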

Deep thoughts on deep image compression

Deep image compositing has, in the last decade, become an industry-standard approach to combining multiple computer-generated elements into a final frame. With rich support for multiple depth-specified samples per pixel, deep images overcome many of the challenges previously faced when trying to combine multiple images using simple alpha channels and/or depth values.

A practical challenge when using deep images, however, is managing the data footprint. The visual fidelity of computer-generated environments, characters and effects is continually growing, typically resulting in both a higher number of elements and greater complexity within each element. It is not uncommon for the size of deep image collections to be measured in gigabytes which, as more and more visual effects facilities establish a global presence, raises significant concerns about timely overseas data transfer. Further, as deep images flow through compositing networks, the high sample count contributes to longer processing times.

Our observation is that, with a richer contextual understanding of the target composite, systems - both automatic and artist-controlled - can be built to significantly compress deep images such that there is no perceptual difference in the final result.
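
One simple form such compression can take is merging a pixel's depth samples, as in the sketch below: adjacent samples within a depth tolerance are combined with the associative 'over' operator, which leaves the flattened composite unchanged while reducing depth resolution. The representation and tolerance-based criterion are illustrative assumptions; in the spirit of the talk, a production system would derive the tolerance from knowledge of the target composite rather than a fixed constant.

```python
def merge_deep_pixel(samples, depth_tol):
    """Merge depth-adjacent samples of one pixel; `samples` is depth-sorted
    as (depth, premultiplied_rgb, alpha) tuples (illustrative sketch only)."""
    if not samples:
        return []
    merged = [list(samples[0])]
    for depth, rgb, alpha in samples[1:]:
        front_depth, front_rgb, front_alpha = merged[-1]
        if depth - front_depth < depth_tol:
            # 'over' of front sample over back sample, using premultiplied colors.
            new_rgb = tuple(f + b * (1.0 - front_alpha) for f, b in zip(front_rgb, rgb))
            new_alpha = front_alpha + alpha * (1.0 - front_alpha)
            merged[-1] = [front_depth, new_rgb, new_alpha]
        else:
            merged.append([depth, rgb, alpha])
    return [tuple(s) for s in merged]

# Usage: three samples collapse to two because the first pair lies within tolerance.
pixel = [(1.00, (0.2, 0.0, 0.0), 0.5),
         (1.01, (0.0, 0.3, 0.0), 0.6),
         (5.00, (0.0, 0.0, 0.4), 1.0)]
print(merge_deep_pixel(pixel, depth_tol=0.05))
```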

Synthesizing panoramas for non-planar displays: a camera array workflow

In this talk we present a production workflow to generate panoramic high-resolution images for location-based entertainment and other semi-immersive visualization environments. Typically, the display screens at these installations are an integral part of the surrounding architecture and have arbitrary non-planar surfaces. Our workflow is designed to minimize the distortions caused by the screen shape and optimize rendering of the high-resolution images, while leveraging our existing feature film pipeline, which uses a standard perspective linear-projection camera model.
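
A minimal sketch of how a camera-array render might be resampled onto a non-planar screen is given below: each output pixel's world-space position on the screen surface is assigned to one camera and projected through that camera's standard pinhole model. The camera-selection heuristic (smallest angle to the optical axis), the camera dictionary layout, and the nearest-texel lookup are all assumptions for illustration, not the studio pipeline.

```python
import numpy as np

def resample_to_screen(screen_points, cameras, images):
    """screen_points: (N, 3) world positions of output pixels on the screen surface.
    cameras: dicts with unit 'pos', 'forward', 'right', 'up', focal length 'f' (pixels)
    and principal point 'center'. images: per-camera rendered frames images[i][y, x]."""
    out = np.zeros((len(screen_points), 3), dtype=np.float32)
    for n, p in enumerate(screen_points):
        # Pick the camera whose optical axis points most directly at this screen point.
        dirs = [ (p - c['pos']) / np.linalg.norm(p - c['pos']) for c in cameras ]
        i = int(np.argmax([d @ c['forward'] for d, c in zip(dirs, cameras)]))
        cam, d = cameras[i], p - cameras[i]['pos']
        # Standard perspective (linear) projection into that camera's image.
        z = d @ cam['forward']
        x = cam['f'] * (d @ cam['right']) / z + cam['center'][0]
        y = cam['f'] * (d @ cam['up']) / z + cam['center'][1]
        h, w = images[i].shape[:2]
        out[n] = images[i][int(np.clip(y, 0, h - 1)), int(np.clip(x, 0, w - 1))]
    return out

# Usage with a single camera and a constant white frame.
cam = {'pos': np.zeros(3), 'forward': np.array([0., 0., 1.]),
       'right': np.array([1., 0., 0.]), 'up': np.array([0., 1., 0.]),
       'f': 100.0, 'center': (64, 64)}
img = np.ones((128, 128, 3), dtype=np.float32)
print(resample_to_screen(np.array([[0.0, 0.0, 2.0]]), [cam], [img]))
```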

SESSION: Blow it up real good

Star Wars: The Last Jedi - effects simulation: Industrial Light and Magic

For Star Wars: The Last Jedi, Industrial Light and Magic had to create a vast number of effects simulation shots to make the story envisioned by the filmmakers believable. The film posed not only highly technical challenges to the effects team, but also strong aesthetic requirements in order to deliver such a highly anticipated movie.

From taking down massive spaceships during space battles to simulating interactions between characters and the various environments, a wide range of techniques and new developments had to be created to handle the huge amount of work across all four of the company's facilities.

A collocated spatially adaptive approach to smoke simulation in Bifrost

Simulations of smoke are pervasive in the production of visual effects for commercials, movies and games: from cigarette smoke and subtle dust to large-scale clouds of soot and vapor emanating from fires and explosions. In this talk we present a new Eulerian method that targets the simulation of such phenomena on a structured spatially adaptive voxel grid --- thereby achieving an improvement in memory usage and computational performance over regular dense and sparse grids at uniform resolution.

Contrary to e.g. Setaluri et al. [2014], we use velocities collocated at voxel corners which allows sharper interpolation for spatially adaptive simulations, is faster for sampling, and promotes ease-of-use in an open procedural environment where technical artists often construct small computational graphs that apply forces, dissipation etc. to the velocities. The collocated method requires special treatment when projecting out the divergent velocity modes to prevent non-physical high frequency oscillations (not addressed by Ferstl et al. [2014]). To this end we explored discretization and filtering methods from computational physics, combining them with a matrix-free adaptive multigrid scheme based on MLAT and FAS [Trottenberg and Schuller 2001].

Finally we contribute a new volumetric quadrature approach to temporally smooth emission which outperforms e.g. Gaussian quadrature at large time steps. We have implemented our method in the cross-platform Autodesk Bifrost procedural environment which facilitates customization by the individual technical artist, and our implementation is in production use at several major studios. We refer the reader to the accompanying video for examples that illustrate our novel workflows for spatially adaptive simulations and the benefits of our approach. We note that several methods for adaptive fluid simulation have been proposed in recent years, e.g. [Ferstl et al. 2014; Setaluri et al. 2014], and we have drawn a lot of inspiration from these. However, to the best of our knowledge we are the first in computer graphics to propose a collocated velocity, spatially adaptive and matrix-free smoke simulation method that explicitly mitigates non-physical divergent modes.
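
The need for special treatment of the collocated projection can be seen in a tiny 1-D example: a checkerboard velocity mode is clearly divergent, yet the collocated central difference reports zero divergence everywhere, so a plain pressure projection leaves it untouched. The compact smoothing filter below is only a conceptual stand-in for the filtering methods the authors explored, not their scheme.

```python
import numpy as np

def central_divergence(u, dx=1.0):
    """1-D central-difference divergence on a collocated grid (periodic boundaries)."""
    return (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)

def damp_checkerboard(u):
    """Compact [1/4, 1/2, 1/4] filter that removes the +1/-1 oscillation
    the central difference cannot see (illustrative stand-in only)."""
    return 0.25 * np.roll(u, 1) + 0.5 * u + 0.25 * np.roll(u, -1)

# A pure checkerboard velocity mode on a collocated grid.
u = np.array([1.0, -1.0] * 8)
print(np.max(np.abs(central_divergence(u))))   # 0.0 -> invisible to the projection
print(np.max(np.abs(damp_checkerboard(u))))    # 0.0 -> the mode is removed by filtering
```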

Rampage: a pipelined approach to managing large scale character driven effects

We present the workflow and methodology for managing large-scale destruction and volumetric simulations within Weta Digital's proprietary pipeline for the live-action feature Rampage. Starting with character motion and a structurally accurate, highly detailed geometric building model, we manage large asset dataflow, simulate and render rigid-body destruction, and generate volumetric events involving multiple materials.

SimpleBullet: collaborating on a modular destruction toolkit

This talk discusses the SimpleBullet destruction system, which was initially developed at Industrial Light and Magic (ILM) and subsequently adopted and extended by Walt Disney Animation Studios (WDAS) and Pixar Animation Studios (Pixar). SimpleBullet is a set of HDAs and compiled plugins built on top of SideFX's Bullet DOPs, along with modifications to the open-source Bullet rigid body solver. We discuss the toolset as a whole, as well as the pipeline integration efforts and extensions made by WDAS and Pixar.