While the LED panels used in today’s virtual production systems can display vibrant imagery within a wide color gamut, they produce problematic color shifts when used as lighting due to their “peaky” spectral output from narrow-band red, green, and blue LEDs. In this work, we present an improved color calibration process for virtual production stages which ameliorates this color rendition problem while also maintaining accurate in-camera background colors. We do this by optimizing linear color correction transformations for 1) the LED panel pixels visible in the camera’s field of view, 2) the pixels outside the camera’s field of view illuminating the subjects, and – as a post-process – 3) the pixel values recorded by the studio camera. The result is that footage shot in an RGB LED panel virtual production stage can exhibit more accurate skin tones and costume colors while still reproducing the desired colors of the in-camera background.
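As a rough illustration of the kind of per-target transform involved (not the authors’ actual calibration pipeline), a 3×3 linear color correction matrix can be fit in the least-squares sense from observed chart patches to their reference values; the patch data below is a stand-in.

```python
# Hedged sketch: fitting a 3x3 linear color-correction matrix by least squares,
# e.g. mapping camera-observed chart patches to reference RGB values.
# The patch values here are illustrative placeholders, not data from the paper.
import numpy as np

def fit_color_matrix(observed_rgb: np.ndarray, reference_rgb: np.ndarray) -> np.ndarray:
    """Solve M in reference ~= observed @ M.T for Nx3 linear-light RGB arrays."""
    M_T, *_ = np.linalg.lstsq(observed_rgb, reference_rgb, rcond=None)
    return M_T.T

# Separate matrices could be fit for in-frustum panel content, out-of-frustum
# lighting, and an in-camera post-correction, as the abstract describes.
observed = np.random.rand(24, 3)       # stand-in for measured chart patches
reference = np.random.rand(24, 3)      # stand-in for target patch values
M = fit_color_matrix(observed, reference)
corrected = observed @ M.T             # apply the 3x3 transform per patch/pixel
```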
Visual effects commonly requires both the creation of realistic synthetic humans and the retargeting of actors’ performances to humanoid characters such as aliens and monsters. Achieving the expressive performances demanded in entertainment requires manipulating complex models with hundreds of parameters. Full creative control requires the freedom to make edits at any stage of the production, which prohibits the use of a fully automatic “black box” solution with uninterpretable parameters. On the other hand, producing realistic animation with these sophisticated models is difficult and laborious. This paper describes FDLS (Facial Deep Learning Solver), Weta Digital’s solution to these challenges. FDLS adopts a coarse-to-fine and human-in-the-loop strategy, allowing a solved performance to be verified and (if needed) edited at several stages in the solving process. To train FDLS, we first transform the raw motion-captured data into robust graph features. The feature extraction algorithms were devised after carefully observing the artists’ interpretation of the 3D facial landmarks. Second, based on the observation that artists typically finalize the jaw pass animation before proceeding to finer detail, we solve for the jaw motion first and predict fine expressions with region-based networks conditioned on the jaw position. Finally, artists can optionally invoke a non-linear fine-tuning process on top of the FDLS solution to follow the motion-captured virtual markers as closely as possible. FDLS supports editing where needed to improve the results of the deep learning solution, and it can handle small daily changes in the actor’s face shape. FDLS permits reliable, production-quality performance solving with minimal training and little or no manual effort in many cases, while also allowing the solve to be guided and edited in unusual and difficult cases. The system has been under development for several years and has been used in major movies.
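As a rough illustration of the two-stage conditioning described above (not Weta Digital’s actual FDLS architecture), a coarse jaw network can be evaluated first and its output fed into finer region-based networks; all layer sizes and feature dimensions below are assumptions.

```python
# Hedged sketch of a coarse-to-fine facial solve: a jaw network runs first and
# per-region networks are conditioned on its output. Dimensions, layers, and the
# two-stage split are illustrative assumptions, not the production FDLS model.
import torch
import torch.nn as nn

class JawSolver(nn.Module):
    def __init__(self, feat_dim: int, jaw_dim: int = 3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                 nn.Linear(64, jaw_dim))

    def forward(self, graph_feats):
        return self.net(graph_feats)      # coarse jaw parameters, solved first

class RegionSolver(nn.Module):
    def __init__(self, feat_dim: int, jaw_dim: int, n_controls: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim + jaw_dim, 128), nn.ReLU(),
                                 nn.Linear(128, n_controls))

    def forward(self, region_feats, jaw):
        # fine expression controls, conditioned on the (possibly artist-edited) jaw
        return self.net(torch.cat([region_feats, jaw], dim=-1))

feats = torch.randn(1, 32)                        # stand-in for robust graph features
jaw = JawSolver(32)(feats)                        # stage 1: jaw pass, reviewable by artists
mouth_controls = RegionSolver(32, 3, 20)(feats, jaw)   # stage 2: region network
```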
This talk presents an animation-friendly, procedural solution for animating ropes in the movie The Sea Beast. With over 5000 ropes on our hero tall ship, we embarked on the development of a better rope solution for our animators. The resulting rope rig allows our animators to interact with ropes using intuitive controls while producing complex shapes and preserving length. Character interactions with ropes were easy to produce, and stretchy, rubbery ropes were avoided. This new rope rig was critical to the believability of our world, considering the massive number of ropes in so many shots. With this new rig, we were even able to final some shots in animation without having to simulate the rope dynamics at all.
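To illustrate the length-preservation idea in the simplest possible terms (this is a generic “follow the leader” constraint, not the studio’s actual rig), a polyline rope can be re-projected segment by segment back onto its rest lengths after animator edits.

```python
# Hedged sketch of a length-preservation pass for a rope stored as a polyline:
# after control points are moved, each point is pulled back along the direction
# to its predecessor so every segment keeps its rest length.
import numpy as np

def preserve_lengths(points: np.ndarray, rest_lengths: np.ndarray) -> np.ndarray:
    out = points.copy()
    for i in range(1, len(out)):
        direction = out[i] - out[i - 1]
        dist = np.linalg.norm(direction)
        if dist > 1e-8:
            out[i] = out[i - 1] + direction / dist * rest_lengths[i - 1]
    return out

rest = np.linspace([0.0, 0.0, 0.0], [0.0, -5.0, 0.0], 11)      # a 10-segment hanging rope
rest_lengths = np.linalg.norm(np.diff(rest, axis=0), axis=1)
posed = rest + np.random.normal(scale=0.2, size=rest.shape)    # stand-in for animator edits
rope = preserve_lengths(posed, rest_lengths)                   # stretch-free result
```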
The Visual Effects industry has experienced a strong shift towards creating many shots featuring several digital creatures with complex grooms (i.e., fur and hair). It is common for these grooms to change in shape over the duration of a shot (e.g., from dry to wet), which requires specific techniques to interpolate across different sets of curves.
While researchers have extensively focused on algorithms for polygonal mesh deformation, very little work can be found for curves, and basic linear interpolation techniques produce unrealistic results that do not emulate the expected behavior of groom filaments changing in shape and style.
In this paper we present an iterative algorithm that allows interpolation across different curve shapes while preserving key features such as strand curvature and segment lengths. We also introduce the concept of partial blendshape linear interpolation to help the system converge rapidly to the optimal solution in a few iterations.
We present detailed results and production use cases in creature Visual Effects.
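As a minimal sketch of the general idea behind the iterative, length-preserving interpolation described above (the paper’s actual algorithm and its partial blendshape scheme differ), a naive linear blend of curve points can be followed by a relaxation pass toward blended rest lengths; all specifics are assumptions.

```python
# Hedged sketch: interpolate between two curve shapes, then iteratively restore
# per-segment lengths. This only illustrates why a plain linear blend of points
# is not enough; it is not the paper's algorithm.
import numpy as np

def segment_lengths(curve):
    return np.linalg.norm(np.diff(curve, axis=0), axis=1)

def blend_curves(src, dst, t, iterations=10):
    curve = (1.0 - t) * src + t * dst                          # naive linear blend
    target = (1.0 - t) * segment_lengths(src) + t * segment_lengths(dst)
    for _ in range(iterations):                                # pull segments back to length
        for i in range(1, len(curve)):
            d = curve[i] - curve[i - 1]
            n = np.linalg.norm(d)
            if n > 1e-8:
                curve[i] = curve[i - 1] + d / n * target[i - 1]
    return curve

src = np.stack([np.zeros(20), np.linspace(0, 1, 20), np.zeros(20)], axis=1)   # straight strand
dst = src + np.stack([0.1 * np.sin(8 * src[:, 1]), np.zeros(20), np.zeros(20)], axis=1)  # wavy strand
mid = blend_curves(src, dst, 0.5)                              # in-between shape, lengths restored
```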
To improve performance and interactivity when working with our Houdini-based Grooming Tools, Animal Logic developed a set of custom nodes to control and optimize the process of evaluating our groom generation networks.
This talk presents two techniques, implemented in the Sony Pictures Imageworks pipeline, in which novel animation tools influence the final render in unusual ways. The first technique, CreaseLines, gives animators the ability to dynamically create and control curves that define emotive facial creases and directly drive displacement of the face meshes. CreaseLines’ wide range of control opened new opportunities for animators to quickly hit performances in a very direct way. The second technique, Variable Motion Blur, gives animators more direct control over the motion blur in the render, allowing them to easily direct the audience’s attention and highlight the important action. Spheres define regions in the scene where the motion blur amplitude can be scaled.
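A small sketch of what sphere-driven motion blur scaling could look like (the falloff, attribute names, and data layout are assumptions, not the Imageworks implementation): per-point motion vectors are attenuated or boosted by a falloff around artist-placed spheres before being handed to the renderer.

```python
# Hedged sketch of sphere-driven motion blur scaling: per-point motion vectors
# are scaled by a falloff around artist-placed spheres. Falloff shape and data
# layout are illustrative assumptions.
import numpy as np

def scale_motion(points, motion_vectors, spheres):
    """spheres: list of (center, radius, scale); scale < 1 damps blur, > 1 boosts it."""
    scaled = motion_vectors.copy()
    for center, radius, scale in spheres:
        dist = np.linalg.norm(points - center, axis=1)
        weight = np.clip(1.0 - dist / radius, 0.0, 1.0)    # 1 at the center, 0 at the edge
        factor = 1.0 + weight * (scale - 1.0)              # blend toward the sphere's scale
        scaled *= factor[:, None]
    return scaled

points = np.random.rand(1000, 3) * 10.0
motion = np.random.randn(1000, 3) * 0.5
hero_sphere = (np.array([5.0, 5.0, 5.0]), 3.0, 0.1)        # e.g. keep the hero's face crisp
render_motion = scale_motion(points, motion, [hero_sphere])
```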
For the past 8 years, Animal Logic has been using its custom Animal Logic SHading System (ASH) material definition and rendering technology for all film projects within our proprietary path tracer, Glimpse. We compare existing solutions for material binding and layering from MaterialX, PRMan, USD/UsdShade, MDL, and more, and show how our own system provides desirable features and solutions absent from other shading solutions and material binding/definition specifications. We propose that existing Open Source projects adopt support for true layered binding, shading, and hierarchical assignment, and further propose that such solutions provide controllable ordering so that these layering mechanisms can adequately handle typical production scenarios and requirements. We provide and discuss production examples and further areas for research.
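To make the layering concept concrete, here is a toy illustration of ordered, hierarchical material layering (the data layout, names, and priority scheme are assumptions, not ASH’s representation): bindings recorded on ancestor prims are gathered root-to-leaf and composed as an ordered layer stack.

```python
# Hedged sketch of ordered, hierarchical material layering: layers bound on
# ancestors are collected root-to-leaf and composed in an explicit,
# controllable order. Names and layout are illustrative only.

bindings = {
    "/ship":                [("base_paint", 0)],
    "/ship/hull":           [("weathering", 10)],
    "/ship/hull/waterline": [("algae", 20), ("wetness", 30)],
}

def resolve_layers(prim_path: str) -> list:
    """Collect layers from every ancestor, then sort by explicit layer order."""
    layers = []
    parts = prim_path.strip("/").split("/")
    for i in range(1, len(parts) + 1):
        ancestor = "/" + "/".join(parts[:i])
        layers.extend(bindings.get(ancestor, []))
    return [name for name, order in sorted(layers, key=lambda item: item[1])]

print(resolve_layers("/ship/hull/waterline"))
# ['base_paint', 'weathering', 'algae', 'wetness']
```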
Pixar’s Universal Scene Description (USD) and Hydra together define a scene-description and rendering API that points toward a future of interchange and efficient rendering. The industry has embraced this, and integrations have been provided for all major DCCs. Even so, Hydra has largely been restricted to use as a viewport render API and has not been deployed as a final-frame render API. We believe that embracing the power of the composition and layering system, coupled with live components and interactive rendering, can revolutionize how artists deliver final frames. Tools built on top of this common foundation not only improve iteration time but also change how artists work together across departments, tools, and even companies. By examining the components and conventions built to enable and demonstrate a working, end-to-end system, we encourage discussion and further contributions and improvements to this ecosystem.
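As a minimal sketch of driving final frames through Hydra from a composed USD stage, the stock UsdAppUtils.FrameRecorder (the machinery behind usdrecord) can be used; the shot path, camera path, renderer id, and output naming below are assumptions, and the authors’ in-house system is considerably more elaborate.

```python
# Hedged sketch: render frames from a composed USD stage through Hydra using the
# stock UsdAppUtils.FrameRecorder. Paths, camera, and renderer id are assumptions.
from pxr import Usd, UsdGeom, UsdAppUtils

stage = Usd.Stage.Open("/shows/example/shot010/shot.usda")       # hypothetical shot entry point
camera = UsdGeom.Camera.Get(stage, "/World/Cameras/renderCam")   # hypothetical render camera

recorder = UsdAppUtils.FrameRecorder()
recorder.SetRendererPlugin("HdStormRendererPlugin")              # any installed Hydra render delegate
recorder.SetImageWidth(1920)

for frame in range(1001, 1011):                                  # one image per time sample
    recorder.Record(stage, camera, Usd.TimeCode(frame), f"shot010.{frame:04d}.exr")
```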
This paper presents how we built scalable and evolvable USD pipelines on a distributed architecture at Ubisoft. We use BPMN as a nodal representation that allows our supervisors to build new workflows or modify existing ones. Our processes are designed using industry standards and the USD file format for interchangeability, and they are easily scalable and ready to deploy to our multi-site studios and teams. Using language-neutral microservices running on our internal cloud computing infrastructure, we can leverage both existing in-house technologies and new ones developed on multiple platforms by our teams worldwide.
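To illustrate the kind of unit such a workflow might call (the endpoint, payload fields, and flatten step are hypothetical, not Ubisoft’s services), a stateless USD worker can be exposed as a small web service that a BPMN-driven process invokes as one task among many.

```python
# Hedged sketch of a stateless USD worker exposed as a web service, the kind of
# task a nodal BPMN workflow could invoke. Endpoint and payload are illustrative.
from fastapi import FastAPI
from pydantic import BaseModel
from pxr import Usd

app = FastAPI()

class FlattenRequest(BaseModel):
    source_path: str   # e.g. a shot or asset entry point on shared storage (hypothetical)
    output_path: str   # where the flattened result should be published (hypothetical)

@app.post("/flatten")
def flatten(req: FlattenRequest):
    stage = Usd.Stage.Open(req.source_path)
    stage.Flatten().Export(req.output_path)   # resolve composition into a single layer
    return {"status": "ok", "prims": len(list(stage.Traverse()))}
```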
This paper gives the reader a behind-the-scenes look at how cloud-native automation pipelines play a significant role in the production of Live Action, Visual Effects, and Animated content at Netflix Studios. Netflix's Studio Orchestrator connects creators around the globe by efficiently sharing, tracking, and transforming production data on a scalable architecture that can be customized to meet the needs of production. We will uncover how our approach lets geographically distributed creative teams share and collaborate on assets, and saves our developers' time by leveraging shared infrastructure and shared components to build automation pipelines. We will use specific examples of automation pipelines that are in production today across Live Action, Visual Effects, and Animation to highlight the benefits of our approach to solving common industry workflows.
We describe key steps in the process by which an animation and VFX studio (Animal Logic) integrated Pixar’s Universal Scene Description™ into a large existing legacy pipeline. We discuss various architectural choices, as well as software systems developed to support these patterns. This successful USD migration has enabled the studio to significantly improve its toolchain productivity, supporting the simultaneous development of multiple feature films.