LED volume virtual production poses a unique set of challenges that differ from those of typical real-time rendering applications. ILM StageCraft is a complete suite of virtual production tools that combines powerful off-the-shelf systems, such as Unreal Engine, with proprietary solutions that allow us to overcome these challenges, always with the goal of using the very best tool for the job.
To meet the most demanding requirements for The Mandalorian season 2 and beyond, Industrial Light & Magic (ILM) developed and augmented proprietary technology solutions in StageCraft 2.0, including a new real-time rendering engine, Helios, and a collection of interactive tools that give filmmakers the freedom, creative feedback, and real-time control to make immediate adjustments on set. With ILM’s award-winning set of tools for rendering, lighting, and color, creatives can achieve their desired looks and goals without having to wait for traditional post-production or DI.
Since season 2 of The Mandalorian, ILM has continued to evolve the Helios renderer, expanding what’s possible not only in visual fidelity but also in visual adjustability, through a real-time toolset that allows on-set operators to rapidly respond (in a matter of seconds!) to late-breaking changes in creative direction, in ways that make intuitive sense for filmmakers rather than technologists and gamers. This groundbreaking approach saves significant time on set while providing greater creative control over, and insight into, the final result, allowing an unprecedented degree of freedom and flexibility to make the right choices on the day.
This talk will address why, in today’s landscape, you would even consider writing a proprietary renderer and set of tools. It will cover the various aspects of ILM StageCraft rendering and operations, from the streaming systems that feed the Helios renderer, to the core rendering passes (geometry submission, direct lighting, ray-traced reflections, volumetrics, etc.) that compose a frame, as well as the various controls we provide for on-the-day tweaks to virtual content.
Along the way, we will discuss our integrated and seamless color pipeline that provides end-to-end confidence in the quality of the captured results. We will also point out some gotchas with using established real-time rendering techniques in the context of the LED volume and how we overcame them. Finally, we will touch on how ILM StageCraft fits within the larger picture of visual effects production and where we plan to take it in the future.
We introduce Magenta Green Screen, a novel machine learning–enabled matting technique for recording the color image of a foreground actor and a simultaneous high-quality alpha channel without requiring a special camera or manual keying techniques. We record the actor on a green background but light them with only red and blue foreground lighting. In this configuration, the green channel shows the actor silhouetted against a bright, even background, which can be used directly as a holdout matte, the inverse of the actor’s alpha channel. We then restore the green channel of the foreground using a machine learning colorization technique. We train the colorization model with an example sequence of the actor lit by white lighting, yielding convincing and temporally stable colorization results. We further show that time-multiplexing the lighting between Magenta Green Screen and Green Magenta Screen allows the technique to be practiced under what appears to be mostly normal lighting. We demonstrate that our technique yields high-quality compositing results when implemented on a modern LED virtual production stage. The alpha channel data obtainable with our technique can provide significantly higher quality training data for natural image matting algorithms to support future ML matting research.
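As an illustration of the holdout relationship described above, here is a minimal sketch (not the authors’ implementation) of recovering an alpha channel from the green channel of a magenta-lit frame; `backing_green_level` is an assumed clean-plate measurement of the unoccluded green backing, not a parameter from the paper.

```python
import numpy as np

def alpha_from_green(frame_rgb, backing_green_level):
    """Minimal sketch, assuming a linear float image in [0, 1].

    Under magenta (red + blue) foreground lighting against a green backing,
    the green channel is bright wherever the backing is visible and dark
    where the actor occludes it, so it acts as a holdout matte.
    """
    green = frame_rgb[..., 1]
    # Normalise by the measured brightness of the unoccluded backing
    # (a hypothetical clean-plate estimate).
    holdout = np.clip(green / max(backing_green_level, 1e-6), 0.0, 1.0)
    return 1.0 - holdout  # alpha is the inverse of the holdout matte
```

The learned colorization step that restores the missing green channel of the foreground is a separate model and is not sketched here.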
This talk provides a comprehensive look at the innovative technology and workflows developed for Avatar: The Way of Water. It will cover the extensive research and development behind the film, focusing on three key topics: depth-compositing, the new facial system (APFS), and the Loki integrated solver.
The city of ‘Chronopolis’ served as the setting for the entire third act of the movie ‘Ant-Man and the Wasp: Quantumania’. As Kang’s stronghold, Chronopolis is a futuristic metropolis that blends functional, militaristic fortification with disordered, gravity-defying sprawl. Surrounded on all sides by wormholes that serve as access points to other worlds, it became a unique technical and creative challenge. SPI was tasked with building this environment in a way that could convey the vastness of the landscape whilst enabling it to be shared with various other studios around the globe. This required novel and creative solutions for both the primary look and build of the main city and the broader landscape of sprawl, concentric roads, and wormholes. This talk will outline some of the production challenges faced by various SPI departments, along with many of the more interesting solutions developed to meet them.
Artists at Walt Disney Animation Studios have constantly sought to improve the physical depiction of an ever wider range of skin tones with each subsequent film. “Strange World” presented a great opportunity to improve how we portray a wide range of skin types. The changes we implemented included creating a new skin material from the ground up, changing the testing light rig to better evaluate materials, and implementing lighting strategies to better represent the skin properties of each character. The result was character skin that improved our depiction of a wider range of skin types while maintaining a level of stylization in line with the art direction of the film.
Avatar: The Way of Water presented Wētā FX with the challenge of delivering visuals of an unprecedented level of complexity and quality. To support the filmmaker's creative vision, Wētā FX sought to ensure it had burst compute capacity available through scalable cloud resources.
In this talk we present our approach to this problem in the cloud rendering space. We provide an overview of the deployment of a large-scale hybrid cloud render farm, diving into areas we found challenging and of particular interest to the wider VFX community: the deployment of a custom-built NFS re-export cache tuned to deal with high latency, our approach to maintaining performant storage in the cloud, our solution for scheduling a large-scale spot fleet across three AWS regions, and the various challenges encountered when rendering at scale in the cloud. Within each area we present the scalability issues encountered and our approaches to overcoming them.
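As a purely illustrative sketch of requesting spot capacity across multiple regions (the talk’s actual scheduler is not described here, and the region split, instance type, AMI map, and role ARN below are placeholder assumptions), one could drive the AWS Spot Fleet API per region roughly as follows:

```python
import boto3

# Assumed capacity split across three regions; in practice the weighting would
# be driven by quota, price, and data-locality considerations.
REGION_SHARES = {"us-east-1": 0.5, "us-west-2": 0.3, "eu-west-1": 0.2}

def request_render_capacity(total_instances, ami_by_region, fleet_role_arn):
    """Request spot capacity in each region proportionally to its share."""
    request_ids = {}
    for region, share in REGION_SHARES.items():
        ec2 = boto3.client("ec2", region_name=region)
        resp = ec2.request_spot_fleet(
            SpotFleetRequestConfig={
                "IamFleetRole": fleet_role_arn,
                # Favour pools least likely to be reclaimed mid-render.
                "AllocationStrategy": "capacityOptimized",
                "TargetCapacity": int(total_instances * share),
                "Type": "maintain",  # replace reclaimed instances automatically
                "LaunchSpecifications": [
                    {
                        "ImageId": ami_by_region[region],  # AMIs are region-specific
                        "InstanceType": "c5.24xlarge",     # placeholder render node type
                    }
                ],
            }
        )
        request_ids[region] = resp["SpotFleetRequestId"]
    return request_ids
```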
Due to an ever-increasing need from clients for more realistic furry and feathered characters, MPC must ensure that these characters are rendered as efficiently as possible. In this paper we shall introduce the new USD-based grooming system being implemented at MPC to replace our existing in-house software. This new project steps away from the procedural, render-time groom workflows of Furtility, our legacy software. We shall investigate the motivation for the change, as well as the successes and challenges that we have faced along the way.
This paper contains four contributions: (1) a physically plausible displacement layering operation [Cook et al. 1987; Cook 1984] based on the intuitive notions of material Thickness and Accumulation; (2) a universal pattern variation metric and control parameter called Size that simultaneously modifies the displaced pattern’s variation and magnitude, and which is used by the displacement layering operation for specifying and tracking the accumulated "bulk" produced by layered displacements; (3) an encapsulated Material object definition that specifies its BxDF response(s), displacement, Thickness of application, and desired level of displacement bulk Accumulation; (4) a Material Layer node definition that allows encapsulated Material objects to be layered over one another in an easy-to-specify, intuitively controlled, yet physically plausible way simply by connecting them in the desired layering order.
Layering the BxDFs of encapsulated Material objects is made possible by the layering capabilities presently defined in MaterialX [Stone et al. 2012]. However, the layering of displacements and their effects on the Material’s optical properties are not currently defined in a robust manner. The displacement layering operations described herein remedy this deficiency. They require no physical simulation, as all the necessary data and displacement layering operations are point processes. These characteristics allow the system to be implemented within any shader execution environment.
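To make the point-process nature concrete, here is a hypothetical per-point sketch of layering one displaced Material over another; the parameter names (Thickness, Accumulation, Size) come from the contributions above, but the blend itself is an illustrative assumption, not the paper’s actual formulation.

```python
def layer_displacement(base_disp, base_size, top_disp, thickness, accumulation):
    """Hypothetical per-point displacement layering sketch.

    base_disp    : base Material's displacement at this shading point
    base_size    : base pattern's Size (its characteristic variation/magnitude)
    top_disp     : top Material's displacement at this shading point
    thickness    : Thickness at which the top Material is applied
    accumulation : 0..1, how much "bulk" the top layer is allowed to build up
    """
    # A thicker top coat progressively buries the base relief, judged
    # relative to the base pattern's Size.
    burial = min(1.0, thickness / max(base_size, 1e-6))
    visible_base = base_disp * (1.0 - burial)
    # The layered result is offset by the coat Thickness, plus the top
    # pattern scaled by how much Accumulation it is permitted to add.
    return visible_base + thickness + top_disp * accumulation
```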
Because the layering operations are purely local, they cannot account for some types of physical effects that would require non-local interactions between complex Materials (see Section 7.6). However, this limitation has proven to be of little consequence in practice, and it is more than offset by the production efficiencies gained from the ability to pre-define a library of modular Materials that can easily be combined as needed to create an essentially infinite number of physically plausible, displaced Material composites.
In this paper we explore the intricate relationship between volumetric albedo and attenuation. When rendering realistic skin and complex FX elements, the traditional approach of authoring absorption and scattering can pose a challenge when trying to achieve a desired opacity, brightness and saturation. The alternate parameterisation of albedo and attenuation can also produce unpredictable results. Focusing on Mie and Rayleigh scattering, we develop three novel techniques that derive physically plausible attenuation values from albedo. By understanding the implications of Rayleigh scattering, and adjusting albedo in an intuitive manner, artists can achieve predictable and consistent results with volumetric rendering workflows.
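As background for the parameterisation discussed above, the standard relationship between the single-scattering albedo and the extinction (attenuation) coefficient can be sketched as follows; the talk’s three derivation techniques are not reproduced here.

```python
def coefficients_from_albedo(albedo, attenuation):
    """Split extinction into scattering and absorption via the single-scattering albedo.

    albedo      : single-scattering albedo in [0, 1]
    attenuation : extinction coefficient sigma_t (per unit length)
    """
    sigma_s = albedo * attenuation          # scattering coefficient
    sigma_a = (1.0 - albedo) * attenuation  # absorption coefficient
    return sigma_s, sigma_a

# Example: a dense, bright medium with albedo 0.99 and sigma_t = 10.0
# yields sigma_s = 9.9 and sigma_a = 0.1, i.e. almost all extinction is
# scattered rather than absorbed light.
```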
We introduce Animal Logic’s advanced 3D matte painting toolset AL_USDNuke, which seamlessly integrates Nuke into our USD-centric pipeline. We detail the integration of numerous components, such as our path-traced GlimpseViewport for instant representation of large USD stages, a user-friendly node-based toolset to modify USD stages, visualisation by our in-house renderer Glimpse for high-fidelity ground-truth feedback, and complementary views to efficiently manage USD stages inside Nuke. This was achieved through a specialised Nuke-to-USD translator and our brand-new framework Plasma, an enhancement of Animal Logic’s in-house Nucleus framework for large-scale application development, tailored to Nuke. These developments improve matte painting artists’ efficiency on complex USD stages and allow them to publish their work into shots for the benefit of both upstream and downstream departments.
We present Pahi, a unified water pipeline and toolset for visual effects production. It covers procedural blocking visualization for preproduction; simulation of water phenomena ranging from large-scale splashes with airborne spray and mist, through underwater bubbles and foam, to small-scale ripples, thin films, and drips; and a compositing system to combine different elements together for rendering. Rather than prescribing a one-size-fits-all solution, Pahi encompasses a number of state-of-the-art techniques, from reference engineering-grade solvers to highly art-directable tools. We take a deep dive into the technical aspects of Pahi’s components and their interaction, and discuss practical aspects of its use on Avatar: The Way of Water. We were honored to receive the Visual Effects Society (VES) 2023 Emerging Technology Award for this work.