DigiPro '24: Proceedings of the 2024 Digital Production Symposium


SPFS and SPK: Tools for Studio Software Deployment and Runtime Environment Management

Software development and technology teams in studio environments often need to support a complex matrix of applications, versions and dependencies. Existing management tools are not always up to this task, but they offer a variety of battle-tested ideas that can be recombined and reimagined into a solution that better fits the needs of a studio. SPFS and SPK are a pair of tools that work together to deliver such a solution. They have been in development and use for several years and offer a number of features that benefit both technology and production teams. These tools continue to be actively developed, but they already offer a compelling take on how we can effectively manage and collaborate within these complex software environments.

Building a scalable Animation Production Reporting Framework

This paper explores how Netflix has enhanced its animation production reporting using a dynamic data transformation framework. As collaborations with animation studios around the globe grow, it is essential to adeptly manage varied reports to ensure projects stay on schedule and within financial constraints. Traditional reporting methods struggle to meet the demands of modern animation production. The framework we built at Netflix standardizes animation production reporting at scale by employing modern data processing techniques such as Apache Spark and by organizing the pipeline into distinct layers: parse, transform, and publish. This paper delves into the design of each layer and the abstractions used, making it feasible for animation studios globally to adopt our framework for reporting. Specific examples illustrate how these pipelines, currently in production, automate reporting and support detailed assertions and sequence-breakdown workflows.
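The layered design described above can be sketched in miniature. The following is a hypothetical, stdlib-only illustration of a parse, transform, and publish pipeline; the function names, report schema, and sample data are all invented here, and a production deployment would run each layer as an Apache Spark job over real studio report files rather than plain Python functions.

```python
from dataclasses import dataclass

# Illustrative three-layer reporting pipeline: parse -> transform -> publish.
# Names and schema are hypothetical, not Netflix's actual framework.

@dataclass
class Report:
    studio: str
    rows: list  # rows as parsed from a studio-specific report


def parse(raw_lines: list[str]) -> Report:
    """Parse layer: normalize a studio-specific CSV-like report into rows."""
    studio, *body = raw_lines
    return Report(studio=studio, rows=[line.split(",") for line in body])


def transform(report: Report) -> Report:
    """Transform layer: map rows into a standard schema (shot, status)."""
    std = [{"shot": shot.strip(), "status": status.strip().lower()}
           for shot, status in report.rows]
    return Report(studio=report.studio, rows=std)


def publish(report: Report) -> dict:
    """Publish layer: aggregate into metrics that dashboards consume."""
    done = sum(1 for r in report.rows if r["status"] == "final")
    return {"studio": report.studio, "shots": len(report.rows), "final": done}


def run_pipeline(raw_lines: list[str]) -> dict:
    return publish(transform(parse(raw_lines)))


summary = run_pipeline(["Studio A", "sq10_sh010, FINAL", "sq10_sh020, wip"])
print(summary)  # → {'studio': 'Studio A', 'shots': 2, 'final': 1}
```

Because each layer only depends on the previous layer's output type, a new studio's report format only requires a new parse implementation; the transform and publish layers are shared.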

Multithreading USD and Qt: Adding Concurrency to Filament

As production scene complexity and CPU core count increase, the performance of software used to interact with the scenes may not scale accordingly. Filament is Animal Logic’s in-house, USD-based, PyQt lighting DCC, and a key area for improving Filament was increasing performance and responsiveness when working with large production scenes. As USD, Qt, and Python all have their own multithreading patterns, some coordination is required between all three to work well. Filament was updated to parallelize USD stage processing to reduce processing time, as well as adopt asynchrony to keep the main GUI thread unblocked, greatly improving artist experience. These updates demonstrate a model for multithreading USD stage access to improve other applications working with USD.
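The coordination the abstract alludes to can be illustrated with a small producer/consumer sketch. This is a hedged, stdlib-only illustration and not Filament's code: in Filament the workers would traverse a USD stage and the consumer would be Qt's GUI thread, with results delivered via signals/slots rather than a bare queue. All names and the trivial per-prim workload below are hypothetical.

```python
import queue
import threading
from concurrent.futures import ThreadPoolExecutor

def process_prim(prim_path: str) -> tuple[str, int]:
    """Stand-in for per-prim work (e.g., resolving bounds or materials)."""
    return prim_path, len(prim_path)

def process_stage(prim_paths, results: queue.Queue) -> None:
    """Fan prim processing out across a thread pool, posting each result
    to a thread-safe queue as it becomes available."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        for result in pool.map(process_prim, prim_paths):
            results.put(result)
    results.put(None)  # sentinel: stage processing is complete

def drain_on_main_thread(results: queue.Queue) -> dict:
    """Consume results on the GUI thread. A Qt application would do this
    from a slot or a zero-interval timer so the event loop stays unblocked."""
    table = {}
    while (item := results.get()) is not None:
        path, cost = item
        table[path] = cost
    return table

results: queue.Queue = queue.Queue()
worker = threading.Thread(target=process_stage,
                          args=(["/World/chair", "/World/lamp"], results))
worker.start()                 # heavy processing happens off the main thread
table = drain_on_main_thread(results)
worker.join()
print(table)  # → {'/World/chair': 12, '/World/lamp': 11}
```

The key property is that the main thread never runs the expensive work itself; it only drains small, already-computed results, which is what keeps a GUI responsive while a large stage is being processed.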

Nucleus: A Design System for Animation and VFX Applications

We describe a plugin-based architecture for developing component-based Qt applications for animation and visual effects, and discuss the benefits this approach offers in terms of code reuse, stability and consistency. We introduce an Application Maturity Model quality metric to characterize a set of best practices, design patterns and frameworks for developing complex interactive applications.

Spear: Across the Streaming Multiprocessors: Porting a Production Renderer to the GPU

We ported the Sony Pictures Imageworks version of the Arnold Renderer to the GPU using NVIDIA’s OptiX ray tracing toolkit. This required modifying algorithms to run efficiently on the GPU, the use of new software methodologies to better share source code between the host and device renderers, and a reevaluation of what contributes to poor performance on the device. We share here the key decisions we made to overcome these challenges and the valuable lessons we learned during our journey in implementing the Sony Pictures Evolved Arnold Renderer (Spear) on the GPU.

Cache Points for Production-Scale Occlusion-Aware Many-Lights Sampling and Volumetric Scattering

A hallmark capability that defines a renderer as a production renderer is the ability to scale to scenes with extreme complexity, including complex illumination cast by a vast number of light sources. In this paper, we present Cache Points, the system used by Disney’s Hyperion Renderer to perform efficient unbiased importance sampling of direct illumination in scenes containing up to millions of light sources. Our cache points system includes a number of novel features: we build a spatial data structure over the points from which light sampling will occur, rather than over the lights themselves; we learn occlusion online and factor it into our importance sampling distribution; and we accelerate sampling in difficult volume scattering cases.

Over the past decade, our cache points system has seen extensive production usage on every CG feature film and animated short produced by Walt Disney Animation Studios, enabling artists to design lighting environments without concern for complexity. In this paper, we survey how the cache points system is built and how it works, its impact on production lighting and artist workflows, and its role in the future of production rendering at Disney Animation.
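The core idea of occlusion-aware light importance sampling can be sketched in a few lines. This is a hedged, stdlib-only illustration in the spirit of the abstract, not Hyperion's implementation: the real cache points system adds spatial caching, volume scattering support, and unbiased estimation over millions of lights, and every name and number below is invented for illustration.

```python
import random

class LightSampler:
    def __init__(self, intensities):
        self.intensities = intensities
        # Online-learned estimate of each light's visibility; start optimistic.
        self.visibility = [1.0] * len(intensities)

    def _weights(self):
        # Importance = unoccluded contribution estimate x learned visibility.
        return [i * v for i, v in zip(self.intensities, self.visibility)]

    def sample(self, rng):
        """Pick a light index proportional to its weight, returning the index
        and its selection probability; dividing the shaded contribution by
        that probability keeps the overall estimator unbiased."""
        w = self._weights()
        idx = rng.choices(range(len(w)), weights=w)[0]
        return idx, w[idx] / sum(w)

    def record_occlusion(self, idx, visible, rate=0.1):
        """Online learning: nudge the estimate toward each shadow-ray result."""
        target = 1.0 if visible else 0.0
        self.visibility[idx] += rate * (target - self.visibility[idx])


sampler = LightSampler([10.0, 1.0, 5.0])
for _ in range(10):                    # shadow rays keep finding light 1 blocked
    sampler.record_occlusion(1, visible=False)
idx, prob = sampler.sample(random.Random(7))
print(round(sampler.visibility[1], 3))  # → 0.349: light 1 is now rarely chosen
```

Folding learned visibility into the sampling weights steers shadow rays away from lights that past rays found occluded, while the returned probability preserves unbiasedness even when the learned estimates are wrong.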

Creating Tools for Stylized Design Workflows

In order to achieve a specific, oil-painting-inspired rendering style, Blender Studio developed a solution based on Blender’s generative geometry and simulation framework, Geometry Nodes, creating an artist-friendly set of workflows and tools. This approach empowers (technical) artists to develop bespoke solutions for the art department without requiring lower-level engineering and development.

Developing a Curve Rigging Toolset: a Case Study in Adapting to Production Changes

We present an overview of Animal Logic’s curve rigging toolset and its development process, serving as a case study to discuss challenges specific to software development for animated feature film production. We show how R&D projects at Animal Logic lean on agile software practices to enable ambitious development projects, with flexible plans that adapt to the reality of working with creative stakeholders. We highlight the importance of production engagement, reflect on the technical decisions we made over a year of active development while reacting to drastic production schedule changes, and share lessons learned along the way.

Premo: Overrides Data Model

Implementing a Machine Learning Deformer for CG Crowds: Our Journey

CG crowds have become increasingly popular over the last decade in the VFX and animation industry: formerly reserved for a few high-end studios and blockbusters, they are now widely used in TV shows and commercials. Yet one major limitation remains: to be ingested properly into crowd software, studio rigs have to comply with specific prerequisites, especially in terms of deformations. Usually only skinning, blend shapes and geometry caches are supported, preventing close-up shots with facial performances on crowd characters. We envisioned two approaches to tackle this: either reverse engineer the hundreds of deformer nodes available in the major DCCs and plugins and incorporate them into our crowd package, or surf the machine learning wave and compress the deformations of a rig using a neural network architecture. Considering that we could not commit five-plus person-years of development to this problem, and that we were excited to dip our toes in the machine learning pool, we went for the latter.

From our first tests to a minimum viable product, we went through hopes and disappointments: we hit multiple pitfalls and followed false shortcuts and dead ends before reaching our destination. With this paper, we hope to provide valuable feedback by sharing the lessons we learnt from this experience.