SIGGRAPH '24: ACM SIGGRAPH 2024 Talks

Full Citation in the ACM Digital Library

SESSION: The Sporting Life

College Football is HUGE: Delivering a AAA Sports Game at Scale

Following an 11-year hiatus, EA SPORTS College Football is back. A video game that recreates the greatest sights in America's most beloved sport, this enormous production created over 11,000 unique playable characters, representing 134 schools, their stadiums, athletes, uniforms, mascots, traditions and so much more. This project's scale, quality and time demands necessitated a complete reimagining of traditional content creation and rendering methods.

This talk explores the strategies employed by our art, technical art and rendering teams to meet these challenges. We discuss our approach in creating and rendering every element of the game, from dogs to cows, cannons to swords, helmets to cleats.

Studying Esports Competition: Piloting Methodology for User Studies During Tournaments

On Smoothly Varying Frame Timing in First-Person Gaming

SESSION: Making A Wish

Art-Directing Asha's Braids in Disney's Wish

For Walt Disney Animation Studios’ 100th anniversary feature, the filmmakers wanted to honor the studio’s legacy with a stylized look that draws from the rich artistic heritage of Disney’s earliest films. In addition to the challenges of a highly art-directed stylized look, Asha’s hairstyle comprises a full head of long, thin, tightly braided locks, far more complex than any of our previous braid grooms. In order to art direct Asha’s stylized hair performance in "Wish", several advancements were made in tools and workflows across the character departments for grooming, simulation, and stylization techniques.

Character Stylization in Disney's Wish

"Wish" pays homage to a century of Disney Animation’s legacy and the look of its characters is inspired by the watercolor storybook style of some of our earlier Disney classics, including "Snow White and the Seven Dwarfs", "Pinocchio", and "Sleeping Beauty". Careful attention to detail in appearance, graphic shading, and streamlined geometric shapes of models, clothing and grooms, provide the foundation of the unique stylized look and art-directed character performance. A final procedural compositing treatment is applied in lighting to produce the more painterly watercolor style combined with hand-drawn influenced linework. Modifications were required in all of the asset departments’ workflows to support this overall stylization.

Creating the Wishes of Rosas

The wishes of Rosas are a central story device in Disney Animation’s 100th anniversary film "Wish." Appearing as worlds revealed within magical galaxies living inside palm-sized orbs, the wishes needed to be highly dynamic and deeply dimensional. The film’s art direction required that the internal animation and lighting of the wishes react to external story events and interact with characters, while the narrative required that wishes be choreographed en masse, selectively revealed and concealed while in motion for musical numbers and plot points, and remain identifiable to characters and audiences. The breadth of these requirements, along with a need for scalability in authoring, drove the development of our unified wish asset and shot pipeline.

SESSION: Past Histories and Possible Futures

Gender Diversity of Graphics Conference Leadership

Non-male identifying researchers, including women and non-binary individuals, are underrepresented in leadership roles within computer graphics conferences. Analyzing data on these roles over time across key conferences highlights this disparity, while also identifying positive trends and worrisome constants. The goal of the following data collection is to provide grounded statistics to inform future decision making. Recognizing even small successes from the past may inspire and accelerate future improvement. The academic community’s recent failure to sufficiently address sexual harassment and malevolent behavior from members of our community (e.g. [Obr 2023]) is an urgent reminder to improve.

These are admittedly armchair statistics: anyone with an internet connection could collect this data and create the following statistics and plots. Nevertheless, I would like to be thorough about the methodology, largely replicating Graesser et al. [2021], who examined robotics.

The Life and Legacy of Bui Tuong Phong

We examine the life and legacy of pioneering Vietnamese American computer scientist Bùi Tường Phong, whose shading and lighting models turned 50 last year. We trace the trajectory of his life through Vietnam, France, and the United States, and its intersections with global conflicts. Crucially, we present evidence that his name has been cited incorrectly over the last five decades. His family name is Bùi Tường, not Phong. By presenting these facts at SIGGRAPH, we hope to collect more information about his life, and ensure that his name is remembered correctly in the future.

SESSION: XR in Practice

New Media for a New Generation of Deep Ocean Explorers

Real-Time 3D Graphics for Health Impact: Interactive 3D IUD Insertion

BioDigital harnesses web-based real-time graphics to revolutionize health education, facilitating over 100 million health conversations yearly. Our content is built to be accessible to anyone with an internet connection, making health literacy available to individuals regardless of their background or location. Our Lead 3D Graphics Engineer and Content Directors will share how we built an optimized 3D interactive and enhanced engine features to expand access to reproductive healthcare in regions with unknown internet speeds and hardware specifications. 

We will present a live demo of our interactive 3D visualization of an IUD Insertion procedure and detailed anatomy interactives, sharing custom rendering and asset-creation techniques developed to meet the strict technical constraints of a global audience.

Bridging the Gap: Immersive Environments for Resettlement

Bringing Adventure Gaming to Life Using Real-Time Generative AI on Your PC

Imagine a new kind of tabletop gaming experience, where a narrator describes a complex fantasy world and players gathered around the table can see the events of their world unfolding in real time on their PCs. In this talk, we show attendees how to execute multi-modal Generative AI (Gen AI) models on a PC in real time to create immersive scenery, followed by an interactive live demonstration. The talk walks through the optimization of Gen AI modalities, chaining audio transcription and diffusion models together in real time. We compress these models with the OpenVINO™ Toolkit and leverage the Intel® Core™ Ultra processor to split the workloads across the CPU, integrated GPU, and Neural Processing Unit (NPU), getting the best performance for each model. We also cover exciting new developments in temporally consistent and depth-estimation approaches toward high-resolution 3D pop-up scenery generation. This talk equips participants with hands-on tools to tackle the challenges of real-time, high-quality gaming scenery generation on their PC.
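
The chaining described above can be illustrated with a small producer/consumer sketch. The transcribe_chunk and generate_scene helpers below are hypothetical placeholders for the OpenVINO-compressed speech and diffusion models (run on the NPU and integrated GPU respectively); only the orchestration pattern is shown, not the talk's actual implementation.

```python
# Hypothetical sketch of chaining narration transcription into image generation.
# transcribe_chunk() and generate_scene() stand in for OpenVINO-compiled models
# (e.g. Whisper on the NPU, a diffusion model on the integrated GPU); they are
# placeholders, not the talk's actual implementation.
import queue
import threading
import time

prompt_queue: "queue.Queue[str]" = queue.Queue()

def transcribe_chunk(audio_chunk) -> str:
    # Placeholder: would run speech-to-text on the NPU and return narration text.
    return f"a fantasy scene described at t={audio_chunk}"

def generate_scene(prompt: str) -> str:
    # Placeholder: would run a compressed diffusion model on the GPU.
    return f"<image for: {prompt}>"

def audio_worker():
    # Continuously transcribe short narration chunks and queue them as prompts.
    for chunk in range(3):          # stand-in for a live microphone stream
        prompt_queue.put(transcribe_chunk(chunk))
        time.sleep(0.1)
    prompt_queue.put(None)          # sentinel: narration finished

def image_worker():
    # Consume prompts as they arrive so transcription and generation overlap.
    while (prompt := prompt_queue.get()) is not None:
        print(generate_scene(prompt))

threading.Thread(target=audio_worker).start()
image_worker()
```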

SESSION: Bodies, Skin and Hair

End-to-end Automatic Body and Face Setup for Generative or User Created 3D Avatar

Skin Wrinkle

Skin Wrinkle is a custom deformer tool at DreamWorks Animation that produces wrinkles in the animated skin mesh geometry based solely on the incoming animated skin mesh, the rest skin mesh, and artist-specified parameters. This wrinkle effect on characters that exhibit dynamic folds enhances their look and animation performance. The deformer minimizes loss in body volume as the skin wrinkles and also performs continuous self-collision detection and resolution to handle tight wrinkles. Skin Wrinkle is time-independent, fast, robust, and controllable, making it very artist-friendly.

Wig Refitting in Pixar's Inside Out 2

In Pixar’s feature animation Inside Out 2 (2024), emotion characters are identified with their corresponding human characters by exhibiting similar wigs. To achieve this look, we developed a custom rig that assists the sharing and reuse of hair grooms between characters of different shapes, feature proportions, and mesh connectivities. Our approach starts by adopting curvenets as a lightweight representation of scalp surfaces that eases the registration from human to emotion models by detaching the groom setup from the underlying mesh discretization. We then implemented a mix of surface-based and volumetric deformations that warp hair shells and guide curves onto the new character’s scalp defined by the refit curvenet. Finally, we incorporated a shaping tool for editing the wig layout, controlled by additional curvenets that profile each hair shell.

An Artist-Friendly Method for Procedural Skin Generation and Visualisation in Houdini

Realistic-looking digital humans in visual effects are a crucial aspect of creating a believable experience for the viewer. In order to create a convincing result, a lot of effort is put into the details. In this talk, we look specifically at the details of human skin and how we implemented a fully procedural approach to generate and visualise skin textures. The tool is integrated into our VFX pipeline as a SOP (surface operator) in Houdini and provides an artist-friendly interface with many ways to modify and tweak the output appearance. Additionally, we provide a way to utilise the underlying animation to produce pores and wrinkles dynamically, driven by stretch and compression in the current frame.
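
As a rough illustration of the stretch/compression-driven mechanism (a sketch under our own assumptions, not the studio's Houdini SOP), per-edge compression can be measured against the rest mesh and used as a wrinkle-intensity mask:

```python
# Illustrative only: derive a per-edge compression value from rest vs. animated
# geometry and use it to scale wrinkle displacement. Edge indices and meshes are
# assumed inputs; the production tool exposes this through a Houdini SOP instead.
import numpy as np

def compression_mask(rest_pts, anim_pts, edges):
    """Return a 0..1 compression factor per edge (1 = strongly compressed)."""
    rest_len = np.linalg.norm(rest_pts[edges[:, 0]] - rest_pts[edges[:, 1]], axis=1)
    anim_len = np.linalg.norm(anim_pts[edges[:, 0]] - anim_pts[edges[:, 1]], axis=1)
    stretch = anim_len / np.maximum(rest_len, 1e-8)   # <1 means compression
    return np.clip(1.0 - stretch, 0.0, 1.0)

# Toy example: one edge compressed to 70% of its rest length.
rest = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
anim = np.array([[0.0, 0.0, 0.0], [0.7, 0.0, 0.0]])
print(compression_mask(rest, anim, np.array([[0, 1]])))   # -> [0.3]
```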

SESSION: Games - Activision Blizzard King

Shadow of HyperPose: New Animation System

This presentation details HyperPose, an innovative animation system that represents motion through strategically chosen samples (“principal dynamic poses”) integrated into a procedural state machine. State navigation is achieved via a cost function, optimizing the system's efficiency and realism in animation. Data is represented as a four-dimensional hyperpose.

Extensions of HyperPose: Echo, Savant, Loop

This presentation introduces several solutions that came as byproducts of our work on HyperPose. Echo is a technique for applying stylistic modifications to select poses in character animation and extending these effects to unaltered data. Savant is a novel approach to generating runtime secondary motion for character cloth, hair, muscles, props, etc. Loop describes an improved motion-capture method aimed at serving the next generation of realism in character animation.

SESSION: Stop the Presses!

Spatial storytelling in the Newsroom: Reconstructing news events in 3D

When we create stories in 3-D space, we can go beyond words and help readers feel they were there to experience and understand the event. This talk shows how three visual stories from The New York Times create a sense of presence using computer vision techniques, spatial audio, and 3-D tiles.

Efficient Visibility Reuse for Real-time ReSTIR

Back in the Eye of the Storm, The Visual Effects of 'Twisters'

We begin with an exploration of the evolution from the original 1996 Twister to 2024’s Twisters, highlighting technological advancements and their impact on visual storytelling. The seamless integration of special effects (SFX) and visual effects (VFX) is crucial for achieving realism, and we discuss the collaboration between these teams to create cohesive, immersive experiences.

A key focus is the elemental analysis of storms, breaking down the visual components that form tornadoes. This includes understanding the recipe and rules of storms, their lifecycle, and their environmental impact. The tools and setups developed to recreate these phenomena are showcased, illustrating the blend of science and creativity.

Photography plays a vital role in capturing the essence of real storms. We examine the extensive reference photography process, from high-resolution footage shot by storm chasers driving into storms to everyday iPhone and GoPro clips, and how these references shaped our imagination and design of tornadoes.

Finally, the talk covers the process of transforming tornadoes into characters within the narrative. Each tornado in Twisters is unique in size, shape, and form, following the simple physics of these natural phenomena while pushing creative boundaries. This section reveals how Industrial Light & Magic (ILM) rigged and animated the different tornadoes, making them central figures in the story. The talk offers an in-depth look at the magic that brings nature’s fury to life, inspiring and educating the audience on the potential of visual effects in film to replicate nature.

SESSION: Lips Don’t Lie

Lip-Sync ML: Machine Learning-based Framework to Generate Lip-sync Animations in FINAL FANTASY VII REBIRTH

Audio2Rig: Artist-oriented deep learning tool for facial and lip sync animation

Creating realistic or stylized facial and lip-sync animation is a tedious task. It requires a lot of time and skill to sync the lips with audio and convey the right emotion on the character’s face. To allow animators to spend more time on the artistic and creative part of the animation, we present Audio2Rig: a new deep learning-based tool that leverages previously animated sequences of a show to generate facial and lip-sync rig animation from an audio file. Built in Maya, it learns from any production rig without any adjustment and generates high-quality, stylized animations that mimic the style of the show. Audio2Rig fits into the animator workflow: since it generates keys on the rig controllers, the animation can easily be retaken. The method is based on three neural network modules which can learn an arbitrary number of controllers, so different configurations can be created for specific parts of the face (such as the tongue, lips or eyes). With Audio2Rig, animators can also pick different emotions and adjust their intensities to experiment with or customize the output, and have high-level control over keyframe settings. Our method shows excellent results, generating fine animation details while respecting the show’s style. Finally, as the training relies on the studio’s data and is done internally, it ensures data privacy and prevents copyright infringement.
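
The abstract does not specify the network design, so the sketch below is only a simplified stand-in for the idea: a sequence model that maps per-frame audio features to rig-controller keys, conditioned on an emotion whose intensity the animator can scale. It is not Audio2Rig's actual architecture.

```python
# Simplified stand-in for an audio-to-rig-controller regressor (not Audio2Rig's
# actual networks): a GRU maps per-frame audio features to controller values,
# conditioned on an emotion embedding whose intensity can be scaled at inference.
import torch
import torch.nn as nn

class AudioToControllers(nn.Module):
    def __init__(self, audio_dim=80, num_emotions=8, emo_dim=16,
                 hidden=256, num_controllers=64):
        super().__init__()
        self.emotion = nn.Embedding(num_emotions, emo_dim)
        self.rnn = nn.GRU(audio_dim + emo_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_controllers)

    def forward(self, audio_feats, emotion_id, intensity=1.0):
        # audio_feats: (batch, frames, audio_dim), e.g. mel features per frame.
        b, t, _ = audio_feats.shape
        emo = self.emotion(emotion_id) * intensity          # scale emotion strength
        emo = emo.unsqueeze(1).expand(b, t, -1)
        h, _ = self.rnn(torch.cat([audio_feats, emo], dim=-1))
        return self.head(h)                                  # controller value per frame

model = AudioToControllers()
curves = model(torch.randn(1, 120, 80), torch.tensor([3]), intensity=0.5)
print(curves.shape)   # torch.Size([1, 120, 64]) -> keys for 64 rig controllers
```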

Practical Use of Machine Learning in Visual Effects and Animation

SESSION: Wish You VR Here

Co-Presence, Connection and Co-Creation: Building Real-time Cross-person Neurofeedback Interactions

How can we push the boundaries of human connection by tapping into the power of immersive interpersonal brainwave interactions? While biosignal visualizations and interactions have advanced in both research and art scenes, most experiences and applications that utilize brainwaves and other biosignals involve single-person or one-way interactions from one individual to another, underexploiting the prosocial potential of these biosignals. This talk explores cross- and multi-person environments in virtual reality (VR) driven by shared brainwaves. Through case studies of two installations in a series, each showcasing unique relationships cultivated by the immersive experience via distinct EEG manifestations and visualization rules - feeling each other's presence, reinforcing mutual emotional connections, and co-creating art through synchronized brainwaves - the talk describes the design and development process of cross-person neurofeedback experiences in virtual reality. It explains various innovative approaches to real-time brainwave visualizations and interactions that integrate technology, science, storytelling and gamification, and examines their significance in expanding non-verbal channels of communication, elevating interpersonal connections, and fostering collective creativity.

Measuring how Project Starline improves remote communication with behavioral science

Virtual Reality for inward contemplation

When performed at a high level, meditation and contemplative practices can give rise to altered states of consciousness, the study of which is key for the cognitive neurosciences. These effects can help us better understand the underlying mechanisms of the self and its relation to the body. But reaching those states is rare and thus hard to observe; it requires intense practice and dedication, even among regular meditators. To make these specific states more accessible, we propose a new approach, “neuro-engineered meditation”, which brings technologies such as virtual reality (VR) and biofeedback to give practitioners new perspectives on their practice by targeting their own body and internal sensations. We expect this approach to help participants better understand complex concepts and achieve deeper results compared with regular practice.

SESSION: Opening Up About USD

An OpenUSD Production Pipeline with Very Little Coding: Empowering 3D artists with a parallel workflow using off-the-shelf software

We present a streamlined animation production pipeline leveraging OpenUSD and Houdini's procedural strengths to enable collaborative, parallel workflows. Our approach minimizes coding requirements, empowering artists to iterate simultaneously on shared assets and shots for real-time project visualization. Developed and proven in an academic setting, this pipeline demonstrates adaptability and scalability for small studio environments, successfully fostering iterative workflows among users.

Sony Imageworks Animation Layout Workflow with Unreal Engine and OpenUSD

This is an overview of the new rough layout pipeline at Sony Pictures Imageworks. In a notable departure from the legacy pipeline, sequence and shot-based work now begins in Unreal Engine. By exporting USD data out of Unreal Engine to share with other DCCs, we were able to reinvent the early stages of feature film production.

Optimizing Assets for Authoring and Consumption in USD

Walt Disney Animation Studios has used Universal Scene Description (USD) [Pixar 2016] as the backbone of its production pipeline since "Encanto" in 2021 [Miller et al. 2022]. In this talk, we introduce a new asset structure that addresses speed issues with our initial asset structure design and vastly simplifies asset authorship. The new asset structure helped to (1) streamline and decouple asset authorship and shot consumption, (2) enable new authoring workflows that better take advantage of USD’s multi-stage model, and (3) open the door for shot-focused, asset-based optimizations.
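
As a toy illustration of decoupling authoring from consumption (names and layout are our own assumptions, not the studio's actual asset structure), an asset interface can keep heavy geometry behind a payload and expose authored alternatives through a variant set using the public USD API:

```python
# Illustrative only: a minimal asset interface that keeps heavy geometry behind
# a payload and exposes authoring choices through a variant set, so shot
# consumers can load lightweight interfaces and pull payloads on demand.
from pxr import Usd, Sdf

stage = Usd.Stage.CreateInMemory()
asset = stage.DefinePrim("/TeapotAsset", "Xform")
stage.SetDefaultPrim(asset)

# Heavy geometry lives in a separate layer, referenced as a payload so that
# shots can open the asset without composing the full mesh data.
asset.GetPayloads().AddPayload("./teapot_geom.usda")   # hypothetical layer path

# A variant set lets authoring departments publish alternatives behind one prim.
lod = asset.GetVariantSets().AddVariantSet("lod")
for name in ("proxy", "render"):
    lod.AddVariant(name)
lod.SetVariantSelection("render")
with lod.GetVariantEditContext():
    # Opinions authored here only apply while the "render" variant is selected.
    asset.CreateAttribute("displayPurpose", Sdf.ValueTypeNames.Token).Set("render")

print(stage.GetRootLayer().ExportToString())
```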

SESSION: Such a Character

Crowdabunga! The Crowd Challenges of TMNT: Mutant Mayhem

From the original comic book to the TV shows, the universe of the Ninja Turtles is well known for its unique characters, whether lead or secondary. To deliver believable and directable crowds for “Teenage Mutant Ninja Turtles: Mutant Mayhem” (TMNT: MM), the teams at Mikros Animation had to devise new approaches for the Crowd department to populate dozens of shots while matching the animation and visual style of the show. We faced multiple challenges: the crowd characters were close to the camera and had to be diverse while matching the asymmetrical, clay-like style and capabilities of the main characters; the main action takes place in NYC and portrays well-known crowded situations; and the results had to remain faithful to the 2D teenage style of the show.

In this paper, we detail the tools we developed to tackle these challenges. First, a Character Building Tool was developed to generate hundreds of stylized morphologies from an input of four Character Templates, hero garments and pencil strokes. Then, crowd artists took advantage of casting tools to populate their shots, assign predefined animations and enhance the final results with procedural behaviors. Finally, post-simulation tools were implemented to alter animations and lookdev based on camera distance.

How to control Mayhem: The painted look of hair on TMNT

TMNT Mutant Mayhem (2023) represented a great challenge, artistically and technically. The director aimed for a look that would give the illusion that each frame was hand-crafted 2D concept art. To break away from a digital or procedural feeling, we needed to approach the assets, from modeling to surfacing, in a unique way. This was especially challenging for the look and feel of the groom. To make the groom feel cohesive with all the other objects and integrate seamlessly into the rendering scenes, we developed a layered system that combined modeled geometry, curves, and a stylized surfacing, rendering, and lighting approach, connecting the work of modeling, grooming, surfacing, and lighting artists.

Performance driven Character Effects in “The Garfield Movie”

The stylised animation in "The Garfield Movie" presented a number of technical challenges, especially for the fur. These included detached limbs sliding over the surface of the body, which required the groom to maintain its styling and interact reliably and predictably. Additionally, limbs and body regions could be stretched drastically to support the extreme poses and stylised facial expressions, requiring the fur to dynamically maintain density in those regions.

SESSION: Lights, Quality, Action!

Development of Real-Time QA/QC Tools for AEC in Unity

Here, we are focusing on how our real-time visualizations are being used to improve the QA/QC process. While the AEC industry has utilized 3D CAD design software for years, the review process still typically involves commenting on printed 2D drawings or PDFs. We have developed a QA/QC tool that allows our engineers and project managers to review designs in real-time 3D, placing comment markers in 3D space for others to see. Built in Unity, this tool supports viewing and commenting on everything from individual CAD models and components to sprawling miles-long infrastructure projects, with all markup data stored securely in the cloud for easy access by authorized contributors.

This is opposed to the traditional method, which involved building a 3D environment, rendering a video, sending it out for review, then having the review team take screenshots and compile them into a PDF. By allowing reviewers to mark up the 3D environment directly, we are drastically speeding up the iteration process. In addition, users have better insight into the current status of revisions, as markups can be configured to different stages of completion as updates are made and the project evolves. We have also added multi-user capabilities: using game engine networking functionality allows multiple users to be in the tool at once, interacting and communicating within the same environment in real time. And our tool can be built for multiple platforms, so users can access and interact with it on the web, mobile devices, VR, etc. As a result, not only does using a real-time game engine speed up the QA/QC process, it also opens new opportunities for improved communication and collaboration.

A Resampled Tree for Many Lights Rendering

We propose a new hybrid method for efficiently sampling many lights in a scene that combines a simplified spatial tree with a resampling stage. Building on previous methods that work with a split or cut of the light tree, we introduce the idea of probabilistic splitting to eliminate noise boundaries. This yields a subset of lights that is then reduced to a smaller, bounded set for full light/BSDF evaluation and resampling. Our main contribution is the stochastic splitting formulation combined with a reservoir-set technique that limits samples to an arbitrary number to avoid variable-size collections.
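
For context, the resampling step builds on the standard weighted-reservoir update; a bare single-sample version (the talk's reservoir-set technique keeps a bounded set rather than one sample) can be sketched as follows.

```python
# Minimal single-sample weighted reservoir, the building block behind resampled
# importance sampling (RIS): stream candidate lights with importance weights and
# keep one proportionally to weight, without storing the whole candidate set.
import random

def reservoir_pick(candidates):
    """candidates: iterable of (light, weight). Returns (light, total_weight)."""
    chosen, w_sum = None, 0.0
    for light, w in candidates:
        w_sum += w
        # Replace the kept sample with probability w / w_sum, which yields a
        # final selection probability proportional to each candidate's weight.
        if w_sum > 0.0 and random.random() < w / w_sum:
            chosen = light
    return chosen, w_sum

# Toy usage: three lights with unnormalized importance (e.g. intensity / distance^2).
print(reservoir_pick([("key", 5.0), ("fill", 1.0), ("rim", 0.5)]))
```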

ENVIZ: DreamWorks GPU-Accelerated Interactive Environment Deformation and Visualization Toolset

ENVIZ is a toolset at DreamWorks Animation that uses the power of the GPU for both interactive deformation and visualization of environments in the 3-D viewport. It replicates the final rendered high-fidelity motion and reacts to material shaders and lights to produce a well-shaded and lit scene. Meshes with millions of points can be deformed and visualized in real time in the viewport, which helps the artist evaluate motion for a shot without having to run costly renders, providing significant time and resource savings.

Premo: Integrated Versioning

SESSION: Win, Lose or Draw

Pixar's Inside Out 2: Character Rig Challenges & Techniques

The characters team on Pixar’s Inside Out 2 shares some of the technical & design challenges on our character rigs and presents the techniques used to solve them.

Familiar Feelings: Emotion Look Development on Pixar's Inside Out 2

The emotion characters on Pixar’s Inside Out (2015) were composed of multiple elements that gave them an ethereal look. They were an ingenious composite of a core volume that behaved and illuminated as a glowing surface, moving particles that hovered over it, an edge volume tinted differently based on the lighting direction, and strands of dots that resembled hairs from a distance but sparkles up close. On Pixar’s Inside Out 2 (2024) we had the challenging task of bringing these well-known, technically complex characters back to life. We recreated the core five emotions, straddling the delicate balance of upgrading them to take advantage of newer technology while still maintaining their familiar look. We also created a whole new cast of emotions for Riley’s adolescent mind.

Cinematography of Pixar's Win or Lose

Pixar's Win or Lose - Stylized FX in an Animated Series

Win or Lose, Pixar’s first original venture into episodic long-form storytelling, features a variety of stylized looks and visual effects as diverse as its cast and set of perspectives. To face these challenges, the effects team was formed from a group of multidisciplinary artists from typically separate groups across the studio. We worked under a philosophical mindset focused on collaboration across departments, lightweight and nimble experimentation, and an eye for global impact over polished detail. Each episode provided its own unique set of challenges and opportunities to exercise these philosophies, as well as successes and failures. We discuss examples from our production that highlight the process of creating these stylized effects.

SESSION: The Effect of Character

A Modernization of the DreamWorks Feather System

The DreamWorks Feather System is a toolset used to design feathered characters. It enables artists to manage all aspects of feathers from card layout through shot work by using intuitive tools and automated pipeline processes. While the previous system proved inaccessible for some due to its highly technical nature, these new developments have enabled dozens of artists to successfully groom characters and run their feathers in shots. An initial feather layout can now be created in a matter of minutes, with a suite of comprehensive tools allowing for further refinement and precise control over feather card layout, feather curves, and animated motion.

Making of Chameleon Transformation FX in KFP4

In the world of Kung Fu Panda 4, the transformation FX emerges as one of the central narrative elements, showcasing the primary power of the film’s key villain, the Chameleon. This effect imbues the character with a scary, unsettling ability to morph into different forms, varying significantly in size and shape. It enhances the storytelling by allowing the Chameleon to be ubiquitously present in various guises and places. Our FX team embarked on a journey to develop a robust transformation system capable of handling numerous characters. It had to be flexible enough to cater to the needs of a large group of artists and adaptable across a wide range of shots, from dynamic kung fu action to close-ups with subtle camera movement. Additionally, this work involved cooperative efforts with multiple departments, such as animation, CFX and lighting, resulting in the creation of cross-department workflows.

A New Kingdom: Weta FX Returns to The Planet of The Apes

Featuring dynamic interactions between apes and humans, a world reclaimed by nature, and large-scale combustion and water simulations, Kingdom of the Planet of The Apes is the result of several years of technical advancements. This paper will explore how artists brought the CG apes to life, and the myriad landscapes that were built, dressed and eventually destroyed.

SESSION: Generative AI and Style Transfer

A Diffusion-Based Texturing Pipeline for Production-Grade Assets

We introduce an artist-centric Stable Diffusion pipeline which takes production-grade mesh assets as input and, given text and image prompts, generates texture maps instantaneously. While generative AI methods have recently enjoyed rapid growth, most existing works target a broad audience and are not directly usable in professional artist workflows, which require fast iteration time, precise editability, and compatibility with standard toolchains. We build a system that takes these requirements into consideration from the bottom up. Our pipeline allows manual overrides for maximal artist control and ultimately enables artists to rapidly iterate on their work without disruption to existing workflows.
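
While the production pipeline itself is proprietary, the general pattern of conditioning a diffusion model on a render of the asset can be sketched with off-the-shelf components; the depth-ControlNet choice, the model IDs, and the render_depth helper below are illustrative assumptions, not the system described in the talk.

```python
# Illustrative sketch, not the production pipeline: condition Stable Diffusion on
# a depth render of the mesh so generated texture detail follows the asset's
# shape. render_depth() is a hypothetical helper; model IDs are public examples.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

def render_depth(mesh_path: str) -> Image.Image:
    # Hypothetical placeholder: a real implementation would rasterize the asset
    # to a depth image from the desired camera or UV layout.
    return Image.new("L", (512, 512), 128)

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16).to("cuda")

depth = render_depth("hero_prop.usd")           # assumed asset path
image = pipe("weathered bronze statue, film prop, 4k texture",
             image=depth, num_inference_steps=20).images[0]
image.save("texture_candidate.png")             # artist reviews / overrides from here
```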

Prompt to Anything?: Exploring Generative AI's Iterative Potential in Mapping Show Production.

Incorporating Generative AI (Gen-AI) into our content creation processes at Moment Factory is an adventure driven by curiosity, full of excitement and challenges. We have embarked on this journey not only to understand the capabilities of Gen-AI but also to experiment with how to effectively utilize and adapt its creative capabilities to suit our project needs, particularly in the contexts of location-based experiences and 3D mapping shows.

A location-based experience denotes a multimedia, interactive and immersive journey meticulously crafted for a specific physical location. As we orchestrate these experiences from inception to execution, we must develop content tailored precisely to each unique setting. These unusual canvases, often vast in scale, are called “mega-canvases”. They can encompass a diverse, expansive array of display surfaces integrated into scenography or architectural designs, demanding visual content at hyper-resolutions exceeding 16K.

Our talk aims to unfold our visual and technical explorations, highlighting the innovative integration of Gen-AI into our process, and sharing the significant accomplishments and lessons learned from conceptualization to execution, and the unique challenges they present. Attendees will leave with a comprehensive understanding of the potential of Gen-AI in enhancing location-based experiences and 3D mapping shows, equipped with knowledge and inspiration to explore these technologies in their own creative endeavors. Through sharing our experiments and outcomes, we aim to foster a dialogue on the future of immersive content creation, inviting others to join us in pushing the boundaries of what is possible with Gen-AI in multimedia experiences.

Creating Infinite Characters From a Single Template: How Automation May Give Super Powers to 3D Artists

Game character creation is a time-consuming and highly manual process that often requires several days to complete one non-player character. In addition, each game has a unique character art style and technical specifications (mesh, rig, textures). This explains why game worlds with thousands of unique characters are rare. To address this problem, we propose an automated character generation pipeline that can produce a full-body, rigged, dressed, and accessorized 3D character in less than a minute from an initial template character, based on 2D pictures and/or facial text descriptors. This pipeline helps scale up character creation and populate games with unlimited variations at reduced cost, while ensuring consistency with the game’s art style.

Making Magic with 3D Volume Style Transfer

In order to achieve the unique look of a hand-drawn, 2D visual style blended with 3D effects elements in Walt Disney Animation Studios’ film "Wish", effects artists leveraged new applications of Volumetric Neural Style Transfer (VNST) techniques. VNST, a method for stylizing 3D volume simulations by extracting the textures and shapes from a 2D image, has become increasingly effective in assisting the stylization of effects across a number of the studio’s previous projects. For "Wish", the effects artists took advantage of new advancements to inform and accelerate the visual development process, increase the reuse and efficiency of VNST throughout production, and achieve levels of style and detail unachievable with previous methods.

SESSION: Stylized Shading, and Classic Too

Real-Time Refraction Shader for Animation

We present Animal Logic’s solution to simulate light refraction in deformable characters’ eye corneas in Autodesk® Maya®’s viewport, with a result close to the final render, significantly reducing iterations for facial animation workflow. Our approach is tightly integrated with Animal Logic’s GPU-based deformation engine for a minimal impact on playback. The refraction is generated automatically from the scene’s geometries and a simplified shading definition exported by the lookdev department using Pixar® Universal Scene Description. It has proven to give reliable results, allowing for its adoption in production for all current and upcoming shows.

Dynamic Screen Space Textures for Coherent Stylization

Achieving a watercolor look was an important goal for the style of Walt Disney Animation Studios’ “Wish”, and screen space textures were critical for achieving this, for example to convey a sense of the watercolor paper texture. However, using traditional screen space textures would have resulted in a distracting shower-door effect where the animation appears to swim through the texture. Our novel dynamic screen space textures overcame this problem by tracking animation and camera movement while maintaining the screen space qualities of the texture.

A Pipeline for Effective and Extensible Stylization

Inspired by turn-of-the-century watercolor illustrations, the art direction for Walt Disney Animation Studios’ "Wish" called for a unique watercolor storybook style. Traditional CG renders were insufficient to judge the final look of the film without additional processing in departments upstream of lighting. To prevent this issue, we created a stylization pipeline to (1) automatically provide all upstream departments with stylized renders representative of the final stylized look of the film, (2) provide the ability to iterate closely with lighting when stylization changes were needed, and (3) provide a centralized place for lighting to manage sweeping stylization changes across the show.

Evolving a Testsuite for Shading and Rendering

This talk will present some of the software testing methodologies used within the back-end divisions (rendering, shading, etc.) of Sony Pictures Imageworks. A key focus will be the rendering testsuite, an image-based testing system. We will delve into its evolution and discuss the challenges of large-scale image-based testing.

SESSION: Fire, Water, and Sand

Simulation and Representation of Topology-Changing Rolling Waves for Massive Open Ocean Games

Rendering close-shore oceanic water phenomena, e.g. rolling waves, for real-time applications is a challenging problem, especially in a massive open-world game where variability, artistic control and performance are crucial. While using a form of animated height displacement maps is a common technique in open-world games (e.g. Assassin’s Creed Odyssey (2018), Death Stranding (2019)), it is limited to height data and is unable to deal with the overhangs and changing topology of collapsing ocean waves. We propose a novel method of capturing wave simulation data with a set of approximation curves that can deal with overhangs, changing topology and strict budget requirements.

Large Scale Sand Simulations on Under The Boardwalk

In the 2023 feature animation Under the Boardwalk, a Romeo-and-Juliet-meets-West-Side-Story musical comedy, we enter the miniature world of the hermit crab. Set on the Jersey Shore, the rivalry between the sea crabs descending on the home of the resident land crabs during spring break gave us lots of sand-interaction shots. Our workflows had to handle close-up macro shots with one or two crab characters, full battle sequences with hundreds of crowd crabs, and human-scale characters. With hundreds of FX shots featuring fully simulated sand interaction and many beach environments needing sculpting and shaping to match dressed assets, we had to turn to automation techniques to tackle this number of shots efficiently. Our envfinishing step was used to preprocess the shot beach geometry to add detail, deform the sand around the static set objects and add footprints. The FX team used this geometry as a basis for the grain simulations and included any water interaction with rain or waves. These caches had to inherit the large-scale textural detail and the fine per-grain variations provided by the surfacing department to seamlessly blend the sand and beach geometry.

Art Directable Underwater Explosion Simulation

We present a technique for simulating underwater explosions using an animated volume-control method that allows us to visually approximate the expansion and contraction of underwater explosions measured in the existing literature. The foundation of this technique is a FLIP/APIC bubble simulation coupled with a surrounding, sparsely allocated volumetric water field in a multi-phase solve. We achieve the desired compression and expansion effects by animating the target bubble volume via adjusting the equilibrium FLIP particle counts per voxel. Adjusting bubble density with volume and adding surface tension improves the match to real-world references. Because our method can be animated to any timing desired by the artist, it is more practical for achieving art direction.
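
One plausible reading of this control mechanism (an assumption on our part, not the authors' implementation) is that the animated target volume is converted into an equilibrium particle count per voxel via conservation of particle number:

```python
# Assumption for illustration: with the particle count fixed, a bubble at target
# volume V(t) has rest density proportional to 1/V(t), so the equilibrium
# particles-per-voxel the solver pushes toward scales as n0 * V0 / V(t).
import numpy as np

def equilibrium_count(n0: float, v0: float, target_volume: np.ndarray) -> np.ndarray:
    """n0: rest particles per voxel at rest volume v0; target_volume: animated V(t)."""
    return n0 * v0 / np.maximum(target_volume, 1e-8)

# Toy curve: bubble expands to 3x its rest volume, then collapses back.
t = np.linspace(0.0, 1.0, 5)
v = 1.0 + 2.0 * np.sin(np.pi * t)            # rest volume v0 = 1.0
print(equilibrium_count(8.0, 1.0, v))        # lower counts while expanded
```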

Elemental - Fireplace Flooding FX

In Disney and Pixar’s Elemental, I was tasked with developing and executing flood-water effects for the fireshop flooding sequence. The effect starts as water spray emitting from gaps in the door and then grows in intensity as it breaks through the doors and windows with stronger force. This starts to flood the fireshop and puts the main character Ember, who is made of fire, in great peril. The characters eventually end up trapped inside a small reading room with water spouts spraying in from debris blocking the entrance. During the initial stages, we reviewed the storyboards, concept art and the layout staging and decided to group the shots into three sections. The first section involved all shots with the water spouts spraying from the doorway; we created a sequence-level effects simulation for the majority of the shots in this group. When the flooding got stronger and more turbulent, we decided to make a multishot Houdini rig to tackle that group of shots. For the water spouts in the reading room, we created a sequence-level effects simulation similar to the door spray water. While crafting the effect, we made sure the flooding felt progressively more turbulent and dangerous across the first two sections. During this time, we also identified dependencies with character/set animation and other effects such as floating props and destruction effects.

SESSION: Love Me Some Color

The Real-Time Frontier: Stylized Feature Quality Storytelling in Unreal Engine

This paper presents Steamroller Animation's innovative method for merging 2D artistic styling with a 3D real-time environment for “Spice Frontier”, leveraging Unreal Engine for streamlined long-form production. We focus on a novel shot setup process that addresses the challenges of transitioning to a real-time workflow, and introduce solutions such as custom character setups, rim shaders, rig versioning, gobos and FX sprite sheets. This approach not only aligns artistic vision with advanced technology but also advances animation workflow standards, emphasizing efficiency and creative integrity.

Controlling the color appearance of objects by optimizing the illumination spectrum

We have developed an innovative lighting system that changes specific target colors while the lights continue to appear naturally white. By precisely controlling the spectral power distribution (SPD) of the illumination and harnessing the unique phenomenon of metamerism, our system achieves unique color variations in ways you’ve never seen before. It calculates the optimal illumination SPDs for given materials to intensively induce metamerism, and then synthesizes the illumination using LEDs of various colors. We successfully demonstrated the system at Paris Fashion Week 2024: as models step onto the stage, their dresses undergo a captivating transformation, with the system altering the dresses’ colors in an impressive transition from one stunning color to another.
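
For context, the colorimetry this rests on can be written compactly (standard CIE definitions, not the authors' specific objective). With LED basis spectra $L_i(\lambda)$ mixed with weights $w_i$, a material reflectance $R(\lambda)$, and the CIE observer function $\bar{x}(\lambda)$,

$$
S(\lambda) = \sum_i w_i\, L_i(\lambda), \qquad
X = \int S(\lambda)\, R(\lambda)\, \bar{x}(\lambda)\, \mathrm{d}\lambda,
$$

and similarly for $Y$ and $Z$ with $\bar{y}$ and $\bar{z}$. Choosing the $w_i$ so that a white reference ($R \equiv 1$) keeps a fixed neutral tristimulus while the target material's tristimulus shifts toward a desired color is one way to pose the metameric-illumination optimization; the exact objective used in the system is not given in the abstract.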

Neutral Tone Mapping for PBR Color Accuracy

On e-commerce websites, interactive 3D product models sit side-by-side with sRGB product photos that have been carefully color graded in post to achieve the desired marketing look. For color consistency, e-commerce production requires a tone mapper that faithfully reproduces the 3D model's material colors on screen under neutral (grayscale) lighting to match the photos. This allows artists to build 3D models using marketing-approved sRGB color swatches without needing to later tweak material values to make the output align with existing product images.

A neutral tone mapper has been developed at the Khronos 3D Commerce working group precisely to address this need. The goal is to standardize it and make it an available option across authoring tools and renderers as an improved alternative to disabling tone mapping entirely when no "look" is desired.
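
As a rough illustration of what "neutral" implies (a generic max-channel rolloff, not the Khronos PBR Neutral specification, which defines its own constants and a highlight-desaturation term): colors below a shoulder reproduce exactly, and only brighter values are compressed along their hue.

```python
# Generic illustration of a "neutral" tone curve (NOT the Khronos PBR Neutral
# spec): values below the shoulder pass through unchanged, so graded sRGB
# swatches reproduce exactly, and only highlights are rolled off per max channel.
SHOULDER = 0.8   # assumed threshold; the real spec defines its own constants

def neutral_tonemap(rgb):
    peak = max(rgb)
    if peak <= SHOULDER:
        return rgb                       # material colors reproduced verbatim
    # Compress the range above the shoulder so peak asymptotically approaches 1,
    # scaling all channels equally to preserve the color's hue and saturation.
    d = 1.0 - SHOULDER
    new_peak = 1.0 - d * d / (peak + d - SHOULDER)
    return tuple(c * new_peak / peak for c in rgb)

print(neutral_tonemap((0.5, 0.3, 0.2)))   # unchanged: below the shoulder
print(neutral_tonemap((2.0, 1.0, 0.5)))   # highlights compressed toward 1.0
```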

SESSION: Less Work, More Perf

Seiler's Interpolation for Evaluating Polynomial Curves

Seiler’s interpolation allows evaluating polynomial curves, such as Bézier curves, with a small number of linear interpolations. It is particularly effective with hardware linear interpolation used in GPU texture filtering. We compare it to the popular alternatives, such as de Casteljau’s algorithm, and present how it extends to higher-degree polynomials.
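
For reference, the de Casteljau baseline named above evaluates a cubic Bézier with repeated linear interpolation; a minimal version is sketched below, and the talk compares Seiler's scheme against it.

```python
# The de Casteljau baseline referenced above: evaluate a cubic Bezier at
# parameter t by repeatedly linearly interpolating between control points.
def lerp(a, b, t):
    return a + (b - a) * t

def decasteljau_cubic(p0, p1, p2, p3, t):
    a, b, c = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)   # first level
    d, e = lerp(a, b, t), lerp(b, c, t)                           # second level
    return lerp(d, e, t)                                          # point on curve

print(decasteljau_cubic(0.0, 0.0, 1.0, 1.0, 0.5))   # 0.5 for this symmetric curve
```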

Look, Ma, No Matrices!

This talk discusses some of the pitfalls and surprises revealed by the "Look, Ma, No Matrices!" [Keninck 2024] project, which implements an industry-standard forward renderer exclusively using PGA (Projective, or Plane-based, Geometric Algebra). We will briefly go over some of the challenges and reveal how an insightful application of PGA leads to substantial savings for a typical tangent-space normal-mapping setup.

A Position Based Material Point Method

The explicit Material Point Method (MPM) is an easily implemented scheme for the simulation of a wide variety of different physical materials. However, explicit integration has well-known stability issues. We have implemented a novel semi-implicit, compliant-constraint formulation of MPM that is stable at any time step while remaining as easy to implement as an explicit integrator. We call this method Position Based MPM (PB-MPM). This work significantly improves the utility of MPM for real-time applications.
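
The compliant-constraint machinery that position-based methods build on (and which, per the abstract, PB-MPM adapts to the MPM setting; the grid-level details are in the talk) takes the standard XPBD form: for a constraint $C(\mathbf{x})$ with compliance $\alpha$, Lagrange multiplier $\lambda$, mass matrix $\mathbf{M}$, and time step $\Delta t$,

$$
\Delta\lambda = \frac{-C(\mathbf{x}) - \tilde{\alpha}\,\lambda}{\nabla C\, \mathbf{M}^{-1} \nabla C^{\top} + \tilde{\alpha}}, \qquad
\tilde{\alpha} = \frac{\alpha}{\Delta t^{2}}, \qquad
\Delta\mathbf{x} = \mathbf{M}^{-1} \nabla C^{\top}\, \Delta\lambda ,
$$

which remains stable at large time steps because the constraint force is resolved implicitly in the position update.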

Designing Mobile Rendering Engines with "Bindless" Vulkan

SESSION: Stop Motion, Go Car!

Polymorph, a minimal input procedural modeling tool for rapid prototyping of stop motion puppets at LAIKA

Polymorph is a minimal input voxel-poly toolset for interactive creation of stop motion puppet head and neck assemblies. At its core it generates procedural 3D print-ready geometry for replacement facial animation.

A Procedural Production System for Autonomous Vehicle Simulation

Simulations provide the only safe and reliable way to test autonomous vehicle systems in situations that may be unusual or undesirable to witness on the road. In order to make simulation at scale possible and its accuracy sufficient, production systems must be built which adapt state-of-the-art graphics production practices to this new field. In this work we discuss the procedurally based system developed at Aurora, which forms a key part of the company’s approach to testing and validation.

Alba: A Multimodal Rendering System for Autonomous Vehicle Simulation

Physically based rendering and path tracing have become the norm in visual effects and animation thanks to the level of realism that they offer. On top of these foundations, modern rendering systems implement layers of specializations that target the use case of images for human consumption. In this paper, we discuss Alba, a multimodal rendering framework for camera, lidar and radar, which targets the use case of autonomous vehicle simulation. We discuss its architecture and the different design choices made to optimize for accuracy, scale, and machine consumption.

Self Examination: Pixar's Adventures in Stop Motion

Self is the story of a wooden doll who desperately wants to fit in and makes an ill-fated wish upon a star, sparking a journey of self-discovery, leading her down a harmful path, and challenging her perspective of both who she is and where she belongs. It is also the most recent of Pixar’s SparkShorts, and represented a number of firsts for Pixar: the first use of stop motion animation, the first collaboration with another studio, and the first use of a live-action visual effects workflow in Pixar’s animation-centric pipeline. These presented some unique challenges and required us to restructure much of how we work in order to incorporate physical puppet fabrication and stop motion animation.