Efficient simulation of contact is of interest for numerous physics-based animation applications: virtual reality training, video games, rapid digital prototyping, and robotics simulation all involve contact modeling and simulation. However, despite its extensive use in modern computer graphics, contact simulation remains one of the most challenging problems in physics-based animation.
This course covers fundamental topics on the nature of contact modeling and simulation for computer graphics. Specifically, we provide mathematical details about formulating contact as a complementarity problem in rigid body and soft body animation. We briefly cover several approaches for contact generation using discrete collision detection. Then, we present a range of numerical techniques for solving the associated LCPs and NCPs. We discuss the advantages and disadvantages of each technique in a practical manner, along with best practices for implementation. Finally, we conclude the course with several advanced topics such as methods for soft body contact problems, barrier functions, and anisotropic friction modeling. Programming examples are provided in our appendix as well as on the course website to accompany the course notes.
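To make the LCP discussion concrete, the following is a minimal sketch of a projected Gauss-Seidel iteration for a frictionless contact LCP of the form 0 <= lambda, A lambda + b >= 0, with complementarity between the two. It is an illustrative stand-in for the solvers covered in the course; the function name, the dense-matrix formulation, and the toy two-contact system are our own choices, not code from the course appendix.

```python
import numpy as np

def projected_gauss_seidel(A, b, iterations=100):
    """Illustrative solver for the LCP: 0 <= lam complementary to A @ lam + b >= 0.

    A plays the role of the effective-mass (Delassus) matrix of the contact
    system, b the bias / relative normal velocities; lam are contact impulses.
    """
    n = len(b)
    lam = np.zeros(n)
    for _ in range(iterations):
        for i in range(n):
            # Row residual excluding the diagonal term
            r = b[i] + A[i] @ lam - A[i, i] * lam[i]
            # Exact Gauss-Seidel update for row i, projected onto lam[i] >= 0
            lam[i] = max(0.0, -r / A[i, i])
    return lam

# Tiny made-up two-contact system for demonstration
A = np.array([[2.0, 0.5],
              [0.5, 1.5]])
b = np.array([-1.0, 0.3])
lam = projected_gauss_seidel(A, b)
print(lam, A @ lam + b)  # lam >= 0 and A @ lam + b >= 0 up to tolerance
```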
Students of this course should already know how to use 3D modeling software to create FBX files. The course expands on a short overview presented in the Educators' Forum and differs from it in several ways. We first present the context and justification for why botanically accurate plants and landscapes are important for educational applications, such as museums, arboretums, and field trip experiences at botanical gardens. Connected to that goal is the importance of accuracy in visualizing not only the plant but the entire virtual model of the landscape, using plant inventory data and plant population density data from geographic information systems (GIS). This touches on issues that are important for digital twins as models and simulations of reality. Educational applications differ from entertainment applications along two dimensions: information fidelity, the trustworthiness of the presentation, and graphical fidelity, the photorealistic capacity of the rendering system. The two are not always the same. High graphical fidelity is a byproduct of high information fidelity; the reverse is not always true. High graphical fidelity enhances information fidelity when it is used for that purpose.

Two immersive informal learning use cases are presented, one in augmented reality (AR) and the other in virtual reality (VR). Both models used the same design and development process, integrating domain experts, a botanist and an ecologist, with the art and software team to improve accuracy. This co-design, a highly iterative review process, removes errors in educational content and representation as well as in usability, and it can be generalized to any domain in which learning is a goal of a digital twin. In this work it is referred to as the Expert-Learner-User-Experience (ELUX) design process. Game engines, as general-purpose visualization tools, make multimodal interaction possible, enhancing user experiences and making semantic material accessible to the learner.

The technical constraints on the application design demanded two production pipelines. The AR and VR pipeline required low-polygon models for performance, while the newly released Unreal Engine 5 and Reality Capture created an opportunity to increase both the graphical fidelity and the information fidelity of the plants and models. Virtual nature construction methods are therefore covered as two processes: first with low-polygon 3D plant models ideal for AR and VR, and second with high-polygon 3D plant models using Unreal Engine 5 and Reality Capture. When highly accurate 3D plant models and state-of-the-art photorealistic rendering are combined with GIS geospatial datasets and visualized on immersive devices, digital twins of the natural world become possible. Once these models are connected to mathematical models of the natural world, with dynamics driven by real-time data feeds and forecasts, looking both back in time and forward into the future will enhance our understanding of the natural world and how it interacts with the artificial, man-made world.
Today we're excited to walk you through creating location-based Augmented Reality (AR) content with Lens Studio, Snap's augmented reality content authoring tool. We'll discuss what location-based AR is and the design challenges it presents, and demonstrate how you can use Lens Studio to address and solve many of these challenges.
Simulating dynamic deformation has been an integral component of Pixar's storytelling since Boo's shirt in Monsters, Inc. (2001). Recently, several key transformations have been applied to Pixar's core simulator Fizt that improve its speed, robustness, and generality. Starting with Coco (2017), improved collision detection and response were incorporated into the cloth solver; then with Cars 3 (2017), 3D solids were introduced; and in Onward (2020), clothing was allowed to interact with a character's body with two-way coupling.
The 3D solids are based on a fast, compact, and powerful new formulation that we have published over the last few years at SIGGRAPH. Under this formulation, the construction and eigendecomposition of the force gradient, long considered the most onerous part of the implementation, becomes fast and simple. We provide a detailed, self-contained, and unified treatment here that is not available in the technical papers. We also provide, for the first time, open-source C++ implementations of many of the described algorithms.
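To give a flavor of why the eigendecomposition of the force gradient matters in practice, here is a minimal sketch that eigendecomposes a small per-element energy Hessian numerically and clamps its negative eigenvalues to zero before assembly, a common way of keeping the implicit system positive semi-definite. The published formulation obtains these eigenpairs analytically, which is what makes the step fast and simple; the generic numerical version below, with our own function names and a toy matrix, is only meant to illustrate the idea.

```python
import numpy as np

def project_to_psd(H):
    """Clamp negative eigenvalues of a symmetric per-element Hessian to zero.

    Illustrative only: a production solver would use the analytic eigenpairs
    of the energy rather than a generic numerical eigendecomposition.
    """
    eigvals, eigvecs = np.linalg.eigh(H)      # H is assumed symmetric
    eigvals = np.maximum(eigvals, 0.0)        # filter out indefinite directions
    return eigvecs @ np.diag(eigvals) @ eigvecs.T

# Toy symmetric 3x3 "Hessian" with one negative eigenvalue
H = np.array([[ 2.0, -1.0, 0.0],
              [-1.0, -0.5, 0.0],
              [ 0.0,  0.0, 1.0]])
H_psd = project_to_psd(H)
print(np.linalg.eigvalsh(H_psd))  # all eigenvalues are now >= 0
```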
This new formulation is only a starting point for creating a simulator that is up to the challenges of a production environment. One challenge is performance: we discuss our current best practices for accelerating system assembly and solver performance. Another challenge that requires considerable attention is robust collision detection and response. Much has been written about collision detection approaches such as proximity queries, continuous collision detection, and global intersection analysis. We discuss our strategies for using these techniques, which provide us with the valuable information needed to handle challenging scenarios.
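As one small, self-contained example of the kind of proximity query involved, the sketch below computes the closest point on a triangle to a query point using the classic region-based test; a cloth or solid simulator would run such a test between vertices and nearby faces and report a contact when the distance falls below a collision thickness. This is textbook illustrative code, not Fizt's implementation, and the names and thickness value are our own.

```python
import numpy as np

def closest_point_on_triangle(p, a, b, c):
    """Closest point on triangle (a, b, c) to point p, via Voronoi-region tests."""
    ab, ac, ap = b - a, c - a, p - a
    d1, d2 = ab @ ap, ac @ ap
    if d1 <= 0 and d2 <= 0:
        return a                                      # vertex region A
    bp = p - b
    d3, d4 = ab @ bp, ac @ bp
    if d3 >= 0 and d4 <= d3:
        return b                                      # vertex region B
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:
        return a + (d1 / (d1 - d3)) * ab              # edge region AB
    cp = p - c
    d5, d6 = ab @ cp, ac @ cp
    if d6 >= 0 and d5 <= d6:
        return c                                      # vertex region C
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:
        return a + (d2 / (d2 - d6)) * ac              # edge region AC
    va = d3 * d6 - d5 * d4
    if va <= 0 and (d4 - d3) >= 0 and (d5 - d6) >= 0:
        return b + ((d4 - d3) / ((d4 - d3) + (d5 - d6))) * (c - b)  # edge BC
    denom = 1.0 / (va + vb + vc)
    return a + ab * (vb * denom) + ac * (vc * denom)  # interior (face) region

# Example proximity query against one triangle
p = np.array([0.2, 0.2, 1.0])
tri = (np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
q = closest_point_on_triangle(p, *tri)
thickness = 0.05                      # illustrative cloth thickness
in_contact = np.linalg.norm(p - q) < thickness
```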
Compared to path tracing, spectral rendering is still often considered a niche technique, used mainly to produce optical wave effects like dispersion or diffraction. And while more and more people have started exploring the potential of spectral image synthesis over the last few years, it is still widely assumed to matter only in high-quality offline applications associated with long render times and high visual fidelity.
While it is certainly true that describing light interactions in a spectral way is a necessity for predictive rendering, its true potential goes far beyond that. Used correctly, not only will it guarantee colour fidelity, but it will also simplify workflows for all sorts of applications.
Wētā Digital's renderer Manuka showed that there is a place for a spectral renderer in a production environment and how workflows can be simplified if the whole pipeline adapts. Picking up from last year's course, we want to continue the discussion we started, as we firmly believe that spectral data is the future of content production. We are keen to make more people aware of the advantages that spectral rendering and spectral workflows bring, and to share the knowledge we have gained over many years. The novel workflows that emerged during the adoption of spectral techniques at a number of large companies are introduced to a wide audience including technical directors, artists, and researchers. However, while last year's course concentrated primarily on the algorithmic side of spectral image synthesis, this year we want to focus on the practical aspects.
We will draw examples ranging from virtual production and digital humans, through spectral noise reduction, to image grading, showing how spectral data can enhance every part of the image pipeline.
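As a tiny, concrete example of what sits at the bottom of every spectral pipeline, the sketch below integrates a sampled spectral power distribution against colour matching functions to obtain tristimulus XYZ values. The Gaussian matching functions used here are crude placeholders, not the real CIE 1931 tables, which a real pipeline would load from measured data, and the final normalisation (e.g., scaling so a reference illuminant has Y = 100) is omitted.

```python
import numpy as np

def spectrum_to_xyz(wavelengths, spd, cmf_x, cmf_y, cmf_z):
    """Riemann-sum integration of an SPD against sampled colour matching functions."""
    dl = np.gradient(wavelengths)          # per-sample wavelength step in nm
    X = np.sum(spd * cmf_x * dl)
    Y = np.sum(spd * cmf_y * dl)
    Z = np.sum(spd * cmf_z * dl)
    return X, Y, Z                         # unnormalised tristimulus values

def gauss(wl, mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Placeholder Gaussian matching functions (NOT the real CIE 1931 data)
wl = np.arange(380.0, 781.0, 5.0)
cmf_x = gauss(wl, 595.0, 35.0) + 0.35 * gauss(wl, 445.0, 20.0)
cmf_y = gauss(wl, 555.0, 40.0)
cmf_z = 1.8 * gauss(wl, 450.0, 25.0)

flat_spd = np.ones_like(wl)                # an equal-energy spectrum
print(spectrum_to_xyz(wl, flat_spd, cmf_x, cmf_y, cmf_z))
```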
Derivatives occur frequently in computer graphics and arise in many different contexts. Gradients, and often Hessians, of objective functions are required for efficient optimization. Gradients of potential energy are used to compute forces. Constitutive models are frequently formulated from an energy density, which must be differentiated to compute stress. Hessians of potential energy or energy density are needed for implicit integration. As the methods used in computer graphics become more accurate and sophisticated, the complexity of the functions that must be differentiated also increases. The purpose of this course is to show that it is practical to compute derivatives even for functions that may seem impossibly complex. This course provides practical strategies and techniques for planning, computing, testing, debugging, and optimizing routines for computing first and second derivatives of real-world routines. This course will also introduce and explore automatic differentiation, which encompasses a variety of techniques for obtaining derivatives automatically. Applications to machine learning and differentiable simulation are also considered. The goal of this course is not to introduce the concept of derivatives, how to use them, or even how to calculate them per se. This is not intended to be a calculus course; we will assume that our audience is familiar with multivariable calculus. Instead, the emphasis is on implementing derivatives of complicated computational procedures in computer programs and actually getting them to work.
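To give a feel for the automatic-differentiation side of this material, here is a deliberately minimal sketch of forward-mode automatic differentiation using dual numbers, checked against a central finite difference, which is also the standard sanity test for hand-written derivative code. The class and function names are our own and this is not the tooling used in the course.

```python
import math

class Dual:
    """Minimal forward-mode dual number: a value and its derivative carried together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule propagates the derivative
        return Dual(self.val * other.val, self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def sin(x):
    # Chain rule for sin; falls back to math.sin for plain floats
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot) if isinstance(x, Dual) else math.sin(x)

def f(x):
    return x * x * sin(x) + 3.0 * x      # an arbitrary test function

x0 = 1.3
ad = f(Dual(x0, 1.0)).dot                # forward-mode derivative, tangent seeded with 1.0

h = 1e-6
fd = (f(x0 + h) - f(x0 - h)) / (2.0 * h) # central finite-difference check
print(ad, fd)                            # the two values should agree closely
```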
The evolution of the internet is underway, where immersive virtual 3D environments (commonly known as the metaverse or telelife) will replace flat 2D interfaces. Crucial ingredients in this transformation are next-generation displays and cameras that represent genuinely 3D visuals while meeting the human visual system's perceptual requirements.
This course will provide a fast-paced introduction to optimization methods for next-generation interfaces geared towards immersive virtual 3D environments. Firstly, we will introduce lensless cameras for high-dimensional compressive sensing (e.g., single-exposure capture of a video or one-shot 3D). Our audience will learn how to process images from a lensless camera. Secondly, we will introduce holographic displays as a potential candidate for next-generation displays. By the end of this course, you will learn to create your own 3D images that can be viewed using a standard holographic display. Lastly, we will introduce perceptual guidance that can be an integral part of the optimization routines of displays and cameras. Our audience will gain experience in integrating perception into display and camera optimization.
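As a taste of the holographic-display portion, the following is a minimal sketch of a Gerchberg-Saxton-style phase-retrieval loop that computes a phase-only hologram whose far-field (Fourier) reconstruction approximates a target image. It is a generic textbook baseline under a simple Fourier-holography assumption, not the course's in-house toolkit, and the toy target is our own.

```python
import numpy as np

def gerchberg_saxton(target_amplitude, iterations=100):
    """Phase retrieval for a phase-only hologram under a Fourier-holography model."""
    rng = np.random.default_rng(0)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amplitude.shape)  # random initial phase
    for _ in range(iterations):
        # Propagate a unit-amplitude, phase-only field to the image plane (FFT = far field)
        image = np.fft.fft2(np.exp(1j * phase))
        # Keep the propagated phase but impose the desired target amplitude
        image = target_amplitude * np.exp(1j * np.angle(image))
        # Propagate back and keep only the phase (a phase-only SLM constraint)
        phase = np.angle(np.fft.ifft2(image))
    return phase

# Toy target: a bright square on a dark background
target = np.zeros((128, 128))
target[48:80, 48:80] = 1.0
holo_phase = gerchberg_saxton(target)
recon = np.abs(np.fft.fft2(np.exp(1j * holo_phase)))  # approximates the target up to scale
```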
This course targets a wide range of audiences, from domain experts to newcomers. To that end, the examples in this course are based on our in-house toolkit so that they can be replicated for future use. The course material will provide example code and a broad survey with crucial information on cameras, displays, and perception.