SIGGRAPH '18 - ACM SIGGRAPH 2018 Courses


A conceptual framework for procedural animation (CFPA)

This course presents a conceptual framework for procedural animation (CFPA) that defines and describes a common language for a fundamental timing definition that can be used to design and drive procedural animation. The course uses both test and real production cases to illustrate these concepts. By following the CFPA, users can set up procedural animation rigs and tools in a highly organized and modular way that facilitates authoring and reuse.
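The specific timing definition is the subject of the course itself; purely as an illustration of the general idea (hypothetical names, not the CFPA), a shared normalized-time parameter can drive a modular, reusable piece of procedural motion:

```python
import math

# Illustrative sketch only (hypothetical names, not the CFPA itself): a rig
# element driven by a normalized time parameter t in [0, 1], so the same
# timing curve can be authored once and reused across rigs and tools.

def ease_in_out(t: float) -> float:
    """Smoothstep-style easing on normalized time."""
    return t * t * (3.0 - 2.0 * t)

def bounce_height(t: float, amplitude: float = 1.0, bounces: int = 3) -> float:
    """Procedural bounce driven by the shared timing definition."""
    decay = 1.0 - ease_in_out(t)                  # energy fades over the clip
    return amplitude * decay * abs(math.sin(bounces * math.pi * t))

# Evaluate the rig element at frame 12 of a 48-frame clip.
frame, clip_length = 12, 48
print(bounce_height(frame / clip_length))
```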

Getting started with WebGL and three.js

Introduction to the Vulkan Graphics API

Story: it's not just for writers... anymore!

This course has been designed for technical directors, artists, animators, modelers, programmers, and designers whose work is essential in making "the story" come to life. This information can be particularly useful when communicating with screenwriters, directors, producers, and supervisors. The course answers the question "what is story?" (and you don't even have to take a course in screenwriting), and it is entertaining, with numerous clips showing how these ideas have been used in animation and VFX.

The purpose is to take the mystery out of "what is story" for those programmers, artists, and game designers whose work is essential in making animation, VFX, and games successful. Attendees will learn the basic elements of story, so the next time a producer or director talks about what they want for the story, they will know which specific story benchmarks the producer or director is trying to meet in connecting emotionally with an audience. The course builds from the knowledge that story "is a sequence of events (acts) that builds to a climax..." and then lays out the universal elements of story that make up plot, character development, and narrative structure.

This course emphasizes story elements in context (i.e. theme, character, setting, conflict, etc.) and their relationship to classic story structure (i.e. setup, inciting incident, rising action, climax, resolution, etc.). It analyzes conflict (internal, external, environmental), turning points, cause and effect, archetypes vs. stereotypes, the inciting incident, and how choice defines character. In all stories certain questions must be raised: What is at stake (survival, safety, love, esteem, etc.)? What is going to motivate (the inciting incident) the main character (protagonist)? Will that be enough to move them from the ordinary (where they are comfortable) out into a different world (where the action takes place)? And how will the character "change" (necessary for all dramatic stories)? These are just a few of the storytelling elements necessary to structure a solid story. This course is for everyone whose work makes the story better even though their job isn't creating the story.

Deep learning: a crash course

An introduction to physics-based animation

Fundamentals of color science

Color is a fundamental aspect of our visual experience. For this reason color capture and display technologies play a central role in computer graphics and digital imaging. Color science is the discipline that studies the relationships between the physical and visual aspects of color, with the goal of developing tools and systems that facilitate the accurate measurement, representation, communication, and control of color. This course will introduce students to the fundamentals of color science and its applications in graphics and imaging. The first part of the course will introduce the physical and visual aspects of color, and will then focus on colorimetry, colorimetric imaging, color difference metrics, color appearance models, and color in computer graphics. The second part of the course will then provide a survey of how the insights of color science have been implemented into standards and practices in the fields of photography, computer graphics, film, and video. In addition to explaining how standards such as sRGB, REC-709, REC-2020, and ACES are designed, this part of the course will discuss best-practice workflows to achieve accurate end-to-end color-managed results. Students who take this course will come away with an understanding of both the scientific foundations of color science and its practical uses in graphics and imaging.
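As a small, concrete taste of the material (an illustrative sketch using the standard published sRGB transfer function, not code from the course), the snippet below encodes a linear-light value with the sRGB curve, the kind of step a color-managed workflow has to apply consistently:

```python
def linear_to_srgb(c: float) -> float:
    """Encode a linear-light value in [0, 1] with the sRGB transfer function."""
    if c <= 0.0031308:
        return 12.92 * c                        # linear toe near black
    return 1.055 * c ** (1.0 / 2.4) - 0.055     # power-law segment

# 18% "middle gray" in linear light maps to roughly 0.46 in sRGB encoding.
print(round(linear_to_srgb(0.18), 3))
```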

Applications of vision science to virtual and augmented reality

Introduction to DirectX raytracing

Realistic rendering in architecture and product visualization

In recent years, VFX and computer animation have witnessed a "path tracing revolution" during which most rendering technology has converged on the use of physically based Monte Carlo techniques. This transition sparked a renewed interest in the topic of physically based rendering, but the focus has been almost exclusively on the application of these methods in the movie industry. In the meantime, a significant segment of the realistic rendering market - the one focusing on architectural, automotive, and product visualization - has been relying on physically based rendering technology since the beginning of the millennium. Despite that, relatively little attention at SIGGRAPH has so far been paid to this market segment.

The goal of this course is to fill this gap. We present user expectations in the "archviz" and product visualization markets and discuss the technological and engineering choices that these expectations impose on the rendering engines used in these fields. We juxtapose this technology with rendering for motion pictures and point out the most significant differences. Specifically, we discuss the pros and cons of CPU and GPU rendering, simple (unidirectional) vs. more advanced (bidirectional) light transport simulation methods, different approaches to "lookdev" and material design, artist workflows, and the integration of the renderers into the image creation pipeline. We conclude by discussing some open technological issues along with the constraints that the research community should consider so that the developed methods respect the needs and expectations of the target user group.
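To make the "simple (unidirectional)" end of that spectrum concrete, the sketch below is a toy unidirectional path tracer (illustration only, with a made-up one-sphere scene, not code from any production renderer discussed here):

```python
import math, random

# Minimal sketch of unidirectional path tracing: one diffuse sphere of
# albedo 0.7 under a uniform white environment. Paths start at the camera
# and are extended until they escape the scene or reach a depth limit.

CENTER, RADIUS, ALBEDO, SKY, MAX_DEPTH = (0.0, 0.0, -3.0), 1.0, 0.7, 1.0, 8

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def hit_sphere(o, d):
    """Distance to the sphere along the ray o + t*d (unit d), or None on a miss."""
    oc = sub(o, CENTER)
    b = dot(oc, d)
    disc = b * b - (dot(oc, oc) - RADIUS * RADIUS)
    if disc < 0.0:
        return None
    t = -b - math.sqrt(disc)
    return t if t > 1e-4 else None

def cosine_sample(n):
    """Cosine-weighted direction in the hemisphere around the unit normal n."""
    u1, u2 = random.random(), random.random()
    r, phi = math.sqrt(u1), 2.0 * math.pi * u2
    a = [1.0, 0.0, 0.0] if abs(n[0]) < 0.9 else [0.0, 1.0, 0.0]
    t = normalize(cross(a, n))
    b = cross(n, t)
    x, y, z = r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1)
    return [x * t[i] + y * b[i] + z * n[i] for i in range(3)]

def radiance(o, d, depth=0):
    """One unidirectional path sample for the ray (o, d)."""
    t = hit_sphere(o, d)
    if t is None:
        return SKY                     # the path escaped and gathers the sky
    if depth >= MAX_DEPTH:
        return 0.0                     # hard cutoff instead of Russian roulette
    p = [o[i] + t * d[i] for i in range(3)]
    n = normalize(sub(p, CENTER))
    # Cosine-weighted sampling cancels the cosine/pi factor of the diffuse
    # BRDF, so the path throughput is scaled by the albedo alone.
    return ALBEDO * radiance(p, cosine_sample(n), depth + 1)

# Average a few hundred path samples for a single camera ray.
samples = 400
print(sum(radiance([0.0, 0.0, 0.0], [0.0, 0.0, -1.0]) for _ in range(samples)) / samples)
```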

Color in advanced displays: HDR, OLED, AR & VR

Digital typography: 25 years of text rendering in computer graphics

3D user interfaces for virtual reality and games: 3D selection, manipulation, and spatial navigation

In this course, we take a detailed look at different topics in the field of 3D user interfaces (3DUIs) for virtual reality and gaming. With the advent of augmented and virtual reality in numerous application areas, the need for and interest in more effective interfaces has become prevalent, driven among other factors by improved technologies, increasing application complexity, and user experience requirements. Within this course, we highlight key issues in the design of diverse 3DUIs by looking closely at both simple and advanced 3D selection/manipulation and spatial navigation interface design topics. These topics are highly relevant, as they form the basis for most 3DUI-driven applications, yet they can also cause major issues (performance, usability, experience, motion sickness) when not designed properly, as they can be difficult to handle. Building on a general understanding of 3DUIs, we discuss typical pitfalls by looking closely at theoretical and practical aspects of selection, manipulation, and navigation, and highlight guidelines for their use.
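As one concrete building block of the kind discussed (an illustrative sketch with a hypothetical scene list, not course material), ray-cast selection picks the nearest object whose bounding sphere is hit by the pointing ray:

```python
import math

# Illustrative sketch: ray-cast selection returns the nearest object whose
# bounding sphere is hit by the pointing ray, a common baseline for 3D
# selection in VR. Scene representation here is hypothetical.

def select(ray_origin, ray_dir, objects):
    """objects: list of (name, center, radius); ray_dir must be unit length."""
    best_name, best_t = None, math.inf
    for name, center, radius in objects:
        oc = [o - c for o, c in zip(ray_origin, center)]
        b = sum(o * d for o, d in zip(oc, ray_dir))
        disc = b * b - (sum(o * o for o in oc) - radius * radius)
        if disc < 0.0:
            continue                  # the ray misses this bounding sphere
        t = -b - math.sqrt(disc)
        if 0.0 < t < best_t:          # keep the closest hit in front of the user
            best_name, best_t = name, t
    return best_name

scene = [("lamp", (0.0, 1.0, -2.0), 0.3), ("table", (0.5, 0.0, -3.0), 1.0)]
print(select((0.0, 1.0, 0.0), (0.0, 0.0, -1.0), scene))   # -> "lamp"
```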

Monte Carlo methods for physically based volume rendering

Path tracing in production

The last few years have seen a decisive move of the movie-making industry towards rendering with physically based methods, mostly implemented in terms of path tracing. While path tracing reached most VFX houses and animation studios at a time when a physically based approach to rendering, and especially material modelling, was already firmly established, the new tools brought with them a whole new balance, and many new workflows have evolved to find a new equilibrium. Letting go of instincts based on hard-learned lessons from a previous time has been challenging for some, and many different takes on a practical deployment of the new technologies have emerged. While the language and toolkit available to technical directors keep closing the gap between lighting in the real world and the light transport simulations run in software, an understanding of the limitations of the simulation models and a good intuition of the tradeoffs and approximations at play are of fundamental importance for making efficient use of the available resources. In this course, the novel workflows that emerged during the transition at a number of large facilities are presented to a wide audience including technical directors, artists, and researchers.

Moving mobile graphics

A device you can wear or keep in your pocket has less power and thermal budget than a typical desktop device by several orders of magnitude, but with similar user experience expectations. Wearable VR and AR headsets accentuate these challenges while also increasing the demand for computation. To meet these graphical demands while keeping within a mobile form factor requires scrutiny of all aspects of modern graphics, from hardware design and associated graphics APIs, to OS and system considerations, to the algorithms and techniques employed.

This half-day course provides a technical introduction to mobile graphics spanning the hardware-software spectrum and explores the state of the art with practitioners at the forefront of their field. We look at the impact of XR on hardware, software, and the OS; quantified best practices for real-time rendering; and computer vision research on mobile devices.

Topics in real-time animation

Cage-based performance capture

Nowadays, highly detailed animations of live-actor performances are increasingly easy to acquire, and 3D video has received considerable attention in visual media production. This lecture will address a new paradigm for achieving performance capture using cage-based shapes in motion. We define cage-based performance capture as the non-invasive process of capturing the non-rigid surface of actors from multiple views in the form of trajectories of sparse deformation control handles together with a laser-scanned static template shape.

In this course, we address the hard problem of extracting or acquiring, and then reusing, a non-rigid parametrization for video-based animations in four steps: (1) cage-based inverse kinematics, (2) conversion of surface performance capture into cage-based deformation, (3) cage-based cartoon surface exaggeration, and (4) cage-based registration of time-varying reconstructed point clouds. The key objective is to attract the interest of game programmers, digital artists, and filmmakers in employing purely geometric and animator-friendly tools to capture and reuse surfaces in motion. Finally, a variety of advanced animation techniques and vision-based graphics applications could benefit from animatable coordinate-based subspaces as presented in this course.

At first sight, a crucial challenge is to reproduce plausible boneless deformations while preserving the global and local captured properties of dynamic surfaces with a limited number of controllable, flexible, and reusable parameters. While abandoning the classical articulated skeleton as the underlying structure, we show that cage-based deformers offer a flexible design-space abstraction for dynamic non-rigid surface motion by learning space-time shape variability. Registered cage-handle trajectories allow the reconstruction of complex mesh sequences by deforming an enclosed fine-detail mesh. Finally, cage-based performance capture techniques offer suitable and reusable outputs for animation transfer by decoupling the motion from the geometry.
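The reconstruction step itself is linear: each mesh vertex is a fixed weighted combination of cage handles (generalized barycentric coordinates), so a few dozen handle trajectories drive thousands of vertices. A minimal sketch with toy data (not the authors' pipeline):

```python
import numpy as np

# Illustrative sketch with toy data: once per-vertex cage coordinates W have
# been computed for the rest pose, every captured frame is reconstructed by a
# single linear blend of the cage-handle positions.

n_verts, n_cage = 5000, 32
rng = np.random.default_rng(0)

# W[i, j] = influence of cage handle j on mesh vertex i (rows sum to 1 here;
# real generalized barycentric coordinates may also take negative values).
W = rng.random((n_verts, n_cage))
W /= W.sum(axis=1, keepdims=True)

# One frame of captured cage-handle positions (n_cage x 3).
cage_frame = rng.random((n_cage, 3))

# Deformed fine-detail mesh: a few dozen handle trajectories drive all vertices.
deformed_vertices = W @ cage_frame            # (n_verts, 3)
print(deformed_vertices.shape)
```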

Machine learning and rendering

Machine learning techniques have recently enabled dramatic improvements in both real-time and offline rendering. In this course, we introduce the basic principles of machine learning and review their relations to rendering. Besides fundamental facts, such as the mathematical identity between reinforcement learning and the rendering equation, we cover efficient and surprisingly elegant solutions to light transport simulation, participating media, noise removal, and anti-aliasing.
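The structural similarity behind that identity can be sketched as follows (standard textbook forms, written here as a hedged illustration rather than a quotation of the course notes):

```latex
% Rendering equation: outgoing radiance at x in direction \omega.
\begin{equation}
  L(x,\omega) \;=\; L_e(x,\omega)
  \;+\; \int_{\Omega} f_r(x,\omega_i,\omega)\,
        L\big(h(x,\omega_i), -\omega_i\big)\, \cos\theta_i \,\mathrm{d}\omega_i
\end{equation}
% Q-learning-style update, written with an expectation over the next action
% (the form that makes the analogy clearest).
\begin{equation}
  Q(s,a) \;\leftarrow\; (1-\alpha)\, Q(s,a)
  \;+\; \alpha \left( r(s,a)
        + \gamma \int_{\mathcal{A}} \pi(a' \mid s')\, Q(s',a')\,\mathrm{d}a' \right)
\end{equation}
```

In this reading, radiance plays the role of the action value Q, emission that of the immediate reward, and the BRDF-cosine product that of the policy and transition weighting.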