The emphasis of this course is on the capabilities of the WebGL application programming interface (i.e., programming library, often called an API).
Computer graphics relies on computing hardware for everything from animation to rendering. An emerging technology exploits the properties of quantum objects to offer radically new ways to think about, create, and run algorithms. For example, quantum computers can evaluate many different input values simultaneously, where "many" can be larger than the number of atoms in the visible universe. The catch is that only one output can be obtained from each run.
Quantum computing may change how we think about computer graphics. For example, future quantum computers may be able to intersect astronomical numbers of rays with a similarly massive database of objects in a single execution, or evaluate enormous numbers of simulation parameters in parallel to return the one set that produces a desired result.
This course describes, without advanced math, the core ideas of quantum computing. We start with the quantum version of the classical bit (called a qubit) and the basic operators that modify qubits. We introduce the four essential properties that distinguish quantum computers from familiar classical computers: superposition, interference, entanglement, and measurement. We show how these properties are orchestrated to build quantum algorithms.
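To give a flavor of these ideas in code rather than math, the sketch below (our illustration, not part of the course materials) simulates a single qubit as a two-component complex vector, applies a Hadamard operator to create a superposition, and draws one measurement outcome.

```python
# Minimal single-qubit sketch with NumPy (illustrative only).
import numpy as np

ket0 = np.array([1.0, 0.0], dtype=complex)                    # basis state |0>, like a classical 0 bit
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard operator

psi = H @ ket0                        # superposition: equal parts |0> and |1>
probs = np.abs(psi) ** 2              # measurement probabilities (Born rule)
print(probs)                          # -> [0.5 0.5]

outcome = np.random.choice([0, 1], p=probs)   # each run yields only a single output
print("measured:", outcome)
```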
We conclude with a brief overview of some of the most well-known quantum algorithms, and some of their possible applications in computer graphics.
Quantum computers are already here, and are increasing in size and reliability at a rapid pace. Open-source simulators abound, and small quantum computers are even available for free use on the web.
Classical hardware has served computer graphics well. Quantum computing offers a fundamentally new way to design and execute algorithms, which could change our field in profound ways. Now is the perfect time to start thinking about quantum computing for graphics.
The Fourier Transform is a fundamental tool in computer graphics. It explains where aliasing comes from, how to filter textures, why noise can be a good thing, how to simplify shapes, how to select sampling patterns, how the JPEG image compression scheme works, and why wagon wheels start to rotate backwards when they spin too fast. The Fourier Transform not only describes the source of many problems in graphics, it also often tells us how to avoid or suppress them.
Understanding the Fourier transform gives us the ability to identify, diagnose, and fix objectionable artifacts that might otherwise remain mysterious in rendering, modeling, and animation code.
Unfortunately, the Fourier Transform is unfamiliar to many people, and opaque to many others, often because they are put off by its technical language and complicated-looking mathematics. Though the notation can be daunting at first contact, it's really just a terse way of expressing specific sequences of multiplications and additions.
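To make that concrete, here is a small illustration of our own (not course material): the forward Discrete Fourier Transform written out as nothing more than nested multiply-and-add loops, checked against NumPy's FFT.

```python
# Brute-force DFT: each output coefficient is a sum of products (illustrative sketch).
import numpy as np

def dft(x):
    N = len(x)
    X = np.zeros(N, dtype=complex)
    for k in range(N):                                    # for each output frequency...
        for n in range(N):                                # ...multiply every sample by a complex sinusoid
            X[k] += x[n] * np.exp(-2j * np.pi * k * n / N)
    return X                                              # ...and add the products together

signal = np.cos(2 * np.pi * 3 * np.arange(16) / 16)       # a 3-cycle cosine sampled 16 times
print(np.allclose(dft(signal), np.fft.fft(signal)))       # True: same multiplications and additions
```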
In this course we only assume you know a little vector algebra, like the material in any graphics book (if you remember how to multiply two matrices, you're all set). We'll carefully build up to the full Discrete-Time Fourier Transform (and its inverse) that we use every day, taking small steps and illustrating the process with pictures.
Covering the critical stages of subject labeling, data capture, post-processing, and final retargeting onto custom character models, this course provides best practices and advanced techniques for streamlining motion capture pipelines. Vicon presents the latest capabilities resulting from more than 30 years of development, including evolving use cases such as markerless and multi-participant capture. Motion capture is an expansive, complex discipline with many different ways of doing things. This course is unique in that it is a first-hand explanation of the technology and best practices to follow when using motion capture for games and VFX, from the creators themselves. The course will cover everything from key considerations when planning the layout and topology of a motion capture volume, to perfecting captured performer data ready for packaging into a gaming experience, to integrating the latest features and capabilities available.
Hearing is the most time-sensitive of the human senses. The technology underlying real-time audio rendering must provide control over our physical, perceptual, cultural, and aesthetic worlds within the tightest of deadlines and with perfect temporal coherence. This course offers an introduction to state-of-the-art real-time audio rendering technology. We dive into the core concepts and challenges that define the problem space and touch on similarities shared by real-time graphics rendering and non-real-time audio rendering.
We originally used the term "real-time" audio system to draw a distinction from "non-real-time" systems, in which a series of audio samples is entirely determined and computed in advance, at first because computers were not fast enough to perform the needed mathematical calculations. If you can't listen to the sound as it is produced, you can't change it live and know what you're changing. Thus, the desire to turn a computer into something more like a "musical instrument" was a primary motivation in the development of real-time audio systems.
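As an informal illustration (ours, not part of the course), the sketch below contrasts the two models: an offline render in which every sample is computed before playback, and a block-by-block loop of the kind a real-time system runs under a deadline, where control parameters can change between blocks.

```python
# Offline vs. block-based "real-time" style rendering of a sine oscillator (illustrative sketch).
import numpy as np

SR = 48000                                   # sample rate in Hz

# Non-real-time: the entire two-second signal is determined and computed in advance.
t = np.arange(2 * SR) / SR
offline = np.sin(2 * np.pi * 440.0 * t)

# Real-time style: samples are produced in small blocks, so parameters can change while listening.
def render_block(start, size, freq):
    n = np.arange(start, start + size)
    return np.sin(2 * np.pi * freq * n / SR)

pos, freq = 0, 440.0
for _ in range(10):                          # an audio driver would invoke this under a hard deadline
    block = render_block(pos, 256, freq)     # hand `block` to the output device here
    pos += 256
    freq *= 1.01                             # a "live" control change between blocks
```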
This course covers how we use technology to capture and preserve ourselves and others, and the philosophical, legal and ethical considerations involved.
Meow Wolf is a Santa Fe-based arts and entertainment group that creates unforgettable immersive and interactive experiences that transport audiences of all ages into fantastic realms of story and exploration.
Welcome to the SIGGRAPH 2024 Course on Generative Models for Visual Content Editing and Creation! In this course, you will embark on an exciting journey into the realm of generative models and their groundbreaking applications in computer graphics. Over the duration of this course, you will gain a comprehensive understanding of generative models and diffusion models, explore fundamental machine learning and deep learning techniques, and discover cutting-edge applications for high-fidelity image synthesis, video generation, 3D content creation, and more. Here's what you can expect to learn:
GPUs have long been built for rasterization-based rendering. As such, graphics APIs like OpenGL, Vulkan, Metal, and Direct3D 12 have been designed to support rasterization-focused rendering pipelines. Now that modern GPUs support ray tracing, new APIs are necessary to fully exploit these new hardware capabilities.
In this course, students will extend the powerful Blender 3D tool suite through its scripting ecosystem. The goal is to get users who are familiar with Blender, or just starting out, to begin scripting, creating add-ons, and experimenting by implementing simple algorithms from computational geometry. Concrete activities include installation, setup, navigating the scripting user interface, rapidly prototyping scripts, building larger script projects, and debugging. Example algorithms include generating geometry, computing a bounding box, and computing a convex hull with a custom user interface. This course is targeted at artists with minimal programming experience and at programmers who want to write tools that integrate into the Blender ecosystem. Attendees will also learn how Blender can be used as a sandbox for research experiments, app development, rapid prototyping, and more industrial uses such as building a content pipeline.
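As a taste of what such a script can look like, here is a minimal sketch (ours, not the course's own material) that computes the world-space axis-aligned bounding box of the active object from Blender's scripting workspace; it assumes an object is currently active in the scene.

```python
# Bounding box of the active object, run from Blender's Scripting workspace (illustrative sketch).
import bpy
from mathutils import Vector

obj = bpy.context.active_object                                    # assumes something is active
corners = [obj.matrix_world @ Vector(c) for c in obj.bound_box]    # local corners -> world space

lo = Vector([min(c[i] for c in corners) for i in range(3)])
hi = Vector([max(c[i] for c in corners) for i in range(3)])
print("AABB min:", lo, "AABB max:", hi)
```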
Like a semester-long graduate seminar on Optimization in Computer Graphics and Interactive Techniques, this course looks at Optimization through the lens of 13 technical papers selected by the lecturers. The lecturers will highlight trends, similarities, differences, and historical threads through the papers. The papers will cover a range of topics including numerical solutions, objective functions, discrete and continuous optimization, dimensionality reduction, and frictional contact. Applications will range from image segmentation to truss structures to real-time rendering.
The course will also serve as a retrospective on the selected papers, placing them in historical perspective and highlighting significant contributions as well as forgotten gems.
The lecturers have broad expertise across computer graphics and interactive techniques and have co-led the VANGOGH lab meeting at UMBC since 2015.
We will cover recent advances and ongoing challenges in the depiction of Black hair, also known as kinky or Afro-textured hair. In computer graphics, the majority of hair research has focused on the depiction of straight or wavy hair. As a result, many aspects of the aesthetics and mechanics of Black hair remain poorly understood. To help fill this gap, we will present Code My Crown, a free guide to creating Black digital hairstyles that we co-authored in collaboration with a community of game artists and Dove. We will also cover styling guidelines for 3D models in the Open Source Afro Hair Library, and present Lifted Curls, our strand simulation technique specifically designed for Afro-textured hair. Finally, we will suggest future directions for hair research.
Hello everyone. I'm Ralph Potter. I'm a GPU engineer at Samsung Mobile, where I have been Samsung's primary representative to Khronos, and particularly the Vulkan Working Group, for the past five years. Before that, I was a GPU driver engineer and a researcher in GPU compilers and programming models.
I'm here today to talk about how we utilise Universal Scene Description at Animal Logic, with a particular focus on our asset structure. This overview will primarily encompass the traditional CG departments of Modelling, Surfacing, and Rigging, but it will introduce some non-standard elements, as we'll see shortly. The structure of USD assets is a foundational puzzle piece of the pipeline that is too often overlooked. The construction of these assets can influence the make-up and design of all downstream departments, so it is vital to get right. Luckily, by working with the core concepts of what USD provides, it becomes possible to identify inefficiencies in the workflow. In a continuously evolving process, we can take a modular approach to seamlessly improve technological stability and user experience. One final quick note before we get started: all production images shown here come from the ALab, a publicly available playground provided by Animal Logic to help people get to grips with, and explore, how we use USD in our pipeline.
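To make the idea of a layered asset concrete, here is a generic sketch using the USD Python API; it is not Animal Logic's actual asset structure, and the file and prim names are invented for illustration. A lightweight interface layer pulls in heavier geometry through a payload arc that downstream departments can defer loading.

```python
# Generic two-layer USD asset sketch (illustrative; not the ALab structure).
from pxr import Usd, UsdGeom

# Geometry layer: the heavy contents.
geo = Usd.Stage.CreateNew("teapot_geo.usda")
root = UsdGeom.Xform.Define(geo, "/teapot")
UsdGeom.Mesh.Define(geo, "/teapot/body")
geo.SetDefaultPrim(root.GetPrim())
geo.GetRootLayer().Save()

# Asset interface layer: lightweight, brings the geometry in via a payload arc.
asset = Usd.Stage.CreateNew("teapot.usda")
iface = UsdGeom.Xform.Define(asset, "/teapot").GetPrim()
iface.GetPayloads().AddPayload("./teapot_geo.usda")      # targets the geometry layer's default prim
asset.SetDefaultPrim(iface)
asset.GetRootLayer().Save()
```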
This course will be an exploration of the techniques used to create the iconic imagery from NASA's James Webb Space Telescope. If you're interested in working with real data from one of NASA's space-based observatories, this course is for you! We'll provide some background on the telescope and its launch as well as some general philosophy of our image processing methods. We'll then jump into image processing techniques that bring the universe into rich and vibrant detail. We'll cover how to navigate the data archives to find a suitable observation and download data files. Next, we'll take a close look at the process of "stretching" which essentially compresses the enormous dynamic range of the data to reveal hidden details. Following a successful stretch of the data, we'll go in depth on the technique known as "Chromatic Color" which is how we apply color to several different layers and blend them together to create an initial color composite. Finally, we'll look at some post-processing techniques which are used to clean up image artifacts and produce the final composite image. The techniques covered will demonstrate how this work straddles the line between art and science, with an eye towards maintaining the integrity of the source data while extracting the most information possible. As with so many artistic endeavors, there are different methods to achieve similar results and Alyssa and I will both provide examples of how we work with data to produce our images.
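As a rough illustration of the two central steps just described, stretching and chromatic color, the sketch below applies an asinh-style stretch to three hypothetical calibrated exposures and stacks them into an RGB composite. It is our simplification, not the presenters' actual workflow, and the random arrays stand in for image data downloaded from the archive.

```python
# Simplified "stretch" and "chromatic color" compositing with NumPy (illustrative only).
import numpy as np

def stretch(img, softening=0.1):
    """Compress an enormous dynamic range with an asinh curve, then normalize to [0, 1]."""
    img = np.clip(img - np.nanmin(img), 0, None)
    s = np.arcsinh(img / softening)
    return s / np.nanmax(s)

def chromatic_composite(short_wl, mid_wl, long_wl):
    """Assign blue to the shortest wavelength, green to the middle, red to the longest."""
    return np.dstack([stretch(long_wl), stretch(mid_wl), stretch(short_wl)])   # R, G, B planes

# Stand-ins for three calibrated exposures; replace with image arrays from the archive.
rng = np.random.default_rng(0)
layers = [rng.exponential(scale=s, size=(256, 256)) for s in (1.0, 2.0, 4.0)]
rgb = chromatic_composite(*layers)     # float image in [0, 1], ready for post-processing
```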