MIG 2018: Limassol, Cyprus

11th annual ACM/SIGGRAPH conference on Motion, Interaction and Games

The deadline for the ACM/SIGGRAPH conference on Motion, Interaction, and Games (MIG 2018) has been extended by a week. The new submission deadline is July 16 (23:59, Anywhere on Earth).

The 11th annual ACM/SIGGRAPH conference on Motion, Interaction and Games (MIG 2018, formerly Motion in Games) will take place in Limassol, Cyprus from November 8-10, 2018. MIG is sponsored by ACM and held in cooperation with Eurographics.

This year’s MIG will feature a half day Machine Learning workshop led by Daniel Holden (Ubisoft), and Jungdam Won (Seoul National University)! Both researchers have used ML to make great contributions to the field of character animation. This workshop is designed to show attendees how they can start to use ML in their own research. More info about the workshop is available here: http://cyprusconferences.org/mig2018/workshops/ .

Motion plays a crucial role in interactive applications, such as VR, AR, and video games. Characters move around, objects are manipulated or move due to physical constraints, entities are animated, and the camera moves through the scene. Technological advances in VR and AR have also enabled new ways for users to interact with digital environments. Motion, Interaction, and Games (MIG) is focused on the intersection between these three complementary research areas.

Motion is currently studied in many different areas of research, including graphics and animation, game technology, robotics, simulation, computer vision, and also physics, psychology, and urban studies. Likewise, the challenges of interaction drive research in wide-ranging fields from mechanical engineering, to interface design, to perception. Games provide a unique application domain for investigating the intersection of the many facets of motion, interaction, and other areas of graphics. Cross-fertilization between these communities can considerably advance the state-of-the-art in the area.

The goal of the Motion, Interaction, and Games conference is to bring together researchers from this variety of fields to present their most recent results, to initiate collaborations, and to contribute to the advancement of the research area. The conference will consist of regular paper sessions as well as presentations by a selection of internationally renowned speakers in all areas related to interactive systems and animation. The conference includes entertaining cultural and social events that foster casual and friendly interactions among the participants.


Important Dates

Papers

Paper submission: July 16th, 2018 (deadline extended!)
Paper notification: August 20th, 2018
Camera-ready: September 17th, 2018

 


Conference Leadership

Conference Chairs

Program Chairs


Paper Publication

All accepted regular papers will be archived in the EG and ACM digital libraries. The top 10% of papers will be selected for publication in a special section of Elsevier's Computers & Graphics journal.

Topics of Interest

The relevant topics include (but are not limited to):

  • Animation Systems
  • Animation Algorithms and Techniques
  • Character Animation
  • Behavioral Animation
  • Facial Animation
  • Particle Systems
  • Simulation of Natural Environments
  • Natural Motion Simulation
  • Virtual Humans
  • Physics-based Motion
  • Crowd Simulation
  • Path Planning
  • Navigation and Way-finding
  • Flocking and Steering Behaviour
  • Camera Motion
  • Object Manipulation
  • Motion Capture Techniques
  • Motion Analysis and Synthesis
  • Gesture Recognition
  • Interactive Narrative
  • Virtual/Augmented Reality
     

Review Process

All papers will be carefully reviewed by the International Program Committee through a double-blind process, with at least four reviewers per paper.
 


Papers

We invite submissions of original, high-quality papers on any of the topics of interest (see above). Each submission should be 7-10 pages for long papers or 4-6 pages for short papers, and will be reviewed by an international program committee for technical quality, novelty, significance, and clarity. All accepted regular papers will be archived in the EG and ACM digital libraries. All submissions will be considered for Best Paper Awards; Best Paper, Best Student Paper, and Best Presentation awards will be conferred during the conference.

The top 10% of papers will be selected for a special section of the Computers & Graphics journal (5-year impact factor: 1.089).

Accepted papers will be presented during oral sessions at the conference.

We also invite poster submissions for work that has been published elsewhere but is of particular relevance to the MIG community (this work and the venue in which it was published should be identified in the abstract), or work that is of interest to the MIG community but is not yet mature enough to appear as a short or long paper.

Posters will not appear in the official MIG proceedings or in the ACM Digital Library, but will appear in an online database for distribution at the authors' discretion. Accepted posters will be presented during a poster session. Posters will be reviewed single-blind, so author information may be included.
 


Submission

Papers should be formatted using the SIGGRAPH formatting guidelines (sigconf). To submit, please follow these instructions before submitting to the EasyChair submission system.

ACM SIGGRAPH Sunday Workshop: Truth in Images, Videos, and Graphics


Organizers: Irfan Essa, Chris Bregler, Hany Farid

Purpose

One of the goals of computer graphics is to create images, scenes, and videos that appear real and indistinguishable from live-captured content. This goal is now quite achievable: images and videos can be synthesized with a level of realism such that we cannot tell whether the content shown to us is live-captured, a mixture of live content with added manipulations and edits, or completely synthetic. While the ability to create such synthetic or hybrid content is a much-needed tool for entertainment and storytelling, it can also be used to distort the truth. Recently, we have witnessed a significant increase in both the number and the success of manipulations in media. Modern graphics techniques are creating challenges for journalistic processes, as truth can be easily manipulated and then shared widely. Tools from computer graphics and multimedia can now create images and videos that are indistinguishable from the real thing and are therefore very effective at manipulating the beliefs of consumers.

The goal of this inaugural workshop is to bring together researchers and practitioners in all aspects of media creation to understand the challenges as tools for manipulation are made available widely. We will discuss the tools and the issues around how these technologies impact society, and reflect on the responsibilities of both the technology creators and users of these technologies.

The format of this workshop will include invited speakers to set the stage for this conversation.

Topics

  • Videos of real people saying something they never said.
    • http://grail.cs.washington.edu/projects/AudioToObama/
  • Detecting manipulation.
    • https://arxiv.org/abs/1805.04953
    • https://arxiv.org/pdf/1805.04096.pdf
  • Staging as manipulation.
    • https://petapixel.com/2012/10/01/famous-valley-of-the-shadow-of-death-photo-was-most-likely-staged/
    • https://www.nytimes.com/2011/09/04/books/review/believing-is-seeing-by-errol-morris-book-review.html

 

Speakers

Chris Bregler

Chris Bregler currently works at Google. He was on the faculty at New York University and Stanford University and has worked for several companies including Hewlett Packard, Interval, Disney Feature Animation, Lucasfilm's ILM, Facebook's Oculus, and the New York Times. He received his M.S. and Ph.D. in Computer Science from U.C. Berkeley and his Diplom from Karlsruhe University. In 2016 he received an Academy Award in the Oscars' Science and Technology category. He has been named a Stanford Joyce Faculty Fellow, Terman Fellow, and Sloan Research Fellow. He received the Olympus Prize for achievements in computer vision and pattern recognition and was awarded the IEEE Longuet-Higgins Prize for "Fundamental Contributions in Computer Vision that have withstood the test of time". His work has resulted in numerous awards from the National Science Foundation, Sloan Foundation, Packard Foundation, Electronic Arts, Microsoft, Google, U.S. Navy, U.S. Air Force, N.S.A., C.I.A., and other sources. He was the executive producer of Squidball.net, which required building the world's largest real-time motion capture volume and a massive multi-player motion game that holds several world records with The Motion Capture Society. He has been active in the visual effects industry, for example as the lead developer of ILM's Multitrack system, which has been used in many feature film productions, including Avatar, Avengers, Noah, Star Trek, and Star Wars.

Alyosha Efros

Alexei (Alyosha) Efros joined UC Berkeley in 2013. Prior to that, he spent nine years on the faculty of Carnegie Mellon University, and has also been affiliated with École Normale Supérieure/INRIA and the University of Oxford. His research is in the area of computer vision and computer graphics, especially at the intersection of the two. He is particularly interested in using data-driven techniques to tackle problems where large quantities of unlabeled visual data are readily available. Efros received his PhD in 2003 from UC Berkeley. He is a recipient of the CVPR Best Paper Award (2006), NSF CAREER Award (2006), Sloan Fellowship (2008), Guggenheim Fellowship (2008), Okawa Grant (2008), Finmeccanica Career Development Chair (2010), SIGGRAPH Significant New Researcher Award (2010), ECCV Best Paper Honorable Mention (2010), three Helmholtz Test-of-Time Prizes (1999, 2003, 2005), and the ACM Prize in Computing (2016).

Webpage: https://www2.eecs.berkeley.edu/Faculty/Homepages/efros.html

 

 

Irfan Essa

 

Irfan Essa is a Distinguished Professor of Computing at the Georgia Institute of Technology (GA Tech) in Atlanta, Georgia, USA and a Research Scientist at Google in Mountain View, CA, USA. At GA Tech, he is in the School of Interactive Computing (IC), serves as an Associate Dean of Research in the College of Computing (CoC), and is the inaugural Director of the new Interdisciplinary Research Center for Machine Learning at Georgia Tech (ML@GT). Essa works in the areas of Computer Vision, Machine Learning, Computer Graphics, Computational Perception, Robotics, Computer Animation, and Social Computing, with potential impact on Autonomous Systems; Video Analysis and Production (e.g., Computational Photography & Video, Image-based Modeling and Rendering); Human-Computer Interaction; Artificial Intelligence; Computational Behavioral/Social Sciences; and Computational Journalism research. He has published over 150 scholarly articles in leading journals and conference venues on these topics, and several of his papers have won best paper awards. He has been awarded the NSF CAREER award and was elected to the grade of IEEE Fellow. He has held extended research consulting positions with Disney Research and Google Research and was also an Adjunct Faculty Member at Carnegie Mellon's Robotics Institute. He joined the GA Tech faculty in 1996 after earning his M.S. (1990) and Ph.D. (1994) and holding a research faculty position at the Massachusetts Institute of Technology (Media Lab) [1988-1996].

Webpage: www.irfanessa.com / Twitter: @irrfaan

Hany Farid

Hany Farid served as the Albert Bradley 1915 Third Century Professor and Chair of Computer Science at Dartmouth until 2017. After a sabbatical in 2018-2019, he is joining the faculty of Computer Science at the University of California, Berkeley in 2019. Farid's research focuses on digital forensics, image analysis, and human perception. He received his undergraduate degree in Computer Science and Applied Mathematics from the University of Rochester in 1989, an M.S. in Computer Science from SUNY Albany, and a Ph.D. in Computer Science from the University of Pennsylvania in 1997. Following a two-year post-doctoral fellowship in Brain and Cognitive Sciences at MIT, he joined the faculty at Dartmouth in 1999. He is the recipient of an Alfred P. Sloan Fellowship and a John Simon Guggenheim Fellowship, and he is a Fellow of the IEEE and the National Academy of Inventors. He is also the Chief Technology Officer and co-founder of Fourandsix Technologies and a Senior Adviser to the Counter Extremism Project.

Webpage: http://www.cs.dartmouth.edu/farid/

Ira Kemelmacher-Shlizerman

Ira Kemelmacher-Shlizerman is a scientist and entrepreneur. Her interests lie at the intersection of computer vision, computer graphics, and learning. A major part of her work is inventing virtual and augmented reality experiences that empower people in their day-to-day activities, and developing algorithms for modeling people from unconstrained photos, videos, audio, and language. Dr. Kemelmacher-Shlizerman is an Assistant Professor in the Allen School at the University of Washington.

She is Founder and Co-Director of the UW Reality Lab and a Research Scientist at Facebook. She founded the startup Dreambit, which was acquired by Facebook Inc. in 2016, and transferred the Face Movies technology to Google Inc. in 2011. Ira received her Ph.D. in computer science and applied mathematics at the Weizmann Institute of Science. Her work has been recognized with a Google Faculty Award, the Madrona Prize, and the 2016 Innovation of the Year Award; has been featured on the covers of CACM and SIGGRAPH; and is frequently covered by national and international media. She has served as an area chair and on the technical committees of both CVPR and SIGGRAPH, and is part of the LDV Capital Expert Network.

Webpage: https://homes.cs.washington.edu/~kemelmi/

Hao Li

Hao Li is CEO/Co-Founder of Pinscreen, assistant professor of Computer Science at the University of Southern California, and the director of the Vision and Graphics Lab at the USC Institute for Creative Technologies. Hao's work in Computer Graphics and Computer Vision focuses on digitizing humans and capturing their performances for immersive communication and telepresence in virtual worlds. His research involves the development of novel geometry processing, data-driven, and deep learning algorithms.

He is known for his seminal work in non-rigid shape alignment, real-time facial performance capture, hair digitization, and dynamic full-body capture. He was previously a visiting professor at Weta Digital, a research lead at Industrial Light & Magic / Lucasfilm, and a postdoctoral fellow at Columbia and Princeton Universities. He was named one of MIT Technology Review's 35 Innovators Under 35 in 2013 and was also awarded the Google Faculty Award, the Okawa Foundation Research Grant, and the Andrew and Erna Viterbi Early Career Chair. He won the Office of Naval Research (ONR) Young Investigator Award in 2018. Hao obtained his PhD at ETH Zurich and his MSc at the University of Karlsruhe (TH).

Webpage: http://www.hao-li.com/

Logistics

A $40 registration fee is required by 9am Pacific Time on Tuesday Aug 7 to attend lunch. Unregistered attendees may participate if space allows, but lunch will not be provided.

Applications will be accepted on first come/first served basis until August 7th, 9AM PDT.  Apply to participate!

SCA '18


ACM SIGGRAPH/Eurographics Symposium on Computer Animation

CNRS Paris Michel Ange, Paris, France
11-13 July, 2018

The SCA '18 program is now online!

The 17th annual Symposium on Computer Animation (SCA) will be held in Paris, France, July 11-13; the symposium will be hosted at the "CNRS Délégation Paris Michel Ange" (grand auditorium Marie-Curie).

Please note that the early bird registration has been extended until June 19th.

Invited Speakers

  • JP Lewis (SEED Electronic Arts), Open Problems in Character Animation for Games and VFX
  • Mark Meyer (Pixar Animation Studios), Animation Research in Feature Film Production

Registration

  • Early bird (Until June 19th):
    •   Students: €250
    •   ACM SIGGRAPH/EG member: €350
    •   Other: €400

The registration fee includes participation in the main conference, all conference materials, lunches and coffee breaks, the welcome reception, and the conference dinner. The conference dinner will take place at the Musée d'Orsay.
The registration fee also includes access to keynotes from the co-located SGP conference (July 7-11), also held in Paris!

Description

SCA is the premier forum for innovations in the software and technology of computer animation. It unites researchers and practitioners working on all aspects of time-based phenomena. Our focused, intimate gathering, with single track program and emphasis on community interaction, makes SCA the best venue to exchange research results, get inspired, and set up collaborations. We hope to see you in Paris!

Conference Chairs:

  • Maud Marchal, Univ. Rennes, INSA, IRISA
  • Damien Rohmer, Ecole Polytechnique

Program Chairs:

  • Nils Thuerey, Technical University of Munich
  • Thabo Beeler, Disney Research Zurich

Poster Chair:

  • Mélina Skouras, Inria Grenoble

Please Vote!


ACM SIGGRAPH needs your help electing members of the Executive Committee. This election will fill the role of Treasurer as well as two Directors At Large. It also includes important Bylaw changes. 

Each candidate on the ballot has authored a position statement explaining their vision of what they hope to accomplish as a member of the EC. Reviewing these statements and exploring member profiles will be useful in making informed decisions. Learn about the candidates and the Bylaw changes and cast your vote before 16:00 UTC on 15 August 2018!

Treasurer Candidates

Director At Large Candidates

Members of ACM SIGGRAPH who are in good standing as of June 1, 2018 have been sent voting information in an email message or letter from Election Services Corporation (ESC). If ACM does not have an email address on file, members will receive voting information via postal mail. Members also have the option of requesting a paper ballot. If you have not received an email from ESC, please contact them at acmsighelp@electionservicescorp.com or toll-free at 1-866-720-4357. If you received the email but need to retrieve your PIN, you can do so as well.

Bylaw Changes

Changes introduced in proposed amendment to the ACM SIGGRAPH Bylaws:

  • All elected positions will be director positions; ACM SIGGRAPH's officers will no longer be elected to specific positions through member elections.
  • The EC may appoint up to three voting members to its ranks from core constituencies as needed.
  • Directors will be elected to specific positions.

The first major change is that all elected positions will be director positions and ACM SIGGRAPH’s officers will no longer be elected to specific positions through member elections. Every year, after the new EC takes office, it will select new officers from within the EC to serve one-year terms. The officers will be the Chair, Chair-Elect, Treasurer, and Treasurer-Elect. The Chair-Elect will become the Chair and the Treasurer-Elect will become the Treasurer after the next election.

The second major change is to allow the EC to appoint up to three voting members to its ranks. This change will allow the EC to increase representation from core constituencies as needed, and to allow key volunteers, such as the Conference Advisory Group Chair, full participation on the EC to better reflect their role in the organization.

The third major change is to elect the directors to specific positions. For example, if three director positions are open in a given election, the voters would be presented with at least two candidates for position A, another two for position B, etc. This change will allow the nominating committee to achieve increased diversity in skillset, area of expertise, and geography.

The other minor changes to the bylaws update current titles, for example renaming "President" to the new title of "Chair," and bring us into compliance with ACM or current SIGGRAPH practice (e.g., the timing of the election).

The ACM SIGGRAPH Executive Committee believes that, taken together, these changes will allow for a more agile SIGGRAPH organization, better able to focus and act on the strategic issues concerning the field of computer graphics and interactive techniques.

Please see a copy of the proposed ACM SIGGRAPH Bylaws here.

 

ACM SIGGRAPH Sunday Workshop: Computer Graphics for Autonomous Driving Applications


Organizers: Antonio M. López, José A. Iglesias-Guitián

Autonomous driving (AD) will likely be the core of future intelligent mobility. As recent events have demonstrated, autonomous driving already involves complex scientific-technical, ethical, and legal issues. The scientific-technical challenge is multidisciplinary, encompassing not only the development of the physical vehicles but also the sensors and the artificial intelligence (AI) on which they rely. One key question is how to assess the performance of AI drivers and ensure that the desired safety and reliability standards are reached. AI drivers require a variety of models (perception, control, decision making) that must be trained on millions of data-driven experiences. Assessing their performance requires, in part, understanding whether that "data" (raw sensor information with associated ground truth conveying depth, motion, and semantics) is sufficient to cover the scenarios that will be encountered in operation.

In this context, Computer Graphics (CG) has emerged as a key field supporting both the performance assessment and the training of AI drivers. The latest advances in CG suggest that it is feasible to design corner cases for both training and testing. Simulation allows us to drive millions of miles to assess the performance of AI drivers, as well as to generate millions of episodes for training the models behind them. This simulation-based approach requires advances in the procedural generation of realistic traffic infrastructure, realistic behavior of traffic participants (human drivers, cyclists, pedestrians), augmented and mixed reality for on-board videos, simulation of sensors (cameras, LIDAR, RADAR, etc.) and multi-sensor suites, automatic generation of accurate and diverse ground truth, and faster-than-real-time simulation of multiple AI agents.

The goal of this workshop is to bring together researchers and practitioners of both autonomous driving and computer graphics fields to discuss the open challenges that must be addressed in order to accelerate the deployment of safe and reliable autonomous vehicles. Speakers with experience on the use of simulation and computer graphics for autonomous driving will be invited to share their work and insights regarding upcoming research challenges.

Topics

  • Automatic generation of accurate and diverse ground truth
  • Modeling and simulation of sensors (cameras, LIDAR, RADAR, etc.)
  • Procedural generation of realistic traffic infrastructure
  • Realistic behavior of traffic participants (human drivers, cyclists, pedestrians)
  • Real-time multi-agent simulation
  • Augmented and mixed reality leveraging on-board data sequences (e.g. videos)
  • Management of on-board large data streams

The program of the Workshop has been released!

The organizers would like to thank Adam Bargteil, Jessica Hodgins, Adrien Treuille, and Aaron Lefohn for their invaluable help in assembling this workshop.

 

Speakers

Jose M Alvarez

Jose M. Alvarez is a Senior Deep Learning Engineer at NVIDIA working on scaling up deep learning for autonomous driving. Previously, he was a senior researcher at the Toyota Research Institute and at Data61/CSIRO (formerly NICTA), working on deep learning for large-scale dynamic scene understanding. Prior to that, he worked as a postdoctoral researcher at New York University under the supervision of Prof. Yann LeCun. He graduated with his Ph.D. from the Autonomous University of Barcelona (UAB) in October 2010, with a focus on robust road detection under real-world driving conditions. Dr. Alvarez completed research stays at the University of Amsterdam (2008, 2009), the Electronics Research Group at Volkswagen (2010), and Boston College. Since 2014, he has served as an associate editor for IEEE Transactions on Intelligent Transportation Systems.

Simon Box

Simon is the Simulation Architect at Aurora Innovation, where the sim team is building a simulation framework that can virtually prototype all parts of the Aurora self-driving software stack. Simon's previous work in simulation includes his PhD at the University of Cambridge, UK, where he simulated the trajectories of particles in electrostatic fields. He also worked in the Machine Learning and Perception group at Microsoft Research, where he built a rocket flight simulator, and on the Autopilot team at Tesla Motors, where he led the simulation efforts.

 

 

 

Jose De Oliveira

Jose has 20+ years of industry experience, working at tech giants such as IBM, Microsoft, and Uber in areas ranging from real-time communications to enterprise security systems. In 2006 he focused his career on Machine Learning, working on content-filtering solutions for Family Safety at Microsoft, where he headed the delivery of the first SmartScreen anti-phishing solution for Internet Explorer 7. He later drove the development of paid-search relevance models for mobile devices at Bing Ads and worked on applying Machine Learning to geospatial problems at Bing Maps, continuing that work after joining Uber in 2015. In 2017 he joined the Machine Learning Team at Unity, leading the autonomous vehicles engineering project, part of Unity's Industrial initiatives. He's based out of Bellevue, WA.
 

Miguel Ferreira

After helping brands like Ferrero, De Agostini, and MindChamps shape the mobile entertainment space, Miguel Ferreira is now a Senior Software Engineer at CVEDIA, where he pushes the boundaries of real-time rendering, developing sensor models for cutting-edge deep learning applications. CVEDIA's SynCity is a hyper-realistic simulator specifically designed for deep learning algorithm development and training. By constructing complex 3D land, aerial, and marine environments and generating ground-truth data for sensor devices such as LiDAR, radar, thermal, near and far IR, and cameras, SynCity removes the limitations of the physical world. When Miguel is not simulating the real world, he is traveling it in search of the perfect picture.
 

Yongjoon Lee

Yongjoon Lee is Engineering Manager of Simulation at Zoox, responsible for the simulation platform to validate and improve the safety and quality of autonomous driving software. Yongjoon joined Zoox from Bungie, where he worked as engineering lead for AI, animation, core action system, cinematic system, and mission scripting system teams. Prior to Bungie, he published six technical papers at SIGGRAPH on realistic motion synthesis using reinforcement learning. He holds a Ph.D in Computer Science & Engineering from the University of Washington.

 

Ming C. Lin

Ming C. Lin is currently the Elizabeth Stevinson Iribe Chair of Computer Science at the University of Maryland College Park and John R. & Louise S. Parker Distinguished Professor Emerita of Computer Science at the University of North Carolina (UNC), Chapel Hill. She is also an honorary Chair Professor (Yangtze Scholar) at Tsinghua University in China. She obtained her B.S., M.S., and Ph.D. in Electrical Engineering and Computer Science from the University of California, Berkeley. She received several honors and awards, including the NSF Young Faculty Career Award in 1995, Honda Research Initiation Award in 1997, UNC/IBM Junior Faculty Development Award in 1999, UNC Hettleman Award for Scholarly Achievements in 2003, Beverly W. Long Distinguished Professorship 2007-2010, Carolina Women’s Center Faculty Scholar in 2008, UNC WOWS Scholar 2009-2011, IEEE VGTC Virtual Reality Technical Achievement Award in 2010, and many best paper awards at international conferences. She is a Fellow of ACM, IEEE, and Eurographics.

Her research interests include computational robotics, haptics, physically-based modeling, virtual reality, sound rendering, and geometric computing. Her current projects include crowd and traffic simulation, modeling, and reconstruction at the city scale and autonomous driving via learning & simulation. She has (co-)authored more than 300 refereed publications in these areas and co-edited/authored four books. She has served on hundreds of program committees of leading conferences and co-chaired dozens of international conferences and workshops. She is currently a member of Computing Research Association Women (CRA-W) Board of Directors, Chair of IEEE Computer Society (CS) Fellows Committee, Chair of IEEE CS Computer Pioneer Award, and Chair of ACM SIGGRAPH Outstanding Doctoral Dissertation Award. She is a former member of IEEE CS Board of Governors, a former Editor-in-Chief of IEEE Transactions on Visualization and Computer Graphics (2011-2014), a former Chair of IEEE CS Transactions Operations Committee, and a member of several editorial boards. She also has served on several steering committees and advisory boards of international conferences, as well as government and industrial technical advisory committees.
 

Dinesh Manocha

Dinesh Manocha is the Paul Chrisman Iribe Chair in Computer Science & Electrical and Computer Engineering at the University of Maryland College Park. He is also the Phi Delta Theta/Matthew Mason Distinguished Professor Emeritus of Computer Science at the University of North Carolina – Chapel Hill. He has won many awards, including Alfred P. Sloan Research Fellow, the NSF Career Award, the ONR Young Investigator Award, and the Hettleman Prize for scholarly achievement. His research interests include multi-agent simulation, virtual environments, physically-based modeling, and robotics. His group has developed a number of packages for multi-agent simulation, crowd simulation, and physics-based simulation that have been used by hundreds of thousands of users and licensed to more than 60 commercial vendors. He has published more than 500 papers and supervised more than 35 PhD dissertations. He is an inventor of 9 patents, several of which have been licensed to industry. His work has been covered by the New York Times, NPR, Boston Globe, Washington Post, ZDNet, as well as DARPA Legacy Press Release. He is a Fellow of AAAI, AAAS, ACM, and IEEE and also received the Distinguished Alumni Award from IIT Delhi. He was a co-founder of Impulsonic, a developer of physics-based audio simulation technologies, which was acquired by Valve Inc.

 

Kevin McNamara

Kevin is the founder and CEO of Parallel Domain, a fast-growing startup that has automated the generation of high-fidelity virtual worlds and scenarios for simulation. He brings deep computer graphics experience, having built and led a team within Apple's Special Projects Group focused on autonomous systems simulation, architected procedural content systems for Microsoft Game Studios, and contributed to Academy Award-winning films at Pixar Animation Studios. Kevin holds a degree in computer science from Harvard University and resides in Palo Alto, CA.

 

 

Ashu Rege

Ashu Rege is the Vice President of Software at Zoox, responsible for Zoox’s entire software platform including machine learning, motion planning, perception, localization, mapping, and simulation. Ashu joined Zoox from NVIDIA, where he was VP of Computer Vision & Robotics responsible for NVIDIA’s autonomous vehicle and drone technology projects. Previously, he held other senior roles at NVIDIA including VP of the Content & Technology group developing core graphics, physics simulation and GPU computing technologies, and associated software. Prior to NVIDIA, he co-founded and worked at various startups related to computer graphics, laser scanning, Internet and network technologies. Ashu holds a Ph.D in Computer Science from U.C. Berkeley.

 

German Ros

German Ros is a Research Scientist at the Intel Intelligent Systems Lab (Santa Clara, California), working on topics at the intersection of machine learning, simulation, virtual worlds, transfer learning, and intelligent autonomous agents. He leads the CARLA organization as part of the Open Source Vision Foundation. Before joining Intel Labs, German served as a Research Scientist at the Toyota Research Institute (TRI), where he conducted research in the areas of simulation for autonomous driving, scene understanding, and domain adaptation. He has also helped industrial partners, such as Toshiba, Yandex, Drive.ai, and Volkswagen, leverage simulation and virtual worlds to empower their machine learning efforts, and served at the Computer Vision Center (CVC) as the technical lead for the simulation team. German Ros obtained his PhD in Computer Science at the Autonomous University of Barcelona and the Computer Vision Center.
 

Philipp Slusallek

Philipp Slusallek is Scientific Director at the German Research Center for Artificial Intelligence (DFKI), where he heads the research area on Agents and Simulated Reality. He has been a professor for Computer Graphics at Saarland University since 1999, a principal investigator at the German Excellence Cluster on "Multimodal Computing and Interaction" since 2007, and Director for Research at the Intel Visual Computing Institute since 2009. Before coming to Saarland University, he was a Visiting Assistant Professor at Stanford University. He originally studied physics in Frankfurt and Tübingen (Diploma/M.Sc.) and received his PhD in Computer Science from Erlangen University. He is an associate editor of Computer Graphics Forum, a fellow of Eurographics, a member of acatech (the German National Academy of Science and Engineering), and a member of the European High-Level Expert Group on Artificial Intelligence. His research covers a wide range of topics including artificial intelligence, simulated/digital reality, real-time realistic graphics, high-performance computing, motion modeling & synthesis, novel programming models, computational sciences, 3D-Internet technology, and others.

 

Gavriel State

Gavriel State is a Senior Director of System Software at NVIDIA, based in Toronto, where he leads efforts applying AI technology to gaming and vice versa, in addition to work on remastering games for NVIDIA's SHIELD TV platform. Previously, Gav founded TransGaming Inc. and spent 15 years focused on games and rendering technologies.

 

 

Logistics

A $40 registration fee is required by 9am Pacific Time on Tuesday Aug 7 to attend lunch. Unregistered attendees may participate if space allows, but lunch will not be provided.

Applications will be accepted on first come/first served basis until August 7th, 9AM PDT. Apply to participate!