ACM SIGGRAPH Sunday Workshop: Computer Graphics for Autonomous Driving Applications

Organizers: Antonio M. López, José A. Iglesias-Guitián

Autonomous driving (AD) will likely be the core of future intelligent mobility. As recent events have demonstrated, autonomous driving already involves complex scientific-technical, ethical, and legal issues. The scientific-technical challenge is multidisciplinary, covering not only the development of the physical vehicles but also the sensors and the artificial intelligence (AI) on which they rely. One key question is how to assess the performance of AI drivers and ensure that the desired safety and reliability standards are reached. AI drivers require a variety of models (perception, control, decision making) that must be trained on millions of data-driven experiences. Assessing their performance requires, in part, understanding whether that data, i.e. the raw sensor information with associated ground truth conveying depth, motion, and semantics, is sufficient to cover the scenarios that will be encountered in operation.

In this context, Computer Graphics (CG) has emerged as a key field supporting both performance assessment and training of AI drivers. The latest advances in CG suggest that it is feasible to design corner cases for both training and testing. Simulation allows us to drive millions of miles to assess the performance of AI drivers, as well as to generate millions of episodes for training the models behind them. This simulation-based approach requires advances in procedural generation of realistic traffic infrastructure, realistic behavior of traffic participants (human drivers, cyclists, pedestrians), augmented and mixed reality for on-board videos, simulation of sensors (cameras, LIDAR, RADAR, etc.) and multi-sensor suites, automatic generation of accurate and diverse ground truth, and faster-than-real-time simulation of multiple AI agents.
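
As an illustration of what such a pipeline looks like in practice, the sketch below uses the open-source CARLA simulator (led by German Ros, one of the speakers below) to spawn an autopilot-driven vehicle carrying an RGB camera and a semantic-segmentation camera, so that every rendered frame comes with pixel-accurate labels. This is only a minimal sketch, assuming the CARLA 0.9.x Python API and a simulator already running on localhost; it is not taken from any speaker's system.

    # Minimal sketch: synthetic images with per-pixel semantic ground truth.
    # Assumes the CARLA 0.9.x Python API and a CARLA server running locally.
    import carla

    client = carla.Client('localhost', 2000)
    client.set_timeout(10.0)
    world = client.get_world()

    blueprints = world.get_blueprint_library()
    spawn_point = world.get_map().get_spawn_points()[0]

    # Spawn an ego vehicle and let CARLA's built-in autopilot drive it.
    vehicle = world.spawn_actor(blueprints.filter('vehicle.*')[0], spawn_point)
    vehicle.set_autopilot(True)

    # Mount an RGB camera and a semantic-segmentation camera at the same pose,
    # so each rendered frame is paired with its class labels automatically.
    mount = carla.Transform(carla.Location(x=1.5, z=2.4))
    rgb_cam = world.spawn_actor(blueprints.find('sensor.camera.rgb'),
                                mount, attach_to=vehicle)
    seg_cam = world.spawn_actor(blueprints.find('sensor.camera.semantic_segmentation'),
                                mount, attach_to=vehicle)

    # Save each frame and its semantic labels to disk as the vehicle drives.
    rgb_cam.listen(lambda img: img.save_to_disk('out/rgb/%06d.png' % img.frame))
    seg_cam.listen(lambda img: img.save_to_disk('out/seg/%06d.png' % img.frame,
                                                carla.ColorConverter.CityScapesPalette))

Driving the autopilot through different towns, weather presets, and traffic densities then yields the kind of diverse, automatically labeled data the paragraph above refers to.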

The goal of this workshop is to bring together researchers and practitioners from both the autonomous driving and computer graphics fields to discuss the open challenges that must be addressed in order to accelerate the deployment of safe and reliable autonomous vehicles. Speakers with experience in the use of simulation and computer graphics for autonomous driving will be invited to share their work and insights regarding upcoming research challenges.

Topics

  • Automatic generation of accurate and diverse ground truth
  • Modeling and simulation of sensors (cameras, LIDAR, RADAR, etc.)
  • Procedural generation of realistic traffic infrastructure
  • Realistic behavior of traffic participants (human drivers, cyclists, pedestrians)
  • Real-time multi-agent simulation (see the sketch following this list)
  • Augmented and mixed reality leveraging on-board data sequences (e.g. videos)
  • Management of on-board large data streams
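
For the traffic-participant and multi-agent items above, a classic starting point is a car-following model such as the Intelligent Driver Model (IDM). The sketch below is a self-contained, illustrative toy (not drawn from any speaker's system): ten vehicles on a single lane accelerate from rest and settle into plausible speeds and gaps.

    # Toy single-lane traffic simulation using the Intelligent Driver Model (IDM).
    # Illustrative only; parameter values are typical textbook choices.
    import math

    # desired speed, time headway, max accel, comfortable decel, min gap, exponent
    V0, T, A, B, S0, DELTA = 30.0, 1.5, 1.0, 2.0, 2.0, 4

    def idm_accel(gap, v, v_lead):
        """IDM acceleration for a follower with speed v, bumper-to-bumper gap, and leader speed v_lead."""
        s_star = S0 + v * T + v * (v - v_lead) / (2.0 * math.sqrt(A * B))
        return A * (1.0 - (v / V0) ** DELTA - (s_star / max(gap, 0.1)) ** 2)

    def step(pos, vel, dt=0.05):
        """Advance all vehicles one time step; vehicle 0 is the leader on a free road."""
        acc = [A * (1.0 - (vel[0] / V0) ** DELTA)]
        for i in range(1, len(pos)):
            gap = pos[i - 1] - pos[i] - 5.0               # assume 5 m vehicle length
            acc.append(idm_accel(gap, vel[i], vel[i - 1]))
        for i in range(len(pos)):
            vel[i] = max(0.0, vel[i] + acc[i] * dt)
            pos[i] += vel[i] * dt
        return pos, vel

    pos = [200.0 - 20.0 * i for i in range(10)]           # ten vehicles, 20 m apart
    vel = [0.0] * 10
    for _ in range(int(60 / 0.05)):                       # simulate 60 seconds
        pos, vel = step(pos, vel)
    print(['%.1f' % v for v in vel])                      # speeds approach the desired speed

Real simulators couple many such behavior models (for drivers, cyclists, and pedestrians) with lane changing, routing, and interaction logic, and must run them for thousands of agents at, or beyond, real time.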

The program of the Workshop has been released!

The organizers would like to thank Adam Bargteil, Jessica Hodgins, Adrien Treuille, and Aaron Lefohn for their invaluable help in assembling this Workshop.

Speakers

Jose M Alvarez

Jose M. Alvarez is a Senior Deep Learning Engineer at NVIDIA working on scaling up deep learning for autonomous driving. Previously, he was a senior researcher at Toyota Research Institute and at Data61/CSIRO (formerly NICTA), working on deep learning for large-scale dynamic scene understanding. Prior to that, he worked as a postdoctoral researcher at New York University under the supervision of Prof. Yann LeCun. He graduated with his Ph.D. from the Autonomous University of Barcelona (UAB) in October 2010, with a focus on robust road detection under real-world driving conditions. Dr. Alvarez completed research stays at the University of Amsterdam (2008 and 2009), the Electronics Research Group at Volkswagen (2010), and Boston College. Since 2014, he has served as an associate editor for IEEE Transactions on Intelligent Transportation Systems.

Simon Box

Simon is the Simulation Architect at Aurora Innovation, where the sim team is working to build a simulation framework that can virtually prototype all parts of the Aurora self-driving software stack. Simon's previous work in simulation includes his PhD at the University of Cambridge, UK, where he simulated the trajectories of particles in electrostatic fields; the Machine Learning and Perception group at Microsoft Research, where he built a rocket flight simulator; and the Autopilot team at Tesla Motors, where he led the simulation efforts.

Jose De Oliveira

Jose has 20+ years of industry experience, working at tech giants such as IBM, Microsoft, and Uber in areas that range from real-time communications to enterprise security systems. In 2006 he focused his career on Machine Learning, working on content filtering solutions for Family Safety at Microsoft, where he headed the delivery of the first SmartScreen anti-phishing solution for Internet Explorer 7. He later drove the development of paid search relevance models for mobile devices at Bing Ads and worked on applying Machine Learning to geospatial problems at Bing Maps, continuing that work after joining Uber in 2015. In 2017 he joined the Machine Learning Team at Unity, leading the autonomous vehicles engineering project, part of Unity's Industrial initiatives. He is based in Bellevue, WA.

Miguel Ferreira

After helping brands such as Ferrero, De Agostini, and MindChamps shape the mobile entertainment space, Miguel Ferreira is now a Senior Software Engineer at CVEDIA, where he pushes the boundaries of real-time rendering, developing sensor models for cutting-edge deep learning applications. SynCity is a hyper-realistic simulator specifically designed for deep learning algorithm development and training. By constructing complex 3D land, aerial, and marine environments and generating ground truth data for sensor devices like LiDAR, radar, thermal, near and far IR, and cameras, SynCity removes the limitations of the physical world. When Miguel is not simulating the real world, he is traveling it in search of the perfect picture.

Yongjoon Lee

Yongjoon Lee is Engineering Manager of Simulation at Zoox, responsible for the simulation platform to validate and improve the safety and quality of autonomous driving software. Yongjoon joined Zoox from Bungie, where he worked as engineering lead for the AI, animation, core action system, cinematic system, and mission scripting system teams. Prior to Bungie, he published six technical papers at SIGGRAPH on realistic motion synthesis using reinforcement learning. He holds a Ph.D. in Computer Science & Engineering from the University of Washington.

Ming C. Lin

Ming C. Lin is currently the Elizabeth Stevinson Iribe Chair of Computer Science at the University of Maryland College Park and John R. & Louise S. Parker Distinguished Professor Emerita of Computer Science at the University of North Carolina (UNC), Chapel Hill. She is also an honorary Chair Professor (Yangtze Scholar) at Tsinghua University in China. She obtained her B.S., M.S., and Ph.D. in Electrical Engineering and Computer Science from the University of California, Berkeley. She received several honors and awards, including the NSF Young Faculty Career Award in 1995, Honda Research Initiation Award in 1997, UNC/IBM Junior Faculty Development Award in 1999, UNC Hettleman Award for Scholarly Achievements in 2003, Beverly W. Long Distinguished Professorship 2007-2010, Carolina Women’s Center Faculty Scholar in 2008, UNC WOWS Scholar 2009-2011, IEEE VGTC Virtual Reality Technical Achievement Award in 2010, and many best paper awards at international conferences. She is a Fellow of ACM, IEEE, and Eurographics.

Her research interests include computational robotics, haptics, physically-based modeling, virtual reality, sound rendering, and geometric computing. Her current projects include crowd and traffic simulation, modeling, and reconstruction at the city scale and autonomous driving via learning & simulation. She has (co-)authored more than 300 refereed publications in these areas and co-edited/authored four books. She has served on hundreds of program committees of leading conferences and co-chaired dozens of international conferences and workshops. She is currently a member of Computing Research Association Women (CRA-W) Board of Directors, Chair of IEEE Computer Society (CS) Fellows Committee, Chair of IEEE CS Computer Pioneer Award, and Chair of ACM SIGGRAPH Outstanding Doctoral Dissertation Award. She is a former member of IEEE CS Board of Governors, a former Editor-in-Chief of IEEE Transactions on Visualization and Computer Graphics (2011-2014), a former Chair of IEEE CS Transactions Operations Committee, and a member of several editorial boards. She also has served on several steering committees and advisory boards of international conferences, as well as government and industrial technical advisory committees.
 

Dinesh Manocha

Dinesh Manocha is the Paul Chrisman Iribe Chair in Computer Science & Electrical and Computer Engineering at the University of Maryland College Park. He is also the Phi Delta Theta/Matthew Mason Distinguished Professor Emeritus of Computer Science at the University of North Carolina – Chapel Hill. He has won many awards, including an Alfred P. Sloan Research Fellowship, the NSF CAREER Award, the ONR Young Investigator Award, and the Hettleman Prize for scholarly achievement. His research interests include multi-agent simulation, virtual environments, physically-based modeling, and robotics. His group has developed a number of packages for multi-agent simulation, crowd simulation, and physics-based simulation that have been used by hundreds of thousands of users and licensed to more than 60 commercial vendors. He has published more than 500 papers and supervised more than 35 PhD dissertations. He is an inventor on 9 patents, several of which have been licensed to industry. His work has been covered by the New York Times, NPR, the Boston Globe, the Washington Post, and ZDNet, as well as a DARPA Legacy press release. He is a Fellow of AAAI, AAAS, ACM, and IEEE, and also received the Distinguished Alumni Award from IIT Delhi. He was a co-founder of Impulsonic, a developer of physics-based audio simulation technologies, which was acquired by Valve Inc.

Kevin McNamara

Kevin is the founder and CEO of Parallel Domain, a fast-growing startup that has automated the generation of high-fidelity virtual worlds and scenarios for simulation. He brings deep computer graphics experience, having built and led a team within Apple's Special Projects Group focused on autonomous systems simulation, architected procedural content systems for Microsoft Game Studios, and contributed to Academy Award-winning films at Pixar Animation Studios. Kevin holds a degree in computer science from Harvard University and resides in Palo Alto, CA.

Ashu Rege

Ashu Rege is the Vice President of Software at Zoox, responsible for Zoox’s entire software platform including machine learning, motion planning, perception, localization, mapping, and simulation. Ashu joined Zoox from NVIDIA, where he was VP of Computer Vision & Robotics, responsible for NVIDIA’s autonomous vehicle and drone technology projects. Previously, he held other senior roles at NVIDIA, including VP of the Content & Technology group, developing core graphics, physics simulation, and GPU computing technologies and associated software. Prior to NVIDIA, he co-founded and worked at various startups related to computer graphics, laser scanning, Internet, and network technologies. Ashu holds a Ph.D. in Computer Science from U.C. Berkeley.

German Ros

German Ros is a Research Scientist at the Intel Intelligent Systems Lab (Santa Clara, California), working on topics at the intersection of machine learning, simulation, virtual worlds, transfer learning, and intelligent autonomous agents. He leads the CARLA organization as part of the Open Source Vision Foundation. Before joining Intel Labs, German served as a Research Scientist at the Toyota Research Institute (TRI), where he conducted research on simulation for autonomous driving, scene understanding, and domain adaptation. He also helped industrial partners such as Toshiba, Yandex, Drive.ai, and Volkswagen leverage simulation and virtual worlds to empower their machine learning efforts, and served at the Computer Vision Center (CVC) as technical lead for the simulation team. German Ros obtained his PhD in Computer Science at the Autonomous University of Barcelona and the Computer Vision Center.

Philipp Slusallek

Philipp Slusallek is Scientific Director at the German Research Center for Artificial Intelligence (DFKI), where he heads the research area on Agents and Simulated Reality. At Saarland University he has been a professor of Computer Graphics since 1999, a principal investigator at the German Excellence Cluster on “Multimodal Computing and Interaction” since 2007, and Director for Research at the Intel Visual Computing Institute since 2009. Before coming to Saarland University, he was a Visiting Assistant Professor at Stanford University. He originally studied physics in Frankfurt and Tübingen (Diploma/M.Sc.) and received his PhD in Computer Science from Erlangen University. He is an associate editor of Computer Graphics Forum, a fellow of Eurographics, a member of acatech (the German National Academy of Science and Engineering), and a member of the European High-Level Expert Group on Artificial Intelligence. His research covers a wide range of topics including artificial intelligence, simulated/digital reality, real-time realistic graphics, high-performance computing, motion modeling & synthesis, novel programming models, computational sciences, 3D-Internet technology, and others.

Gavriel State

Gavriel State is a Senior Director of System Software at NVIDIA, based in Toronto, where he leads efforts involving applications of AI technology to gaming and vice versa, in addition to work on remastering games for NVIDIA’s SHIELD TV platform. Previously, Gav founded TransGaming Inc. and spent 15 years focused on games and rendering technologies.

Logistics

A $40 registration fee is required by 9 AM Pacific Time on Tuesday, August 7 to attend lunch. Unregistered attendees may participate if space allows, but lunch will not be provided.

Applications will be accepted on a first-come, first-served basis until August 7, 9 AM PDT. Apply to participate!

Meet the ACM SIGGRAPH Candidates

The ACM SIGGRAPH election window is now open and will remain open until August 15, 2018. There are two races being held, one for Treasurer and the other for Director at Large, in which the top two candidates will be elected. Those elected will begin their terms on September 1, 2018.

Each candidate has created a position paper based on their vision of what they hope to accomplish in their term of office. The candidates were also asked to complete an ACM SIGGRAPH Member Profile. Please read their position statements and member profiles; these documents will be useful for making an informed decision. Learn about the candidates and cast your vote!

Treasurer Candidates

Director At Large Candidates

Members of ACM SIGGRAPH who are in good standing as of June 1, 2018 have been sent voting information in an email message or letter from Election Services Corporation (ESC). If ACM does not have an email address on file, members will receive voting information via postal mail. Members also have the option of requesting a paper ballot. If you have not received an email from ESC, please contact them at acmsighelp@electionservicescorp.com or toll-free at 1-866-720-4357.

Thesis Fast Forward

Make an Impression

To provide more young presenters with a platform for sharing innovative ideas and gaining valuable exposure, SIGGRAPH 2018 is introducing the first ever Thesis Fast Forward program. Doctoral students in the final stage of their Ph.D. studies, or Ph.D. degree holders within a year of graduation, are encouraged to submit to this event. The central element of the submission will be a three-minute video presentation by the candidate, explaining the central theme of their thesis, using no more than two supporting slides. The intent is to make the presentation accessible to a non-expert audience, representative of the typical cross-section of SIGGRAPH attendees.

Based on the video submissions and, as a secondary criterion, on the provided abstracts, a jury will select up to 12 candidates, who will be asked to give three-minute oral presentations live at a special session at SIGGRAPH 2018. A panel of experts will provide immediate commentary after each live presentation and select the best performance. The live presentations will be judged solely on the content of the live three-minute presentation.

All selected candidates will be awarded an upgradeable Select Conference registration (upon commitment to participate in the live event). Submissions are open through Thursday, 28 June 2018. Finalists for the live event will be notified on Tuesday, 3 July 2018.

https://s2018.siggraph.org/conference/conference-overview/thesis-fast-forward/

Thesis Fast Forward Committee:

  • Eftychios Sifakis, University of Wisconsin-Madison

  • M. Alex O. Vasilescu, Associate Director, UCLA Computer Graphics and Vision Lab

Submission Guidelines

The core component of a submission is a presentation video with a duration of no more than three minutes. In this video, the applicants should summarize the key components of their thesis, its merit, and its potential impact. Up to two presentation slides can be used as an optional backdrop to the presenter, who must be clearly visible in the video. The submission video should be provided via a web link (a link to a video on a media-sharing website such as YouTube is recommended to avoid encoding issues, but a direct URL to a video file is also acceptable).

The material to be submitted on the EasyChair Website should be a single PDF file, with the following contents:

  • A cover page, listing the applicant's name, affiliation, tentative or final dissertation title, and the actual (in the past) or future anticipated date of PhD degree conferral. This date should be no earlier than 1 September 2017, and no later than 31 August 2019.
    The cover page should also list the link to the video submission itself as mentioned earlier.

  • An optional addendum of up to two pages can be used to include an extended abstract, in the SIGGRAPH publication format, providing additional context or technical details on the applicant's dissertation work. Not including this extra material will not, in any way, disqualify the applicant from selection, as the video submission is fundamentally the basis on which the selection will be made.

Submission Website: https://easychair.org/conferences/?conf=siggraphtff18

Call for Candidates for the ACM SIGGRAPH Executive Committee and Standing Committees

We are looking for candidates to run for Director at Large (three positions). For information on these positions, please see the ACM SIGGRAPH Elections page. All candidates must be Professional Members of ACM and ACM SIGGRAPH. If you are interested, please contact Rebecca Strzelec.

The Meet the Candidates Forum at SIGGRAPH 2018 will be held on Monday, 13 August, 12:30-1:30 PM.

ACM SIGGRAPH Taps Tony Baylis to Head New Diversity Committee
by Melanie Farmer

Tony Baylis, a longtime ACM SIGGRAPH member and leadership volunteer, has been appointed the inaugural chair of the organization’s Diversity and Inclusion Committee. Baylis, who is director for the Office of Strategic Diversity and Inclusion programs at Lawrence Livermore National Laboratory, will carry out the new committee’s goal to create a welcoming and nurturing community for everyone working in computer graphics and interactive techniques independent of gender, ethnic background and abilities.

“Diversity and inclusion is a priority for SIGGRAPH,” says Jessica Hodgins, ACM SIGGRAPH president. “We are thrilled that Tony has agreed to lead this key effort for us. With his direct expertise in this area, he’ll be able to help us move forward with all the myriad aspects of diversity and inclusion.”

Baylis believes it is critical for all organizations to be engaged in the discussion of diversity and inclusion. In this new role, he says “My hope is that we will strive to make sure that not only all are welcome but individuals are being respected, listened to and encouraged to grow in the organization. We truly want to work in the best interest of all.”

In the near term, the committee is considering kicking off a diversity awareness campaign alongside the 2018 conference. The group’s goals will be to build a strategy that the organization and its membership endorses, believes in and lives by—an effort that will be driven by the committee and organization. Baylis intends to recruit five to 10 members to serve on the new group, and the hope is to organize yearlong mentorship programs and produce diversity workshops and panels at the annual conferences.

Baylis is a longtime SIGGRAPH volunteer and contributor. He has served on conference committees, as director and treasurer on the Executive Committee, as well as a member of the Conference Advisory Group. Baylis has worked in science and technology for more than 20 years. At Lawrence Livermore, he is a DOE Minorities in Energy Champion for the department and also serves on a number of conference program committees and advisory boards that promote STEM and diversity in science and technical careers.

Comments, questions, and suggestions for the Diversity and Inclusion Committee are welcome at diversity-info@siggraph.org.