ACM SIGGRAPH Workshop: Computer Graphics for Autonomous Driving Applications

Organizers: Antonio M. López, José A. Iglesias-Guitián

Autonomous driving (AD) will likely be the core of future intelligent mobility. As recent events have demonstrated, autonomous driving already involves complex scientific-technical, ethical, and legal issues. The scientific-technical challenge is multidisciplinary, as we are responsible not only for the development of the physical vehicles but also for the sensors and the artificial intelligence (AI) on which they rely. One key question is how to assess the performance of AI drivers and ensure that the desired safety and reliability standards are reached. AI drivers require a variety of models (perception, control, decision making) that must be trained on millions of data-driven experiences. Assessing their performance requires, in part, understanding whether that data (raw sensor information with associated ground truth conveying depth, motion, and semantics) is sufficient to cover the scenarios that will be encountered in operation.

In this context, Computer Graphics (CG) has emerged as a key field supporting both the performance assessment and the training of AI drivers. The latest advances in CG suggest that it is feasible to design corner cases for both training and testing. Simulation allows us to drive millions of virtual miles to assess the performance of AI drivers, as well as to generate millions of episodes for training the models behind them. This simulation-based approach requires advances in procedural generation of realistic traffic infrastructure, realistic behavior of traffic participants (human drivers, cyclists, pedestrians), augmented and mixed reality for on-board videos, simulation of sensors (cameras, LIDAR, RADAR, etc.) and multi-sensor suites, automatic generation of accurate and diverse ground truth, and faster-than-real-time simulation of multiple AI agents.
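
To make the data-generation side of this concrete, the sketch below shows one possible way to record paired sensor data and ground truth with the open-source CARLA simulator (whose development is led by one of the invited speakers). It is a minimal, illustrative example written against the CARLA 0.9.x Python API, not any speaker's actual pipeline: it assumes a CARLA server is already running on localhost:2000, and the vehicle choice, sensor placement, and output paths are arbitrary.

    # Minimal sketch (assumption: CARLA 0.9.x server running on localhost:2000).
    # Records an RGB camera stream together with per-pixel semantic-segmentation
    # ground truth from the same viewpoint, so frames can later be paired as
    # (input, label) examples for training or evaluation.
    import time
    import carla

    client = carla.Client('localhost', 2000)
    client.set_timeout(10.0)
    world = client.get_world()
    blueprints = world.get_blueprint_library()

    # Spawn a vehicle and let CARLA's built-in autopilot drive it around.
    vehicle_bp = blueprints.filter('vehicle.*')[0]
    spawn_point = world.get_map().get_spawn_points()[0]
    vehicle = world.spawn_actor(vehicle_bp, spawn_point)
    vehicle.set_autopilot(True)

    # Attach an RGB camera and a semantic-segmentation camera at the same pose;
    # the segmentation sensor yields pixel-accurate class labels "for free".
    cam_transform = carla.Transform(carla.Location(x=1.5, z=2.4))
    rgb_cam = world.spawn_actor(blueprints.find('sensor.camera.rgb'),
                                cam_transform, attach_to=vehicle)
    seg_cam = world.spawn_actor(blueprints.find('sensor.camera.semantic_segmentation'),
                                cam_transform, attach_to=vehicle)

    # Stream frames to disk; matching frame numbers pair image and label.
    rgb_cam.listen(lambda image: image.save_to_disk('out/rgb/%06d.png' % image.frame))
    seg_cam.listen(lambda image: image.save_to_disk('out/seg/%06d.png' % image.frame))

    # Let the simulation run for a while, then clean up the spawned actors.
    time.sleep(30)
    for actor in (rgb_cam, seg_cam, vehicle):
        actor.destroy()

The same pattern extends to LIDAR, RADAR, and depth sensors, to procedurally varied maps, weather, and traffic, and to many agents simulated in parallel, which is precisely the design space outlined in the topics below.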

The goal of this workshop is to bring together researchers and practitioners from both the autonomous driving and computer graphics fields to discuss the open challenges that must be addressed in order to accelerate the deployment of safe and reliable autonomous vehicles. Speakers with experience in the use of simulation and computer graphics for autonomous driving will be invited to share their work and their insights on upcoming research challenges.

Topics

  • Automatic generation of accurate and diverse ground truth
  • Modeling and simulation of sensors (cameras, LIDAR, RADAR, etc.)
  • Procedural generation of realistic traffic infrastructure
  • Realistic behavior of traffic participants (human drivers, cyclists, pedestrians)
  • Real-time multi-agent simulation
  • Augmented and mixed reality leveraging on-board data sequences (e.g. videos)
  • Management of on-board large data streams

The program of the Workshop has been released!

The organizers would like to thank Adam Bargteil, Jessica Hodgins, Adrien Treuille, and Aaron Lefohn for their invaluable help in assembling this Workshop.

 

Speakers

Jose M Alvarez

Jose M. Alvarez is a Senior Deep Learning Engineer at NVIDIA working on scaling up deep learning for autonomous driving. Previously, he was a senior researcher at Toyota Research Institute and at Data61/CSIRO (formerly NICTA), working on deep learning for large-scale dynamic scene understanding. Prior to that, he worked as a postdoctoral researcher at New York University under the supervision of Prof. Yann LeCun. He graduated with his Ph.D. from the Autonomous University of Barcelona (UAB) in October 2010, with a focus on robust road detection under real-world driving conditions. Dr. Alvarez did research stays at the University of Amsterdam (in 2008 and 2009), the Electronics Research Group at Volkswagen (in 2010), and Boston College. Since 2014, he has served as an associate editor for IEEE Transactions on Intelligent Transportation Systems.

Simon Box

Simon is the Simulation Architect at Aurora Innovation, where the sim team is working to build a simulation framework that can virtually prototype all parts of the Aurora self-driving software stack. Simon’s previous work in simulation includes his PhD at the University of Cambridge, UK, where he simulated the trajectories of particles in electrostatic fields. He also worked in the Machine Learning and Perception group at Microsoft Research, where he built a rocket flight simulator, and on the Autopilot team at Tesla Motors, where he led the simulation efforts.

 

Jose De Oliveira

Jose has 20+ years of industry experience, working at tech giants such as IBM, Microsoft, and Uber in areas ranging from real-time communications to enterprise security systems. In 2006 he focused his career on Machine Learning, working on content filtering solutions for Family Safety at Microsoft, where he headed the delivery of the first SmartScreen anti-phishing solution for Internet Explorer 7. He later drove the development of paid search relevance models for mobile devices at Bing Ads and worked on applying Machine Learning to geospatial problems at Bing Maps, continuing that work after joining Uber in 2015. In 2017 he joined the Machine Learning Team at Unity, leading the autonomous vehicles engineering project, part of Unity’s Industrial initiatives. He is based in Bellevue, WA.
 

Miguel Ferreira

After helping brands such as Ferrero, De Agostini, and MindChamps shape the mobile entertainment space, Miguel Ferreira is now a Senior Software Engineer at CVEDIA, where he pushes the boundaries of real-time rendering, developing sensor models for cutting-edge deep learning applications. SynCity is a hyper-realistic simulator specifically designed for deep learning algorithm development and training. By constructing complex 3D land, aerial, and marine environments and generating ground truth data for sensor devices such as LiDAR, radar, thermal, near and far IR, and cameras, SynCity frees development from the limitations of the physical world. When Miguel is not simulating the real world, he is traveling it in search of the perfect picture.
 

Yongjoon Lee

Yongjoon Lee is Engineering Manager of Simulation at Zoox, responsible for the simulation platform used to validate and improve the safety and quality of autonomous driving software. Yongjoon joined Zoox from Bungie, where he worked as engineering lead for the AI, animation, core action system, cinematic system, and mission scripting system teams. Prior to Bungie, he published six technical papers at SIGGRAPH on realistic motion synthesis using reinforcement learning. He holds a Ph.D. in Computer Science & Engineering from the University of Washington.

 

Ming C. Lin

Ming C. Lin is currently the Elizabeth Stevinson Iribe Chair of Computer Science at the University of Maryland College Park and John R. & Louise S. Parker Distinguished Professor Emerita of Computer Science at the University of North Carolina (UNC), Chapel Hill. She is also an honorary Chair Professor (Yangtze Scholar) at Tsinghua University in China. She obtained her B.S., M.S., and Ph.D. in Electrical Engineering and Computer Science from the University of California, Berkeley. She received several honors and awards, including the NSF Young Faculty Career Award in 1995, Honda Research Initiation Award in 1997, UNC/IBM Junior Faculty Development Award in 1999, UNC Hettleman Award for Scholarly Achievements in 2003, Beverly W. Long Distinguished Professorship 2007-2010, Carolina Women’s Center Faculty Scholar in 2008, UNC WOWS Scholar 2009-2011, IEEE VGTC Virtual Reality Technical Achievement Award in 2010, and many best paper awards at international conferences. She is a Fellow of ACM, IEEE, and Eurographics.

Her research interests include computational robotics, haptics, physically-based modeling, virtual reality, sound rendering, and geometric computing. Her current projects include crowd and traffic simulation, modeling, and reconstruction at the city scale and autonomous driving via learning & simulation. She has (co-)authored more than 300 refereed publications in these areas and co-edited/authored four books. She has served on hundreds of program committees of leading conferences and co-chaired dozens of international conferences and workshops. She is currently a member of Computing Research Association Women (CRA-W) Board of Directors, Chair of IEEE Computer Society (CS) Fellows Committee, Chair of IEEE CS Computer Pioneer Award, and Chair of ACM SIGGRAPH Outstanding Doctoral Dissertation Award. She is a former member of IEEE CS Board of Governors, a former Editor-in-Chief of IEEE Transactions on Visualization and Computer Graphics (2011-2014), a former Chair of IEEE CS Transactions Operations Committee, and a member of several editorial boards. She also has served on several steering committees and advisory boards of international conferences, as well as government and industrial technical advisory committees.
 

Dinesh Manocha

Dinesh Manocha is the Paul Chrisman Iribe Chair in Computer Science & Electrical and Computer Engineering at the University of Maryland College Park. He is also the Phi Delta Theta/Matthew Mason Distinguished Professor Emeritus of Computer Science at the University of North Carolina – Chapel Hill. He has won many awards, including an Alfred P. Sloan Research Fellowship, the NSF CAREER Award, the ONR Young Investigator Award, and the Hettleman Prize for scholarly achievement. His research interests include multi-agent simulation, virtual environments, physically-based modeling, and robotics. His group has developed a number of packages for multi-agent simulation, crowd simulation, and physics-based simulation that have been used by hundreds of thousands of users and licensed to more than 60 commercial vendors. He has published more than 500 papers and supervised more than 35 PhD dissertations. He is an inventor of 9 patents, several of which have been licensed to industry. His work has been covered by the New York Times, NPR, the Boston Globe, the Washington Post, and ZDNet, as well as in a DARPA Legacy press release. He is a Fellow of AAAI, AAAS, ACM, and IEEE and has also received the Distinguished Alumni Award from IIT Delhi. He was a co-founder of Impulsonic, a developer of physics-based audio simulation technologies, which was acquired by Valve Inc.

 

Kevin McNamara

Kevin is the founder and CEO of Parallel Domain, a fast-growing startup that has automated the generation of high-fidelity virtual worlds and scenarios for simulation. He brings deep computer graphics experience, having built and led a team within Apple's Special Projects Group focused on autonomous systems simulation, architected procedural content systems for Microsoft Game Studios, and contributed to Academy Award-winning films at Pixar Animation Studios. Kevin holds a degree in computer science from Harvard University and resides in Palo Alto, CA.

 

Ashu Rege

Ashu Rege is the Vice President of Software at Zoox, responsible for Zoox’s entire software platform including machine learning, motion planning, perception, localization, mapping, and simulation. Ashu joined Zoox from NVIDIA, where he was VP of Computer Vision & Robotics, responsible for NVIDIA’s autonomous vehicle and drone technology projects. Previously, he held other senior roles at NVIDIA, including VP of the Content & Technology group, developing core graphics, physics simulation, and GPU computing technologies and associated software. Prior to NVIDIA, he co-founded and worked at various startups related to computer graphics, laser scanning, Internet, and network technologies. Ashu holds a Ph.D. in Computer Science from UC Berkeley.

 

German Ros

German Ros is a Research Scientist at the Intel Intelligent Systems Lab (Santa Clara, California), working on topics at the intersection of machine learning, simulation, virtual worlds, transfer learning, and intelligent autonomous agents. He leads the CARLA organization as part of the Open Source Vision Foundation. Before joining Intel Labs, German served as a Research Scientist at Toyota Research Institute (TRI), where he conducted research on simulation, scene understanding, and domain adaptation in the context of autonomous driving. He also helped industrial partners such as Toshiba, Yandex, Drive.ai, and Volkswagen leverage simulation and virtual worlds to empower their machine learning efforts, and served at the Computer Vision Center (CVC) as technical lead for the simulation team. German Ros obtained his PhD in Computer Science at the Autonomous University of Barcelona & the Computer Vision Center.
 

Philipp Slusallek

Philipp Slusallek is Scientific Director at the German Research Center for Artificial Intelligence (DFKI), where he heads the research area on Agents and Simulated Reality. At Saarland University he has been a professor for Computer Graphics since 1999, a principal investigator at the German Excellence Cluster on “Multimodal Computing and Interaction” since 2007, and Director for Research at the Intel Visual Computing Institute since 2009. Before coming to Saarland University, he was a Visiting Assistant Professor at Stanford University. He originally studied physics in Frankfurt and Tübingen (Diploma/M.Sc.) and received his PhD in Computer Science from Erlangen University. He is an associate editor of Computer Graphics Forum, a fellow of Eurographics, a member of acatech (German National Academy of Science and Engineering), and a member of the European High-Level Expert Group on Artificial Intelligence. His research covers a wide range of topics including artificial intelligence, simulated/digital reality, real-time realistic graphics, high-performance computing, motion modeling & synthesis, novel programming models, computational sciences, 3D-Internet technology, and others.

 

Gavriel State

Gavriel State is a Senior Director, System Software at NVIDIA, based in Toronto, where he leads efforts involving applications of AI technology to gaming and vice versa, in addition to work on remastering games for NVIDIA’s SHIELD TV platform. Previously, Gav founded TransGaming Inc. and spent 15 years focused on games and rendering technologies.

 

Logistics

A $40 registration fee is required by 9 AM Pacific Time on Tuesday, August 7 to attend lunch. Unregistered attendees may participate if space allows, but lunch will not be provided.

Applications will be accepted on a first-come, first-served basis until August 7 at 9 AM PDT. Apply to participate!