
ACM SIGGRAPH Sunday Workshop: Computer Graphics for Autonomous Driving Applications

18 Jun 2018

Organizers: Antonio M. López, José A. Iglesias-Guitián

Autonomous driving (AD) will likely be the core of future intelligent mobility. As recent events have demonstrated, autonomous driving already involves complex scientific, technical, ethical, and legal issues. The scientific and technical challenge is multidisciplinary: we are responsible not only for developing the physical vehicles but also for the sensors and the artificial intelligence (AI) on which they rely. One key question is how to assess the performance of AI drivers and ensure that the desired safety and reliability standards are met. AI drivers require a variety of models (perception, control, decision making) that must be trained on millions of data-driven experiences. Assessing their performance requires, in part, understanding whether that data (raw sensor information with associated ground truth conveying depth, motion, and semantics) is sufficient to cover the scenarios that will be encountered in operation.

In this context, Computer Graphics (CG) has emerged as a key field supporting both the performance assessment and the training of AI drivers. Recent advances in CG suggest that it is feasible to design corner cases for both training and testing. Simulation allows us to drive millions of miles to assess the performance of AI drivers, as well as to generate millions of episodes for training the models behind them. This simulation-based approach requires advances in procedural generation of realistic traffic infrastructure, realistic behavior of traffic participants (human drivers, cyclists, pedestrians), augmented and mixed reality for on-board videos, simulation of sensors (cameras, LIDAR, RADAR, etc.) and multi-sensor suites, automatic generation of accurate and diverse ground truth, and faster-than-real-time simulation of multiple AI agents.

The goal of this workshop is to bring together researchers and practitioners from both the autonomous driving and computer graphics fields to discuss the open challenges that must be addressed to accelerate the deployment of safe and reliable autonomous vehicles. Speakers with experience in the use of simulation and computer graphics for autonomous driving will be invited to share their work and insights on upcoming research challenges.

Topics

  • Automatic generation of accurate and diverse ground truth
  • Modeling and simulation of sensors (cameras, LIDAR, RADAR, etc.)
  • Procedural generation of realistic traffic infrastructure
  • Realistic behavior of traffic participants (human drivers, cyclists, pedestrians)
  • Real-time multi-agent simulation
  • Augmented and mixed reality leveraging on-board data sequences (e.g. videos)
  • Management of on-board large data streams

Speakers

Jose M. Alvarez

Jose M. Alvarez is a Senior Deep Learning Engineer at NVIDIA working on scaling up deep learning for autonomous driving. Previously, he was a senior researcher at Toyota Research Institute and at Data61/CSIRO (formerly NICTA), working on deep learning for large-scale dynamic scene understanding. Prior to that, he was a postdoctoral researcher at New York University under the supervision of Prof. Yann LeCun. He received his Ph.D. from the Autonomous University of Barcelona (UAB) in October 2010, with a focus on robust road detection under real-world driving conditions. Dr. Alvarez completed research stays at the University of Amsterdam (in 2008 and 2009), the Electronics Research Group at Volkswagen (in 2010), and Boston College. Since 2014, he has served as an associate editor for IEEE Transactions on Intelligent Transportation Systems.
 

Jose De Oliveira

Jose has more than 20 years of industry experience, working at tech giants such as IBM, Microsoft, and Uber in areas ranging from real-time communications to enterprise security systems. In 2006 he focused his career on machine learning, working on content-filtering solutions for Family Safety at Microsoft, where he headed the delivery of the first SmartScreen anti-phishing solution for Internet Explorer 7. He later drove the development of paid-search relevance models for mobile devices at Bing Ads and applied machine learning to geospatial problems at Bing Maps, continuing that work after joining Uber in 2015. In 2017 he joined the Machine Learning team at Unity, where he leads the autonomous vehicles engineering project, part of Unity’s Industrial initiatives. He is based in Bellevue, WA.

 

Miguel Ferreira

After helping brands such as Ferrero, De Agostini, and MindChamps shape the mobile entertainment space, Miguel Ferreira is now a Senior Software Engineer at CVEDIA, where he pushes the boundaries of real-time rendering, developing sensor models for cutting-edge deep learning applications. SynCity, CVEDIA’s hyper-realistic simulator, is designed specifically for deep learning algorithm development and training. By constructing complex 3D land, aerial, and marine environments and generating ground-truth data for sensors such as LiDAR, radar, thermal, near and far IR, and cameras, SynCity frees development from the limitations of the physical world. When Miguel is not simulating the real world, he is traveling it in search of the perfect picture.

 

Ming C. Lin

Ming C. Lin is currently the Elizabeth Stevinson Iribe Chair of Computer Science at the University of Maryland College Park and John R. & Louise S. Parker Distinguished Professor Emerita of Computer Science at the University of North Carolina (UNC), Chapel Hill. She is also an honorary Chair Professor (Yangtze Scholar) at Tsinghua University in China. She obtained her B.S., M.S., and Ph.D. in Electrical Engineering and Computer Science from the University of California, Berkeley. She received several honors and awards, including the NSF Young Faculty Career Award in 1995, Honda Research Initiation Award in 1997, UNC/IBM Junior Faculty Development Award in 1999, UNC Hettleman Award for Scholarly Achievements in 2003, Beverly W. Long Distinguished Professorship 2007-2010, Carolina Women’s Center Faculty Scholar in 2008, UNC WOWS Scholar 2009-2011, IEEE VGTC Virtual Reality Technical Achievement Award in 2010, and many best paper awards at international conferences. She is a Fellow of ACM, IEEE, and Eurographics.

Her research interests include computational robotics, haptics, physically-based modeling, virtual reality, sound rendering, and geometric computing. Her current projects include crowd and traffic simulation, modeling, and reconstruction at the city scale and autonomous driving via learning & simulation. She has (co-)authored more than 300 refereed publications in these areas and co-edited/authored four books. She has served on hundreds of program committees of leading conferences and co-chaired dozens of international conferences and workshops. She is currently a member of Computing Research Association Women (CRA-W) Board of Directors, Chair of IEEE Computer Society (CS) Fellows Committee, Chair of IEEE CS Computer Pioneer Award, and Chair of ACM SIGGRAPH Outstanding Doctoral Dissertation Award. She is a former member of IEEE CS Board of Governors, a former Editor-in-Chief of IEEE Transactions on Visualization and Computer Graphics (2011-2014), a former Chair of IEEE CS Transactions Operations Committee, and a member of several editorial boards. She also has served on several steering committees and advisory boards of international conferences, as well as government and industrial technical advisory committees.
 

Dinesh Manocha

Dinesh Manocha is the Paul Chrisman Iribe Chair in Computer Science & Electrical and Computer Engineering at the University of Maryland College Park. He is also the Phi Delta Theta/Matthew Mason Distinguished Professor Emeritus of Computer Science at the University of North Carolina – Chapel Hill. He has won many awards, including an Alfred P. Sloan Research Fellowship, the NSF CAREER Award, the ONR Young Investigator Award, and the Hettleman Prize for scholarly achievement. His research interests include multi-agent simulation, virtual environments, physically-based modeling, and robotics. His group has developed a number of packages for multi-agent simulation, crowd simulation, and physics-based simulation that have been used by hundreds of thousands of users and licensed to more than 60 commercial vendors. He has published more than 500 papers and supervised more than 35 PhD dissertations. He is an inventor of 9 patents, several of which have been licensed to industry. His work has been covered by the New York Times, NPR, the Boston Globe, the Washington Post, and ZDNet, as well as a DARPA Legacy Press Release. He is a Fellow of AAAI, AAAS, ACM, and IEEE, and received the Distinguished Alumni Award from IIT Delhi. He was a co-founder of Impulsonic, a developer of physics-based audio simulation technologies, which was acquired by Valve Inc.
 

German Ros

German Ros is a Research Scientist at the Intel Intelligent Systems Lab (Santa Clara, California), working on topics at the intersection of machine learning, simulation, virtual worlds, transfer learning, and intelligent autonomous agents. He leads the CARLA organization as part of the Open Source Vision Foundation. Before joining Intel Labs, German served as a Research Scientist at the Toyota Research Institute (TRI), where he conducted research on simulation, scene understanding, and domain adaptation in the context of autonomous driving. He also helped industrial partners such as Toshiba, Yandex, Drive.ai, and Volkswagen leverage simulation and virtual worlds to empower their machine learning efforts, and served at the Computer Vision Center (CVC) as technical lead for the simulation team. He obtained his PhD in Computer Science at the Autonomous University of Barcelona and the Computer Vision Center.
 

Philipp Slusallek

Philipp Slusallek is Scientific Director at the German Research Center for Artificial Intelligence (DFKI), where he heads the research area on Agents and Simulated Reality. He has been a professor of Computer Graphics at Saarland University since 1999, a principal investigator at the German Excellence Cluster on “Multimodal Computing and Interaction” since 2007, and Director for Research at the Intel Visual Computing Institute since 2009. Before coming to Saarland University, he was a Visiting Assistant Professor at Stanford University. He originally studied physics in Frankfurt and Tübingen (Diploma/M.Sc.) and received his PhD in Computer Science from Erlangen University. He is an associate editor of Computer Graphics Forum, a fellow of Eurographics, a member of acatech (the German National Academy of Science and Engineering), and a member of the European High-Level Expert Group on Artificial Intelligence. His research covers a wide range of topics, including artificial intelligence, simulated/digital reality, real-time realistic graphics, high-performance computing, motion modeling and synthesis, novel programming models, computational sciences, and 3D-Internet technology.

Logistics

To participate in this workshop and the conversation, please submit a CV and brief answers to the questions below. Responses should be limited to two pages. Follow this link to the submission system. The organizers will review the responses and curate a program for the workshop. A submission is required to be invited to attend, as space is limited. There will be a nominal fee to cover lunch expenses; details for purchasing tickets will be forthcoming.

1.) Describe your current professional role and its relation to autonomous driving (AD).
2.) Briefly describe your motivation to attend this workshop.
3.) Please provide an example of a specific question or topic you would be interested in hearing about or debating.
4.) Which tools involving computer graphics or driving simulators do you or your team often use, or aim to use, in your work related to AD? Feel free to comment on driving simulators you have used in the past and what you are looking for in the future.
5.) Please feel free to include any other comments you might want to share with us.

Applications will be accepted on a first-come, first-served basis until July 27th. Apply to participate!