https://web.archive.org/web/20220411235334/http://wad.ai/talk.html

ICCV 2019 Workshop Schedule (Oct 28th)

Time Program
08:45 Opening Remarks
09:10 Pei Sun (Waymo) “Waymo Open Dataset Update”
09:30 Li Erran Li (Scale AI) “Scaling data labeling for autonomous driving”
10:00 Wei Liu (Nuro) “Scaling Robotics with Machine Learning”
10:30 Coffee Break
11:30 Adrien Gaidon (Toyota Research Institute) “Self-Supervised Monocular Depth for Automated Driving”
12:00 Liang Wang (Baidu) “TBD”
12:30 Lunch Break
14:00 Yang Song (Waymo) “Perception in Self Driving: Challenges and Opportunities”
14:40 Nick Rhinehart (CMU) “Jointly Forecasting and Controlling Behavior by Learning from High-Dimensional Data”
15:20 Marc Pollefeys (ETH) “AutoVision: towards camera-only urban and rural autonomous navigation”
16:00 Coffee Break
16:30 Subhasis Das (Zoox) “Perception and Prediction for dense urban environments”
17:00 David Hsu (NUS) “Online Decision Making for Autonomous Driving in Dense Traffic”
17:30 Panel Discussion

Confirmed Speakers

Pei Sun

Waymo

Waymo Open Dataset update
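
For context, the Waymo Open Dataset is distributed as TFRecord segments of protocol-buffer frames. Below is a minimal reading sketch modeled on the dataset's public tutorial; the filename is a placeholder, and this is our illustration rather than code from the talk:

    import tensorflow as tf
    from waymo_open_dataset import dataset_pb2 as open_dataset

    # Placeholder path: dataset segments ship as TFRecord files.
    FILENAME = "segment-XXXX.tfrecord"

    dataset = tf.data.TFRecordDataset(FILENAME, compression_type="")
    for data in dataset:
        frame = open_dataset.Frame()
        frame.ParseFromString(bytearray(data.numpy()))
        # Each Frame carries synchronized camera images and LiDAR returns.
        print(frame.context.name, len(frame.images), len(frame.lasers))
        break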

Li Erran Li

Scale AI

Scaling data labeling for autonomous driving

Wei Liu

Nuro

Scaling Robotics with Machine Learning

We have seen tremendous achievements in computer vision thanks to the powerful feature learning of deep neural networks (DNNs). Many successful models achieve state-of-the-art results by combining classical core ideas with a DNN. For example, Faster R-CNN unifies the sliding-window and selective-search detection ideas and has become the de facto architecture for object detection. We argue that there are many areas in robotics where we can combine traditional core ideas with the power of DNNs to achieve much better results. In particular, we demonstrate in the self-driving car setting how we use DNNs to dramatically improve 3D object detection, temporal detection, 3D semantic segmentation, trajectory prediction, and more. Beyond the models themselves, many engineering challenges need to be solved to bring self-driving cars (SDCs) to the public and commercialize them.
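
As a concrete illustration of the Faster R-CNN architecture the abstract calls "de facto" (our sketch, not code from the talk), a pretrained detector can be loaded from torchvision in a few lines; the random tensor stands in for a camera frame:

    import torch
    import torchvision

    # Load a pretrained Faster R-CNN (torchvision >= 0.13 weights API).
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    image = torch.rand(3, 480, 640)      # stand-in for a normalized camera frame
    with torch.no_grad():
        predictions = model([image])     # per-image dicts: boxes, labels, scores
    print(predictions[0]["boxes"].shape)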

Adrien Gaidon

Toyota Research Institute

Self-Supervised Monocular Depth for Automated Driving

Although cameras are ubiquitous, robotic platforms typically rely on active sensors like LiDAR for direct 3D perception, especially in safety-critical contexts like automated driving. Nonetheless, recent progress in combining deep learning and geometry suggests that cameras may become a competitive source of reliable 3D information. In this talk, we will present our latest developments in self-supervised monocular depth and pose estimation for urban environments. In particular, we show that with the proper network architecture, large-scale training, and computational power, it is possible to outperform fully supervised methods while operating in the much more challenging self-supervised setting, where the only source of input information is video sequences. Furthermore, we discuss how other sources of information (i.e., camera velocity, sparse LiDAR data, and semantic predictions) can be leveraged at training time to further improve pseudo-LiDAR accuracy and overcome some of the inherent limitations of self-supervised learning.
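
For readers unfamiliar with this setup, here is a minimal PyTorch sketch of the core self-supervised signal such methods train on (our illustration, not TRI's code): predicted depth and relative pose warp an adjacent video frame into the current view, and the photometric difference is the loss. The plain L1 penalty is a simplification; published systems add SSIM terms, masking, and multi-scale handling.

    import torch
    import torch.nn.functional as F

    def backproject(depth, K_inv):
        # Lift every pixel to a 3D point using the predicted depth map.
        b, _, h, w = depth.shape
        ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).float()  # (3, h, w)
        pix = pix.view(3, -1).unsqueeze(0).expand(b, -1, -1)             # (b, 3, h*w)
        return (K_inv @ pix) * depth.view(b, 1, -1)                      # (b, 3, h*w)

    def project(points, K, T):
        # Rigidly transform points into the source frame, project to pixels.
        b, _, n = points.shape
        hom = torch.cat([points, torch.ones(b, 1, n)], dim=1)            # (b, 4, n)
        cam = (T @ hom)[:, :3]
        pix = K @ cam
        return pix[:, :2] / pix[:, 2:].clamp(min=1e-6)                   # (b, 2, n)

    def photometric_loss(target, source, depth, T, K, K_inv):
        # Warp the source frame into the target view and compare.
        b, _, h, w = target.shape
        pix = project(backproject(depth, K_inv), K, T).view(b, 2, h, w)
        grid = torch.stack([pix[:, 0] / (w - 1) * 2 - 1,                 # x in [-1, 1]
                            pix[:, 1] / (h - 1) * 2 - 1], dim=-1)
        warped = F.grid_sample(source, grid, align_corners=True)
        return (target - warped).abs().mean()

    # Toy check: identity intrinsics/pose and unit depth reduce the warp
    # to the identity, so the loss is a plain L1 between the two frames.
    K, T = torch.eye(3), torch.eye(4)
    img_t, img_s = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
    loss = photometric_loss(img_t, img_s, torch.ones(1, 1, 64, 64),
                            T, K, torch.linalg.inv(K))
    print(loss)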

Liang Wang

Baidu

TBD

Yang Song

Waymo

Perception in Self Driving: Challenges and Opportunities

Waymo's mission is to make it safe and easy to move people and things around. We began testing fully driverless vehicles, with no trained human driver behind the wheel, in Phoenix in 2017. More recently, we've been offering some members of the public their first rides in a truly autonomous vehicle, hailed through an app to get to work or school, or to run errands. In this talk, we will cover perception challenges in self-driving and the work we're doing to address them: responding to rare long-tail events, driving in adverse weather conditions, and dealing with variations in sensor settings.

Nick Rhinehart

CMU

Jointly Forecasting and Controlling Behavior by Learning from High-Dimensional Data

For autonomous vehicles (AVs) to behave appropriately on roads populated by human-driven vehicles, they must be able to plan and execute behaviors that achieve complex tasks, as well as reason about the uncertain intentions of other drivers. Towards these capabilities, we developed a probabilistic forecasting and control framework for partially controlled multi-agent settings. Our framework's ability to jointly forecast and plan behavior enables us to specify complex control objectives and to reason about how our own vehicle's future goals could affect the behaviors of other vehicles. We show that our framework achieves state-of-the-art performance on the CARLA autonomous driving benchmark and on realistic multi-agent vehicle forecasting benchmarks, and that it can incorporate knowledge of the AV's own goals to better predict the behaviors of other agents.
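
To make the "jointly forecast and plan" idea concrete, here is a toy sketch (our illustration, not the speaker's model): a stand-in trajectory prior plays the role of a learned forecasting model, and the AV's trajectory is refined by gradient ascent on the prior's log-likelihood plus a goal term, in the spirit of planning-as-inference. TrajectoryPrior, the goal weighting, and all shapes are hypothetical.

    import torch

    class TrajectoryPrior:
        # Stand-in for a learned forecasting model p(future | context);
        # here it simply prefers smooth (low-acceleration) trajectories.
        def log_prob(self, traj):                       # traj: (T, 2) waypoints
            accel = traj[2:] - 2 * traj[1:-1] + traj[:-2]
            return -(accel ** 2).sum()

    def plan(prior, start, goal, horizon=10, steps=200, lr=0.05):
        # Initialize a straight line, then ascend log p(traj) + goal term.
        line = torch.linspace(0.0, 1.0, horizon).unsqueeze(-1)
        traj = (start + line * (goal - start)).clone().requires_grad_(True)
        opt = torch.optim.Adam([traj], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            goal_term = -((traj[-1] - goal) ** 2).sum()
            loss = -(prior.log_prob(traj) + 10.0 * goal_term)
            loss.backward()
            opt.step()
            with torch.no_grad():
                traj[0] = start                         # pin the current state
        return traj.detach()

    start, goal = torch.tensor([0.0, 0.0]), torch.tensor([10.0, 4.0])
    print(plan(TrajectoryPrior(), start, goal))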