#ICRA2020 workshops on robotics and learning

This year the International Conference on Robotics and Automation (ICRA) is being run as a virtual event. Unusually, the conference has been extended to run from 31 May to 31 August. A number of workshops were held on the opening day, and here we focus on two of them: “Learning of manual skills in humans and robots” and “Emerging learning and algorithmic methods for data association in robotics”.

Learning of manual skills in humans and robots

This workshop was organised by Aude Billard (EPFL) and Dagmar Sternad (Northeastern University). It brought together researchers from human motor control and from robotics to answer questions such as: How do humans achieve manual dexterity? What kinds of practice schedules can shape these skills? Can some of these strategies be transferred to robots? To what extent is robot manual skill limited by the hardware? What can be learned and what cannot?

The third session of the workshop focussed on “Learning skills” and you can watch the two talks and the discussions below:

Jeannette Bohg – Learning to scaffold the development of robotic manipulation skills

Dagmar Sternad – Learning and control in skilled interactions with objects: A task-dynamic approach

Discussion with Jeannette Bohg and Dagmar Sternad

Emerging learning and algorithmic methods for data association in robotics

This workshop covered emerging algorithmic methods based on optimization and graph-theoretic techniques, learning and end-to-end solutions based on deep neural networks, and the relationships between these techniques.

You can watch the workshop in full here:

Below is the programme, with the times indicating the position of each talk in the YouTube video:
11:00 Ayoung Kim – Learning motion and place descriptor from LiDARs for long-term navigation
34:11 Xiaowei Zhou – Learning correspondences for 3D reconstruction and pose estimation
51:30 Florian Bernard – Higher-order projected power iterations for scalable multi-matching
1:11:24 Cesar Cadena – High level understanding in the data association problem
1:34:55 Spotlight talk 1: Daniele Cattaneo – CMRNet++: map and camera agnostic monocular visual localization in LiDAR maps
1:50:45 Nicholas Roy – The role of semantics in perception
2:11:12 Kostas Daniilidis – Learning representations for matching
2:33:26 Jonathan How – Consistent multi-view data association
2:51:40 John Leonard – A research agenda for robust semantic SLAM
3:17:58 Luca Carlone – Towards certifiably robust spatial perception
3:39:36 Roberto Tron – Fast, consistent distributed matching for robotics applications
3:59:22 Randal Beard – Tracking moving objects from a moving camera in 3D environments
4:18:49 Nikolay Atanasov – A unifying view of geometry, semantics, and data association in SLAM
4:39:03 Spotlight talk 2: Nathaniel Glaser – Enhancing multi-robot perception via learned data association
