Amanda Prorok’s talk – Learning to Communicate in Multi-Agent Systems (with video)

In this technical talk, Amanda Prorok, Assistant Professor in the Department of Computer Science and Technology at Cambridge University, and a Fellow of Pembroke College, discusses her team’s latest research on what, how and when information needs to be shared among agents that aim to solve cooperative tasks.

Abstract

Effective communication is key to successful multi-agent coordination. Yet it is far from obvious what, how and when information needs to be shared among agents that aim to solve cooperative tasks. In this talk, I discuss our recent work on using Graph Neural Networks (GNNs) to solve multi-agent coordination problems. In my first case-study, I show how we use GNNs to find a decentralized solution to the multi-agent path finding problem, which is known to be NP-hard. I demonstrate how our policy is able to achieve near-optimal performance, at a fraction of the real-time computational cost. Secondly, I show how GNN-based reinforcement learning can be leveraged to learn inter-agent communication policies. In this case-study, I demonstrate how non-shared optimization objectives can lead to adversarial communication strategies. Finally, I address the challenge of learning robust communication policies, enabling a multi-agent system to maintain high performance in the presence of anonymous non-cooperative agents that communicate faulty, misleading or manipulative information.
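The core mechanism behind these GNN approaches is graph-based message passing: each agent updates its state using only its own features and those of its communication neighbors, so the resulting policy is decentralized by construction. As a rough illustration (a generic single GNN layer, not the architecture from the talk; the weights, feature sizes, and toy graph are all hypothetical):

```python
import numpy as np

def gnn_layer(features, adjacency, W_self, W_neigh):
    """One round of graph message passing: each agent combines its own
    features with the aggregated features of its communication neighbors.
    A generic sketch, not the specific architecture from the talk."""
    # Sum the features of each agent's neighbors over the communication graph.
    neighbor_sum = adjacency @ features
    # Linear transforms of self and neighbor information, then a ReLU.
    out = features @ W_self + neighbor_sum @ W_neigh
    return np.maximum(out, 0.0)

# Toy example: 3 agents on a line graph (0-1-2), 2-dimensional features.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
feats = np.eye(3, 2)  # arbitrary initial agent features
rng = np.random.default_rng(0)
W_self = rng.normal(size=(2, 2))
W_neigh = rng.normal(size=(2, 2))
out = gnn_layer(feats, adj, W_self, W_neigh)
print(out.shape)  # (3, 2): one updated feature vector per agent
```

Because each row of the output depends only on that agent's row of the adjacency matrix, every agent can compute its own update locally from messages it actually receives.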

Biography

Amanda Prorok is an Assistant Professor in the Department of Computer Science and Technology, at Cambridge University, UK, and a Fellow of Pembroke College. Her mission is to find new ways of coordinating artificially intelligent agents (e.g., robots, vehicles, machines) to achieve common goals in shared physical and virtual spaces. Amanda Prorok has been honored by an ERC Starting Grant, an Amazon Research Award, an EPSRC New Investigator Award, an Isaac Newton Trust Early Career Award, and the Asea Brown Boveri (ABB) Award for the best thesis at EPFL in Computer Science. Further awards include Best Paper at DARS 2018, Finalist for Best Multi-Robot Systems Paper at ICRA 2017, Best Paper at BICT 2015, and MIT Rising Stars 2015. She serves as Associate Editor for IEEE Robotics and Automation Letters (R-AL), and Associate Editor for Autonomous Robots (AURO). Prior to joining Cambridge, Amanda Prorok was a postdoctoral researcher at the General Robotics, Automation, Sensing and Perception (GRASP) Laboratory at the University of Pennsylvania, USA. She completed her PhD at EPFL, Switzerland.

Featuring Guest Panelists: Stephanie Gil and Joey Durham


The next technical talk will be delivered by Koushil Sreenath from UC Berkeley, and it will take place on April 23 at 3pm EDT. Keep up to date on this website.

Chad Jenkins’ talk – That Ain’t Right: AI Mistakes and Black Lives (with video)

In this technical talk, Chad Jenkins from the University of Michigan posed the following question: “who will pay the cost for the likely mistakes and potential misuse of AI systems?” As he states, “we are increasingly seeing how AI is having a pervasive impact on our lives, both for good and for bad. So, how do we ensure equal opportunity in science and technology?”

Abstract

It would be great to talk about the many compelling ideas, innovations, and new questions emerging in robotics research. I am fascinated by the ongoing NeRF Explosion, prospects for declarative robot programming by demonstration, and the potential for a reemergence of probabilistic generative inference. However, there is a larger issue facing our intellectual enterprise: who will pay the cost for the likely mistakes and potential misuse of AI systems? My nation is poised to invest billions of dollars to remain the leader in artificial intelligence as well as quantum computing. This investment is critically needed to reinvigorate the science that will shape our future. To get the most from this investment, we have to create an environment that will produce innovations that are not just technical advancements but will also benefit and uplift everybody in our society. We are increasingly seeing how AI is having a pervasive impact on our lives, both for good and for bad. So, how do we ensure equal opportunity in science and technology? It starts with how we invest in scientific research. Currently, when we make investments, we think only about technological advancement. Equal opportunity is a non-priority and, at best, a secondary consideration. The fix is really quite simple, and something we can do almost immediately: we must start enforcing existing civil rights statutes for how government funds are distributed in support of scientific advancement. This will mostly affect universities, as the wellspring that generates the intellectual foundation and workforce for other organizations that are leading the way in artificial intelligence. This talk will explore the causes of systemic inequality in AI, the impact of this inequity within the field of AI and across society today, and offer thoughts for the next wave of AI inference systems for robotics that could provide introspectability and accountability.
Ideas explored build upon the BlackInComputing.org open letter and the “Before we put $100 billion into AI…” opinion piece. Equal opportunity for anyone requires equal opportunity for everyone.

Biography

Odest Chadwicke Jenkins, Ph.D., is a Professor of Computer Science and Engineering and Associate Director of the Robotics Institute at the University of Michigan. Prof. Jenkins earned his B.S. in Computer Science and Mathematics at Alma College (1996), M.S. in Computer Science at Georgia Tech (1998), and Ph.D. in Computer Science at the University of Southern California (2003). He previously served on the faculty of Brown University in Computer Science (2004-15). His research addresses problems in interactive robotics and human-robot interaction, primarily focused on mobile manipulation, robot perception, and robot learning from demonstration. Prof. Jenkins has been recognized as a Sloan Research Fellow and is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE). His work has also been supported by Young Investigator awards from the Office of Naval Research (ONR), the Air Force Office of Scientific Research (AFOSR) and the National Science Foundation (NSF). Prof. Jenkins is currently serving as Editor-in-Chief for the ACM Transactions on Human-Robot Interaction. He is a Fellow of the American Association for the Advancement of Science and Association for the Advancement of Artificial Intelligence, and Senior Member of the Association for Computing Machinery and the Institute of Electrical and Electronics Engineers. He is an alumnus of the Defense Science Study Group (2018-19).

Featuring Guest Panelists: Sarah Brown, Hadas Kress-Gazit, and Aisha Walcott


The next technical talk will be delivered by Raia Hadsell from DeepMind, and it will take place on March 26 at 3pm EST. Keep up to date on this website.

Adam Bry and Hayk Martiros’s talk – Skydio Autonomy: Research in Robust Visual Navigation and Real-Time 3D Reconstruction (with video)

In the last online technical talk, Adam Bry and Hayk Martiros from Skydio explained how their company tackles real-world challenges in autonomous drone flight.

Abstract

Skydio is the leading US drone company and the world leader in autonomous flight. Our drones are used for everything from capturing amazing video, to inspecting bridges, to tracking progress on construction sites. At the core of our products is a vision-based autonomy system with seven years of development at Skydio, drawing on decades of academic research. This system pushes the state of the art in deep learning, geometric computer vision, motion planning, and control, with a particular focus on real-world robustness. Drones encounter extreme visual scenarios not typically considered by academia or encountered by cars, ground robots, or AR applications. They are commonly flown in scenes with few or no semantic priors and must deftly navigate thin objects, extreme lighting, camera artifacts, motion blur, textureless surfaces, vibrations, dirt, camera smudges, and fog. These challenges are daunting for classical vision, because photometric signals are simply not consistent, and for learning-based methods, because there is no ground truth for direct supervision of deep networks. In this talk we’ll take a detailed look at these issues and the algorithms we’ve developed to tackle them. We will also cover the new capabilities built on top of our core navigation engine to autonomously map complex scenes and capture all surfaces by performing real-time 3D reconstruction across multiple flights.

Biography

Adam is co-founder and CEO at Skydio. He has two decades of experience with small UAS, starting as a national champion R/C airplane aerobatics pilot. As a grad student at MIT, he did award-winning research that pioneered autonomous flight for drones, transferring much of what he learned as an R/C pilot into software that enables drones to fly themselves. Adam co-founded Google’s drone delivery project. He currently serves on the FAA’s Drone Advisory Committee. He holds a BS in Mechanical Engineering from Olin College and an SM in Aero/Astro from MIT. He has co-authored numerous technical papers and patents, and was also recognized on MIT’s TR35 list for young innovators.

Hayk was the first engineering hire at Skydio and he leads the autonomy team. He is an experienced roboticist who develops robust approaches to computer vision, deep learning, nonlinear optimization, and motion planning to bring intelligent robots into the mainstream. His team’s state-of-the-art work in UAV visual localization, obstacle avoidance, and navigation of complex scenarios is at the core of every Skydio drone. He also has an interest in systems architecture and symbolic computation. His previous work includes novel hexapedal robots, collaboration between robot arms, micro-robot factories, solar panel farms, and self-balancing motorcycles. Hayk is a graduate of Stanford University and Princeton University.

Featuring Guest Panelists: Davide Scaramuzza and Margarita Chli


The next technical talk is happening this Friday the 12th of March at 3pm EST. Join Chad Jenkins from the University of Michigan in his talk ‘That Ain’t Right: AI Mistakes and Black Lives’ using this link.

Carlotta Berry’s talk – Robotics Education to Robotics Research (with video)

Carlotta Berry

A few days ago, Robotics Today hosted an online seminar with Professor Carlotta Berry from the Rose-Hulman Institute of Technology. In her talk, Carlotta presented the multidisciplinary benefits of robotics in engineering education. It is worth highlighting that Carlotta Berry is one of the 30 women in robotics you need to know about in 2020.

Abstract

This presentation summarizes the multidisciplinary benefits of robotics in engineering education. I will describe how it is used at a primarily undergraduate institution to encourage robotics education and research. There will be a review of how robotics is used in several courses to illustrate engineering design concepts as well as controls, artificial intelligence, human-robot interaction, and software development. This will be a multimedia presentation of student projects in freshman design, mobile robotics, independent research and graduate theses.

Biography

Carlotta A. Berry is a Professor in the Department of Electrical and Computer Engineering at Rose-Hulman Institute of Technology. She has a bachelor’s degree in mathematics from Spelman College, a bachelor’s degree in electrical engineering from the Georgia Institute of Technology, a master’s in electrical engineering from Wayne State University, and a PhD from Vanderbilt University. She is one of a team of faculty in ECE, ME, and CSSE at Rose-Hulman who created and direct the first multidisciplinary minor in robotics. She is the Co-Director of the NSF S-STEM Rose Building Undergraduate Diversity (ROSE-BUD) Program and advisor for the National Society of Black Engineers. She was previously the President of the Technical Editor Board for the ASEE Computers in Education Journal. Dr. Berry has been selected as one of the 30 Women in Robotics You Need to Know About 2020 by robohub.org, and has received the Reinvented Magazine Interview of the Year Award on Purpose and Passion, the Women and Hi Tech Leading Light Award “You Inspire Me”, and the Insight Into Diversity Inspiring Women in STEM recognition. She has taught undergraduate courses in human-robot interaction, mobile robotics, circuits, controls, signals and systems, and freshman and senior design. Her research interests are in robotics education, interface design, human-robot interaction, and increasing underrepresented populations in STEM fields. She has a special passion for diversifying the engineering profession by encouraging more women and underrepresented minorities to pursue undergraduate and graduate degrees. She feels that the profession should reflect the world that we live in in order to solve the unique problems that we face.

You can also view past seminars on the Robotics Today YouTube Channel.


Davide Scaramuzza’s seminar – Autonomous, agile micro drones: Perception, learning, and control

Davide Scaramuzza

A few days ago, Robotics Today hosted an online seminar with Professor Davide Scaramuzza from the University of Zurich. The seminar was recorded, so you can watch it now in case you missed it.

“Robotics Today – A series of technical talks” is a virtual robotics seminar series. The goal of the series is to bring the robotics community together during these challenging times. The seminars are open to the public. The format of the seminar consists of a technical talk live captioned and streamed via Web and Twitter, followed by an interactive discussion between the speaker and a panel of faculty, postdocs, and students that will moderate audience questions.

Abstract

Autonomous quadrotors will soon play a major role in search-and-rescue, delivery, and inspection missions, where a fast response is crucial. However, their speed and maneuverability are still far from those of birds and human pilots. High speed is particularly important: since drone battery life is usually limited to 20-30 minutes, drones need to fly faster to cover longer distances. To do so, however, they need faster sensors and algorithms. Human pilots take years to learn the skills to navigate drones. What does it take to make drones navigate as well as, or even better than, human pilots? Autonomous, agile navigation through unknown, GPS-denied environments poses several challenges for robotics research in terms of perception, planning, learning, and control. In this talk, I will show how the combination of model-based and machine-learning methods, united with the power of new, low-latency sensors such as event cameras, can allow drones to achieve unprecedented speed and robustness by relying solely on onboard computing.
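For readers unfamiliar with event cameras: unlike a standard camera, which outputs full frames at a fixed rate, an event camera asynchronously reports per-pixel brightness changes, which is what enables the low latency mentioned above. A simplified illustration of accumulating such events into an image (the event tuple format and function here are hypothetical, not from any specific library or from the talk):

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate a batch of event-camera events into a polarity image.
    Each event is (x, y, timestamp, polarity) with polarity in {-1, +1}.
    A simplistic sketch; real pipelines use richer representations."""
    frame = np.zeros((height, width))
    for x, y, _t, p in events:
        frame[y, x] += p  # positive and negative brightness changes cancel
    return frame

# Toy stream: three events on a 4x4 sensor.
events = [(0, 0, 0.001, +1), (1, 2, 0.002, -1), (0, 0, 0.003, +1)]
frame = events_to_frame(events, 4, 4)
print(frame[0, 0])  # 2.0 -> two positive events at pixel (0, 0)
```

Because events arrive with microsecond-scale timestamps rather than at a fixed frame rate, algorithms can react to motion far sooner than with conventional cameras.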

Biography

Davide Scaramuzza is a Professor of Robotics and Perception at the University of Zurich, with appointments in both the Department of Informatics and the Department of Neuroinformatics (joint between the University of Zurich and ETH Zurich), where he directs the Robotics and Perception Group. His research lies at the intersection of robotics, computer vision, and machine learning, using standard cameras and event cameras, and aims to enable autonomous, agile navigation of micro drones in search-and-rescue applications. After a Ph.D. at ETH Zurich (with Roland Siegwart) and a postdoc at the University of Pennsylvania (with Vijay Kumar and Kostas Daniilidis), from 2009 to 2012 he led the European project sFly, which introduced the PX4 autopilot and pioneered visual-SLAM-based autonomous navigation of micro drones in GPS-denied environments. From 2015 to 2018, he was part of the DARPA FLA (Fast Lightweight Autonomy) program researching autonomous, agile navigation of micro drones in GPS-denied environments. In 2018, his team won the IROS 2018 Autonomous Drone Race, and in 2019 it ranked second in the AlphaPilot Drone Racing world championship. For his research contributions to autonomous, vision-based drone navigation and event cameras, he has won prestigious awards, including a European Research Council (ERC) Consolidator Grant, the IEEE Robotics and Automation Society Early Career Award, an SNSF-ERC Starting Grant, a Google Research Award, the KUKA Innovation Award, two Qualcomm Innovation Fellowships, the European Young Research Award, the Misha Mahowald Neuromorphic Engineering Award, and several paper awards. He co-authored the book “Introduction to Autonomous Mobile Robots” (published by MIT Press; 10,000 copies sold) and more than 100 papers on robotics and perception published in top-ranked journals (Science Robotics, TRO, T-PAMI, IJCV, IJRR) and conferences (RSS, ICRA, CVPR, ICCV, CORL, NeurIPS).
He has served as a consultant for the United Nations’ International Atomic Energy Agency’s Fukushima Action Plan on Nuclear Safety and for several drone and computer-vision companies, to which he has also transferred research results. In 2015, he cofounded Zurich-Eye, today Facebook Zurich, which developed the visual-inertial SLAM system running in Oculus Quest VR headsets. He was also the strategic advisor of Dacuda, today Magic Leap Zurich. In 2020, he cofounded SUIND, which develops camera-based safety solutions for commercial drones. Many aspects of his research have been prominently featured in the wider media, such as The New York Times, BBC News, Discovery Channel, La Repubblica, and Neue Zürcher Zeitung, and in technology-focused media such as IEEE Spectrum, MIT Technology Review, TechCrunch, Wired, and The Verge.

You can also view past seminars on the Robotics Today YouTube Channel.