
Interview with Roman Dautzenberg: #IROS2023 Best Paper Award on Mobile Manipulation sponsored by OMRON Sinic X Corp.

Congratulations to Roman Dautzenberg and his team of researchers, who won the IROS 2023 Best Paper Award on Mobile Manipulation sponsored by OMRON Sinic X Corp. for their paper “A perching and tilting aerial robot for precise and versatile power tool work on vertical walls”. Below, the authors tell us more about their work, the methodology, and what they are planning next.

What is the topic of the research in your paper?

Our paper presents an aerial robot (think “drone”) which can exert large forces in the horizontal direction, i.e. onto walls. This is a difficult task, as UAVs usually rely on thrust vectoring to apply horizontal forces and can therefore only apply small forces before losing control authority. By perching onto walls, our system no longer needs its propulsion to remain at a desired site. Instead, we use the propellers to generate large reaction forces in any direction, including onto the wall itself! Additionally, perching allows extreme precision, as the tool can be moved and re-adjusted, and the system is unaffected by external disturbances such as gusts of wind.
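
To make the trade-off concrete, here is a minimal back-of-the-envelope sketch (our illustration, not a calculation from the paper): a multirotor that pushes sideways by tilting must still support its own weight, so the usable horizontal force grows only as m·g·tan(θ) with tilt angle θ.

```python
import numpy as np

# Hedged sketch of the hovering force balance for a thrust-vectoring multirotor:
# T * cos(theta) = m * g (to stay airborne), so F_h = m * g * tan(theta).
m, g = 10.0, 9.81  # assumed 10 kg aerial robot
for theta_deg in (5, 10, 20):
    theta = np.radians(theta_deg)
    thrust = m * g / np.cos(theta)        # total thrust needed to keep hovering
    f_horizontal = m * g * np.tan(theta)  # force left over to push on a wall
    print(f"tilt {theta_deg:2d} deg: thrust {thrust:6.1f} N, "
          f"horizontal force {f_horizontal:5.1f} N")
```

Even at a 20° tilt, a 10 kg vehicle can push with only about 36 N before control authority degrades, which is why perching and redirecting the full propeller thrust into the wall changes the picture.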

Could you tell us about the implications of your research and why it is an interesting area for study?

Precision, force exertion and mobility are three (of many) criteria where robots – and those that develop them – make trade-offs. Our research shows that the system we designed can exert large forces precisely, with only minimal compromises on mobility. This widens the horizon of conceivable tasks for aerial robots and serves as the next link in automating the chain of tasks needed to perform many procedures on construction sites, or in remote, complex or hazardous environments.

Could you explain your methodology?

The main aim of our paper is to characterize the behavior and performance of the system, and to compare it to other aerial robots. To achieve this, we investigated the perching and tool-positioning accuracy, and compared the applicable reaction forces with those of other systems.

Further, the paper shows the power consumption and rotational velocities of the propellers for the various phases of a typical operation, as well as how certain mechanisms of the aerial robot are configured. This allows for a deeper understanding of the characteristics of the aerial robot.

What were your main findings?

Most notably, we show the perching precision to be within ±10 cm of a desired location over 30 consecutive attempts, and the tool positioning to have mm-level accuracy even in a “worst-case” scenario. Power consumption while perched on typical concrete is extremely low, and the system can perform various tasks (drilling, screwing) even in quasi-realistic outdoor scenarios.
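
As an aside, here is an illustrative snippet (synthetic numbers, not the authors' logs) showing how such precision statistics can be summarized from logged touchdown errors:

```python
import numpy as np

# Illustrative only: summarizing perching precision over repeated attempts.
# The error samples below are synthetic stand-ins for logged touchdown data.
rng = np.random.default_rng(0)
errors_cm = rng.normal(0.0, 4.0, size=30)  # 30 consecutive attempts (assumed spread)
print(f"mean error: {errors_cm.mean():+.2f} cm")
print(f"max |error|: {np.abs(errors_cm).max():.2f} cm")
print(f"within +/-10 cm: {(np.abs(errors_cm) <= 10).mean():.0%}")
```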

What further work are you planning in this area?

Going forward, enhancing the system's capabilities will be a priority. This relates both to the types of surface manipulation that can be performed and to the surfaces onto which the system can perch.


About the author

Roman Dautzenberg is currently a Master's student at ETH Zürich and Team Leader at AITHON. AITHON is a research project that is transforming into a start-up for aerial construction robotics. They are a core team of 8 engineers working under the guidance of the Autonomous Systems Lab at ETH Zürich and located at the Innovation Park Switzerland in Dübendorf.

#IROS2023 awards finalists and winners + IROS on Demand free for one year

Did you have the chance to attend the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023) in Detroit? Here we bring you the papers that received an award this year in case you missed them. And good news: you can read all the papers because IROS on Demand is open to the public and freely available for one year from Oct 9th. Congratulations to all the winners and finalists!

IROS 2023 Best Overall and Best Student Paper

Winner of the IROS 2023 Best Paper

  • Autonomous Power Line Inspection with Drones via Perception-Aware MPC, by Jiaxu Xing, Giovanni Cioffi, Javier Hidalgo Carrio, Davide Scaramuzza.

Winner of the IROS 2023 Best Student Paper

  • Controlling Powered Prosthesis Kinematics over Continuous Transitions Between Walk and Stair Ascent, by Shihao Cheng, Curt A. Laubscher, Robert D. Gregg.

Finalists

  • Learning Contact-Based State Estimation for Assembly Tasks, by Johannes Pankert, Marco Hutter.
  • Swashplateless-elevon Actuation for a Dual-rotor Tail-sitter VTOL UAV, by Nan Chen, Fanze Kong, Haotian Li, Jiayuan Liu, Ziwei Ye, Wei Xu, Fangcheng Zhu, Ximin Lyu, Fu Zhang.
  • Towards Legged Locomotion on Steep Planetary Terrain, by Giorgio Valsecchi, Cedric Weibel, Hendrik Kolvenbach, Marco Hutter.
  • Decentralized Swarm Trajectory Generation for LiDAR-based Aerial Tracking in Cluttered Environments, by Longji Yin, Fangcheng Zhu, Yunfan Ren, Fanze Kong, Fu Zhang.
  • Open-Vocabulary Affordance Detection in 3D Point Clouds, by Toan Nguyen, Minh Nhat Vu, An Vuong, Dzung Nguyen, Thieu Vo, Ngan Le, Anh Nguyen.
  • Discovering Symbolic Adaptation Algorithms from Scratch, by Stephen Kelly, Daniel Park, Xingyou Song, Mitchell McIntire, Pranav Nashikkar, Ritam Guha, Wolfgang Banzhaf, Kalyanmoy Deb, Vishnu Boddeti, Jie Tan, Esteban Real.
  • Parallel cell array patterning and target cell lysis on an optoelectronic micro-well device, by Chunyuan Gan, Hongyi Xiong, Jiawei Zhao, Ao Wang, Chutian Wang, Shuzhang Liang, Jiaying Zhang, Lin Feng.
  • FATROP: A Fast Constrained Optimal Control Problem Solver for Robot Trajectory Optimization and Control, by Lander Vanroye, Ajay Suresha Sathya, Joris De Schutter, Wilm Decré.
  • GelSight Svelte: A Human Finger-Shaped Single-Camera Tactile Robot Finger with Large Sensing Coverage and Proprioceptive Sensing, by Jialiang Zhao, Edward Adelson.
  • Shape Servoing of a Soft Object Using Fourier Series and a Physics-based Model, by Fouad Makiyeh, Francois Chaumette, Maud Marchal, Alexandre Krupa.

IROS Best Paper Award on Agri-Robotics sponsored by YANMAR

Winner

  • Visual, Spatial, Geometric-Preserved Place Recognition for Cross-View and Cross-Modal Collaborative Perception, by Peng Gao, Jing Liang, Yu Shen, Sanghyun Son, Ming C. Lin.

Finalists

  • Online Self-Supervised Thermal Water Segmentation for Aerial Vehicles, by Connor Lee, Jonathan Gustafsson Frennert, Lu Gan, Matthew Anderson, Soon-Jo Chung.
  • Relative Roughness Measurement based Real-time Speed Planning for Autonomous Vehicles on Rugged Road, by Liang Wang, Tianwei Niu, Shuai Wang, Shoukun Wang, Junzheng Wang.

IROS Best Application Paper Award sponsored by ICROS

Winner

  • Autonomous Robotic Drilling System for Mice Cranial Window Creation: An Evaluation with an Egg Model, by Enduo Zhao, Murilo Marques Marinho, Kanako Harada.

Finalists

  • Visuo-Tactile Sensor Enabled Pneumatic Device Towards Compliant Oropharyngeal Swab Sampling, by Shoujie Li, MingShan He, Wenbo Ding, Linqi Ye, Xueqian Wang, Junbo Tan, Jinqiu Yuan, Xiao-Ping Zhang.
  • Improving Amputee Endurance over Activities of Daily Living with a Robotic Knee-Ankle Prosthesis: A Case Study, by Kevin Best, Curt A. Laubscher, Ross Cortino, Shihao Cheng, Robert D. Gregg.
  • Dynamic hand proprioception via a wearable glove with fabric sensors, by Lily Behnke, Lina Sanchez-Botero, William Johnson, Anjali Agrawala, Rebecca Kramer-Bottiglio.
  • Active Capsule System for Multiple Therapeutic Patch Delivery: Preclinical Evaluation, by Jihun Lee, Manh Cuong Hoang, Jayoung Kim, Eunho Choe, Hyeonwoo Kee, Seungun Yang, Jongoh Park, Sukho Park.

IROS Best Entertainment and Amusement Paper Award sponsored by JTCF

Winner

  • DoubleBee: A Hybrid Aerial-Ground Robot with Two Active Wheels, by Muqing Cao, Xinhang Xu, Shenghai Yuan, Kun Cao, Kangcheng Liu, Lihua Xie.

Finalists

  • Polynomial-based Online Planning for Autonomous Drone Racing in Dynamic Environments, by Qianhao Wang, Dong Wang, Chao Xu, Alan Gao, Fei Gao.
  • Bistable Tensegrity Robot with Jumping Repeatability based on Rigid Plate-shaped Compressors, by Kento Shimura, Noriyasu Iwamoto, Takuya Umedachi.

IROS Best Industrial Robotics Research for Applications sponsored by Mujin Inc.

Winner

  • Toward Closed-loop Additive Manufacturing: Paradigm Shift in Fabrication, Inspection, and Repair, by Manpreet Singh, Fujun Ruan, Albert Xu, Yuchen Wu, Archit Rungta, Luyuan Wang, Kevin Song, Howie Choset, Lu Li.

Finalists

  • Learning Contact-Based State Estimation for Assembly Tasks, by Johannes Pankert, Marco Hutter.
  • Bagging by Learning to Singulate Layers Using Interactive Perception, by Lawrence Yunliang Chen, Baiyu Shi, Roy Lin, Daniel Seita, Ayah Ahmad, Richard Cheng, Thomas Kollar, David Held, Ken Goldberg.
  • Exploiting the Kinematic Redundancy of a Backdrivable Parallel Manipulator for Sensing During Physical Human-Robot Interaction, by Arda Yigit, Tan-Sy Nguyen, Clement Gosselin.

IROS Best Paper Award on Cognitive Robotics sponsored by KROS

Winner

  • Extracting Dynamic Navigation Goal from Natural Language Dialogue, by Lanjun Liang, Ganghui Bian, Huailin Zhao, Yanzhi Dong, Huaping Liu.

Finalists

  • EasyGaze3D: Towards Effective and Flexible 3D Gaze Estimation from a Single RGB Camera, by Jinkai Li, Jianxin Yang, Yuxuan Liu, Zhen Li, Guang-Zhong Yang, Yao Guo.
  • Team Coordination on Graphs with State-Dependent Edge Cost, by Sara Oughourli, Manshi Limbu, Zechen Hu, Xuan Wang, Xuesu Xiao, Daigo Shishika.
  • Is Weakly-supervised Action Segmentation Ready For Human-Robot Interaction? No, Let’s Improve It With Action-union Learning, by Fan Yang, Shigeyuki Odashima, Shochi Masui, Shan Jiang.
  • Exploiting Spatio-temporal Human-object Relations using Graph Neural Networks for Human Action Recognition and 3D Motion Forecasting, by Dimitrios Lagamtzis, Fabian Schmidt, Jan Reinke Seyler, Thao Dang, Steffen Schober.

IROS Best Paper Award on Mobile Manipulation sponsored by OMRON Sinic X Corp.

Winner

  • A perching and tilting aerial robot for precise and versatile power tool work on vertical walls, by Roman Dautzenberg, Timo Küster, Timon Mathis, Yann Roth, Curdin Steinauer, Gabriel Käppeli, Julian Santen, Alina Arranhado, Friederike Biffar, Till Kötter, Christian Lanegger, Mike Allenspach, Roland Siegwart, Rik Bähnemann.

Finalists

  • Placing by Touching: An empirical study on the importance of tactile sensing for precise object placing, by Luca Lach, Niklas Wilhelm Funk, Robert Haschke, Séverin Lemaignan, Helge Joachim Ritter, Jan Peters, Georgia Chalvatzaki.
  • Efficient Object Manipulation Planning with Monte Carlo Tree Search, by Huaijiang Zhu, Avadesh Meduri, Ludovic Righetti.
  • Sequential Manipulation Planning for Over-actuated UAMs, by Yao Su, Jiarui Li, Ziyuan Jiao, Meng Wang, Chi Chu, Hang Li, Yixin Zhu, Hangxin Liu.
  • On the Design of Region-Avoiding Metrics for Collision-Safe Motion Generation on Riemannian Manifolds, by Holger Klein, Noémie Jaquier, Andre Meixner, Tamim Asfour.

IROS Best RoboCup Paper Award sponsored by RoboCup Federation

Winner

  • Sequential Neural Barriers for Scalable Dynamic Obstacle Avoidance, by Hongzhan Yu, Chiaki Hirayama, Chenning Yu, Sylvia Herbert, Sicun Gao.

Finalists

  • Anytime, Anywhere: Human Arm Pose from Smartwatch Data for Ubiquitous Robot Control and Teleoperation, by Fabian Clemens Weigend, Shubham Sonawani, Michael Drolet, Heni Ben Amor.
  • Effectively Rearranging Heterogeneous Objects on Cluttered Tabletops, by Kai Gao, Justin Yu, Tanay Sandeep Punjabi, Jingjin Yu.
  • Prioritized Planning for Target-Oriented Manipulation via Hierarchical Stacking Relationship Prediction, by Zewen Wu, Jian Tang, Xingyu Chen, Chengzhong Ma, Xuguang Lan, Nanning Zheng.

IROS Best Paper Award on Robot Mechanisms and Design sponsored by ROBOTIS

Winner

  • Swashplateless-elevon Actuation for a Dual-rotor Tail-sitter VTOL UAV, by Nan Chen, Fanze Kong, Haotian Li, Jiayuan Liu, Ziwei Ye, Wei Xu, Fangcheng Zhu, Ximin Lyu, Fu Zhang.

Finalists

  • Hybrid Tendon and Ball Chain Continuum Robots for Enhanced Dexterity in Medical Interventions, by Giovanni Pittiglio, Margherita Mencattelli, Abdulhamit Donder, Yash Chitalia, Pierre Dupont.
  • c^2: Co-design of Robots via Concurrent-Network Coupling Online and Offline Reinforcement Learning, by Ci Chen, Pingyu Xiang, Haojian Lu, Yue Wang, Rong Xiong.
  • Collision-Free Reconfiguration Planning for Variable Topology Trusses using a Linking Invariant, by Alexander Spinos, Mark Yim.
  • eViper: A Scalable Platform for Untethered Modular Soft Robots, by Hsin Cheng, Zhiwu Zheng, Prakhar Kumar, Wali Afridi, Ben Kim, Sigurd Wagner, Naveen Verma, James Sturm, Minjie Chen.

IROS Best Paper Award on Safety, Security, and Rescue Robotics in memory of Motohiro Kisoi sponsored by IRSI

Winner

  • mCLARI: A Shape-Morphing Insect-Scale Robot Capable of Omnidirectional Terrain-Adaptive Locomotion, by Heiko Dieter Kabutz, Alexander Hedrick, William Parker McDonnell, Kaushik Jayaram.

Finalists

  • Towards Legged Locomotion on Steep Planetary Terrain, by Giorgio Valsecchi, Cedric Weibel, Hendrik Kolvenbach, Marco Hutter.
  • Global Localization in Unstructured Environments using Semantic Object Maps Built from Various Viewpoints, by Jacqueline Ankenbauer, Parker C. Lusk, Jonathan How.
  • EELS: Towards Autonomous Mobility in Extreme Environments with a Novel Large-Scale Screw Driven Snake Robot, by Rohan Thakker, Michael Paton, Marlin Polo Strub, Michael Swan, Guglielmo Daddi, Rob Royce, Matthew Gildner, Tiago Vaquero, Phillipe Tosi, Marcel Veismann, Peter Gavrilov, Eloise Marteau, Joseph Bowkett, Daniel Loret de Mola Lemus, Yashwanth Kumar Nakka, Benjamin Hockman, Andrew Orekhov, Tristan Hasseler, Carl Leake, Benjamin Nuernberger, Pedro F. Proença, William Reid, William Talbot, Nikola Georgiev, Torkom Pailevanian, Avak Archanian, Eric Ambrose, Jay Jasper, Rachel Etheredge, Christiahn Roman, Daniel S Levine, Kyohei Otsu, Hovhannes Melikyan, Richard Rieber, Kalind Carpenter, Jeremy Nash, Abhinandan Jain, Lori Shiraishi, Ali-akbar Agha-mohammadi, Matthew Travers, Howie Choset, Joel Burdick, Masahiro Ono.
  • Multi-IMU Proprioceptive Odometry for Legged Robots, by Shuo Yang, Zixin Zhang, Benjamin Bokser, Zachary Manchester.

#IROS2023: A glimpse into the next generation of robotics

The 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023) kicks off today at Huntington Place in Detroit, Michigan. This year’s theme, “The Next Generation of Robotics,” calls on young and senior researchers alike to create a forum where the past, present, and future of robotics converge.

The program of IROS 2023 is a blend of theoretical insights and practical demonstrations, designed to foster a culture of innovation and collaboration. Among the highlights are the plenary and keynote talks by eminent personalities in the field of robotics.

Plenaries and keynotes

On the plenary front, Marcie O’Malley from Rice University will delve into the realm of robots that teach and learn with a human touch. Yuto Nakanishi of GITAI, Japan, will share his insights on the challenges of developing space robots for building a moonbase. Matt Johnson-Roberson from Carnegie Mellon University will explore the shared history and convergent future of AI and Robotics.

The keynote sessions are equally thought-provoking. On Monday, October 2nd, Sven Behnke from the University of Bonn, Germany, will discuss the transition from intuitive immersive telepresence systems to conscious service robots, while Michelle Johnson from the University of Pennsylvania, USA, will talk about the journey towards more inclusive rehabilitation robots. Rebecca Kramer-Bottiglio from Yale University, USA, will also share insights on shape-shifting soft robots that adapt to changing tasks and environments.

On Tuesday, October 3rd, Kostas Alexis from the Norwegian University of Science and Technology, Norway, will share experiences from the DARPA Subterranean Challenge focusing on resilient robotic autonomy. Serena Ivaldi from Inria, France, will discuss the transition from humanoids to exoskeletons, aiming at assisting and collaborating with humans. Mario Santillo from Ford Motor Company, USA, will provide a glimpse into the future of manufacturing automation.

The series continues on Wednesday, October 4th, with Moritz Bächer (Switzerland) and Morgan Pope (USA) from Disney Research discussing the design and control of expressive robotic characters. Tetsuya Ogata from Waseda University/AIST, Japan, will delve into deep predictive learning in robotics, optimizing models for adaptive perception and action. Lastly, Teresa Vidal-Calleja from the University of Technology Sydney, Australia, will talk about empowering robots with continuous space and time representations.

Competitions

The competitions segment of IROS 2023 will be a space for innovation and creativity. The Functional Fashion competition invites teams to design and demonstrate robotic clothing that is as aesthetically pleasing as it is functional. The F1/10 Autonomous Racing challenges participants to build a 1:10 scaled autonomous race car and compete in minimizing lap time while avoiding crashes. The Soft Robotics Balloon Robots competition encourages the creation of locomoting and swimming soft robots using balloons as a substrate, exploring rapid design and deployment of soft robotic structures.

Technical programme & demonstrations

The technical sessions and workshops/tutorials at IROS 2023 are designed to foster a rich exchange of ideas among the attendees. These sessions will feature presentations on cutting-edge research and innovative projects from across the globe, providing a platform for researchers to share their findings, receive feedback, and engage in meaningful discussions. In addition, the demonstrations segment will bring theories to life as participants showcase their working prototypes and models, offering a tangible glimpse into the advancements in robotics.

Participate in IROS 2023 remotely with Ohmni telepresence robots

If you are unable to participate in the conference in person, Ohmnilabs has provided three of their Ohmni telepresence robots to facilitate virtual participation. The telepresence robots will be active from October 2-4, from 9:00 a.m. to 6:00 p.m. EDT. You can secure a time slot in advance using this link.

The telepresence robots will allow you to:

  • Explore the exhibit hall and speak with exhibitors
  • Interact with authors and other attendees during interactive poster sessions
  • Attend a plenary or keynote presentation

You can check in real-time here to see if any of the robots are available throughout the day.

Watch our blog over the following days for updates and results from the best paper awards. And enjoy IROS 2023!

Interview with Jean Pierre Sleiman, author of the paper “Versatile multicontact planning and control for legged loco-manipulation”

Picture from paper “Versatile multicontact planning and control for legged loco-manipulation“. © American Association for the Advancement of Science

We had the chance to interview Jean Pierre Sleiman, author of the paper “Versatile multicontact planning and control for legged loco-manipulation”, recently published in Science Robotics.

What is the topic of the research in your paper?
The research topic focuses on developing a model-based planning and control architecture that enables legged mobile manipulators to tackle diverse loco-manipulation problems (i.e., manipulation problems inherently involving a locomotion element). Our study specifically targeted tasks that would require multiple contact interactions to be solved, rather than pick-and-place applications. To ensure our approach is not limited to simulation environments, we applied it to solve real-world tasks with a legged system consisting of the quadrupedal platform ANYmal equipped with DynaArm, a custom-built 6-DoF robotic arm.

Could you tell us about the implications of your research and why it is an interesting area for study?
The research was driven by the desire to make such robots, namely legged mobile manipulators, capable of solving a variety of real-world tasks, such as traversing doors, opening/closing dishwashers, manipulating valves in an industrial setting, and so forth. A standard approach would have been to tackle each task individually and independently by dedicating a substantial amount of engineering effort to handcraft the desired behaviors:

This is typically achieved through the use of hard-coded state-machines in which the designer specifies a sequence of sub-goals (e.g., grasp the door handle, open the door to a desired angle, hold the door with one of the feet, move the arm to the other side of the door, pass through the door while closing it, etc.). Alternatively, a human expert may demonstrate how to solve the task by teleoperating the robot, recording its motion, and having the robot learn to mimic the recorded behavior.

However, this process is very slow, tedious, and prone to engineering design errors. To avoid this burden for every new task, the research opted for a more structured approach in the form of a single planner that can automatically discover the necessary behaviors for a wide range of loco-manipulation tasks, without requiring any detailed guidance for any of them.

Could you explain your methodology?
The key insight underlying our methodology was that all of the loco-manipulation tasks that we aimed to solve can be modeled as Task and Motion Planning (TAMP) problems. TAMP is a well-established framework that has been primarily used to solve sequential manipulation problems where the robot already possesses a set of primitive skills (e.g., pick object, place object, move to object, throw object, etc.), but still has to properly integrate them to solve more complex long-horizon tasks.

This perspective enabled us to devise a single bi-level optimization formulation that can encompass all our tasks, and exploit domain-specific knowledge, rather than task-specific knowledge. By combining this with the well-established strengths of different planning techniques (trajectory optimization, informed graph search, and sampling-based planning), we were able to achieve an effective search strategy that solves the optimization problem.

The main technical novelty in our work lies in the Offline Multi-Contact Planning Module, depicted in Module B of Figure 1 in the paper. Its overall setup can be summarized as follows: Starting from a user-defined set of robot end-effectors (e.g., front left foot, front right foot, gripper, etc.) and object affordances (these describe where the robot can interact with the object), a discrete state that captures the combination of all contact pairings is introduced. Given a start and goal state (e.g., the robot should end up behind the door), the multi-contact planner then solves a single-query problem by incrementally growing a tree via a bi-level search over feasible contact modes jointly with continuous robot-object trajectories. The resulting plan is enhanced with a single long-horizon trajectory optimization over the discovered contact sequence.
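
As a rough intuition for the discrete side of this search, the toy sketch below (our simplification, with invented names; not the paper's planner) grows a tree over contact modes, i.e., assignments of end-effectors to object affordances, using breadth-first search. The real planner interleaves this discrete search with continuous robot-object trajectory optimization and validates the feasibility of each mode transition.

```python
from collections import deque

# Toy contact-mode search (illustrative only). A "mode" assigns each
# end-effector to an affordance or to "free" (no contact).
END_EFFECTORS = ("gripper", "left_foot")
AFFORDANCES = ("handle", "front_surface", "free")

def neighbors(mode):
    # Change one end-effector's contact at a time, a common TAMP restriction.
    for i, current in enumerate(mode):
        for a in AFFORDANCES:
            if a != current:
                yield mode[:i] + (a,) + mode[i + 1:]

def plan(start, goal):
    # Breadth-first search over the discrete mode graph; the actual method
    # would also check each edge with a continuous trajectory optimizer.
    frontier, parent = deque([start]), {start: None}
    while frontier:
        mode = frontier.popleft()
        if mode == goal:
            path = []
            while mode is not None:
                path.append(mode)
                mode = parent[mode]
            return path[::-1]
        for nxt in neighbors(mode):
            if nxt not in parent:
                parent[nxt] = mode
                frontier.append(nxt)
    return None

# e.g. start with both end-effectors free, end with the gripper on the handle
print(plan(("free", "free"), ("handle", "free")))
```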

What were your main findings?
We found that our planning framework was able to rapidly discover complex multi-contact plans for diverse loco-manipulation tasks, despite having provided it with minimal guidance. For example, for the door-traversal scenario, we specify the door affordances (i.e., the handle, back surface, and front surface), and only provide a sparse objective by simply asking the robot to end up behind the door. Additionally, we found that the generated behaviors are physically consistent and can be reliably executed with a real legged mobile manipulator.

What further work are you planning in this area?
We see the presented framework as a stepping stone toward developing a fully autonomous loco-manipulation pipeline. However, we see some limitations that we aim to address in future work. These limitations are primarily connected to the task-execution phase, where tracking behaviors generated on the basis of pre-modeled environments is only viable under the assumption of a reasonably accurate description, which is not always straightforward to define.

Robustness to modeling mismatches can be greatly improved by complementing our planner with data-driven techniques, such as deep reinforcement learning (DRL). So one interesting direction for future work would be to guide the training of a robust DRL policy using reliable expert demonstrations that can be rapidly generated by our loco-manipulation planner to solve a set of challenging tasks with minimal reward-engineering.

About the author

Jean-Pierre Sleiman received the B.E. degree in mechanical engineering from the American University of Beirut (AUB), Lebanon, in 2016, and the M.S. degree in automation and control from Politecnico Di Milano, Italy, in 2018. He is currently a Ph.D. candidate at the Robotic Systems Lab (RSL), ETH Zurich, Switzerland. His current research interests include optimization-based planning and control for legged mobile manipulation.

#ICRA2023 awards finalists and winners

In this post we bring you all the paper awards finalists and winners presented during the 2023 edition of the IEEE International Conference on Robotics and Automation (ICRA). Congratulations to the winners and finalists!

ICRA 2023 Outstanding Paper

ICRA 2023 Outstanding Automation Paper

ICRA 2023 Outstanding Student Paper

ICRA 2023 Outstanding Deployed Systems Paper

ICRA 2023 Outstanding Dynamics and Control Paper

ICRA 2023 Outstanding Healthcare and Medical Robotics Paper

ICRA 2023 Outstanding Locomotion Paper

ICRA 2023 Outstanding Manipulation Paper

ICRA 2023 Outstanding Mechanisms and Design Paper

ICRA 2023 Outstanding Multi-Robot Systems Paper

ICRA 2023 Outstanding Navigation Paper

  • IMODE: Real-Time Incremental Monocular Dense Mapping Using Neural Field, by Matsuki, Hidenobu; Sucar, Edgar; Laidlow, Tristan; Wada, Kentaro; Scona, Raluca; Davison, Andrew J.
  • SmartRainNet: Uncertainty Estimation for Laser Measurement in Rain, by Zhang, Chen; Huang, Zefan; Tung, Beatrix; Ang Jr, Marcelo H; Rus, Daniela. (WINNER)
  • Online Whole-Body Motion Planning for Quadrotor Using Multi-Resolution Search, by Ren, Yunfan; Liang, Siqi; Zhu, Fangcheng; Lu, Guozheng; Zhang, Fu.

ICRA 2023 Outstanding Physical Human-Robot Interaction Paper

ICRA 2023 Outstanding Planning Paper

ICRA 2023 Outstanding Robot Learning Paper

ICRA 2023 Outstanding Sensors and Perception Paper

Interview with Hae-Won Park, Seungwoo Hong and Yong Um about MARVEL, a robot that can climb on various inclined steel surfaces

Prof. Hae-Won Park (left), Ph.D. Student Yong Um (centre), Ph.D. Student Seungwoo Hong (right). Credits: KAIST

We had the chance to interview Hae-Won Park, Seungwoo Hong and Yong Um, authors of the paper “Agile and versatile climbing on ferromagnetic surfaces with a quadrupedal robot”, recently published in Science Robotics.

What is the topic of the research in your paper?
The main topic of our work is a robot that can move agilely not only on flat ground but also on vertical walls and ceilings made of ferromagnetic materials. It also has the ability to perform dexterous maneuvers such as crossing gaps, overcoming obstacles, and transitioning around corners.

Could you tell us about the implications of your research and why it is an interesting area for study?
Such agile and dexterous locomotion capabilities can expand the robot’s operational workspace and let it approach places that are difficult or dangerous for human operators to access directly, for example inspection and welding operations in heavy industries such as shipbuilding, steel bridges, and storage tanks.

Could you explain your methodology? What were your main findings?
Our magnet foot can switch its on/off state in a short period of time (5 ms) and in an energy-efficient way, thanks to the novel geometry of its electro-permanent magnet (EPM). At the same time, the magnet foot can provide large holding forces in both the shear and normal directions thanks to its magnetorheological elastomer (MRE) footpad. Also, our actuators provide balanced speed/torque characteristics, high-bandwidth torque control capability, and the ability to mediate high impulsive forces. To control vertical and inverted locomotion as well as various versatile motions, we utilized a control framework (model predictive control) that generates reliable and robust reaction forces to track desired body motions in 3D space while preventing slippage or tipping over. We found that all the elements mentioned earlier are imperative to perform dynamic maneuvers against gravity.
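
To illustrate the kind of contact constraint such a controller must respect, here is a minimal sketch (our illustration with assumed numbers, not the authors' implementation): the commanded reaction force at each foot must stay inside a friction cone, where the EPM adhesion effectively deepens the cone by adding to the normal force.

```python
import numpy as np

def contact_force_ok(f, mu=0.6, f_adhesion=80.0):
    """Check a foot reaction force f = [fx, fy, fz] (foot frame, N).

    fz > 0 presses into the surface; the assumed EPM adhesion f_adhesion
    lets the foot sustain some pull-off (fz < 0) and adds to the effective
    normal force in the friction-cone check.
    """
    f_normal = f[2] + f_adhesion     # adhesion acts like extra normal force
    f_shear = np.hypot(f[0], f[1])   # tangential force magnitude
    return f_normal > 0.0 and f_shear <= mu * f_normal

print(contact_force_ok(np.array([30.0, 0.0, -20.0])))  # True: adhesion holds
print(contact_force_ok(np.array([90.0, 0.0, -20.0])))  # False: foot would slip
```

An MPC like the one described above would impose this kind of inequality on every stance foot at every step of its prediction horizon.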

What further work are you planning in this area?
So far, the robot is able to move on smooth surfaces with moderate curvature. To enable the robot to move on irregularly shaped surfaces, we are working on compliantly integrating multiple miniaturized EPMs with MRE footpads, which can increase the effective contact area and provide robust adhesion. Also, a vision system with high-level navigation algorithms will be added to enable the robot to move autonomously in the near future.

About the authors

Hae-Won Park received the B.S. and M.S. degrees from Yonsei University, Seoul, South Korea, in 2005 and 2007, respectively, and the Ph.D. degree from the University of Michigan, Ann Arbor, MI, USA, in 2012, all in mechanical engineering. He is an Associate Professor of mechanical engineering with the Korea Advanced Institute of Science and Technology, Daejeon, South Korea. His research interests include the intersection of control, dynamics, and mechanical design of robotic systems, with special emphasis on legged locomotion robots. Dr. Park is the recipient of the 2018 National Science Foundation (NSF) CAREER Award, the NSF’s most prestigious award in support of early-career faculty.

Seungwoo Hong received the B.S. degree from Shanghai Jiao Tong University, Shanghai, China, in July 2014, and the M.S. degree from Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, in August 2017, all in mechanical engineering. He is currently a Ph.D. candidate with the Department of Mechanical Engineering, KAIST, Daejeon, Korea. His current research interests include model-based optimization, motion planning and control of legged robotic systems.

Yong Um received the B.S. degree in mechanical engineering from the Korea Advanced Institute of Science and Technology, Daejeon, South Korea, in 2020. He is currently working toward the Ph.D. degree in mechanical engineering at the Korea Advanced Institute of Science and Technology. His research interests include mechanical system and magnetic device design for legged robots.


Science Magazine robot videos 2022 (+ breakthrough of the year)

Image generated by DALL-E 2 using the prompt “a hyperrealistic image of a robot watching robot videos on a laptop”

Did you manage to watch all the holiday robot videos of 2022? If you did but are still hungry for more, I have a few more videos from Science Magazine featuring robotics research that were released last year. Enjoy!

Extra: breakthrough of the year

Holiday robot videos 2022 updated (+ how robots prepare an Amazon warehouse for Christmas)

Image generated by OpenAI’s DALL-E 2 with prompt “a robot surrounded by humans, Santa Claus and a Christmas tree at Christmas, digital art”.

Happy holidays everyone! And many thanks to all those that sent us their holiday videos. Here are some robot videos of this year to get you into the spirit of the season. We wish you the very best for these holidays and the year 2023 :)

And here are some very special season greetings from robots!

Recent submissions

Extra: How robots prepare an Amazon warehouse for Christmas


Did we miss your video? You can send it to daniel.carrillozapata@robohub.org and we’ll include it in this list.

#IROS2022 best paper awards

Did you have the chance to attend the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022) in Kyoto? Here we bring you the papers that received an award this year in case you missed them. Congratulations to all the winners and finalists!

Best Paper Award on Cognitive Robotics

Gesture2Vec: Clustering Gestures using Representation Learning Methods for Co-speech Gesture Generation. Payam Jome Yazdian, Mo Chen, and Angelica Lim.

Best RoboCup Paper Award

RCareWorld: A Human-centric Simulation World for Caregiving Robots. Ruolin Ye, Wenqiang Xu, Haoyuan Fu, Rajat Kumar Jenamani, Vy Nguyen, Cewu Lu, Katherine Dimitropoulou, and Tapomayukh Bhattacharjee.

SpeedFolding: Learning Efficient Bimanual Folding of Garments. Yahav Avigal, Lars Berscheid, Tamim Asfour, Torsten Kroeger, and Ken Goldberg.

Best Paper Award on Robot Mechanisms and Design

Aerial Grasping and the Velocity Sufficiency Region. Tony G. Chen, Kenneth Hoffmann, JunEn Low, Keiko Nagami, David Lentink, and Mark Cutkosky.

Best Entertainment and Amusement Paper Award

Robot Learning to Paint from Demonstrations. Younghyo Park, Seunghun Jeon, and Taeyoon Lee.

Best Paper Award on Safety, Security, and Rescue Robotics

Power-based Safety Layer for Aerial Vehicles in Physical Interaction using Lyapunov Exponents. Eugenio Cuniato, Nicholas Lawrance, Marco Tognon, and Roland Siegwart.

Best Paper Award on Agri-Robotics

Explicitly Incorporating Spatial Information to Recurrent Networks for Agriculture. Claus Smitt, Michael Allan Halstead, Alireza Ahmadi, and Christopher Steven McCool.

Best Paper Award on Mobile Manipulation

Robot Learning of Mobile Manipulation with Reachability Behavior Priors. Snehal Jauhri, Jan Peters, and Georgia Chalvatzaki.

Best Application Paper Award

Soft Tissue Characterisation Using a Novel Robotic Medical Percussion Device with Acoustic Analysis and Neural Networks. Pilar Zhang Qiu, Yongxuan Tan, Oliver Thompson, Bennet Cobley, and Thrishantha Nanayakkara.

Best Paper Award for Industrial Robotics Research for Applications

Absolute Position Detection in 7-Phase Sensorless Electric Stepper Motor. Vincent Groenhuis, Gijs Rolff, Koen Bosman, Leon Abelmann, and Stefano Stramigioli.

ABB Best Student Paper Award

FAR Planner: Fast, Attemptable Route Planner using Dynamic Visibility Update. Fan Yang, Chao Cao, Hongbiao Zhu, Jean Oh, and Ji Zhang.

Best Paper Award

SpeedFolding: Learning Efficient Bimanual Folding of Garments. Yahav Avigal, Lars Berscheid, Tamim Asfour, Torsten Kroeger, and Ken Goldberg.

Cooperative cargo transportation by a swarm of molecular machines

Dr. Akira Kakugo and his team from Hokkaido University in Japan sent us a video presentation of his recent paper ‘Cooperative cargo transportation by a swarm of molecular machines’, published in Science Robotics.

‘Despite the advancements in macro-scale robots, the development of a large number of small-size robots is still challenging, which is crucial to enhance the scalability of swarm robots,’ says Dr. Kakugo. In the paper, researchers showed it is possible to collectively transport molecular cargo by a swarm of artificial molecular robots (engineered systems with biological/molecular sensors, processors and actuators) responding to light.

Coffee with a Researcher (#ICRA2022)

As part of her role as one of the IEEE ICRA 2022 Science Communication Awardees, Avie Ravendran sat down virtually with a few researchers from academia and industry attending the conference. Curious about what they had to say? Read their quotes below!

“I really believe that learned methods, especially imitation and transfer learning, will enable scalable robot applications in human and unstructured environments. We’re on the cusp of seeing robot agents dynamically adapt and solve real world problems”

– Nicholas Nadeau, CTO, Halodi Robotics

“On one hand I think that the interplay of perception and control is quite exciting, in terms of the common underlying principles, while on the other, it’s both cool and inspiring to see more robots getting out of the lab”

– Matías Mattamala, PhD Student, Oxford Dynamic Robot Systems, Oxford Robotics Institute

“I believe that incorporating priors regarding the existing scene geometry and the temporal consistency that’s present in the context of mobile robotics, can be used to guide the learning of more robust representations”

– Kavisha Vidanapathirana, QUT & CSIRO Robotics

“At the moment, I am aiming to find out what researchers need in order to take care of their motivation and wellbeing”

– Daniel Carrillo-Zapata, Founder, Scientific Agitation

“We have an immense amount of unsupervised knowledge and we’re always updating our priors. Taking advantage of large-scale unsupervised pretraining and having a lifelong learning system seems like a significant step in the right direction”

– Nitish Dashora, Researcher, Berkeley AI Research & Redwood Center for Theoretical Neuroscience

“When objects are in clutter, with various objects lying on top of one another, the robot needs to interactively and autonomously rearrange the scene in order to retrieve the pose of the target object with minimal number of actions to achieve overall efficiency. I work on pose estimation algorithms to process dense visual data as well as sparse tactile data”

– Prajval Kumar, BMW & University of Glasgow

“Thinking of why the robots or even the structures behave the way they do, and framing and answering questions in that line satisfies my curiosity as a researcher”

– Tung Ta, Postdoctoral Researcher, The University of Tokyo

“I sometimes hear that legged locomotion is a solved problem, but I disagree. I think that the standards of performance have just been raised and collectively we can now tackle more dynamic, efficient and reliable gaits”

– Kevin Green, PhD Candidate, Oregon State University

“My goal in robotics research is to bring down the cost and improve the capabilities of marine research platforms by introducing modularity and underactuation into the field. We’re working on understanding how to bring our collective swimming technology into flowing environments now”

– Gedaliah Knizhnik, PhD Candidate, GRASP Laboratory & The modular robotics laboratory, University of Pennsylvania

“I am interested in how we can develop the algorithms and representations needed to enable long-term autonomous robot navigation without human intervention, such as in the case of an autonomous underwater robot persistently mapping a marine ecosystem for an extended period of time. There are lots of challenges like how can we build a compact representation of the world, ideally grounded in human-understandable semantics? How can we deal gracefully with outliers in perception that inevitably occur in the lifelong setting? and also how can we scale robot state estimation methods in time and space while bounding memory and computation requirements?”

– Kevin Doherty, Computer Science and AI Lab, MIT & Woods Hole Oceanographic Institution

“How can robots learn to interact with and reason about themselves and the world without an intuitive feel for either? Communication is at the heart of biological and robotic systems. Inspired by control theory, information theory, and neuroscience, early work in artificial intelligence (AI) and robotics focused on a class of dynamical system known as feedback systems. These systems are characterized by recurrent mechanisms or feedback loops that govern, regulate, or ‘steer’ the behaviour of the system toward desirable stable states in the presence of disturbance in diverse environments. Feedback between sensation, prediction, decision, action, and back is a critical component of sensorimotor learning needed to realize robust intelligent robotic systems in the wild, a grand challenge of the field.

Existing robots are fundamentally numb to the world, limiting their ability to sense themselves and their environment. This problem will only increase as robots grow in complexity, dexterity, and maneuverability, guided by biomimicry. Feedback control systems such as the proportional integral derivative (PID), reinforcement learning (RL), and model predictive control (MPC) are now common in robotics, as is (optimal, Bayesian) Kálmán filtering of point-based IMU-GPS signals. Lacking are the distributed multi-modal, high-dimensional sensations needed to realize general intelligent behaviour, executing complex action sequences through high-level abstractions built up from an intuitive feel or understanding of physics.

While the central nervous system and biological neural networks are quantum parallel distributed processing (PDP) engines, most digital artificial neural networks are fully decoupled from sensors and provide only a passive image of the world. We are working to change that by coupling parallel distributed sensing and data processing through a neural paradigm. This involves innovations in hardware, software, and datasets. At Nervosys, we aim to make this dream a reality by building the first nervous system and platform for general robotic intelligence.”

– Adam Erickson, Founder, Nervosys

Interview with Axel Krieger and Justin Opfermann: autonomous robotic laparoscopic surgery for intestinal anastomosis

Axel Krieger is the Head of the Intelligent Medical Robotic Systems and Equipment (IMERSE) Lab at Johns Hopkins University, where Justin Opfermann is pursuing his PhD degree. Together with H. Saeidi, M. Kam, S. Wei, S. Leonard, M. H. Hsieh and J. U. Kang, they recently published the paper ‘Autonomous robotic laparoscopic surgery for intestinal anastomosis’ in Science Robotics. Below, Axel and Justin tell us more about their work, the methodology, and what they are planning next.

What is the topic of the research in your paper?

Our research is focused on the design and evaluation of medical robots for autonomous soft tissue surgeries. In particular, this paper describes a surgical robot and workflow to perform autonomous anastomosis of the small bowel. The robot's performance is first evaluated in synthetic tissues against expert surgeons, followed by experiments in pig studies to demonstrate the preclinical feasibility of the system and approach.

Could you tell us about the implications of your research and why it is an interesting area for study?

Anastomosis is an essential step in the reconstructive phase of surgery and is performed over a million times each year in the United States alone. Surgical outcomes for patients are highly dependent on the surgeon’s skill, as even a single missed stitch can lead to anastomotic leak and infection in the patient. In laparoscopic surgeries these challenges are even more difficult due to space constraints, tissue motion, and deformations. Robotic anastomosis is one way to ensure that surgical tasks requiring high precision and repeatability can be performed with more accuracy and precision in every patient, independent of surgeon skill. Already there are autonomous surgical robots for hard tissue surgeries, such as bone drilling for hip and knee implants. The Smart Tissue Autonomous Robot (STAR) takes autonomous robotic skill one step further by performing surgical tasks on soft tissues. This enables a robot to work with a human to complete more complicated surgical tasks where preoperative planning is not possible. We hypothesize that this will result in a democratized surgical approach to patient care with more predictable and consistent patient outcomes.

Could you explain your methodology?

Until this paper, autonomous laparoscopic surgery was not possible in soft tissue due to the unpredictable motions of the tissue and limitations on the size of surgical tools. Performing autonomous surgery required the development of novel suturing tools, imaging systems, and robotic controls to visualize a surgical scene, generate an optimized surgical plan, and then execute that surgical plan with the highest precision. Combining all of these features into a single system is challenging. To accomplish these goals we integrated a robotic suturing tool that simplifies wristed suturing motions to the press of a button, developed a three-dimensional endoscopic imaging system based on structured light that was small enough for laparoscopic surgery, and implemented a conditional autonomy control scheme that enables autonomous laparoscopic anastomosis. We evaluated the system against expert surgeons performing end-to-end anastomosis using either laparoscopic or da Vinci tele-operative techniques on synthetic small bowel, across metrics such as consistency of suture spacing and suture bite, stitch hesitancy, and overall surgical time. These experiments were followed by preclinical feasibility tests in porcine small bowel. Limited necropsy was performed after one week to evaluate the quality of the anastomosis and the immune response.
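
As a concrete example of one of these metrics, the short sketch below (synthetic numbers, not study data) summarizes suture-spacing consistency as the mean and standard deviation of the gaps between consecutive stitches:

```python
import numpy as np

def spacing_consistency(stitch_positions_mm):
    """Mean and standard deviation of gaps between consecutive stitches."""
    gaps = np.diff(np.sort(stitch_positions_mm))
    return gaps.mean(), gaps.std()

# Assumed stitch positions along the anastomosis (mm); purely illustrative.
robot = np.array([0.0, 5.1, 9.9, 15.0, 20.1, 24.9])
manual = np.array([0.0, 4.0, 10.5, 14.2, 21.3, 24.0])
for name, positions in (("robot", robot), ("manual", manual)):
    mean, std = spacing_consistency(positions)
    print(f"{name}: mean gap {mean:.1f} mm, std {std:.2f} mm")
```

A lower standard deviation indicates more consistent spacing, which is the sense in which the paper compares the autonomous system to expert surgeons.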

What were your main findings?

Comparison studies in synthetic tissues indicated that sutures placed by the STAR system had more consistent spacing and bite depth than those applied by surgeons using either a manual laparoscopic technique or robotic assistance with the da Vinci surgical system. The improved precision afforded by the autonomous approach led to a higher-quality anastomosis for the STAR system, which was qualitatively verified by laminar four-dimensional MRI flow fields across the anastomosis. The STAR system completed the anastomosis with a first-stitch success rate of 83%, which was better than surgeons in either group. Following the ex-vivo tests, STAR performed laparoscopic small bowel anastomosis in four pigs. All animals survived the procedure and showed an average weight gain over the one-week survival period. STAR’s anastomoses had similar burst strength, lumen area reduction, and healing as manually sewn samples, indicating the feasibility of autonomous soft tissue surgeries.

What further work are you planning in this area?

Our group is researching marker-less strategies to track tissue position and motion, and to plan surgical tasks without the need for fiducial markers on tissues. The ability to three-dimensionally reconstruct the surgical field on a computer and plan surgical tasks without artificial landmarks would simplify autonomous surgical planning and enable collaborative surgery between an autonomous robot and a human. Using machine learning and neural networks, we have demonstrated the robot’s ability to identify tissue edges and track natural landmarks. We are planning to implement fail-safe techniques and hope to perform first-in-human studies in the next few years.


About the interviewees

Axel Krieger (PhD), an Assistant Professor in mechanical engineering, focuses on the development of novel tools, image guidance, and robot-control techniques for medical robotics. He is a member of the Laboratory for Computational Sensing and Robotics. He is also the Head of the Intelligent Medical Robotic Systems and Equipment (IMERSE) Lab at Johns Hopkins University.

Justin Opfermann (MS) is a PhD robotics student in the Department of Mechanical Engineering at Johns Hopkins University. Justin has ten years of experience in the design of autonomous robots and tools for laparoscopic surgery, and is also affiliated with the Laboratory for Computational Sensing and Robotics. Before joining JHU, Justin was a Project Manager and Senior Research and Design Engineer at the Sheikh Zayed Institute for Pediatric Surgical Innovation at Children’s National Hospital.
