Archive 31.05.2023


Legged robots learn dynamic physical characteristics of terrains like animals

When a kitten is walking in a dangerous environment, it will gently step on the terrain with its feet to estimate the friction or bearing capacity. Based on this experience, the kitten can then predict the physical parameters of terrain with a similar appearance and avoid the soft, wet ground.

Team develops a revolutionary unpiloted aerial vehicle inspired by science fiction

Imagine a world where science fiction meets reality, where cutting-edge technology brings to life the awe-inspiring scenes from movies like Prometheus. This is the groundbreaking research led by Dr. Fu Zhang, Assistant Professor in the Department of Mechanical Engineering at the Faculty of Engineering, the University of Hong Kong (HKU), who has developed a Powered-flying Ultra-underactuated LiDAR-Sensing Aerial Robot (PULSAR) that is poised to redefine the world of unpiloted aerial vehicles (UAVs).

We are pleased to announce our 3rd Reddit Robotics Showcase!

During the 2020 pandemic, members of the Reddit and Discord r/robotics community rallied to organize an online showcase for our community. What was originally envisioned as a small, intimate afternoon video call turned into a two-day event with participants from across the world. The 2021 and 2022 events showcased a multitude of fantastic projects from the r/Robotics Reddit community, as well as academia and industry.

This year’s event features many wonderful robots including…

Program

All times are recorded in Eastern Daylight Time (EDT), UTC-4. Check out the full program on our website for more details.

Saturday, 10th of June
Session 1: Robot Arms
10:00 – 11:00 KUKA Research and Development
11:00 – 11:30 Harrison Low – Juggling Robot
11:30 – 11:45 Jan Veverak Koniarik – Open Source Servo Firmware
11:45 – 12:00 Rafael Diaz – Soft Robot Tentacle
12:00 – 12:30 Petar Crnjak – DIY 6-Axis Robot Arm
Lunch Break

Session 2: Social, Domestic, and Hobbyist Robots
14:00 – 15:00 Eliot Horowitz (CEO of VIAM) – The Era of Robotics Unicorns
15:00 – 15:30 Niranj S – Mini Humanoid Robot
15:30 – 15:45 Tommy Hedlund – Interactive Robot with ChatGPT
15:45 – 16:00 Emilie Kroeger – ChatGPT Integration for the Pepper Robot
16:00 – 16:15 Matt Vella – Retrofitting an Omnibot 2000 with a Raspberry Pi
16:15 – 16:30 Keegan Neave – NE-Five Mk3
16:30 – 17:00 Dan Nicholson – Open Source Companion Robot

Sunday, 11th of June
Session 1: Autonomous Mobile Robots
10:00 – 11:00 Keynote TBD
11:00 – 11:30 Ciaran Dowdson – “Sailing into the Future: Oshen’s Mini, Autonomous Robo-Vessels for Enhanced Ocean Exploration”
11:30 – 12:00 James Clayton – Giant, Walking Spider Suit with Real Flowers
12:00 – 12:15 Jacob David Cunningham – SLAM by Blob Tracking and Inertial Tracking
12:15 – 12:30 Carl Draper – Mobile UGV Platform Based on ROS2
12:30 – 12:45 Daniel Strabley – Nightcrawler Tactical Robot
12:45 – 13:15 Saksham Sharma – Multi-Robot Path Planning Using Priority Based Algorithm
Lunch Break

Session 2: Startup & Solutions
14:00 – 15:00 Carter Schultz (AMP Robotics) – The Reality of Robotic Systems
15:00 – 15:15 Jakub Matyszczak – MAB Robotics
15:15 – 15:45 Daniel Simu – Acrobot, the Acrobatic Robot
15:45 – 16:00 Luis Guzman – Zeus2Q, the Humanoid Robotic Platform
16:00 – 16:30 Kshitij Tiwari – The State of Robotic Touch Sensing
16:30 – 16:45 Sayak Nandi – ROS Robots as a Web Application
16:45 – 17:00 Ishant Pundir – Asper and Osmos: A Personal Robot and AI-Based OS

Links

Team develops a centipede robot with variable body-axis flexibility

Researchers from the Department of Mechanical Science and Bioengineering at Osaka University have invented a new kind of walking robot that takes advantage of dynamic instability to navigate. By changing the flexibility of the couplings, the robot can be made to turn without the need for complex computational control systems. This work may assist in the creation of rescue robots that are able to traverse uneven terrain.

A worm-inspired robot based on an origami structure and magnetic actuators

Bio-inspired robots, robotic systems that emulate the appearance, movements, and/or functions of specific biological systems, could help to tackle real-world problems more efficiently and reliably. Over the past two decades, roboticists have introduced a growing number of these robots, some of which draw inspiration from fruit flies, worms, and other small organisms.

European Robotics Forum 2023 was a success!

Earlier this spring, the largest robotics event in Europe – The European Robotics Forum 2023 (ERF23) – was held in Odense, Denmark. As one of the most influential gatherings of the robotics community in Europe, the event brought together researchers, engineers, managers, entrepreneurs, businesspeople, and public funding officers to explore the latest trends and themes in the field of robotics. With more than 1100 registered participants and 65 sponsors and exhibitors, this was ‘the largest ERF in recorded history – on all parameters’, say the organizers.

During the four-day forum, the RI4EU robotics DIHs network, together with agROBOfood and the Rima Network, hosted a booth at the event, where they showcased a range of robotics initiatives. These included the TRINITY Robotics DIHs, agROBOfood, DIH-HERO, and DIH² robotics networks.

One of the highlights of the conference for RI4EU was their workshop, “Supporting SMEs in Bringing Robotics Solutions to Market”. The aim was an interactive discussion on how robotics Digital Innovation Hubs (DIHs) networks can create a greater impact for SMEs and facilitate a broad uptake and integration of robotics technologies in industry. The question comes in the context of a group of five EU-funded robotics DIHs networks (agROBOfood, Rima Network, TRINITY, DIH-HERO, and DIH²) which, under the umbrella of RI4EU, have provided financial support of EUR 40M and additional services to more than 180 European robotics SMEs to help them bring their solutions to market. Now that these projects are ending after a period of four years, it is only natural to discuss the impact they have created, the challenges faced, and the lessons learned.

The session featured five expert speakers: Minna Lanz, Coordinator of TRINITY Robotics DIHs, Christophe Leroux, Coordinator of Rima Network, Françoise Siepel, Coordinator of DIH-HERO, Ali Muhammad, Coordinator of DIH², and Tsampikos Kounalakis, Robotics Researcher-agROBOfood. Maurits Butter, RI4EU Gateway to EU Robotics Initiatives, was a moderator, ensuring a productive and engaging conversation.

Building trusting relationships is key

There is massive potential for robotic applications in industry, for example to increase productivity and improve safety, but the total market size of robotics is still negligible relative to the overall market. The feedback received during the workshop indicated that one of the major issues robotics SMEs face is the effort needed to develop proofs of concept and run tests. Along the same line of reasoning, companies agreed that the four most important services they need from DIHs are:

  • Technological support: providing technological infrastructure and expertise to develop the innovation.
  • Ecosystem services: supporting the creation of a dynamic ecosystem.
  • Business support: providing individual customer support for developing an innovation-based business.
  • Skills and educational support: enhancing the expertise, skills, and human resources within the network partners.

However, companies recognized one common key element as having a great impact on their activity: the trusted business connections facilitated by the DIHs. Building a community of interconnected experts across the many fields of robotics requires strategic planning, resources, and determination, so in this sense DIHs networks can provide opportunities that cannot be found elsewhere. Moreover, scouting for new regional network connections and seeking to connect the value chain can be a daunting, time-consuming task; here the DIHs networks bring real added value through their connections with other projects and professionals in niche markets, giving the robotics ecosystem an entire list of contacts to seek guidance from or do business with.

Long-term growth

But now the five robotics projects (agROBOfood, Rima Network, TRINITY, DIH-HERO, and DIH²) are coming to an end, as their funding from the European Union’s Horizon 2020 programme expires. So what will be the future of these DIHs networks? Will they stop their activity once the EU funding ends, or will they continue to provide their services? When asked these questions during the workshop, all networks made it clear that they ‘will continue to live’, as one of the speakers put it. They are well-established networks in the field of robotics, and although their new business models are not yet official, the DIHs networks are planning to grow in the future.
So far, the Rima Network has made its announcement: it is transforming into the RIMA Alliance, to keep providing services, consolidate good practices in a single one-stop shop, and continue working towards the uptake of robotics in inspection and maintenance. As for the other four networks, follow their pages and the RI4EU network for fresh news, and subscribe to the newsletter on their website.

Through participation in the ERF23, the RI4EU team was able to learn from experts in the field, make new connections, and promote their work to a wider audience. By partnering with their innovation actions/DIHs networks, they were able to showcase their initiatives and inspire others to join them in their mission to accelerate innovation in robotics.

Helping robots handle fluids

Researchers created “FluidLab,” a simulation environment with a diverse set of manipulation tasks involving complex fluid dynamics. Image: Alex Shipps/MIT CSAIL via Midjourney

Imagine you’re enjoying a picnic by a riverbank on a windy day. A gust of wind catches your paper napkin, which lands on the water’s surface and quickly drifts away from you. You grab a nearby stick and carefully agitate the water, creating a series of small waves that eventually push the napkin back toward the shore so you can grab it. In this scenario, the water acts as a medium for transmitting forces, enabling you to manipulate the position of the napkin without direct contact.

Humans regularly engage with various types of fluids in their daily lives, but doing so has been a formidable and elusive goal for current robotic systems. Hand you a latte? A robot can do that. Make it? That’s going to require a bit more nuance. 

FluidLab, a new simulation tool from researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), enhances robot learning for complex fluid manipulation tasks like making latte art, ice cream, and even manipulating air. The virtual environment offers a versatile collection of intricate fluid handling challenges, involving both solids and liquids, and multiple fluids simultaneously. FluidLab supports modeling solid, liquid, and gas, including elastic, plastic, rigid objects, Newtonian and non-Newtonian liquids, and smoke and air. 

At the heart of FluidLab lies FluidEngine, an easy-to-use physics simulator capable of seamlessly calculating and simulating various materials and their interactions, all while harnessing the power of graphics processing units (GPUs) for faster processing. The engine is “differentiable,” meaning the simulator can incorporate physics knowledge into a more realistic physical world model, leading to more efficient learning and planning for robotic tasks. In contrast, most existing reinforcement learning methods lack such a world model and depend solely on trial and error. This enhanced capability, say the researchers, lets users experiment with robot learning algorithms and toy with the boundaries of current robotic manipulation abilities.
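To make the idea concrete, here is a minimal sketch of what a differentiable simulator enables, using plain Python and toy 1D dynamics (a particle pushed through a viscous medium; none of the names or dynamics here are FluidLab’s actual API). Because every simulation step is differentiable, the exact gradient of a task loss with respect to the whole action sequence can be computed by the chain rule and used for direct gradient descent, rather than trial-and-error search.

```python
def simulate(actions, x0=0.0, dt=0.1, drag=0.5):
    """Toy 1D dynamics: a particle pushed through a viscous medium."""
    x = x0
    xs = [x]
    for a in actions:
        x = x + dt * (a - drag * x)    # forward Euler step
        xs.append(x)
    return xs

def loss_and_grad(actions, target, dt=0.1, drag=0.5):
    """Task loss (x_T - target)^2 and its exact gradient w.r.t. every
    action, obtained by reverse-mode differentiation through the steps."""
    xs = simulate(actions, dt=dt, drag=drag)
    loss = (xs[-1] - target) ** 2
    dx = 2.0 * (xs[-1] - target)       # dL/dx_T
    grads = [0.0] * len(actions)
    for t in reversed(range(len(actions))):
        grads[t] = dx * dt             # dx_{t+1}/da_t = dt
        dx *= 1.0 - dt * drag          # dx_{t+1}/dx_t = 1 - dt*drag
    return loss, grads

# Gradient descent on the action sequence: each rollout yields a full
# gradient, so far fewer rollouts are needed than with trial and error.
actions = [0.0] * 20
for _ in range(200):
    loss, grads = loss_and_grad(actions, target=1.0)
    actions = [a - 0.5 * g for a, g in zip(actions, grads)]
```

In a real differentiable engine like FluidEngine, the reverse pass is generated automatically over millions of fluid state variables rather than written by hand, but the principle is the same.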

To set the stage, the researchers tested said robot learning algorithms using FluidLab, discovering and overcoming unique challenges in fluid systems. By developing clever optimization methods, they’ve been able to transfer these learnings from simulations to real-world scenarios effectively. 

“Imagine a future where a household robot effortlessly assists you with daily tasks, like making coffee, preparing breakfast, or cooking dinner. These tasks involve numerous fluid manipulation challenges. Our benchmark is a first step towards enabling robots to master these skills, benefiting households and workplaces alike,” says visiting researcher at MIT CSAIL and research scientist at the MIT-IBM Watson AI Lab Chuang Gan, the senior author on a new paper about the research. “For instance, these robots could reduce wait times and enhance customer experiences in busy coffee shops. FluidEngine is, to our knowledge, the first-of-its-kind physics engine that supports a wide range of materials and couplings while being fully differentiable. With our standardized fluid manipulation tasks, researchers can evaluate robot learning algorithms and push the boundaries of today’s robotic manipulation capabilities.”

Fluid fantasia

Over the past few decades, scientists in the robotic manipulation domain have mainly focused on manipulating rigid objects, or on very simplistic fluid manipulation tasks like pouring water. Studying these manipulation tasks involving fluids in the real world can also be an unsafe and costly endeavor. 

With fluid manipulation, it’s not always just about fluids, though. In many tasks, such as creating the perfect ice cream swirl, mixing solids into liquids, or paddling through the water to move objects, it’s a dance of interactions between fluids and various other materials. Simulation environments must support “coupling,” or how two different material properties interact. Fluid manipulation tasks usually require pretty fine-grained precision, with delicate interactions and handling of materials, setting them apart from straightforward tasks like pushing a block or opening a bottle. 

FluidLab’s simulator can quickly calculate how different materials interact with each other. 

Helping out the GPUs is “Taichi,” a domain-specific language embedded in Python. The system can compute gradients (rates of change in environment configurations with respect to the robot’s actions) for different material types and their interactions (couplings) with one another. This precise information can be used to fine-tune the robot’s movements for better performance. As a result, the simulator allows for faster and more efficient solutions, setting it apart from its counterparts.

The 10 tasks the team put forth fell into two categories: using fluids to manipulate hard-to-reach objects, and directly manipulating fluids for specific goals. Examples included separating liquids, guiding floating objects, transporting items with water jets, mixing liquids, creating latte art, shaping ice cream, and controlling air circulation. 

“The simulator works similarly to how humans use their mental models to predict the consequences of their actions and make informed decisions when manipulating fluids. This is a significant advantage of our simulator compared to others,” says Carnegie Mellon University PhD student Zhou Xian, another author on the paper. “While other simulators primarily support reinforcement learning, ours supports reinforcement learning and allows for more efficient optimization techniques. Utilizing the gradients provided by the simulator supports highly efficient policy search, making it a more versatile and effective tool.”

Next steps

FluidLab’s future looks bright. The current work attempted to transfer trajectories optimized in simulation to real-world tasks directly in an open-loop manner. For next steps, the team is working to develop a closed-loop policy in simulation that takes as input the state or the visual observations of the environments and performs fluid manipulation tasks in real time, and then transfers the learned policies in real-world scenes.
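The gap between the two settings can be sketched on the same kind of toy 1D system (hypothetical names and dynamics, not the paper’s method): an open-loop controller replays a pre-optimized action sequence regardless of what happens, while a closed-loop policy reads the current state at every step and corrects for unmodeled disturbances.

```python
def step(x, a, dt=0.1, drag=0.5, disturbance=0.0):
    """One step of toy 1D dynamics, with an optional unmodeled disturbance."""
    return x + dt * (a - drag * x) + disturbance

def run_open_loop(actions, disturbance=0.0):
    """Replay a fixed action sequence, ignoring the observed state."""
    x = 0.0
    for a in actions:
        x = step(x, a, disturbance=disturbance)
    return x

def run_closed_loop(target, n_steps, disturbance=0.0, gain=4.0):
    """Observe the state each step and steer toward the target
    with simple proportional feedback."""
    x = 0.0
    for _ in range(n_steps):
        a = gain * (target - x)
        x = step(x, a, disturbance=disturbance)
    return x

# A constant action tuned so the nominal (disturbance-free) rollout ends
# exactly at the target: x_20 = 2 * a * (1 - 0.95**20) for constant a.
target = 1.0
a_nominal = target / (2 * (1 - 0.95 ** 20))
x_open = run_open_loop([a_nominal] * 20, disturbance=0.02)
x_closed = run_closed_loop(target, n_steps=20, disturbance=0.02)
```

With an unmodeled per-step disturbance, the open-loop rollout drifts well past the target while the feedback policy stays close to it; that robustness is what motivates closing the loop before transferring policies to real-world scenes.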

The platform is publicly available, and researchers hope it will benefit future studies in developing better methods for solving complex fluid manipulation tasks.

“Humans interact with fluids in everyday tasks, including pouring and mixing liquids (coffee, yogurts, soups, batter), washing and cleaning with water, and more,” says University of Maryland computer science professor Ming Lin, who was not involved in the work. “For robots to assist humans and serve in similar capacities for day-to-day tasks, novel techniques for interacting and handling various liquids of different properties (e.g. viscosity and density of materials) would be needed and remains a major computational challenge for real-time autonomous systems. This work introduces the first comprehensive physics engine, FluidLab, to enable modeling of diverse, complex fluids and their coupling with other objects and dynamical systems in the environment. The mathematical formulation of ‘differentiable fluids’ as presented in the paper makes it possible for integrating versatile fluid simulation as a network layer in learning-based algorithms and neural network architectures for intelligent systems to operate in real-world applications.”

Gan and Xian wrote the paper alongside Hsiao-Yu Tung, a postdoc in the MIT Department of Brain and Cognitive Sciences; Antonio Torralba, an MIT professor of electrical engineering and computer science and CSAIL principal investigator; Dartmouth College Assistant Professor Bo Zhu; Columbia University PhD student Zhenjia Xu; and CMU Assistant Professor Katerina Fragkiadaki. The team’s research is supported by the MIT-IBM Watson AI Lab, Sony AI, a DARPA Young Investigator Award, an NSF CAREER award, an AFOSR Young Investigator Award, DARPA Machine Common Sense, and the National Science Foundation.

The research was presented at the International Conference on Learning Representations earlier this month.

A software package to ease the use of neural radiance fields in robotics research

Neural radiance fields (NeRFs) are advanced machine learning techniques that can generate three-dimensional (3D) representations of objects or environments from two-dimensional (2D) images. As these techniques can model complex real-world environments realistically and in detail, they could greatly support robotics research.

Robot Talk Episode 50 – Elena De Momi

Claire chatted to Elena De Momi from the Polytechnic University of Milan all about surgical robotics, artificial intelligence, and the upcoming ICRA robotics conference in London.

Elena De Momi received her MSc in Biomedical Engineering in 2002 and her PhD in Bioengineering in 2006, and she is currently Associate Professor in the Department of Electronics, Information and Bioengineering (DEIB) of Politecnico di Milano. She co-founded the Neuroengineering and Medical Robotics Laboratory in 2008 and is responsible for its Medical Robotics section. Her academic interests include computer vision and image processing, artificial intelligence, augmented reality and simulators, teleoperation, haptics, medical robotics, and human-robot interaction.
