Archive 08.11.2023


The 5 levels of Sustainable Robotics

If you look at the UN Sustainable Development Goals, it’s clear that robots have a huge role to play in advancing the SDGs. However, the field of Sustainable Robotics is more than just the application area. For every application where robotics can improve sustainability, you also have to ask what the additional costs or benefits are all the way along the supply chain. What are the ‘externalities’, or additional costs and benefits, of using robots to solve the problem? Does the use of robotics bring a decrease or an increase to the following (a toy accounting sketch follows the list):

  • power costs
  • production costs
  • labor costs
  • supply chain costs
  • supply chain mileage
  • raw materials consumption
  • and raw material choice
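
To make this accounting concrete, here is a minimal, purely illustrative Python sketch that tallies those same categories as signed deltas relative to the non-robotic baseline (negative values mean the robots reduce that cost or consumption). All the numbers and the `net_externality` helper are hypothetical.

```python
# Toy externality tally for a proposed robotic deployment.
# The values are hypothetical signed deltas vs. the non-robotic baseline:
# negative = the robots reduce this cost/consumption, positive = they add to it.

externalities = {
    "power_costs": +0.05,              # extra electricity to run the robots
    "production_costs": -0.10,         # cheaper per-unit production
    "labor_costs": -0.08,
    "supply_chain_costs": -0.03,
    "supply_chain_mileage": -0.12,     # fewer / shorter trips
    "raw_materials_consumption": -0.06,
    "raw_material_choice": +0.02,      # e.g. rare-earth magnets in the motors
}

def net_externality(deltas: dict[str, float]) -> float:
    """Sum the signed deltas; a result below zero is a net sustainability gain."""
    return sum(deltas.values())

if __name__ == "__main__":
    net = net_externality(externalities)
    print(f"Net externality delta: {net:+.2f}")
    print("Net gain" if net < 0 else "Net burden")
```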

Solving our economic and environmental global challenges should not involve adding to the existing problems or creating new ones. So it’s important that we look beyond the first-order ways in which robotics can advance the Sustainable Development Goals and address every level at which robotics can have an impact.

Here I propose 5 levels of sustainability to frame the discussion, much as the 5 levels of autonomy have helped define the stages of autonomous mobility.

Level 1: Robots for existing recycling

Level 1 of Sustainable Robotics is simply making existing processes in sustainability more efficient, affordable and deployable. Making recycling better. Companies that are great examples are: AMP Robotics, Recycleye, MachineEx, Pellenc ST, Greyparrot, EverestLabs and Fanuc. Here’s an explainer video from Fanuc.

“Because of AI, because of the robotic arms, we have seen plants recover 10, 20, 30% more than what they have been doing previously,” said JD Ambati, CEO of EverestLabs. “They have been losing millions of dollars to the landfill, and because of AI, they were able to identify the value of the losses and deploy robotic arms to capture that.” [1]
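
One simple (and deliberately naive) reading of that quote is the back-of-the-envelope arithmetic below: take an assumed annual value of recyclables currently lost to landfill and apply the quoted 10–30% recovery improvements. The dollar figure is made up purely for illustration.

```python
# Hypothetical illustration of the recovery arithmetic suggested by the quote above.
annual_value_lost_to_landfill = 3_000_000  # USD per year, assumed figure

for recovery_improvement in (0.10, 0.20, 0.30):  # the 10/20/30% range quoted
    recovered = annual_value_lost_to_landfill * recovery_improvement
    print(f"{recovery_improvement:.0%} better recovery -> ${recovered:,.0f} per year recaptured")
```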

Other Level 1 examples use robots to better monitor aquaculture, or to clean or install solar farms and wind turbines. If the robotics technology improves existing sustainability practices, then it is at Level 1 of Sustainable Robotics.

Level 2: Robots enabling new recycling

Level 2 of Sustainable Robotics is where robotics allows new materials to be recycled, and in new industry application areas. A great example of this is Urban Machines, which salvages timber from construction sites and transforms it back into usable materials, something that was previously too difficult to do at any scale.

Construction using onsite materials and robotics 3D printing is another example, as seen in the NASA Habitat Challenge, sponsored by Caterpillar, Bechtel and Brick & Mortar Ventures.

Some other examples are the ocean- or lake-going garbage-collecting robots like Waste Shark from Ran Marine, River Cleaning, or Searial Cleaners, a Quebec company whose robots were deployed in the Great Lakes Plastic Cleanup, helping to remove 74,000 plastic pieces from four lakes since 2020.

Searial Cleaners is angling for its BeBot and PixieDrone to be used as janitorial tools for beaches, marinas and golf courses, and the BeBot offers ample room for company branding. The equipment emerged from the mission of the Great Lakes Plastic Cleanup (GLPC) to harness new technologies against litter. The program also uses other devices including the Seabin, which sits in water and sucks in trash, and the Enviropod LittaTrap filter for stormwater drains. [2]

If it’s a brand new way to practice recycling with robotic technology, then it’s at Level 2 of Sustainable Robotics.

Level 3: Robots electrifying everything

One of the biggest sustainability shifts enabled by robotics is the transition of fossil fuel powered transport, logistics and agricultural machinery to BEV (Battery Electric Vehicle) technology. On top of radically reducing emissions, the increasing use of smaller autonomous electric vehicles across the first, middle and last mile can change the total number of trips taken, as well as reduce the need for large, partially loaded vehicles making longer trips.
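
As a rough sketch of that logistics point, the toy comparison below pits one partially loaded diesel truck against a fleet of small autonomous electric delivery vehicles covering the same parcel demand. Every figure (payloads, distances, emission factors, grid intensity) is an assumption for illustration only, not measured data.

```python
# Hypothetical first/last-mile comparison: one half-loaded diesel truck
# vs. several small battery-electric delivery robots. All figures are assumed.

parcels = 200
truck_capacity = 400              # parcels per trip, so the truck runs half full
truck_km = 40                     # round-trip distance
truck_gco2_per_km = 900           # assumed diesel truck emission factor

bev_capacity = 20                 # parcels per small electric vehicle trip
bev_km = 8                        # shorter, local round trips
bev_kwh_per_km = 0.15
grid_gco2_per_kwh = 300           # assumed grid carbon intensity

truck_trips = -(-parcels // truck_capacity)   # ceiling division
bev_trips = -(-parcels // bev_capacity)

truck_emissions = truck_trips * truck_km * truck_gco2_per_km
bev_emissions = bev_trips * bev_km * bev_kwh_per_km * grid_gco2_per_kwh

print(f"Diesel truck: {truck_trips} trip(s), {truck_emissions / 1000:.1f} kg CO2")
print(f"Small BEVs:   {bev_trips} trip(s), {bev_emissions / 1000:.1f} kg CO2")
```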

Monarch Tractor’s MK-V is the world’s first electric tractor, and is ‘driver optional’, meaning it can be driven or can operate autonomously, providing greater flexibility for farmers. Of course, the increased use of computer vision and AI across all agrobots increases sustainability by enabling precision or regenerative agriculture with less need for chemical inputs. Technically, these improvements to agricultural practice are Level 2 of Sustainable Robotics. [3]

However, the use of smaller, fully autonomous agricultural robots, such as Meropy, Burro.ai, SwarmFarm, Muddy Machines and Small Robot Company, also reduces the size of agricultural machinery and the soil compaction associated with it, and makes it possible to tend smaller strip farms by machine. [4] This is Level 3 of Sustainable Robotics.

Level 4: Robots built sustainably

The higher the sustainability level, the deeper it is into the actual design and construction of the robot system. Switching to electric from fossil fuels is a small step. Switching to locally sourced or produced materials is another. Switching to recyclable materials is another step towards fully sustainable robotics.

OhmniLabs utilizes 3D printing in its robot construction, allowing it to export robots to 47 countries while also manufacturing locally in Silicon Valley.

Meanwhile, Cornell researchers Wendy Ju and Ilan Mandel have introduced the phrase ‘garbatrage’ to describe the opportunity to prototype or build robots using components recycled from other consumer electronics, such as hoverboards.

“The time is ripe for a practice like garbatrage, both for sustainability reasons and considering the global supply shortages and international trade issues of the last few years,” the researchers said. [5]

This is a great example of Level 4 of Sustainable Robotics.

Level 5: Self-powering/repairing Robots

Self-powering, self-repairing or self-recycling robots are Level 5 of Sustainable Robotics. In research, there are solutions like MilliMobile, a battery-free autonomous robot capable of operating on harvested solar and RF power. MilliMobile, developed at the Paul G. Allen School of Computer Science & Engineering, is the size of a penny and can steer itself, sense its environment, and communicate wirelessly using energy harvested from light and radio waves.
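
A handy way to reason about battery-free robots like MilliMobile is the ratio of harvested power to the power drawn while active, which bounds the achievable duty cycle. The sketch below uses made-up numbers and is not based on MilliMobile’s published specifications.

```python
# Duty-cycle estimate for an energy-harvesting robot. All numbers are assumed
# for illustration and are NOT MilliMobile's published specifications.

harvested_power_uw = 80.0     # average power harvested from light + RF, microwatts
active_power_uw = 2_000.0     # draw while driving, sensing and communicating, microwatts

# Fraction of time the robot can afford to be active (energy in = energy out).
duty_cycle = min(1.0, harvested_power_uw / active_power_uw)

seconds_active_per_hour = duty_cycle * 3600
print(f"Duty cycle: {duty_cycle:.1%}")
print(f"Active time: ~{seconds_active_per_hour:.0f} s per hour of harvesting")
```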

It’s not just research though. In the last two years, a number of solar-powered agricultural robots have entered the market. Solinftec has a solar-powered spray robot, as have EcoRobotix and AIGEN, the latter of which is also powered by wind.

Modular robotics will reduce our material wastage and energy needs by making robots multipurpose, rather than requiring multiple specialist robots. Meanwhile, self-powering and self-repairing technologies will allow robots to enter many previously unreachable areas, including off planet, while removing our reliance on the grid. As robots incorporate self-repairing materials, the product lifecycle is extended. This is Level 5 of Sustainable Robotics.

And in the future?

While we’re waiting for the future, here are a couple of resources for turning your entire company into a sustainable robotics company: Sustainable Manufacturing 101 from the ITA (International Trade Administration), and the Sustainable Manufacturing Toolkit from the OECD.

References

  1. https://www.cnbc.com/2023/08/08/everestlabs-using-robotic-arms-and-ai-to-make-recycling-more-efficient.html
  2. https://www.greenbiz.com/article/great-lakes-are-awash-plastic-can-robots-and-drones-help
  3. https://www.economist.com/science-and-technology/2020/02/06/using-artificial-intelligence-agricultural-robots-are-on-the-rise
  4. https://www.wired.co.uk/article/farming-robots-small-robot-company-tractors
  5. https://news.cornell.edu/stories/2023/09/garbatrage-spins-e-waste-prototyping-gold

Humans vs. robots: Study compares 27 humanoid robots with humans to see who is superior

Science fiction films portray the idea relatively simply: the terminator—who either tries to destroy or rescue humanity—is such a perfect humanoid robot that in most cases it is superior to humans. But how well do humanoid robots perform nowadays away from the cinema screen?

Powered by AI, new system makes human-to-robot communication more seamless

The black and yellow robot, meant to resemble a large dog, stood waiting for directions. When they came, the instructions weren't in code but instead in plain English: "Visit the wooden desk exactly two times; in addition, don't go to the wooden desk before the bookshelf."

Using language to give robots a better grasp of an open-ended world

Feature Fields for Robotic Manipulation (F3RM) enables robots to interpret open-ended text prompts using natural language, helping the machines manipulate unfamiliar objects. The system’s 3D feature fields could be helpful in environments that contain thousands of objects, such as warehouses. Images courtesy of the researchers.

By Alex Shipps | MIT CSAIL

Imagine you’re visiting a friend abroad, and you look inside their fridge to see what would make for a great breakfast. Many of the items initially appear foreign to you, with each one encased in unfamiliar packaging and containers. Despite these visual distinctions, you begin to understand what each one is used for and pick them up as needed.

Inspired by humans’ ability to handle unfamiliar objects, a group from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) designed Feature Fields for Robotic Manipulation (F3RM), a system that blends 2D images with foundation model features into 3D scenes to help robots identify and grasp nearby items. F3RM can interpret open-ended language prompts from humans, making the method helpful in real-world environments that contain thousands of objects, like warehouses and households.

F3RM offers robots the ability to interpret open-ended text prompts using natural language, helping the machines manipulate objects. As a result, the machines can understand less-specific requests from humans and still complete the desired task. For example, if a user asks the robot to “pick up a tall mug,” the robot can locate and grab the item that best fits that description.

“Making robots that can actually generalize in the real world is incredibly hard,” says Ge Yang, postdoc at the National Science Foundation AI Institute for Artificial Intelligence and Fundamental Interactions and MIT CSAIL. “We really want to figure out how to do that, so with this project, we try to push for an aggressive level of generalization, from just three or four objects to anything we find in MIT’s Stata Center. We wanted to learn how to make robots as flexible as ourselves, since we can grasp and place objects even though we’ve never seen them before.”

Learning “what’s where by looking”

The method could assist robots with picking items in large fulfillment centers with inevitable clutter and unpredictability. In these warehouses, robots are often given a description of the inventory that they’re required to identify. The robots must match the text provided to an object, regardless of variations in packaging, so that customers’ orders are shipped correctly.

For example, the fulfillment centers of major online retailers can contain millions of items, many of which a robot will have never encountered before. To operate at such a scale, robots need to understand the geometry and semantics of different items, with some being in tight spaces. With F3RM’s advanced spatial and semantic perception abilities, a robot could become more effective at locating an object, placing it in a bin, and then sending it along for packaging. Ultimately, this would help factory workers ship customers’ orders more efficiently.

“One thing that often surprises people with F3RM is that the same system also works on a room and building scale, and can be used to build simulation environments for robot learning and large maps,” says Yang. “But before we scale up this work further, we want to first make this system work really fast. This way, we can use this type of representation for more dynamic robotic control tasks, hopefully in real-time, so that robots that handle more dynamic tasks can use it for perception.”

The MIT team notes that F3RM’s ability to understand different scenes could make it useful in urban and household environments. For example, the approach could help personalized robots identify and pick up specific items. The system aids robots in grasping their surroundings — both physically and perceptively.

“Visual perception was defined by David Marr as the problem of knowing ‘what is where by looking,’” says senior author Phillip Isola, MIT associate professor of electrical engineering and computer science and CSAIL principal investigator. “Recent foundation models have gotten really good at knowing what they are looking at; they can recognize thousands of object categories and provide detailed text descriptions of images. At the same time, radiance fields have gotten really good at representing where stuff is in a scene. The combination of these two approaches can create a representation of what is where in 3D, and what our work shows is that this combination is especially useful for robotic tasks, which require manipulating objects in 3D.”

Creating a “digital twin”

F3RM begins to understand its surroundings by taking pictures on a selfie stick. The mounted camera snaps 50 images at different poses, enabling it to build a neural radiance field (NeRF), a deep learning method that takes 2D images to construct a 3D scene. This collage of RGB photos creates a “digital twin” of its surroundings in the form of a 360-degree representation of what’s nearby.

In addition to a highly detailed neural radiance field, F3RM also builds a feature field to augment geometry with semantic information. The system uses CLIP, a vision foundation model trained on hundreds of millions of images to efficiently learn visual concepts. By reconstructing the 2D CLIP features for the images taken by the selfie stick, F3RM effectively lifts the 2D features into a 3D representation.
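
One way to picture that “lifting” step, assuming a NeRF-style density network is already being trained, is to alpha-composite per-point feature vectors along each camera ray and supervise the rendered feature against the 2D CLIP feature for that pixel. The PyTorch sketch below is a schematic rendition of such a distillation loss, not F3RM’s actual implementation; the tiny `density_mlp`, `feature_mlp` and the random rays and targets are placeholders.

```python
import torch
import torch.nn as nn

# Schematic sketch of distilling 2D image features into a 3D feature field.
# The networks, ray samples and CLIP targets below are placeholders.

density_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1))
feature_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 512))  # 512-d CLIP-like features

def render_ray_feature(origins, dirs, n_samples=32):
    """Alpha-composite per-point features along each ray (standard volume rendering)."""
    t = torch.linspace(0.1, 4.0, n_samples)                          # sample depths
    pts = origins[:, None, :] + dirs[:, None, :] * t[None, :, None]  # (rays, samples, 3)
    sigma = torch.relu(density_mlp(pts)).squeeze(-1)                 # per-sample densities
    alpha = 1.0 - torch.exp(-sigma * (t[1] - t[0]))                  # per-sample opacity
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha[:, :-1]], dim=1), dim=1
    )
    weights = alpha * trans                                          # rendering weights
    feats = feature_mlp(pts)                                         # per-point features
    return (weights[..., None] * feats).sum(dim=1)                   # (rays, 512)

# Fake batch of rays and "ground-truth" 2D CLIP features for the corresponding pixels.
rays_o = torch.zeros(128, 3)
rays_d = nn.functional.normalize(torch.randn(128, 3), dim=-1)
clip_targets = nn.functional.normalize(torch.randn(128, 512), dim=-1)

pred = nn.functional.normalize(render_ray_feature(rays_o, rays_d), dim=-1)
loss = nn.functional.mse_loss(pred, clip_targets)
loss.backward()
print(f"feature distillation loss: {loss.item():.4f}")
```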

Keeping things open-ended

After receiving a few demonstrations, the robot applies what it knows about geometry and semantics to grasp objects it has never encountered before. Once a user submits a text query, the robot searches through the space of possible grasps to identify those most likely to succeed in picking up the object requested by the user. Each potential option is scored based on its relevance to the prompt, similarity to the demonstrations the robot has been trained on, and if it causes any collisions. The highest-scored grasp is then chosen and executed.
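
That selection step can be sketched as a weighted score over sampled grasp candidates with exactly the three terms described above: relevance to the language prompt, similarity to the demonstrations, and a collision check. The embeddings, weights and collision test below are random placeholders, not F3RM’s real pipeline.

```python
import numpy as np

# Schematic grasp scoring mirroring the three criteria in the text:
# language relevance, similarity to demonstrated grasps, and collision filtering.
# All embeddings and the collision check are random stand-ins.

rng = np.random.default_rng(0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

text_embedding = rng.normal(size=512)     # stand-in for the encoded text query
demo_embedding = rng.normal(size=512)     # stand-in for features at a demonstrated grasp

scored = []
for i in range(50):                       # sampled candidate grasp poses
    grasp_feature = rng.normal(size=512)  # feature-field value at the candidate grasp
    collides = rng.random() < 0.2         # stand-in collision check
    score = (cosine(grasp_feature, text_embedding)      # relevance to the prompt
             + cosine(grasp_feature, demo_embedding)    # similarity to demonstrations
             - (np.inf if collides else 0.0))           # discard colliding candidates
    scored.append((score, i))

best_score, best_idx = max(scored)
print(f"Executing grasp #{best_idx} with score {best_score:.3f}")
```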

To demonstrate the system’s ability to interpret open-ended requests from humans, the researchers prompted the robot to pick up Baymax, a character from Disney’s “Big Hero 6.” While F3RM had never been directly trained to pick up a toy of the cartoon superhero, the robot used its spatial awareness and vision-language features from the foundation models to decide which object to grasp and how to pick it up.

F3RM also enables users to specify which object they want the robot to handle at different levels of linguistic detail. For example, if there is a metal mug and a glass mug, the user can ask the robot for the “glass mug.” If the bot sees two glass mugs and one of them is filled with coffee and the other with juice, the user can ask for the “glass mug with coffee.” The foundation model features embedded within the feature field enable this level of open-ended understanding.
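
To illustrate how extra linguistic detail can disambiguate similar objects, the short sketch below encodes the two mug queries with an off-the-shelf CLIP text encoder (via the open_clip library) and compares them against text descriptions standing in for the two mugs. F3RM itself compares text embeddings against its 3D feature field rather than against other text, so this is only a stand-alone illustration of the disambiguation idea.

```python
import torch
import open_clip  # pip install open_clip_torch; pretrained weights download on first use

# Encode queries at two levels of detail and compare them with text descriptions
# standing in for two mugs in the scene. This is a text-only illustration, not F3RM.
model, _, _ = open_clip.create_model_and_transforms("ViT-B-32", pretrained="laion2b_s34b_b79k")
tokenizer = open_clip.get_tokenizer("ViT-B-32")

queries = ["glass mug", "glass mug with coffee"]
objects = ["an empty glass mug", "a glass mug filled with coffee"]

with torch.no_grad():
    q = model.encode_text(tokenizer(queries))
    o = model.encode_text(tokenizer(objects))
    q = q / q.norm(dim=-1, keepdim=True)
    o = o / o.norm(dim=-1, keepdim=True)
    sims = q @ o.T  # cosine similarity: (2 queries, 2 candidate objects)

for qi, query in enumerate(queries):
    for oi, obj in enumerate(objects):
        print(f"{query!r} vs {obj!r}: {sims[qi, oi].item():.3f}")
```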

“If I showed a person how to pick up a mug by the lip, they could easily transfer that knowledge to pick up objects with similar geometries such as bowls, measuring beakers, or even rolls of tape. For robots, achieving this level of adaptability has been quite challenging,” says MIT PhD student, CSAIL affiliate, and co-lead author William Shen. “F3RM combines geometric understanding with semantics from foundation models trained on internet-scale data to enable this level of aggressive generalization from just a small number of demonstrations.”

Shen and Yang wrote the paper under the supervision of Isola, with MIT professor and CSAIL principal investigator Leslie Pack Kaelbling and undergraduate students Alan Yu and Jansen Wong as co-authors. The team was supported, in part, by Amazon.com Services, the National Science Foundation, the Air Force Office of Scientific Research, the Office of Naval Research’s Multidisciplinary University Initiative, the Army Research Office, the MIT-IBM Watson Lab, and the MIT Quest for Intelligence. Their work will be presented at the 2023 Conference on Robot Learning.

Spider-inspired, shape-changing robot now even smaller

This shape-changing robot just got a lot smaller. In a new study, engineers at the University of Colorado Boulder debuted mCLARI, a 2-centimeter-long modular robot that can passively change its shape to squeeze through narrow gaps in multiple directions. It weighs less than a gram but can support over three times its body weight as an additional payload.

Strategy for promoting adaptive grasping, dexterous manipulation, and human-robot interaction with tactile sensing

Hands possess an awe-inspiring ability to perceive friction forces with remarkable accuracy, all thanks to the mechanical receptors nestled within skin. This natural gift allows objects to be handled deftly and tools to be wielded effortlessly, infusing daily life with a delightful flexibility. But what if this tactile prowess could be unlocked in robots?

Robot Talk Episode 60 – Carl Strathearn

Claire chatted to Carl Strathearn from Edinburgh Napier University about humanoid robots, realistic robot faces and speech.

Carl Strathearn is a researcher interested in creating assistive social humanoid robots with embodied AI systems that appear, function, and interact like humans. He believes that creating realistic humanoid robots is significant to humanity as the human face is the most natural interface for human communication, and by emulating these conditions, we can increase accessibility to state-of-the-art technology for everyone and support people with specific health conditions and circumstances in their day-to-day lives.
