Archive 13.07.2023


Improving Stability of Bipedal Robots

When building bipedal robots (robots that walk on 2 legs), ensuring stability is a primary objective.

The following strategies are among the most important:

1- First, before considering anything else, the robot's physical shape and mass distribution must be made as favorable as possible for a proper, stable stance (a minimal stability check is sketched after this item). In practice this means:
- the center of mass is as low as possible (for example, by placing heavier components at the bottom and/or making the upper components from lighter materials);
- the mass is distributed as evenly as possible;
- the mass bears on as wide a support area as possible (this can be achieved by positioning and proportioning the legs and feet accordingly, though it will affect maneuverability).

These principles simply follow basic physics, much as in making buildings stable. If this item is not done properly, the measures below can only go so far, or they will require difficult, costly, and time-consuming solutions. It is similar to an architect designing the overall shape of a building as regularly as possible in the first place, so that forces flow smoothly and efficiently from the top floors all the way down to the foundation. If the architect's design is irregular, there is only so much the structural engineer can do to accommodate those irregularities, or the solution will need stronger members, connections, and load-carrying systems, which are costlier and slower to build.
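As a rough illustration of these principles, the sketch below (with made-up component masses and a rectangular support area standing in for the real support polygon) checks whether the combined center of mass of a robot's parts projects onto the area between its feet:

```python
# Minimal sketch: check static stability by verifying that the combined
# center of mass (CoM) projects onto the support area between the feet.
# Component masses, positions, and the support rectangle are illustrative.

def center_of_mass(components):
    """components: list of (mass_kg, (x, y, z)) tuples."""
    total = sum(m for m, _ in components)
    x = sum(m * p[0] for m, p in components) / total
    y = sum(m * p[1] for m, p in components) / total
    z = sum(m * p[2] for m, p in components) / total
    return x, y, z

def statically_stable(com, support_x, support_y):
    """support_x / support_y: (min, max) extents of the support polygon,
    approximated here as an axis-aligned rectangle under the feet."""
    x, y, _ = com
    return support_x[0] <= x <= support_x[1] and support_y[0] <= y <= support_y[1]

# Heavy battery low in the torso, lighter sensor head on top (example values).
parts = [(3.0, (0.00, 0.0, 0.10)),   # battery pack
         (1.0, (0.02, 0.0, 0.45)),   # torso electronics
         (0.3, (0.00, 0.0, 0.70))]   # camera head
com = center_of_mass(parts)
print(com, statically_stable(com, support_x=(-0.10, 0.10), support_y=(-0.08, 0.08)))
```

Keeping heavy parts low drives the z-coordinate of the center of mass down, and widening the stance enlarges the rectangle the center of mass must stay inside.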

2- The robot must be able to predict its immediate future state and adjust its controls accordingly.

3- The robot's joints and limbs must be designed to provide balanced, stable motion. The control algorithm must adjust joint torques in real time in response to sensor feedback.
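As a hedged example of what adjusting joint torques from sensor feedback can look like, here is a minimal proportional-derivative (PD) balance loop; the gains, torque limit, loop rate, and the read_tilt()/apply_torque() calls are illustrative assumptions, not any particular robot's controller.

```python
# Minimal sketch of a PD balance loop: the ankle torque is recomputed each
# control cycle from the measured body tilt (e.g., the pitch angle of an IMU).
# Gains, limits, and the sensor/actuator functions are placeholders.

KP, KD = 40.0, 5.0     # proportional and derivative gains (illustrative)
MAX_TORQUE = 12.0      # N*m actuator limit (illustrative)
DT = 0.005             # 200 Hz control loop

def pd_balance_step(tilt, prev_tilt):
    """Return an ankle torque that pushes the measured tilt back toward zero."""
    tilt_rate = (tilt - prev_tilt) / DT
    torque = -(KP * tilt + KD * tilt_rate)
    return max(-MAX_TORQUE, min(MAX_TORQUE, torque))

# Usage inside a control loop (read_tilt / apply_torque are hypothetical):
# prev = read_tilt()
# while walking:
#     tilt = read_tilt()                      # IMU pitch angle, radians
#     apply_torque(pd_balance_step(tilt, prev))
#     prev = tilt
```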

4- Machine learning and adaptation techniques can also be applied. With a proper algorithm, the robot can learn from its failures and refine its responses.

5- Sensor inputs must provide adequate information to the robot's control algorithm. Visual, inertial, force, and torque sensors are necessary components.

See also the term Zero Moment Point (ZMP).
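For reference, under the common point-mass ("cart-table") simplification on flat ground, the ZMP can be estimated from the center of mass position and its horizontal acceleration; the robot remains dynamically balanced while the ZMP stays inside the support polygon. A minimal sketch with illustrative numbers:

```python
# Sketch: ZMP estimate under the point-mass ("cart-table") simplification on
# flat ground: x_zmp = x_com - (z_com / g) * x''_com (and likewise for y).

G = 9.81  # gravitational acceleration, m/s^2

def zmp_estimate(com_pos, com_acc):
    """com_pos = (x, y, z) of the center of mass in meters;
    com_acc = (ax, ay) its horizontal acceleration in m/s^2.
    Returns the estimated (x, y) of the Zero Moment Point."""
    x, y, z = com_pos
    ax, ay = com_acc
    return x - (z / G) * ax, y - (z / G) * ay

# Example: CoM 0.8 m high, accelerating forward at 1.5 m/s^2.
print(zmp_estimate((0.0, 0.0, 0.8), (1.5, 0.0)))  # ZMP shifts ~0.12 m backward
```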

Magnetic robots walk, crawl, and swim

MIT professor of materials science and engineering and brain and cognitive sciences Polina Anikeeva in her lab. Photo: Steph Stevens

By Jennifer Michalowski | McGovern Institute for Brain Research

MIT scientists have developed tiny, soft-bodied robots that can be controlled with a weak magnet. The robots, formed from rubbery magnetic spirals, can be programmed to walk, crawl, swim — all in response to a simple, easy-to-apply magnetic field.

“This is the first time this has been done, to be able to control three-dimensional locomotion of robots with a one-dimensional magnetic field,” says Professor Polina Anikeeva, whose team published an open-access paper on the magnetic robots in the journal Advanced Materials. “And because they are predominantly composed of polymer and polymers are soft, you don’t need a very large magnetic field to activate them. It’s actually a really tiny magnetic field that drives these robots,” adds Anikeeva, who is a professor of materials science and engineering and brain and cognitive sciences at MIT, a McGovern Institute for Brain Research associate investigator, as well as the associate director of MIT’s Research Laboratory of Electronics and director of MIT’s K. Lisa Yang Brain-Body Center.

The new robots are well suited to transport cargo through confined spaces and their rubber bodies are gentle on fragile environments, opening the possibility that the technology could be developed for biomedical applications. Anikeeva and her team have made their robots millimeters long, but she says the same approach could be used to produce much smaller robots.

Magnetically actuated fiber-based soft robots

Engineering magnetic robots

Anikeeva says that until now, magnetic robots have moved in response to moving magnetic fields. She explains that for these models, “if you want your robot to walk, your magnet walks with it. If you want it to rotate, you rotate your magnet.” That limits the settings in which such robots might be deployed. “If you are trying to operate in a really constrained environment, a moving magnet may not be the safest solution. You want to be able to have a stationary instrument that just applies magnetic field to the whole sample,” she explains.

Youngbin Lee PhD ’22, a former graduate student in Anikeeva’s lab, engineered a solution to this problem. The robots he developed in Anikeeva’s lab are not uniformly magnetized. Instead, they are strategically magnetized in different zones and directions so a single magnetic field can enable a movement-driving profile of magnetic forces.

Before they are magnetized, however, the flexible, lightweight bodies of the robots must be fabricated. Lee starts this process with two kinds of rubber, each with a different stiffness. These are sandwiched together, then heated and stretched into a long, thin fiber. Because of the two materials’ different properties, one of the rubbers retains its elasticity through this stretching process, but the other deforms and cannot return to its original size. So when the strain is released, one layer of the fiber contracts, tugging on the other side and pulling the whole thing into a tight coil. Anikeeva says the helical fiber is modeled after the twisty tendrils of a cucumber plant, which spiral when one layer of cells loses water and contracts faster than a second layer.

A third material — one whose particles have the potential to become magnetic — is incorporated in a channel that runs through the rubbery fiber. So once the spiral has been made, a magnetization pattern that enables a particular type of movement can be introduced.

“Youngbin thought very carefully about how to magnetize our robots to make them able to move just as he programmed them to move,” Anikeeva says. “He made calculations to determine how to establish such a profile of forces on it when we apply a magnetic field that it will actually start walking or crawling.”

To form a caterpillar-like crawling robot, for example, the helical fiber is shaped into gentle undulations, and then the body, head, and tail are magnetized so that a magnetic field applied perpendicular to the robot’s plane of motion will cause the body to compress. When the field is reduced to zero, the compression is released, and the crawling robot stretches. Together, these movements propel the robot forward. Another robot in which two foot-like helical fibers are connected with a joint is magnetized in a pattern that enables a movement more like walking.

Biomedical potential

This precise magnetization process generates a program for each robot and ensures that once the robots are made, they are simple to control. A weak magnetic field activates each robot's program and drives its particular type of movement. A single magnetic field can even send multiple robots moving in opposite directions, if they have been programmed to do so. The team found that one minor manipulation of the magnetic field has a useful effect: with the flip of a switch to reverse the field, a cargo-carrying robot can be made to gently shake and release its payload.

Anikeeva says she can imagine these soft-bodied robots — whose straightforward production will be easy to scale up — delivering materials through narrow pipes, or even inside the human body. For example, they might carry a drug through narrow blood vessels, releasing it exactly where it is needed. She says the magnetically-actuated devices have biomedical potential beyond robots as well, and might one day be incorporated into artificial muscles or materials that support tissue regeneration.

On the Stepwise Nature of Self-Supervised Learning

Figure 1: stepwise behavior in self-supervised learning. When training common SSL algorithms, we find that the loss descends in a stepwise fashion (top left) and the learned embeddings iteratively increase in dimensionality (bottom left). Direct visualization of embeddings (right; top three PCA directions shown) confirms that embeddings are initially collapsed to a point, which then expands to a 1D manifold, a 2D manifold, and beyond concurrently with steps in the loss.

It is widely believed that deep learning’s stunning success is due in part to its ability to discover and extract useful representations of complex data. Self-supervised learning (SSL) has emerged as a leading framework for learning these representations for images directly from unlabeled data, similar to how LLMs learn representations for language directly from web-scraped text. Yet despite SSL’s key role in state-of-the-art models such as CLIP and MidJourney, fundamental questions like “what are self-supervised image systems really learning?” and “how does that learning actually occur?” lack basic answers.

Our recent paper (to appear at ICML 2023) presents what we suggest is the first compelling mathematical picture of the training process of large-scale SSL methods. Our simplified theoretical model, which we solve exactly, learns aspects of the data in a series of discrete, well-separated steps. We then demonstrate that this behavior can be observed in the wild across many current state-of-the-art systems. This discovery opens new avenues for improving SSL methods, and enables a whole range of new scientific questions that, when answered, will provide a powerful lens for understanding some of today’s most important deep learning systems.
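One way to observe the stepwise growth in embedding dimensionality described above (not the authors' code, just a minimal sketch) is to periodically compute the eigenvalue spectrum of the embedding covariance during training and count how many directions carry non-negligible variance:

```python
# Sketch: track the "effective dimensionality" of SSL embeddings by counting
# significant eigenvalues of their covariance matrix. Illustrative only; the
# threshold and evaluation schedule are arbitrary choices.
import numpy as np

def effective_dim(embeddings, threshold=0.01):
    """embeddings: (n_samples, d) array of model outputs on a held-out batch.
    Counts eigenvalues exceeding `threshold` times the largest eigenvalue."""
    centered = embeddings - embeddings.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / len(centered)
    eigvals = np.linalg.eigvalsh(cov)[::-1]          # sorted descending
    return int(np.sum(eigvals > threshold * eigvals[0]))

# Logged every few hundred training steps, this count would be expected to
# increase one step at a time, in sync with the drops in the SSL loss.
```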


UN tech agency rolls out human-looking robots for questions at a Geneva news conference

A United Nations technology agency assembled a group of robots that physically resembled humans at a news conference Friday, inviting reporters to ask them questions in an event meant to spark discussion about the future of artificial intelligence.

Visual navigation to objects in real homes

Today’s robots are often static and isolated from humans in structured environments — you can think of robot arms employed by Amazon for picking and packaging products within warehouses. But the true potential of robotics lies in mobile robots operating alongside humans in messy environments like our homes and hospitals — this requires navigation skills.

Imagine dropping a robot in a completely unseen home and asking it to find an object, let’s say a toilet. Humans can do this effortlessly: when looking for a glass of water at a friend’s house we’re visiting for the first time, we can easily find the kitchen without going to bedrooms or storage closets. But teaching this kind of spatial common sense to robots is challenging.

Many learning-based visual navigation policies have been proposed to tackle this problem. But learned visual navigation policies have predominantly been evaluated in simulation. How well do different classes of methods work on a robot?

We present a large-scale empirical study of semantic visual navigation methods comparing representative methods from classical, modular, and end-to-end learning approaches across six homes with no prior experience, maps, or instrumentation. We find that modular learning works well in the real world, attaining a 90% success rate. In contrast, end-to-end learning does not, dropping from 77% simulation to 23% real-world success rate due to a large image domain gap between simulation and reality.

Object goal navigation

We instantiate semantic navigation with the Object Goal navigation task, where a robot starts in a completely unseen environment and is asked to find an instance of an object category, let’s say a toilet. The robot has access to only a first-person RGB and depth camera and a pose sensor.

This task is challenging. It requires not only spatial scene understanding (distinguishing free space from obstacles) and semantic scene understanding (detecting objects), but also learning semantic exploration priors. For example, a human looking for a toilet in an unfamiliar home would typically head toward the hallway, because it is most likely to lead to one. Teaching this kind of common sense, or semantic priors, to an autonomous agent is challenging. While exploring the scene for the desired object, the robot also needs to remember which areas it has and has not explored.

Methods

So how do we train autonomous agents capable of efficient navigation while tackling all these challenges? A classical approach builds a geometric map using depth sensors, explores the environment with a heuristic such as frontier exploration (which always heads toward the closest unexplored region), and uses an analytical planner to reach exploration goals and, as soon as it is in sight, the goal object. An end-to-end learning approach predicts actions directly from raw observations with a deep neural network consisting of visual encoders for the image frames followed by a recurrent layer for memory. A modular learning approach builds a semantic map by projecting predicted semantic segmentation using depth, predicts an exploration goal with a goal-oriented semantic policy as a function of the semantic map and the goal object, and reaches it with an analytical planner.
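To make the structure of the modular approach concrete, here is a hedged sketch of its decision loop. The helper functions (init_map, segment, update_semantic_map, goal_visible, goal_location, exploration_policy, plan_action) and the robot interface are placeholders for the components named above, not an actual API.

```python
# Structural sketch of the modular object-goal navigation loop described
# above. All helpers are hypothetical placeholders: a semantic mapper, a
# learned goal-oriented exploration policy, and an analytical planner.

def modular_objectnav_episode(robot, goal_category, max_steps=200):
    semantic_map = init_map()                            # top-down semantic map
    for _ in range(max_steps):
        rgb, depth, pose = robot.observe()               # first-person RGB-D + pose
        segmentation = segment(rgb)                      # semantic segmentation
        semantic_map = update_semantic_map(              # project labels with depth
            semantic_map, segmentation, depth, pose)

        if goal_visible(semantic_map, goal_category):
            target = goal_location(semantic_map, goal_category)
        else:
            # Learned policy picks where to explore next, given map and goal.
            target = exploration_policy(semantic_map, goal_category)

        action = plan_action(semantic_map, pose, target)  # analytical planner
        if action == "STOP":
            return True                                   # goal reached
        robot.step(action)
    return False                                          # action budget exhausted
```

Swapping exploration_policy for a frontier-exploration heuristic recovers the classical baseline, which is what makes the comparison between the two approaches clean.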

Large-scale real-world empirical evaluation

While many approaches to navigate to objects have been proposed over the past few years, learned navigation policies have predominantly been evaluated in simulation, which opens the field to the risk of sim-only research that does not generalize to the real world. We address this issue through a large-scale empirical evaluation of representative classical, end-to-end learning, and modular learning approaches across 6 unseen homes and 6 goal object categories.

Results

We compare approaches in terms of success rate within a limited budget of 200 robot actions and Success weighted by Path Length (SPL), a measure of path efficiency. In simulation, all approaches perform comparably, at around an 80% success rate. But in the real world, modular learning and classical approaches transfer really well, up from 81% to 90% and from 78% to 80% success rates, respectively, while end-to-end learning fails to transfer, dropping from a 77% to a 23% success rate.
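For reference, SPL follows the standard definition: each episode's success is weighted by the ratio of the shortest-path length to the length of the path actually taken, then averaged over episodes. A minimal sketch with made-up episode numbers:

```python
# Success weighted by Path Length (SPL): each episode's success (0 or 1) is
# weighted by shortest_path_length / max(path_taken, shortest_path_length),
# then averaged over all episodes. Episode values below are illustrative.

def spl(episodes):
    """episodes: list of (success, shortest_path_len, agent_path_len) tuples."""
    total = 0.0
    for success, shortest, taken in episodes:
        total += success * shortest / max(taken, shortest)
    return total / len(episodes)

# Two successes (one efficient, one with a detour) and one failure.
print(spl([(1, 5.0, 5.5), (1, 5.0, 10.0), (0, 6.0, 12.0)]))  # ~0.47
```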

We illustrate these results qualitatively with one representative trajectory. All approaches start in a bedroom and are tasked with finding a couch. Modular learning successfully reaches the couch goal first; end-to-end learning fails after colliding too many times; and the classical policy eventually reaches the couch goal after a detour through the kitchen.

Result 1: modular learning is reliable

We find that modular learning is very reliable on a robot, with a 90% success rate. For example, it efficiently finds a plant in one home, a chair in a second, and a toilet in a third.

Result 2: modular learning explores more efficiently than classical

Modular learning improves the real-world success rate by 10 percentage points over the classical approach. In one representative episode, the goal-oriented semantic exploration policy heads directly toward the bedroom and finds the bed in 98 steps with an SPL of 0.90, while frontier exploration, being agnostic to the bed goal, makes detours through the kitchen and the entrance hallway before finally reaching the bed in 152 steps with an SPL of 0.52. With a limited time budget, inefficient exploration can lead to failure.

Result 3: end-to-end learning fails to transfer

While classical and modular learning approaches work well on a robot, end-to-end learning does not, at only 23% success rate. The policy collides often, revisits the same places, and even fails to stop in front of goal objects when they are in sight.

Analysis

Insight 1: why does modular transfer while end-to-end does not?

Why does modular learning transfer so well while end-to-end learning does not? To answer this question, we reconstructed one real-world home in simulation and conducted experiments with identical episodes in sim and reality.

The semantic exploration policy of the modular learning approach takes a semantic map as input, while the end-to-end policy operates directly on the RGB-D frames. The semantic map space is invariant between sim and reality, while the image space exhibits a large domain gap. In one example, this gap leads a segmentation model trained on real-world images to predict a false-positive bed in the kitchen.

The semantic map domain invariance allows the modular learning approach to transfer well from sim to reality. In contrast, the image domain gap causes a large drop in performance when transferring a segmentation model trained in the real world to simulation and vice versa. If semantic segmentation transfers poorly from sim to reality, it is reasonable to expect an end-to-end semantic navigation policy trained on sim images to transfer poorly to real-world images.

Insight 2: sim vs real gap in error modes for modular learning

Surprisingly, modular learning works even better in reality than simulation. Detailed analysis reveals that a lot of the failures of the modular learning policy that occur in sim are due to reconstruction errors, which do not happen in reality. Visual reconstruction errors represent 10% out of the total 19% episode failures, and physical reconstruction errors another 5%. In contrast, failures in the real world are predominantly due to depth sensor errors, while most semantic navigation benchmarks in simulation assume perfect depth sensing. Besides explaining the performance gap between sim and reality for modular learning, this gap in error modes is concerning because it limits the usefulness of simulation to diagnose bottlenecks and further improve policies. We show representative examples of each error mode and propose concrete steps forward to close this gap in the paper.

Takeaways

For practitioners:

  • Modular learning can reliably navigate to objects with 90% success.

For researchers:

  • Models relying on RGB images are hard to transfer from sim to real => leverage modularity and abstraction in policies.
  • Disconnect between sim and real error modes => evaluate semantic navigation on real robots.

For more content about robotics and machine learning, check out my blog.

Big robot bugs reveal force-sensing secrets of insect locomotion

Researchers have combined research with real and robotic insects to better understand how they sense forces in their limbs while walking, providing new insights into the biomechanics and neural dynamics of insects and informing new applications for large legged robots. They presented their findings at the SEB Centenary Conference 2023.