Archive 28.10.2022

Scientist develops an open-source algorithm for selecting a dictionary of a neurointerface

Alexandra Bernadotte, Ph.D., mathematician and doctor, Associate Professor in the Department of Information Technologies and Computer Sciences at MISIS University, has developed algorithms that significantly increase the accuracy with which robotic devices recognize mental commands. The improvement is achieved by optimizing the selection of the command dictionary. Implemented in robotic devices, the algorithms can also be used to transmit information over noisy communication channels. The results have been published in the peer-reviewed international scientific journal Mathematics.

An automated system to clean restrooms in convenience stores

Researchers at Tokyo Metropolitan University have created a robotic system that could automate the cleaning of restrooms in convenience stores and other public spaces. This system, introduced in a paper published in Advanced Robotics, will be competing in the Future Convenience Store Challenge (FCSC) at the World Robot Summit (WRS), a competition for state-of-the-art technologies to automate convenience stores.

Sarcos Successfully Executes Field Trials Demonstrating Suite of Robotic Technologies for Maintenance, Inspection, and Repair in Shipyard Operations

Sarcos robotic systems are designed to carry out maintenance, inspection, and repair activities on and around ships that are underway or pierside, creating safer and more effective shipyard operations and improving the efficiency of sailors and shipyard workers.

Big step towards tiny autonomous drones

Scientists have developed a theory that can explain how flying insects determine the direction of gravity without using accelerometers. It also marks a substantial step toward the creation of tiny, autonomous drones.

Scientists have discovered a novel way for flying drones and insects to estimate the direction of gravity. Whereas drones typically use accelerometers for this purpose, how flying insects manage it has until now been shrouded in mystery, since they lack a dedicated sense for acceleration. In an article published in Nature, scientists from TU Delft and Aix Marseille Université / CNRS in France have shown that drones can estimate the gravity direction by combining visual motion sensing with a model of how they move. The study is a great example of the synergy between technology and biology.

On the one hand, the new approach is an important step for the creation of autonomous tiny, insect-sized drones, since it requires fewer sensors. On the other hand, it forms a hypothesis for how insects control their attitude, as the theory forms a parsimonious explanation of multiple phenomena observed in biology.

The importance of finding the gravity direction

Successful flight requires knowing the direction of gravity. As ground-bound animals, we humans typically have no trouble determining which way is down. This becomes more difficult when flying, however. Indeed, the passengers in an airplane are normally not aware of the plane tilting slightly sideways in the air to make a wide circle. When humans first took to the skies, pilots relied purely on visually detecting the horizon line to determine the plane’s “attitude”, that is, its body orientation with respect to gravity. When flying through clouds, however, the horizon line is no longer visible, which can lead to an increasingly wrong impression of what is up and down, with potentially disastrous consequences.

Drones and flying insects also need to control their attitude. Drones typically use accelerometers to determine the direction of gravity. In flying insects, however, no sensory organ for measuring acceleration has been found. How insects estimate attitude therefore remains a mystery, and some even question whether they estimate attitude at all.

Optic flow suffices for finding attitude

Although it is unknown how flying insects estimate and control their attitude, it is very well known that they visually observe motion by means of “optic flow”. Optic flow captures the relative motion between an observer and its environment. For example, when sitting in a train, trees close by seem to move very fast (have a large optic flow), while mountains in the distance seem to move very slowly (have a small optic flow).
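
The train example can be made concrete with a little arithmetic: for an observer translating at speed v, a point at perpendicular distance d sweeps past at an angular rate of roughly v / d radians per second. A minimal sketch (the speeds and distances below are illustrative, not taken from the study):

```python
import math

def optic_flow(speed_mps: float, distance_m: float) -> float:
    """Angular rate (rad/s) at which a point at the given perpendicular
    distance appears to move, for an observer translating at speed_mps."""
    return speed_mps / distance_m

train_speed = 30.0  # ~108 km/h, illustrative

# A tree 10 m from the track streams past quickly...
print(math.degrees(optic_flow(train_speed, 10.0)), "deg/s")      # ~172 deg/s

# ...while a mountain 10 km away barely seems to move at all.
print(math.degrees(optic_flow(train_speed, 10_000.0)), "deg/s")  # ~0.17 deg/s
```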

“Optic flow itself carries no information on attitude. However, we found out that combining optic flow with a motion model makes it possible to retrieve the gravity direction,” says Guido de Croon, full professor of bio-inspired micro air vehicles at TU Delft. “Having a motion model means that a robot or animal can predict how it will move when taking actions. For example, drones can predict what will happen when they spin their two right propellers faster than their left propellers. Since a drone’s attitude determines in which direction it accelerates, and this direction can be picked up from changes in optic flow, the combination allows a drone to determine its attitude.”
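
De Croon’s explanation can be sketched in a deliberately simplified, one-dimensional form. Suppose a drone holds a fixed height h over flat ground with a downward-looking camera, so the ventral optic flow equals v / h; the rate of change of that flow then reveals the horizontal acceleration, and the motion model (near hover, a drone pitched by θ accelerates at roughly g·tan θ) converts that acceleration into an attitude estimate, no accelerometer required. Everything below (the dynamics, gain, and noise level) is an illustrative assumption, not the authors’ implementation:

```python
import math, random

g, h, dt = 9.81, 2.0, 0.01   # gravity (m/s^2), height (m), timestep (s)

theta_true = math.radians(5.0)   # true pitch angle, unknown to the estimator
v, theta_est = 0.0, 0.0          # horizontal velocity; attitude estimate
flow_prev = v / h                # previous ventral optic flow reading
k = 0.05                         # filter gain (illustrative)

for _ in range(2000):
    # True dynamics: the tilted thrust vector accelerates the drone sideways.
    v += g * math.tan(theta_true) * dt

    # Downward-looking camera over flat ground: ventral flow is v / h.
    flow = v / h + random.gauss(0.0, 0.002)

    # Invert the motion model: the rate of change of the flow implies an
    # acceleration, and near hover that acceleration implies an attitude.
    a_obs = (flow - flow_prev) / dt * h
    theta_obs = math.atan2(a_obs, g)
    flow_prev = flow

    # Complementary-filter-style correction of the running estimate.
    theta_est += k * (theta_obs - theta_est)

print(f"estimated pitch: {math.degrees(theta_est):.1f} deg (true: 5.0)")
```

If the drone were held perfectly still, the flow derivative would vanish and the estimate would drift toward zero, which is exactly the observability gap discussed next.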

The theoretical analysis in the article shows that finding the gravity direction with optic flow works under almost any condition, except for specific cases such as when the observer is completely still. “Whereas engineers would find such an observability problem unacceptable, we hypothesise that nature has simply accepted it,” says Guido de Croon. “In the article we provide a theoretical proof that, despite this problem, an attitude controller will still work around hover at the cost of slight oscillations, reminiscent of the more erratic flight behaviour of flying insects.”

Implications for robotics

The researchers confirmed the theory’s validity with robotic implementations, demonstrating its promise for the field of robotics. De Croon: “Tiny flapping wing drones can be useful for tasks like search-and-rescue or pollination. Designing such drones means dealing with a major challenge that nature also had to face; how to achieve a fully autonomous system subject to extreme payload restrictions. This makes even tiny accelerometers a considerable burden. Our proposed theory will contribute to the design of tiny drones by allowing for a smaller sensor suite.”

Biological insights

The proposed theory has the potential to give insight into various biological phenomena. “It was known that optic flow played a role in attitude control, but until now the precise mechanism for this was unclear,” explains Franck Ruffier, bio-roboticist and director of research at Aix Marseille Université / CNRS. “The proposed theory can explain how flying insects succeed in estimating and controlling their attitude even in difficult, cluttered environments where the horizon line is not visible. It also provides insight into other phenomena, for example, why locusts fly less well when their ocelli (eyes on the top of their heads) are occluded.”

“We expect that novel biological experiments, specifically designed to test our theory, will be necessary to verify the use of the proposed mechanism in insects,” adds Franck Ruffier.

The original publication appeared in Nature. The scientific article shows how the synergy between robotics and biology can lead to technological advances and novel avenues for biological research.


Research team proposes unclonable, invisible machine vision markers using cholesteric spherical reflectors

Over the last three decades, the digital world that we access through smartphones and computers has grown so rich and detailed that much of our physical world has a corresponding life in this digital reality. Today, the physical and digital realities are on a steady course to merging, as robots, Augmented Reality (AR) and wearable digital devices enter our physical world, and physical items get their digital twin computer representations in the digital world.

How AI image generators could help robots

AI image generators, which create fantastical sights at the intersection of dreams and reality, bubble up on every corner of the web. Their entertainment value is demonstrated by an ever-expanding treasure trove of whimsical and random images serving as indirect portals to the brains of human designers. A simple text prompt yields a nearly instantaneous image, satisfying our primitive brains, which are hardwired for instant gratification.

Using small drones to measure wind speeds in the polar regions

Drones and similar small unmanned aerial vehicles (sUAVs) have seen a massive surge in popularity over the past few years due to their innovative applications, such as crop monitoring, search and rescue operations, and coast profiling. The potential of sUAVs in atmospheric science and meteorology has not gone unnoticed either, as drones offer an efficient way to deploy various kinds of sensors in the lower atmosphere.

RobotFalcon found to be effective in chasing off flocks of birds around airports

A team of researchers from the University of Groningen, the University of Tuscia, Roflight, Lemselobrink and the Royal Netherlands Air Force has designed, built and tested RobotFalcon, a robot fashioned to look and fly like a peregrine falcon, as a means of driving off flocks of birds around airports. The group describes its approach in the Journal of the Royal Society Interface.

Reprogrammable materials selectively self-assemble

With just a random disturbance that energizes the cubes, they selectively self-assemble into a larger block. Photos courtesy of MIT CSAIL.

By Rachel Gordon | MIT CSAIL

While automated manufacturing is ubiquitous today, it was once a nascent field birthed by inventors such as Oliver Evans, who is credited with creating the first fully automated industrial process, in a flour mill he built and gradually automated in the late 1700s. The processes for creating automated structures or machines are still very top-down, requiring humans, factories, or robots to do the assembling and making.

However, the way nature does assembly is ubiquitously bottom-up; animals and plants are self-assembled at a cellular level, relying on proteins to self-fold into target geometries that encode all the different functions that keep us ticking. For a more bio-inspired, bottom-up approach to assembly, then, human-architected materials need to do better on their own. Making them scalable, selective, and reprogrammable in a way that could mimic nature’s versatility comes with some teething problems, though.

Now, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have attempted to get over these growing pains with a new method: introducing magnetically reprogrammable materials that they coat different parts with — like robotic cubes — to let them self-assemble. Key to their process is a way to make these magnetic programs highly selective about what they connect with, enabling robust self-assembly into specific shapes and chosen configurations. 

The soft magnetic material the researchers used for the coating, sourced from inexpensive refrigerator magnets, endows each of the cubes they built with a magnetic signature on each of its faces. The signatures ensure that each face is selectively attractive to only one other face from all the other cubes, in both translation and rotation. All of the cubes, which cost about 23 cents each, can be magnetically programmed at a very fine resolution. Once they’re tossed into a water tank (the team used eight cubes for a demo) with a totally random disturbance (you could even just shake them in a box), they’ll bump into each other. If they meet the wrong mate, they’ll drop off, but if they find their suitable mate, they’ll attach.

An analogy would be to think of a set of furniture parts that you need to assemble into a chair. Traditionally, you’d need a set of instructions to manually assemble the parts into a chair (a top-down approach), but using the researchers’ method, these same parts, once programmed magnetically, would self-assemble into the chair given just a random disturbance that makes them collide. Without the signatures the researchers generate, however, the chair might assemble with its legs in the wrong places.

“This work is a step forward in terms of the resolution, cost, and efficacy with which we can self-assemble particular structures,” says Martin Nisser, a PhD student in MIT’s Department of Electrical Engineering and Computer Science (EECS), an affiliate of CSAIL, and the lead author on a new paper about the system. “Prior work in self-assembly has typically required individual parts to be geometrically dissimilar, just like puzzle pieces, which requires individual fabrication of all the parts. Using magnetic programs, however, we can bulk-manufacture homogeneous parts and program them to acquire specific target structures, and importantly, reprogram them to acquire new shapes later on without having to refabricate the parts anew.” 

Using the team’s magnetic plotting machine, one can stick a cube back in the plotter and reprogram it. Every time the plotter touches the material, it creates either a “north”- or “south”-oriented magnetic pixel on the cube’s soft magnetic coating, letting the cubes be repurposed to assemble new target shapes when required. Before plotting, a search algorithm checks each signature for mutual compatibility with all previously programmed signatures to ensure they are selective enough for successful self-assembly.

With self-assembly, you can go the passive or active route. With active assembly, robotic parts modulate their behavior online to locate, position, and bond to their neighbors, and each module needs to be embedded with the computation, sensing, and actuation hardware required for self-assembly. What’s more, a human or computer is needed in the loop to actively control the actuators embedded in each part to make it move. While active assembly has been successful in reconfiguring a variety of robotic systems, the cost and complexity of the electronics and actuators have been a significant barrier to scaling self-assembling hardware up in numbers and down in size.

With passive methods like these researchers’, there’s no need for embedded actuation and control.

Once programmed and set free under a random disturbance that gives them the energy to collide with one another, they’re on their own to shapeshift, without any guiding intelligence.  

If you want a structure built from hundreds or thousands of parts, like a ladder or bridge, for example, you wouldn’t want to manufacture a million uniquely different parts, or to have to re-manufacture them when you need a second structure assembled.

The trick the team used toward this goal lies in the mathematical description of the magnetic signatures, which describes each signature as a 2D matrix of pixels. These matrices ensure that any magnetically programmed parts that shouldn’t connect will interact to produce just as many pixels in attraction as those in repulsion, letting them remain agnostic to all non-mating parts in both translation and rotation. 
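
This cancellation property can be checked numerically. In the sketch below (an illustration, not the authors’ code), signatures are 4×4 matrices of +1/−1 pixels built as outer products of mutually orthogonal Hadamard codes; a face’s mate is its mirrored, opposite-polarity pattern, and the check scans the four aligned relative rotations (the real signatures also cancel under translation, omitted here for brevity). This is also the kind of test the pre-plotting search algorithm must run against every previously accepted signature:

```python
import numpy as np

# Rows of a 4x4 Hadamard matrix: mutually orthogonal +/-1 codes.
h1 = np.array([1, -1,  1, -1])
h2 = np.array([1,  1, -1, -1])
h3 = np.array([1, -1, -1,  1])

def net_interaction(a: np.ndarray, b: np.ndarray) -> list[int]:
    """Net attraction between two face signatures for the four aligned
    relative rotations. When two faces meet, one is seen mirrored, and
    opposite pixel polarities attract: attraction = -sum(a * b_mirrored)."""
    b_seen = np.fliplr(b)  # the approaching face is seen mirror-reversed
    return [int(-(a * np.rot90(b_seen, r)).sum()) for r in range(4)]

sig   = np.outer(h1, h3)   # one face's signature
mate  = -np.fliplr(sig)    # its mate: mirrored, opposite polarity
other = np.outer(h2, h1)   # a non-mating signature

# Full attraction (all 16 pixels) in exactly one rotation, no false bond:
print(net_interaction(sig, mate))   # [16, 0, -16, 0]

# Attraction and repulsion cancel pixel-for-pixel in every rotation:
print(net_interaction(sig, other))  # [0, 0, 0, 0]
```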

While the system is currently good enough to do self-assembly using a handful of cubes, the team wants to further develop the mathematical descriptions of the signatures. In particular, they want to leverage design heuristics that would enable assembly with very large numbers of cubes, while avoiding computationally expensive search algorithms. 

“Self-assembly processes are ubiquitous in nature, leading to the incredibly complex and beautiful life we see all around us,” says Hod Lipson, the James and Sally Scapa Professor of Innovation at Columbia University, who was not involved in the paper. “But the underpinnings of self-assembly have baffled engineers: How do two proteins destined to join find each other in a soup of billions of other proteins? Lacking the answer, we have been able to self-assemble only relatively simple structures so far, and resort to top-down manufacturing for the rest. This paper goes a long way to answer this question, proposing a new way in which self-assembling building blocks can find each other. Hopefully, this will allow us to begin climbing the ladder of self-assembled complexity.”

Nisser wrote the paper alongside recent EECS graduates Yashaswini Makaram ’21 and Faraz Faruqi SM ’22, both of whom are former CSAIL affiliates; Ryo Suzuki, assistant professor of computer science at the University of Calgary; and MIT associate professor of EECS Stefanie Mueller, who is a CSAIL affiliate. They will present their research at the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022).
