Special drone collects environmental DNA from trees

By Peter Rüegg

Ecologists are increasingly using traces of genetic material that living organisms leave behind in the environment, called environmental DNA (eDNA), to catalogue and monitor biodiversity. Based on these DNA traces, researchers can determine which species are present in a certain area.

Obtaining samples from water or soil is easy, but other habitats – such as the forest canopy – are difficult for researchers to access. As a result, many species remain untracked in poorly explored areas.

Researchers at ETH Zurich and the Swiss Federal Institute for Forest, Snow and Landscape Research WSL have partnered with the company SPYGEN to develop a special drone that can autonomously collect samples from tree branches.

(Video: ETH Zürich)

How the drone collects material

The drone is equipped with adhesive strips. When the aircraft lands on a branch, material from the branch sticks to these strips. Researchers can then extract the DNA in the lab, analyse it and assign it to the various organisms by comparing the sequences with genetic databases.
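The database comparison itself boils down to finding the reference sequence that most closely matches each read. The sketch below is purely illustrative – the reference snippets, the similarity measure and the threshold are invented, and real eDNA workflows rely on curated barcode databases and alignment tools such as BLAST:

```python
# Illustrative only: toy taxon assignment by comparing a read against
# reference sequences. The "barcode" snippets, the similarity measure and
# the 90% threshold are invented for this sketch; real eDNA workflows use
# curated databases and alignment tools such as BLAST.

def identity(a: str, b: str) -> float:
    """Fraction of matching positions over the shorter sequence."""
    n = min(len(a), len(b))
    return sum(x == y for x, y in zip(a, b)) / n if n else 0.0

REFERENCE = {  # hypothetical reference barcodes
    "Parus major (great tit)":         "ACCTGGTATTTGGTGCCTGAGC",
    "Sciurus vulgaris (red squirrel)": "ACCTGCTATTTAGCGCTTGAGC",
}

def assign_taxon(query: str, threshold: float = 0.9) -> str:
    best_taxon, best_score = "unassigned", 0.0
    for taxon, ref in REFERENCE.items():
        score = identity(query, ref)
        if score > best_score:
            best_taxon, best_score = taxon, score
    return best_taxon if best_score >= threshold else "unassigned"

read_from_branch = "ACCTGGTATTTGGTGCCTGAGC"   # a sequence read from a sample
print(assign_taxon(read_from_branch))          # -> Parus major (great tit)
```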

But not all branches are the same: they vary in terms of their thickness and elasticity. Branches also bend and rebound when a drone lands on them. Programming the aircraft in such a way that it can still approach a branch autonomously and remain stable on it long enough to take samples was a major challenge for the roboticists.

“Landing on branches requires complex control,” explains Stefano Mintchev, Professor of Environmental Robotics at ETH Zurich and WSL. Initially, the drone does not know how flexible a branch is, so the researchers fitted it with a force-sensing cage. This allows the drone to measure the branch’s flexibility on site and incorporate it into its landing manoeuvre.
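A rough way to picture what the force-sensing cage contributes: from the measured contact force and the deflection of the branch, the drone can estimate a stiffness and soften or firm up its landing accordingly. The numbers and the adaptation rule below are invented for illustration and are not taken from the published controller:

```python
# Toy sketch of the force-sensing idea: estimate how compliant a branch is
# from the measured reaction force and the deflection it causes, then adapt
# the landing behaviour. All numbers, thresholds and the adaptation rule are
# hypothetical.

def estimate_stiffness(force_n: float, deflection_m: float) -> float:
    """Simple spring model: k = F / x (N/m)."""
    return force_n / max(deflection_m, 1e-6)

def landing_gain(stiffness: float) -> float:
    """Softer branch -> gentler, slower touch-down (hypothetical rule)."""
    soft, stiff = 50.0, 500.0                        # assumed N/m bounds
    s = min(max(stiffness, soft), stiff)
    return 0.2 + 0.8 * (s - soft) / (stiff - soft)   # gain between 0.2 and 1.0

k = estimate_stiffness(force_n=1.2, deflection_m=0.03)   # ~40 N/m: a springy twig
print(f"estimated stiffness: {k:.0f} N/m, landing gain: {landing_gain(k):.2f}")
```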

Scheme: DNA is extracted from the collected branch material, amplified and sequenced, and the resulting sequences are compared with databases. This allows the species to be identified. (Graphic: Stefano Mintchev / ETH Zürich)

Preparing rainforest operations at Zoo Zurich

Researchers have tested their new device on seven tree species. In the samples, they found DNA from 21 distinct groups of organisms, or taxa, including birds, mammals and insects. “This is encouraging, because it shows that the collection technique works,” says Mintchev, who co-authored the study that has appeared in the journal Science Robotics.

The researchers now want to improve their drone further to get it ready for a competition in which the aim is to detect as many different species as possible across 100 hectares of rainforest in Singapore in 24 hours.

To test the drone’s efficiency under conditions similar to those it will experience at the competition, Mintchev and his team are currently working in Zoo Zurich’s Masoala Rainforest. “Here we have the advantage of knowing which species are present, which will help us to better assess how thorough we are in capturing all eDNA traces with this technique or if we’re missing something,” Mintchev says.

For this event, however, the collection device must become more efficient and quicker to deploy. In the tests in Switzerland, the drone collected material from seven trees in three days; in Singapore, it must be able to fly to and collect samples from ten times as many trees in just one day – a jump from roughly two trees sampled per day to 70, or about a thirty-fold increase in throughput.

Collecting samples in a natural rainforest, however, presents the researchers with even tougher challenges. Frequent rain washes eDNA off surfaces, while wind and clouds impede drone operation. “We are therefore very curious to see whether our sampling method will also prove itself under extreme conditions in the tropics,” Mintchev says.

In search of the intelligent machine

Elvis Nava is a fellow at ETH Zurich’s AI Center as well as a doctoral student at the Institute of Neuroinformatics and in the Soft Robotics Lab. (Photograph: Daniel Winkler / ETH Zurich)

By Christoph Elhardt

In ETH Zurich’s Soft Robotics Lab, a white robot hand reaches for a beer can, lifts it up and moves it to a glass at the other end of the table. There, the hand carefully tilts the can to the right and pours the sparkling, gold-coloured liquid into the glass without spilling it. Cheers!

Computer scientist Elvis Nava is the person controlling the robot hand developed by ETH start-up Faive Robotics. The 26-year-old doctoral student’s own hand hovers over a surface equipped with sensors and a camera. The robot hand follows Nava’s hand movement. When he spreads his fingers, the robot does the same. And when he points at something, the robot hand follows suit.

But for Nava, this is only the beginning: “We hope that in future, the robot will be able to do something without our having to explain exactly how,” he says. He wants to teach machines to carry out written and oral commands. His goal is to make them so intelligent that they can quickly acquire new abilities, understand people and help them with different tasks.

Functions that currently require specific instructions from programmers will then be controlled by simple commands such as “pour me a beer” or “hand me the apple”. To achieve this goal, Nava received a doctoral fellowship from ETH Zurich’s AI Center in 2021: this program promotes talented researchers who bridge different research disciplines to develop new AI applications. In addition, the Italian – who grew up in Bergamo – is doing his doctorate at Benjamin Grewe’s professorship of neuroinformatics and in Robert Katzschmann’s lab for soft robotics.

Developed by the ETH start-up Faive Robotics, the robot hand imitates the movements of a human hand. (Video: Faive Robotics)

Combining sensory stimuli

But how do you get a machine to carry out commands? What does this combination of artificial intelligence and robotics look like? To answer these questions, it is crucial to understand the human brain.

We perceive our environment by combining different sensory stimuli. Usually, our brain effortlessly integrates images, sounds, smells, tastes and haptic stimuli into a coherent overall impression. This ability enables us to quickly adapt to new situations. We intuitively know how to apply acquired knowledge to unfamiliar tasks.

“Computers and robots often lack this ability,” Nava says. Thanks to machine learning, computer programs today can write texts, hold conversations or paint pictures, and robots can move quickly and independently through difficult terrain, but the underlying learning algorithms are usually based on only one data source. They are – to use a computer science term – not multimodal.

For Nava, this is precisely what stands in the way of more intelligent robots: “Algorithms are often trained for just one set of functions, using large data sets that are available online. While this enables language processing models to use the word ‘cat’ in a grammatically correct way, they don’t know what a cat looks like. And robots can move effectively but usually lack the capacity for speech and image recognition.”

“Every couple of years, our discipline changes the way we think about what it means to be a researcher,” Elvis Nava says. (Video: ETH AI Center)

Robots have to go to preschool

This is why Nava is developing learning algorithms for robots that teach them exactly that: to combine information from different sources. “When I tell a robot arm to ‘hand me the apple on the table,’ it has to connect the word ‘apple’ to the visual features of an apple. What’s more, it has to recognise the apple on the table and know how to grab it.”

But how does Nava teach the robot arm to do all that? In simple terms, he sends it to a two-stage training camp. First, the robot acquires general abilities such as speech and image recognition as well as simple hand movements in a kind of preschool.

Open-source models that have been trained using giant text, image and video data sets are already available for these abilities. Researchers feed an image recognition algorithm, say, thousands of images labelled ‘dog’ or ‘cat’. The algorithm then learns independently what features – in this case pixel structures – constitute an image of a cat or a dog.
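In code, this kind of supervised training reduces to a short loop: the model sees labelled examples and adjusts its weights until its predictions match the labels. The sketch below uses random tensors as stand-ins for labelled photos and a deliberately tiny network; it illustrates only the principle, not a production image classifier:

```python
import torch
import torch.nn as nn

# Bare-bones supervised classification: random tensors stand in for real,
# labelled photos (0 = cat, 1 = dog); a practical system would use a
# convolutional network and a large curated dataset.
images = torch.randn(256, 3 * 32 * 32)        # stand-in for 32x32 RGB photos
labels = torch.randint(0, 2, (256,))          # 0 = cat, 1 = dog

model = nn.Sequential(nn.Linear(3 * 32 * 32, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):                       # a few passes over the data
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)     # how far predictions are from labels
    loss.backward()                           # compute weight adjustments
    optimizer.step()                          # apply them

print(f"final training loss: {loss.item():.3f}")
```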

A new learning algorithm for robots

Nava’s job is to combine the best available models into a learning algorithm, which has to translate different data, images, texts or spatial information into a uniform command language for the robot arm. “In the model, the same vector represents both the word ‘beer’ and images labelled ‘beer’,” Nava says. That way, the robot knows what to reach for when it receives the command “pour me a beer”.
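A publicly available joint embedding model such as CLIP illustrates the principle behind this shared vector space – it is not the model used in Nava’s lab, but it maps an image and a piece of text into the same space so the two can be compared directly. The model name is a real, openly released checkpoint; the image file name is a placeholder:

```python
# Illustration with the openly available CLIP model (not the lab's own
# pipeline): image and text descriptions are embedded into the same vector
# space, and similarity scores reveal which description matches the image.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("table.jpg")               # placeholder: a photo of the table
texts = ["a can of beer", "an apple", "a glass of water"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Higher score = text and image lie closer together in the shared space
scores = outputs.logits_per_image.softmax(dim=-1)[0]
print(dict(zip(texts, scores.tolist())))
```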

Researchers who deal with artificial intelligence on a deeper level have known for a while that integrating different data sources and models holds a lot of promise. However, the corresponding models have only recently become available and publicly accessible. What’s more, there is now enough computing power to get them up and running in tandem as well.

When Nava talks about these things, they sound simple and intuitive. But that’s deceptive: “You have to know the newest models really well, but that’s not enough; sometimes getting them up and running in tandem is an art rather than a science,” he says. It’s tricky problems like these that especially interest Nava. He can work on them for hours, continuously trying out new solutions.

Nava spends the majority of his time coding. (Photograph: Elvis Nava)

Nava evaluates his learning algorithm. The results of the experiment in a nutshell. (Photograph: Elvis Nava)

Special training: Imitating humans

Once the robot arm has completed preschool and has learnt to understand speech, recognise images and carry out simple movements, Nava sends it to special training. There, the machine learns to, say, imitate the movements of a human hand when pouring a glass of beer. “As this involves very specific sequences of movements, existing models no longer suffice,” Nava says.

Instead, he shows his learning algorithm a video of a hand pouring a glass of beer. Based on just a few examples, the robot then tries to imitate these movements, drawing on what it has learnt in preschool. Without prior knowledge, it simply wouldn’t be able to imitate such a complex sequence of movements.

“If the robot manages to pour the beer without spilling, we tell it ‘well done’ and it memorises the sequence of movements,” Nava says. This method is known as reinforcement learning in technical jargon.
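Stripped to its bare bones, the “well done” signal is just a numerical reward that scores each pouring attempt, and the learner keeps whatever behaviour scores best. The toy search below is far simpler than the reinforcement learning algorithms used in practice, and its pouring “physics” is invented for the sketch:

```python
import random

# Toy illustration of the reward idea: each attempt is scored by how much
# liquid ends up in the glass versus on the table, and the best-scoring
# action sequence is kept. Real systems use proper reinforcement learning
# algorithms; the simulator below is a hypothetical stand-in.

def simulate_pour(tilt_angles):
    """Invented physics: tilting near 45 degrees fills the glass, over-tilting spills."""
    poured = sum(max(0.0, 1.0 - abs(a - 45.0) / 45.0) for a in tilt_angles)
    spilled = sum(max(0.0, a - 70.0) / 20.0 for a in tilt_angles)
    return poured, spilled

def reward(poured, spilled):
    return poured - 5.0 * spilled   # "well done" means filling without spilling

best_traj, best_r = None, float("-inf")
for _ in range(200):                                 # try perturbed trajectories
    traj = [random.uniform(0, 90) for _ in range(5)]
    r = reward(*simulate_pour(traj))
    if r > best_r:
        best_traj, best_r = traj, r

print(f"best reward {best_r:.2f} with tilt angles {[round(a) for a in best_traj]}")
```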

Elvis Nava teaches robots to carry out oral commands such as “pour me a beer”. (Photograph: Daniel Winkler / ETH Zürich)

Foundations for robotic helpers

With this two-stage learning strategy, Nava hopes to get a little closer to realising the dream of creating an intelligent machine. How far it will take him, he does not yet know. “It’s unclear whether this approach will enable robots to carry out tasks we haven’t shown them before.”

It is much more probable that we will see robotic helpers that carry out oral commands and fulfil tasks they are already familiar with or that closely resemble them. Nava avoids making predictions as to how long it will take before these applications can be used in areas such as the care sector or construction.

Developments in the field of artificial intelligence are too fast and unpredictable. In fact, Nava would be quite happy if the robot would just hand him the beer he will politely request after his dissertation defence.

Fighting tumours with magnetic bacteria

Magnetic bacteria (grey) can squeeze through narrow intercellular spaces to cross the blood vessel wall and infiltrate tumours. (Visualisations: Yimo Yan / ETH Zurich)

By Fabio Bergamin

Scientists around the world are researching how anti-cancer drugs can most efficiently reach the tumours they target. One possibility is to use modified bacteria as “ferries” to carry the drugs through the bloodstream to the tumours. Researchers at ETH Zurich have now succeeded in controlling certain bacteria so that they can effectively cross the blood vessel wall and infiltrate tumour tissue.

Led by Simone Schürle, Professor of Responsive Biomedical Systems, the ETH Zurich researchers chose to work with bacteria that are naturally magnetic due to iron oxide particles they contain. These bacteria of the genus Magnetospirillum respond to magnetic fields and can be controlled by magnets from outside the body; for more on this, see an earlier article in ETH News.

Exploiting temporary gaps

In cell cultures and in mice, Schürle and her team have now shown that a rotating magnetic field applied at the tumour improves the bacteria’s ability to cross the vascular wall near the cancerous growth. At the vascular wall, the rotating magnetic field propels the bacteria forward in a circular motion.

To better understand how the bacteria manage to cross the vessel wall, a detailed look is necessary: the blood vessel wall consists of a layer of cells and serves as a barrier between the bloodstream and the tumour tissue, which is permeated by many small blood vessels. Narrow spaces between these cells allow certain molecules from the blood to pass through the vessel wall. How large these intercellular spaces are is regulated by the cells of the vessel wall, and they can temporarily become wide enough to allow even bacteria to pass through.

Strong propulsion and high probability

With the help of experiments and computer simulations, the ETH Zurich researchers were able to show that propelling the bacteria using a rotating magnetic field is effective for three reasons. First, propulsion via a rotating magnetic field is ten times more powerful than propulsion via a static magnetic field. The latter merely sets the direction and the bacteria have to move under their own power.

The second and most critical reason is that bacteria driven by the rotating magnetic field are constantly in motion, travelling along the vascular wall. This makes them more likely to encounter the gaps that briefly open between vessel wall cells compared to other propulsion types, in which the bacteria’s motion is less explorative. And third, unlike other methods, the bacteria do not need to be tracked via imaging. Once the magnetic field is positioned over the tumour, it does not need to be readjusted.

“Cargo” accumulates in tumour tissue

“We make use of the bacteria’s natural and autonomous locomotion as well,” Schürle explains. “Once the bacteria have passed through the blood vessel wall and are in the tumour, they can independently migrate deep into its interior.” For this reason, the scientists use the propulsion via the external magnetic field for just one hour – long enough for the bacteria to efficiently pass through the vascular wall and reach the tumour.

Such bacteria could carry anti-cancer drugs in the future. In their cell culture studies, the ETH Zurich researchers simulated this application by attaching liposomes (nanospheres of fat-like substances) to the bacteria. They tagged these liposomes with a fluorescent dye, which allowed them to demonstrate in the Petri dish that the bacteria had indeed delivered their “cargo” inside the cancerous tissue, where it accumulated. In a future medical application, the liposomes would be filled with a drug.

Bacterial cancer therapy

Using bacteria as ferries for drugs is one of two ways that bacteria can help in the fight against cancer. The other approach is over a hundred years old and currently experiencing a revival: using the natural propensity of certain species of bacteria to damage tumour cells. This may involve several mechanisms. In any case, it is known that the bacteria stimulate certain cells of the immune system, which then eliminate the tumour.

“We think that we can increase the efficacy of bacterial cancer therapy by using an engineering approach.”

– Simone Schürle

Multiple research projects are currently investigating the efficacy of E. coli bacteria against tumours. Today, it is possible to modify bacteria using synthetic biology to optimise their therapeutic effect, reduce side effects and make them safer.

Making non-magnetic bacteria magnetic

Yet to use the inherent properties of bacteria in cancer therapy, the question of how these bacteria can reach the tumour efficiently still remains. While it is possible to inject the bacteria directly into tumours near the surface of the body, this is not possible for tumours deep inside the body. That is where Professor Schürle’s microrobotic control comes in. “We believe we can use our engineering approach to increase the efficacy of bacterial cancer therapy,” she says.

The E. coli bacteria used in the cancer studies are not magnetic and thus cannot be propelled and controlled by a magnetic field. In general, magnetic responsiveness is a very rare phenomenon among bacteria. Magnetospirillum is one of the few genera of bacteria that have this property.

Schürle therefore wants to make E. coli bacteria magnetic as well. This could one day make it possible to use a magnetic field to control clinically used therapeutic bacteria that have no natural magnetism.

The one-wheel Cubli

Researchers Matthias Hofer, Michael Muehlebach and Raffaello D’Andrea have developed the one-wheel Cubli, a three-dimensional pendulum system that can balance on its pivot using a single reaction wheel. How is it possible to stabilize the two tilt angles of the system with only a single reaction wheel?

The key is to design the system such that the inertia in one direction is higher than in the other, which is achieved by attaching two masses far away from the center. As a consequence, the system moves faster in the direction with the lower inertia and slower in the direction with the higher inertia. The controller can leverage this property and stabilize both directions simultaneously.
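A minimal sketch of why this works, under the simplifying assumption that the single wheel torque couples into both tilt axes with constant factors (this is not the published model):

```latex
% One torque \tau, two tilt axes with very different moments of inertia
\ddot{\varphi} \approx \frac{c_\varphi\,\tau}{J_\varphi}, \qquad
\ddot{\theta}  \approx \frac{c_\theta\,\tau}{J_\theta}, \qquad
J_\varphi \gg J_\theta
```

Because the same torque produces a fast response about the low-inertia axis and a slow response about the high-inertia axis, the controller can, loosely speaking, correct the fast direction on a short time scale while nudging the slow direction over a longer horizon – which is why one reaction wheel can suffice for both tilt angles.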

This work was carried out at the Institute for Dynamic Systems and Control, ETH Zurich, Switzerland.

Almost a decade has passed since the first Cubli

The Cubli robot started with a simple idea: can we build a cube with 15 cm sides that can jump up, balance on its corner, and walk across our desk using off-the-shelf motors, batteries, and electronic components? The educational article “Cubli – A cube that can jump up, balance, and walk across your desk” shows all the design principles and prototypes that led to the development of the robot.

Cubli, from ETH Zurich.

New imaging method makes tiny robots visible in the body

By Florian Meyer

How can a blood clot be removed from the brain without any major surgical intervention? How can a drug be delivered precisely into a diseased organ that is difficult to reach? Those are just two examples of the countless innovations envisioned by researchers in the field of medical microrobotics. Tiny robots promise to fundamentally change future medical treatments: one day, they could move through a patient’s vasculature to eliminate malignancies, fight infections or provide precise diagnostic information entirely noninvasively. In principle, so the researchers argue, the circulatory system might serve as an ideal delivery route for the microrobots, since it reaches all organs and tissues in the body.

For such microrobots to be able to perform the intended medical interventions safely and reliably, they must not be larger than a biological cell. In humans, a cell has an average diameter of 25 micrometres – a micrometre is one millionth of a metre. The smallest blood vessels in humans, the capillaries, are even thinner: their average diameter is only 8 micrometres. The microrobots must be correspondingly small if they are to pass through the smallest blood vessels unhindered. However, such a small size also makes them invisible to the naked eye – and science, too, has not yet found a technical solution for detecting and tracking the micron-sized robots individually as they circulate in the body.

Tracking circulating microrobots for the first time

“Before this future scenario becomes reality and microrobots are actually used in humans, the precise visualisation and tracking of these tiny machines is absolutely necessary,” says Paul Wrede, who is a doctoral fellow at the Max Planck ETH Center for Learning Systems (CLS). “Without imaging, microrobotics is essentially blind,” adds Daniel Razansky, Professor of Biomedical Imaging at ETH Zurich and the University of Zurich and a member of the CLS. “Real-time, high-resolution imaging is thus essential for detecting and controlling cell-sized microrobots in a living organism.” Furthermore, imaging is a prerequisite for monitoring therapeutic interventions performed by the robots and verifying that they have carried out their task as intended. “The lack of ability to provide real-time feedback on the microrobots was therefore a major obstacle on the way to clinical application.”

“Without imaging, microrobotics is essentially blind.”

Daniel Razansky

Together with Metin Sitti – a world-leading microrobotics expert who is also a CLS member, Director at the Max Planck Institute for Intelligent Systems (MPI-IS) and ETH Professor of Physical Intelligence – and other researchers, the team has now achieved an important breakthrough in efficiently merging microrobotics and imaging. In a study just published in the scientific journal Science Advances, they managed for the first time to clearly detect and track tiny robots as small as five micrometres in real time in the brain vessels of mice using a non-invasive imaging technique.

The researchers used microrobots with sizes ranging from 5 to 20 micrometres. The tiniest robots are about the size of red blood cells, which are 7 to 8 micrometres in diameter. This size makes it possible for the intravenously injected microrobots to travel even through the thinnest microcapillaries in the mouse brain.

A breakthrough: Tiny circulating microrobots, which are as small as red blood cells (left picture), were visualised one by one in the blood vessels of mice with optoacoustic imaging (right picture). Image: ETH Zurich / Max Planck Institute for Intelligent Systems

The researchers also developed a dedicated optoacoustic tomography technology in order to actually detect the tiny robots one by one, in high resolution and in real time. This unique imaging method makes it possible to detect the tiny robots in deep and hard-to-reach regions of the body and brain, which would not have been possible with optical microscopy or any other imaging technique. The method is called optoacoustic because light is first emitted into and absorbed by the respective tissue. The absorption then produces tiny ultrasound waves that can be detected and analysed to yield high-resolution volumetric images.
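The reconstruction step can be illustrated with the simplest textbook approach, delay-and-sum back-projection: every detector’s time trace is sampled at the acoustic time of flight from each image point, and the contributions are summed. The geometry, speed of sound and synthetic signal below are invented for this sketch and do not reproduce the dedicated tomography system used in the study:

```python
import numpy as np

# Delay-and-sum back-projection, the simplest textbook reconstruction for
# optoacoustic data. Geometry, speed of sound and signals are synthetic.
c = 1500.0                      # assumed speed of sound in tissue, m/s
fs = 40e6                       # sampling rate, Hz
n_samples = 2000

# 64 detectors on a line at y = 0 (a real scanner uses a tomographic array)
det_x = np.linspace(-0.01, 0.01, 64)
det_y = np.zeros_like(det_x)

# Synthetic data: one absorber (e.g. a gold-coated microrobot) at (0, 5 mm)
src = np.array([0.0, 0.005])
signals = np.zeros((64, n_samples))
for k in range(64):
    dist = np.hypot(det_x[k] - src[0], det_y[k] - src[1])
    t_idx = int(round(dist / c * fs))
    if t_idx < n_samples:
        signals[k, t_idx] = 1.0

# Back-project onto a 2-D grid: sum each trace at the pixel's time of flight
xs = np.linspace(-0.01, 0.01, 100)
ys = np.linspace(0.001, 0.01, 100)
image = np.zeros((len(ys), len(xs)))
for k in range(64):
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            t_idx = int(round(np.hypot(x - det_x[k], y - det_y[k]) / c * fs))
            if t_idx < n_samples:
                image[i, j] += signals[k, t_idx]

peak = np.unravel_index(np.argmax(image), image.shape)
print(f"brightest pixel near x={xs[peak[1]]*1e3:.1f} mm, y={ys[peak[0]]*1e3:.1f} mm")
```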

Janus-faced robots with gold layer

To make the microrobots highly visible in the images, the researchers needed a suitable contrast material. For their study, they therefore used spherical, silica-particle-based microrobots with a so-called Janus-type coating. This type of robot has a very robust design and is well suited for complex medical tasks. It is named after the Roman god Janus, who had two faces. In the robots, the two halves of the sphere are coated differently. In the current study, the researchers coated one half of the robot with nickel and the other half with gold.

The spherical microrobots consist of silica-based particles and have been coated half with nickel (Ni) and half with gold (Au) and loaded with green-dyed nanobubbles (liposomes). In this way, they can be detected individually with the new optoacoustic imaging technique. Image: ETH Zurich / MPI-IS

“Gold is a very good contrast agent for optoacoustic imaging,” explains Razansky, “without the golden layer, the signal generated by the microrobots is just too weak to be detected.” In addition to gold, the researchers also tested the use of small bubbles called nanoliposomes, which contained a fluorescent green dye that also served as a contrast agent. “Liposomes also have the advantage that you can load them with potent drugs, which is important for future approaches to targeted drug delivery,” says Wrede, the first author of the study. The potential uses of liposomes will be investigated in a follow-up study.

Furthermore, the gold also helps to minimise the cytotoxic effect of the nickel coating – after all, if microrobots are one day to operate in living animals or humans, they must be made biocompatible and non-toxic, which is part of ongoing research. In the present study, the researchers used nickel as a magnetic drive medium and a simple permanent magnet to pull the robots. In follow-up studies, they want to test the optoacoustic imaging with more complex manipulations using rotating magnetic fields.

“This would give us the ability to precisely control and move the microrobots even in strongly flowing blood,” says Metin Sitti. “In the present study we focused on visualising the microrobots. The project was tremendously successful thanks to the excellent collaborative environment at the CLS that allowed combining the expertise of the two research groups at MPI-IS in Stuttgart for the robotic part and ETH Zurich for the imaging part,” Sitti concludes.

How robots learn to hike

The legged robot ANYmal on the rocky path to the summit of Mount Etzel, which stands 1,098 metres above sea level. (Photo: Takahiro Miki)

By Christoph Elhardt

Steep sections on slippery ground, high steps, scree and forest trails full of roots: the path up the 1,098-metre-high Mount Etzel at the southern end of Lake Zurich is peppered with numerous obstacles. But ANYmal, the quadrupedal robot from the Robotic Systems Lab at ETH Zurich, overcomes the 120 vertical metres effortlessly in a 31-minute hike. That’s 4 minutes faster than the estimated duration for human hikers – and with no falls or missteps.

This is made possible by a new control technology, which researchers at ETH Zurich led by robotics professor Marco Hutter recently presented in the journal Science Robotics. “The robot has learned to combine visual perception of its environment with proprioception – its sense of touch – based on direct leg contact. This allows it to tackle rough terrain faster, more efficiently and, above all, more robustly,” Hutter says. In the future, ANYmal can be used anywhere that is too dangerous for humans or too impassable for other robots.

Video: Nicole Davidson / ETH Zurich

Perceiving the environment accurately

To navigate difficult terrain, humans and animals quite automatically combine the visual perception of their environment with the proprioception of their legs and hands. This allows them to easily handle slippery or soft ground and move around with confidence, even when visibility is low. Until now, legged robots have been able to do this only to a limited extent.

“The reason is that the information about the immediate environment recorded by laser sensors and cameras is often incomplete and ambiguous,” explains Takahiro Miki, a doctoral student in Hutter’s group and lead author of the study. For example, tall grass, shallow puddles or snow appear as insurmountable obstacles or are partially invisible, even though the robot could actually traverse them. In addition, the robot’s view can be obscured in the field by difficult lighting conditions, dust or fog.

“That’s why robots like ANYmal have to be able to decide for themselves when to trust the visual perception of their environment and move forward briskly, and when it is better to proceed cautiously and with small steps,” Miki says. “And that’s the big challenge.”

A virtual training camp

Thanks to a new controller based on a neural network, the legged robot ANYmal, which was developed by ETH Zurich researchers and commercialized by the ETH spin-off ANYbotics, is now able to combine external and proprioceptive perception for the first time. Before the robot could put its capabilities to the test in the real world, the scientists exposed the system to numerous obstacles and sources of error in a virtual training camp. This let the network learn the ideal way for the robot to overcome obstacles, as well as when it can rely on environmental data – and when it would do better to ignore that data.

“With this training, the robot is able to master the most difficult natural terrain without having seen it before,” says ETH Zurich Professor Hutter. This works even if the sensor data on the immediate environment is ambiguous or vague. ANYmal then plays it safe and relies on its proprioception. According to Hutter, this allows the robot to combine the best of both worlds: the speed and efficiency of external sensing and the safety of proprioceptive sensing.
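The published controller learns a belief state over the terrain; the PyTorch snippet below only sketches the underlying idea of a learned gate that decides how strongly to weight the exteroceptive features against the proprioceptive ones. All layer sizes are arbitrary placeholders:

```python
import torch
import torch.nn as nn

# Sketch of the fusion idea only: a learned gate scales the exteroceptive
# (terrain/height-map) features before they are combined with the
# proprioceptive ones, so the network can learn to down-weight unreliable
# terrain data. The published ANYmal controller uses a more elaborate
# learned belief-state encoder; dimensions here are arbitrary.

class GatedFusionPolicy(nn.Module):
    def __init__(self, proprio_dim=48, extero_dim=208, hidden=256, action_dim=12):
        super().__init__()
        self.proprio_enc = nn.Sequential(nn.Linear(proprio_dim, hidden), nn.ELU())
        self.extero_enc = nn.Sequential(nn.Linear(extero_dim, hidden), nn.ELU())
        # gate in [0, 1]: how much to trust the exteroceptive features
        self.gate = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ELU(),
                                  nn.Linear(hidden, hidden), nn.Sigmoid())
        self.policy = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ELU(),
                                    nn.Linear(hidden, action_dim))

    def forward(self, proprio, extero):
        p = self.proprio_enc(proprio)
        e = self.extero_enc(extero)
        g = self.gate(torch.cat([p, e], dim=-1))          # 0 = ignore terrain data
        return self.policy(torch.cat([p, g * e], dim=-1))  # one target per joint

policy = GatedFusionPolicy()
actions = policy(torch.randn(1, 48), torch.randn(1, 208))
print(actions.shape)  # torch.Size([1, 12])
```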

Use under extreme conditions

Whether after an earthquake, after a nuclear disaster, or during a forest fire, robots like ANYmal can be used primarily wherever it is too dangerous for humans and where other robots cannot cope with the difficult terrain.

In September of last year, ANYmal was able to demonstrate just how well the new control technology works at the DARPA Subterranean Challenge, the world’s best-known robotics competition. The ETH Zurich robot automatically and quickly overcame numerous obstacles and difficult terrain while autonomously exploring an underground system of narrow tunnels, caves, and urban infrastructure. This was a major part of why the ETH Zurich researchers, as part of the CERBERUS team, took first place with a prize of 2 million dollars.

Finding inspiration in starfish larva

The new microbot inspired by starfish larva stirs up plastic beads. (Image: Cornel Dillinger/ETH Zurich)

By Rahel Künzler

Among scientists, there is great interest in tiny machines that are set to revolutionise medicine. These microrobots, often only a fraction of the diameter of a hair, are made to swim through the body to deliver medication to specific areas and perform the smallest surgical procedures.

The designs of these robots are often inspired by natural microorganisms such as bacteria or algae. Now, for the first time, a research group at ETH Zurich has developed a microrobot design inspired by the starfish larva, which uses ciliary bands on its surface to swim and feed. The ultrasound-activated synthetic system mimics the natural arrangement of the starfish larva’s ciliary bands and leverages nonlinear acoustics to replicate the larva’s motion and manipulation techniques.

Hairs to push liquid away or suck it in

Depending on whether it is swimming or feeding, the starfish larva generates different patterns of vortices. (Image: Prakash Lab, Stanford University)

At first glance, the microrobots bear only scant similarity to a starfish larva. In its larval stage, a starfish has a lobed body that measures just a few millimetres across. The microrobot, meanwhile, is rectangular and ten times smaller, only a quarter of a millimetre across. But the two do share one important feature: a series of fine, movable hairs on the surface, called cilia.

A starfish larva is blanketed with hundreds of thousands of these hairs. Arranged in rows, they beat back and forth in a coordinated fashion, creating eddies in the surrounding water. The relative orientation of two rows determines the end result: Inclining two bands of beating cilia toward each other creates a vortex with a thrust effect, propelling the larva. On the other hand, inclining two bands away from each other creates a vortex that draws liquid in, trapping particles on which the larva feeds.

Artificial swimmers beat faster

These cilia were the key design element for the new microrobot developed by ETH researchers led by Daniel Ahmed, Professor of Acoustic Robotics for Life Sciences and Healthcare. “In the beginning,” Ahmed said, “we simply wanted to test whether we could create vortices similar to those of the starfish larva with rows of cilia inclined toward or away from each other.”

To this end, the researchers used photolithography to construct a microrobot with appropriately inclined ciliary bands. They then applied ultrasound waves from an external source to make the cilia oscillate. The synthetic versions beat back and forth more than ten thousand times per second – about a thousand times faster than those of a starfish larva. And as with the larva, these beating cilia can be used to generate a vortex with a suction effect at the front and a vortex with a thrust effect at the rear, the combined effect “rocketing” the robot forward.

Besides swimming, the new microrobot can collect particles and steer them in a predetermined direction. (Video: Cornel Dillinger/ETH Zurich)

In their lab, the researchers showed that the microrobots can swim in a straight line through liquid such as water. Adding tiny plastic beads to the water made it possible to visualize the vortices created by the microrobot. The result is astonishing: both starfish larva and microrobots generate virtually identical flow patterns.

Next, the researchers arranged the ciliary bands so that a suction vortex was positioned next to a thrust vortex, imitating the feeding technique used by starfish larva. This arrangement enabled the robots to collect particles and send them out in a predetermined direction.

Ultrasound offers many advantages

Ahmed is convinced that this new type of microrobot will be ready for use in medicine in the foreseeable future. This is because a system that relies only on ultrasound offers decisive advantages: ultrasound waves are already widely used in imaging, penetrate deep inside the body, and pose no health risks.

“Our vision is to use ultrasound for propulsion, imaging and drug delivery.”

– Daniel Ahmed

The fact that this therapy requires only an ultrasound device makes it cheap, he adds, and hence suitable for use in both developed and developing countries.

Ahmed believes one initial field of application could be the treatment of gastric tumours. Uptake of conventional drugs by diffusion is inefficient, but having microrobots transport a drug specifically to the site of a stomach tumour and then deliver it there might make the drug’s uptake into tumour cells more efficient and reduce side effects.

Sharper images thanks to contrast agents

But before this vision can be realized, a major challenge remains to be overcome: imaging. Steering the tiny machines to the right place requires that a sharp image be generated in real time. The researchers have plans to make the microrobots more visible by incorporating contrast agents such as those already used in medical imaging with ultrasound.

In addition to medical applications, Ahmed anticipates that this starfish-inspired design will have important implications for the manipulation of very small liquid volumes in research and in industry. Bands of beating cilia could execute tasks such as mixing, pumping and particle trapping.