
Faced with dwindling bee colonies, scientists are arming queens with robots and smart hives

By Farshad Arvin, Martin Stefanec, and Tomas Krajnik

Be it the news or the dwindling number of creatures hitting your windscreen, it will not have evaded you that the insect world is in bad shape.

In the last three decades, the global biomass of flying insects has shrunk by 75%. Among the trend’s most notable victims is the world’s most important pollinator, the honeybee. In the United States, 48% of honeybee colonies died in 2023 alone, making it the second deadliest year on record. This significant loss is due in part to colony collapse disorder (CCD), the sudden disappearance of bees. In contrast, European countries report lower but still worrisome rates of colony losses, ranging from 6% to 32%.

This decline causes many of our essential food crops to be under-pollinated, a phenomenon that threatens our society’s food security.

Debunking the sci-fi myth of robotic bees

So, what can be done? Given pesticides’ role in the decline of bee colonies, commonly proposed solutions include a shift away from industrial farming and toward less pesticide-intensive, more sustainable forms of agriculture.

Others tend to look toward the sci-fi end of things, with some scientists imagining that we could eventually replace live honeybees with robotic ones. Such artificial bees could interact with flowers like natural insects, maintaining pollination levels despite the declining numbers of natural pollinators. This vision of artificial pollinators has inspired ingenious designs of insect-sized flying robots.

In reality, such inventions say more about engineers’ fantasies than about reviving bee colonies, so slim are their prospects of materialising. First, these artificial pollinators would have to be equipped for much more than just flying. Daily tasks carried out by the common bee include searching for plants, identifying flowers, unobtrusively interacting with them, locating energy sources, ducking potential predators, and dealing with adverse weather conditions. Robots would have to perform all of these tasks in the wild with a very high degree of reliability, since any broken-down or lost robot can cause damage and spread pollution. Second, it remains to be seen whether our technological know-how is even capable of manufacturing such inventions. This is without even mentioning the price tag of a swarm of robots capable of substituting for the pollination provided by a single honeybee colony.

Inside a smart hive

Bees on one of Hiveopolis’s augmented hives.
Hiveopolis, Provided by the author

Rather than trying to replace honeybees with robots, our two latest projects funded by the European Union propose that the robots and honeybees actually team up. Were these to succeed, struggling honeybee colonies could be transformed into bio-hybrid entities consisting of biological and technological components with complementary skills. This would hopefully boost and secure the colonies’ population growth as more bees survive harsh winters and yield more foragers to pollinate surrounding ecosystems.

The first of these projects, Hiveopolis, investigates how the complex decentralised decision-making mechanism in a honeybee colony can be nudged by digital technology. Begun in 2019 and set to end in March 2024, the experiment introduces technology into three observation hives, each containing 4,000 bees, compared with the roughly 40,000 bees of a normal colony.

The foundation of an augmented honeycomb.
Hiveopolis, Provided by the author

Within this honeybee smart home, combs have integrated temperature sensors and heating devices, allowing the bees to enjoy optimal conditions inside the colony. Since bees tend to snuggle up to warmer locations, the heated combs also enable us to direct them toward different areas of the hive. And as if that control weren’t enough, the hives are also equipped with a system of electronic gates that monitors the insects’ movements. Together, these technologies allow us to decide not only where the bees store honey and pollen, but also when they vacate the combs so that we can harvest honey. Last but not least, the smart hive contains a robotic dancing bee that can direct foraging bees toward areas with plants to be pollinated.
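The idea of steering bees with heat can be illustrated with a minimal control sketch: because bees cluster at warmer spots, switching on the heater of one target comb cell draws them there. Everything below (the cell indexing, the temperatures, the decision rule) is a hypothetical simplification for illustration; the article does not describe Hiveopolis’s actual control logic.

```python
def heater_commands(comb_temps, target_cell):
    """Toy comb-heater controller: heat the target cell to attract bees,
    and heat any cell that has drifted well below the mean comb
    temperature so no part of the brood nest gets too cold."""
    base = sum(comb_temps) / len(comb_temps)  # mean comb temperature
    return [
        "heat_on" if i == target_cell or t < base - 1.0 else "heat_off"
        for i, t in enumerate(comb_temps)
    ]

# Steer the cluster toward cell 2 of a four-cell comb (temps in deg C).
print(heater_commands([34.0, 33.5, 34.2, 33.8], target_cell=2))
```

A real controller would of course close the loop, re-reading the sensors as the bees (themselves heat sources) move.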

Due to the experiment’s small scale, it is impossible to draw conclusions about the extent to which our technologies may have prevented bee losses. However, what we have seen thus far gives us reason to be hopeful. We can confidently assert that our smart beehives allowed colonies to survive extreme cold during the winter in a way that wouldn’t otherwise have been possible. Precisely assessing how many bees these technologies have saved would require upscaling the experiment to hundreds of colonies.

Pampering the queen bee

Our second EU-funded project, RoboRoyale, focuses on the honeybee queen and her courtyard bees, with robots in this instance continuously monitoring and interacting with her Royal Highness.

Come 2024, we will equip each hive with a group of six bee-sized robots, which will groom and feed the honeybee queen to affect the number of eggs she lays. Some of these robots will be equipped with royal jelly micro-pumps to feed her, while others will feature compliant micro-actuators to groom her. These robots will then be connected to a larger robotic arm with infrared cameras that will continuously monitor the queen and her vicinity.

A RoboRoyale robot arm susses out a honeybee colony.
RoboRoyale, Provided by the author

As the photos here show, we have already been able to successfully introduce the robotic arm into a living colony. There it continuously monitored the queen, determined her whereabouts and interacted with her through light stimuli.

Emulating the worker bees

In a second phase, it is hoped the bee-sized robots and robotic arm will be able to emulate the behaviour of the workers, the female bees lacking reproductive capacity who attend to the queen and feed her royal jelly. Rich in water, proteins, carbohydrates, lipids, vitamins and minerals, this nutritious substance secreted by the glands of the worker bees enables the queen to lay up to several thousand eggs a day.

Worker bees also engage in cleaning the queen, which involves licking her. During such interactions, they collect some of the queen’s pheromones and disperse them throughout the colony as they move across the hive. The presence of these pheromones controls many of the colony’s behaviours and notifies the colony of a queen’s presence. For example, in the event of the queen’s demise, a new queen must be quickly reared from an egg laid by the late queen, leaving only a narrow time window for the colony to react.

One of RoboRoyale’s first experiments consisted of simple interactions with the queen bee through light stimuli. The next months will then see the robotic arm stretch out to physically touch and groom her.
RoboRoyale, Provided by the author

Finally, it is believed worker bees may also act as the queen’s guides, leading her to lay eggs in specific comb cells. The size of these cells can determine whether the queen lays a diploid or haploid egg, resulting in the bee developing into either a drone (male) or a worker (female). Taking over these guiding duties could affect nothing less than the colony’s entire reproductive rate.

How robots can prevent bee cannibalism

This could have another virtuous effect: preventing cannibalism.

During tough times, such as long periods of rain, bees have to make do with little pollen intake. This forces them to feed young larvae to older ones so that at least the older larvae have a chance to survive. Through RoboRoyale, we will look not only to reduce the chances of this behaviour occurring, but also to quantify the extent to which it occurs under normal conditions.

Ultimately, our robots will enable us to deepen our understanding of the very complex regulation processes inside honeybee colonies through novel experimental procedures. The insights gained from these new research tracks will be necessary to better protect these valuable social insects and ensure sufficient pollination in the future – a high stakes enterprise for food security.

This article is the result of The Conversation’s collaboration with Horizon, the EU research and innovation magazine.

The Conversation

Farshad Arvin is a member of the Department of Computer Science at Durham University in the UK. The research of Farshad Arvin is primarily funded by the EU H2020 and Horizon Europe programmes.

Martin Stefanec is a member of the Institute of Biology at the University of Graz. He has received funding from the EU programs H2020 and Horizon Europe.

Tomas Krajnik is a member of the Institute of Electrical and Electronics Engineers (IEEE). The research of Tomas Krajnik is primarily funded by the EU H2020 Horizon programme and the Czech National Science Foundation.

Mobile robots get a leg up from a more-is-better communications principle

Getting a leg up from mobile robots comes down to getting a bunch of legs. Georgia Institute of Technology

By Baxi Chong (Postdoctoral Fellow, School of Physics, Georgia Institute of Technology)

Adding legs to robots that have minimal awareness of the environment around them can help the robots operate more effectively in difficult terrain, my colleagues and I found.

We were inspired by mathematician and engineer Claude Shannon’s communication theory about how to transmit signals over distance. Instead of spending a huge amount of money to build the perfect wire, Shannon illustrated that it is good enough to use redundancy to reliably convey information over noisy communication channels. We wondered if we could do the same thing for transporting cargo via robots. That is, if we want to transport cargo over “noisy” terrain, say fallen trees and large rocks, in a reasonable amount of time, could we do it by just adding legs to the robot carrying the cargo and do so without sensors and cameras on the robot?
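Shannon’s redundancy idea can be made concrete with a toy calculation: if each leg independently makes good ground contact with some probability, the chance that enough legs land well rises quickly with leg count. The contact probability, the independence assumption and the “legs needed” threshold below are all illustrative assumptions, not measurements from the study.

```python
from math import comb

def reliable_step_prob(n_legs, p_contact=0.7, k_needed=3):
    """Probability that at least k_needed of n_legs make good ground
    contact on 'noisy' terrain, treating each leg as an independent
    Bernoulli trial: a toy analogue of Shannon-style redundancy, where
    extra legs play the role of a redundant code."""
    return sum(
        comb(n_legs, k) * p_contact**k * (1 - p_contact) ** (n_legs - k)
        for k in range(k_needed, n_legs + 1)
    )

# Reliability climbs toward 1 as legs are added, mirroring the paper's
# six-to-16-leg experiments.
for n in (6, 10, 16):
    print(n, round(reliable_step_prob(n), 4))
```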

Most mobile robots use inertial sensors to gain an awareness of how they are moving through space. Our key idea is to forget about inertia and replace it with the simple function of repeatedly making steps. In doing so, our theoretical analysis confirms our hypothesis of reliable and predictable robot locomotion – and hence cargo transport – without additional sensing and control.

To verify our hypothesis, we built robots inspired by centipedes. We discovered that the more legs we added, the better the robot could move across uneven surfaces without any additional sensing or control technology. Specifically, we conducted a series of experiments where we built terrain to mimic an inconsistent natural environment. We evaluated the robot locomotion performance by gradually increasing the number of legs in increments of two, beginning with six legs and eventually reaching a total of 16 legs.

Navigating rough terrain can be as simple as taking it a step at a time, at least if you have a lot of legs.

As the number of legs increased, we observed that the robot exhibited enhanced agility in traversing the terrain, even in the absence of sensors. To further assess its capabilities, we conducted outdoor tests on real terrain to evaluate its performance in more realistic conditions, where it performed just as well. There is potential to use many-legged robots for agriculture, space exploration and search and rescue.

Why it matters

Transporting things – food, fuel, building materials, medical supplies – is essential to modern societies, and effective goods exchange is the cornerstone of commercial activity. For centuries, transporting material on land has required building roads and tracks. However, roads and tracks are not available everywhere. Places such as hilly countryside have had limited access to cargo. Robots might be a way to transport payloads in these regions.

What other research is being done in this field

Other researchers have been developing humanoid robots and robot dogs, which have become increasingly agile in recent years. These robots rely on accurate sensors to know where they are and what is in front of them, and then make decisions on how to navigate.

However, their strong dependence on environmental awareness limits them in unpredictable environments. For example, in search-and-rescue tasks, sensors can be damaged and environments can change.

What’s next

My colleagues and I have taken valuable insights from our research and applied them to the field of crop farming. We have founded a company that uses these robots to efficiently weed farmland. As we continue to advance this technology, we are focused on refining the robot’s design and functionality.

While we understand the functional aspects of the centipede robot framework, our ongoing efforts are aimed at determining the optimal number of legs required for motion without relying on external sensing. Our goal is to strike a balance between cost-effectiveness and retaining the benefits of the system. Currently, we have shown that 12 is the minimum number of legs for these robots to be effective, but we are still investigating the ideal number.

The Research Brief is a short take on interesting academic work.

The Conversation

The author has received funding from NSF-Simons Southeast Center for Mathematics and Biology (Simons Foundation SFARI 594594), Georgia Research Alliance (GRA.VL22.B12), Army Research Office (ARO) MURI program, Army Research Office Grant W911NF-11-1-0514 and a Dunn Family Professorship.

The author and his colleagues have one or more pending patent applications related to the research covered in this article.

The author and his colleagues have established a start-up company, Ground Control Robotics, Inc., partially based on this work.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Titan submersible disaster underscores dangers of deep-sea exploration – an engineer explains why most ocean science is conducted with crewless submarines

Researchers are increasingly using small, autonomous underwater robots to collect data in the world’s oceans. NOAA Teacher at Sea Program, NOAA Ship PISCES, CC BY-SA

By Nina Mahmoudian (Associate Professor of Mechanical Engineering, Purdue University)

Rescuers spotted debris from the tourist submarine Titan on the ocean floor near the wreck of the Titanic on June 22, 2023, indicating that the vessel suffered a catastrophic failure and the five people aboard were killed.

Bringing people to the bottom of the deep ocean is inherently dangerous. At the same time, climate change means collecting data from the world’s oceans is more vital than ever. Purdue University mechanical engineer Nina Mahmoudian explains how researchers reduce the risks and costs associated with deep-sea exploration: Send down subs, but keep people on the surface.

Why is most underwater research conducted with remotely operated and autonomous underwater vehicles?

When we talk about water studies, we’re talking about vast areas. And covering vast areas requires tools that can work for extended periods of time, sometimes months. Having people aboard underwater vehicles, especially for such long periods of time, is expensive and dangerous.

One of the tools researchers use is remotely operated vehicles, or ROVs. Basically, there is a cable between the vehicle and operator that allows the operator to command and move the vehicle, and the vehicle can relay data in real time. ROV technology has progressed a lot and can now reach the deep ocean – up to a depth of 6,000 meters (19,685 feet). It’s also better able to provide the mobility necessary for observing the sea bed and gathering data.

Autonomous underwater vehicles provide another opportunity for underwater exploration. They are usually not tethered to a ship. They are typically programmed ahead of time to do a specific mission. And while they are underwater they usually don’t have constant communication. At some interval, they surface, relay the whole amount of data that they have gathered, change the battery or recharge and receive renewed instructions before again submerging and continuing their mission.
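The duty cycle described above – submerge and collect data without communication, then surface to offload, recharge and receive new instructions – can be sketched as a simple loop. All names, battery figures and thresholds here are hypothetical, for illustration only, and not drawn from any real vehicle.

```python
def run_mission(legs, battery=100.0, cost_per_leg=30.0):
    """Toy sketch of an AUV duty cycle: for each pre-programmed mission
    leg, dive and gather data (no comms while submerged), then surface
    to uplink everything gathered so far and take new orders.
    Recharge first whenever the battery is too low for the next leg."""
    log, data = [], []
    for leg in legs:
        if battery < cost_per_leg:           # not enough charge: recharge at surface
            log.append("surface: recharge")
            battery = 100.0
        data.append(f"samples@{leg}")         # submerged phase: collect only
        battery -= cost_per_leg
        log.append(f"surface: uplink {len(data)} datasets, new orders")
    return log, data

log, data = run_mission(["A", "B", "C", "D"])
print(log)
```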

What can remotely operated and autonomous underwater vehicles do that crewed submersibles can’t, and vice versa?

Crewed submersibles will be exciting for the public and those involved and helpful for the increased capabilities humans bring in operating instruments and making decisions, similar to crewed space exploration. However, it will be much more expensive compared with uncrewed explorations because of the required size of the platforms and the need for life-support systems and safety systems. Crewed submersibles today cost tens of thousands of dollars a day to operate.

Use of unmanned systems will provide better opportunities for exploration at less cost and risk in operating over vast areas and in inhospitable locations. Using remotely operated and autonomous underwater vehicles gives operators the opportunity to perform tasks that are dangerous for humans, like observing under ice and detecting underwater mines.

Remotely operated vehicles can operate under Antarctic ice and other dangerous places.

How has the technology for deep ocean research evolved?

The technology has advanced dramatically in recent years due to progress in sensors and computation. There has been great progress in miniaturization of acoustic sensors and sonars for use underwater. Computers have also become more miniaturized, capable and power efficient. There has been a lot of work on battery technology and connectors that are watertight. Additive manufacturing and 3D printing also help build hulls and components that can withstand the high pressures at depth at much lower costs.

There has also been great progress toward increasing autonomy using more advanced algorithms, in addition to traditional methods for navigation, localization and detection. For example, machine learning algorithms can help a vehicle detect and classify objects, whether stationary like a pipeline or mobile like schools of fish.

What kinds of discoveries have been made using remotely operated and autonomous underwater vehicles?

One example is underwater gliders. These are buoyancy-driven autonomous underwater vehicles. They can stay in water for months. They can collect data on pressure, temperature and salinity as they go up and down in water. All of these are very helpful for researchers to have an understanding of changes that are happening in oceans.

One of these platforms traveled across the North Atlantic Ocean from the coast of Massachusetts to Ireland for nearly a year in 2016 and 2017. The amount of data that was captured in that amount of time was unprecedented. To put it in perspective, a vehicle like that costs about $200,000. The operators were remote. Every eight hours the glider came to the surface, got connected to GPS and said, “Hey, I am here,” and the crew basically gave it the plan for the next leg of the mission. If a crewed ship were sent to gather that amount of data for that long, it would cost millions.

In 2019, researchers used an autonomous underwater vehicle to collect invaluable data about the seabed beneath the Thwaites glacier in Antarctica.

Energy companies are also using remotely operated and autonomous underwater vehicles for inspecting and monitoring offshore renewable energy and oil and gas infrastructure on the seabed.

Where is the technology headed?

Underwater systems are slow-moving platforms, and if researchers can deploy them in large numbers that would give them an advantage for covering large areas of ocean. A great deal of effort is being put into coordination and fleet-oriented autonomy of these platforms, as well as into advancing data gathering using onboard sensors such as cameras, sonars and dissolved oxygen sensors. Another aspect of advancing vehicle autonomy is real-time underwater decision-making and data analysis.

What is the focus of your research on these submersibles?

My team and I focus on developing navigational and mission-planning algorithms for persistent operations, meaning long-term missions with minimal human oversight. The goal is to respond to two of the main constraints in the deployment of autonomous systems. One is battery life. The other is unknown situations.

The author’s research includes a project to allow autonomous underwater vehicles to recharge their batteries without human intervention.

For battery life, we work on at-sea recharging, both underwater and surface water. We are developing tools for autonomous deployment, recovery, recharging and data transfer for longer missions at sea. For unknown situations, we are working on recognizing and avoiding obstacles and adapting to different ocean currents – basically allowing a vehicle to navigate in rough conditions on its own.

To adapt to changing dynamics and component failures, we are working on methodologies to help the vehicle detect the change and compensate to be able to continue and finish the mission.

These efforts will enable long-term ocean studies including observing environmental conditions and mapping uncharted areas.

The Conversation

Nina Mahmoudian receives funding from National Science Foundation and Office of Naval Research.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

We need to discuss what jobs robots should do, before the decision is made for us

By Thusha Rajendran (Professor of Psychology, The National Robotarium, Heriot-Watt University)

The social separation imposed by the pandemic led us to rely on technology to an extent we might never have imagined – from Teams and Zoom to online banking and vaccine status apps.

Now, society faces an increasing number of decisions about our relationship with technology. For example, do we want our workforce needs fulfilled by automation, migrant workers, or an increased birth rate?

In the coming years, we will also need to balance technological innovation with people’s wellbeing – both in terms of the work they do and the social support they receive.

And there is the question of trust. When humans should trust robots, and vice versa, is a question our Trust Node team is researching as part of the UKRI Trustworthy Autonomous Systems hub. We want to better understand human-robot interactions – based on an individual’s propensity to trust others, the type of robot, and the nature of the task. This, and projects like it, could ultimately help inform robot design.

This is an important time to discuss what roles we want robots and AI to take in our collective future – before decisions are taken that may prove hard to reverse. One way to frame this dialogue is to think about the various roles robots can fulfill.

Robots as our servants

The word “robot” was first used by the Czech writer Karel Čapek in his 1920 sci-fi play Rossum’s Universal Robots. It comes from the word “robota”, meaning drudgery or donkey work. This etymology suggests robots exist to do work that humans would rather not. And there should be no obvious controversy, for example, in tasking robots to maintain nuclear power plants or repair offshore wind farms.

The more human a robot looks, the more we trust it. Antonello Marangi/Shutterstock

However, some service tasks assigned to robots are more controversial, because they could be seen as taking jobs from humans.

For example, studies show that people who have lost movement in their upper limbs could benefit from robot-assisted dressing. But this could be seen as automating tasks that nurses currently perform. Equally, it could free up time for nurses and care workers – sectors that are currently very short-staffed – to focus on other tasks that require more sophisticated human input.

Authority figures

The dystopian 1987 film Robocop imagined the future of law enforcement as autonomous, privatised, and delegated to cyborgs or robots.

Today, some elements of this vision are not so far away: the San Francisco Police Department has considered deploying robots – albeit under direct human control – to kill dangerous suspects.

This US military robot is fitted with a machine gun to turn it into a remote weapons platform. US Army

But having robots as authority figures needs careful consideration, as research has shown that humans can place excessive trust in them.

In one experiment, a “fire robot” was assigned to evacuate people from a building during a simulated blaze. All 26 participants dutifully followed the robot, even though half had previously seen the robot perform poorly in a navigation task.

Robots as our companions

It might be difficult to imagine that a human-robot attachment would have the same quality as that between humans or with a pet. However, increasing levels of loneliness in society might mean that for some people, having a non-human companion is better than nothing.

The Paro Robot is one of the most commercially successful companion robots to date – and is designed to look like a baby harp seal. Yet research suggests that the more human a robot looks, the more we trust it.

The Paro companion robot is designed to look like a baby seal. Angela Ostafichuk / Shutterstock

A study has also shown that different areas of the brain are activated when humans interact with either another human or a robot. This suggests our brains may recognise interactions with a robot differently from human ones.

Creating useful robot companions involves a complex interplay of computer science, engineering and psychology. A robot pet might be ideal for someone who is not physically able to take a dog for its exercise. It might also be able to detect falls and remind someone to take their medication.

How we tackle social isolation, however, raises questions for us as a society. Some might regard efforts to “solve” loneliness with technology as the wrong solution for this pervasive problem.

What can robotics and AI teach us?

Music is a source of interesting observations about the differences between human and robotic talents. Committing errors the way humans constantly do – and robots might not – appears to be a vital component of creativity.

A study by Adrian Hazzard and colleagues pitted professional pianists against an autonomous disklavier (an automated piano with keys that move as if played by an invisible pianist). The researchers discovered that, eventually, the pianists made mistakes. But they did so in ways that were interesting to humans listening to the performance.

This concept of “aesthetic failure” can also be applied to how we live our lives. It offers a powerful counter-narrative to the idealistic and perfectionist messages we constantly receive through television and social media – on everything from physical appearance to career and relationships.

As a species, we are approaching many crossroads, including how to respond to climate change, gene editing, and the role of robotics and AI. However, these dilemmas are also opportunities. AI and robotics can mirror our less-appealing characteristics, such as gender and racial biases. But they can also free us from drudgery and highlight unique and appealing qualities, such as our creativity.

We are in the driving seat when it comes to our relationship with robots – nothing is set in stone, yet. But to make educated, informed choices, we need to learn to ask the right questions, starting with: what do we actually want robots to do for us?

The Conversation

Thusha Rajendran receives funding from the UKRI and EU. He would like to acknowledge evolutionary anthropologist Anna Machin’s contribution to this article through her book Why We Love, personal communications and draft review.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Robots are everywhere – improving how they communicate with people could advance human-robot collaboration

‘Emotionally intelligent’ robots could improve their interactions with people. Andriy Onufriyenko/Moment via Getty Images

By Ramana Vinjamuri (Assistant Professor of Computer Science and Electrical Engineering, University of Maryland, Baltimore County)

Robots are machines that can sense the environment and use that information to perform an action. You can find them nearly everywhere in industrialized societies today. There are household robots that vacuum floors and warehouse robots that pack and ship goods. Lab robots test hundreds of clinical samples a day. Education robots support teachers by acting as one-on-one tutors, assistants and discussion facilitators. And medical robots such as prosthetic limbs can enable someone to grasp and pick up objects with their thoughts.

Figuring out how humans and robots can collaborate to effectively carry out tasks together is a rapidly growing area of interest to the scientists and engineers who design robots, as well as to the people who will use them. For successful collaboration between humans and robots, communication is key.

Robotics can help patients recover physical function in rehabilitation. BSIP/Universal Images Group via Getty Images

How people communicate with robots

Robots were originally designed to undertake repetitive and mundane tasks and operate exclusively in robot-only zones like factories. They have since advanced to work collaboratively with people, opening up new ways for humans and robots to communicate.

Cooperative control is one way to transmit information and messages between a robot and a person. It involves combining human abilities and decision making with robot speed, accuracy and strength to accomplish a task.

For example, robots in the agriculture industry can help farmers monitor and harvest crops. A human can control a semi-autonomous vineyard sprayer through a user interface, as opposed to manually spraying their crops or broadly spraying the entire field and risking pesticide overuse.

Robots can also support patients in physical therapy. Patients who had a stroke or spinal cord injury can use robots to practice hand grasping and assisted walking during rehabilitation.

Another form of communication, emotional intelligence perception, involves developing robots that adapt their behaviors based on social interactions with humans. In this approach, the robot detects a person’s emotions when collaborating on a task, assesses their satisfaction, then modifies and improves its execution based on this feedback.
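The detect-assess-adapt loop described here can be sketched in a few lines. The satisfaction scores, activity names and switching threshold below are invented for illustration; a real system would infer satisfaction from facial-expression and body-gesture recognition rather than receive it as a number.

```python
def adapt_activity(satisfaction_readings, activities, threshold=0.5):
    """Hedged sketch of 'emotional intelligence perception': the robot
    tracks an inferred satisfaction score (0..1) for the current
    activity and switches to the next alternative whenever the score
    drops below a threshold."""
    current = 0
    schedule = []
    for score in satisfaction_readings:
        if score < threshold and current + 1 < len(activities):
            current += 1                      # patient seems dissatisfied: switch
        schedule.append(activities[current])
    return schedule

# A dip in satisfaction at the third reading triggers a change of activity.
plan = adapt_activity([0.9, 0.8, 0.3, 0.7],
                      ["grip practice", "assisted walking", "stretching"])
print(plan)
```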

For example, if the robot detects that a physical therapy patient is dissatisfied with a specific rehabilitation activity, it could direct the patient to an alternate activity. Facial expression and body gesture recognition ability are important design considerations for this approach. Recent advances in machine learning can help robots decipher emotional body language and better interact with and perceive humans.

Robots in rehab

Questions like how to make robotic limbs feel more natural and capable of more complex functions like typing and playing musical instruments have yet to be answered.

I am an electrical engineer who studies how the brain controls and communicates with other parts of the body, and my lab investigates in particular how the brain and hand coordinate signals between each other. Our goal is to design technologies like prosthetic and wearable robotic exoskeleton devices that could help improve function for individuals with stroke, spinal cord and traumatic brain injuries.

One approach is through brain-computer interfaces, which use brain signals to communicate between robots and humans. By accessing an individual’s brain signals and providing targeted feedback, this technology can potentially improve recovery time in stroke rehabilitation. Brain-computer interfaces may also help restore some communication abilities and physical manipulation of the environment for patients with motor neuron disorders.

Brain-computer interfaces could allow people to control robotic arms by thought alone. Ramana Kumar Vinjamuri, CC BY-ND

The future of human-robot interaction

Effective integration of robots into human life requires balancing responsibility between people and robots, and designating clear roles for both in different environments.

As robots are increasingly working hand in hand with people, the ethical questions and challenges they pose cannot be ignored. Concerns surrounding privacy, bias and discrimination, security risks and robot morality need to be seriously investigated in order to create a more comfortable, safer and trustworthy world with robots for everyone. Scientists and engineers studying the “dark side” of human-robot interaction are developing guidelines to identify and prevent negative outcomes.

Human-robot interaction has the potential to affect every aspect of daily life. It is the collective responsibility of both the designers and the users to create a human-robot ecosystem that is safe and satisfactory for all.

The Conversation

Ramana Vinjamuri receives funding from the National Science Foundation.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Our future could be full of undying, self-repairing robots – here’s how

Robotic head, 3D illustration (frank60/Shutterstock)

By Jonathan Roberts (Professor in Robotics, Queensland University of Technology)

With generative artificial intelligence (AI) systems such as ChatGPT and StableDiffusion being the talk of the town right now, it might feel like we’ve taken a giant leap closer to a sci-fi reality where AIs are physical entities all around us.

Indeed, computer-based AI appears to be advancing at an unprecedented rate. But the rate of advancement in robotics – which we could think of as the potential physical embodiment of AI – is slow.

Could it be that future AI systems will need robotic “bodies” to interact with the world? If so, will nightmarish ideas like the self-repairing, shape-shifting T-1000 robot from the Terminator 2 movie come to fruition? And could a robot be created that could “live” forever?

Energy for ‘life’

Biological lifeforms like ourselves need energy to operate. We get ours via a combination of food, water, and oxygen. The majority of plants also need access to light to grow.

By the same token, an everlasting robot needs an ongoing energy supply. Currently, electrical power dominates energy supply in the world of robotics. Most robots are powered by the chemistry of batteries.

An alternative battery type has been proposed that uses nuclear waste and ultra-thin diamonds at its core. The inventors, a San Francisco startup called Nano Diamond Battery, claim a possible battery life of tens of thousands of years. Very small robots would be ideal users of such batteries.

But a more likely long-term solution for powering robots may involve different chemistry – and even biology. In 2021, scientists from the Berkeley Lab and UMass Amherst in the US demonstrated that tiny nanobots could get their energy from chemicals in the liquid they swim in.

The researchers are now working out how to scale up this idea to larger robots that can work on solid surfaces.

Repairing and copying oneself

Of course, an undying robot might still need occasional repairs.

Ideally, a robot would repair itself if possible. In 2019, a Japanese research group demonstrated a research robot called PR2 tightening its own screw using a screwdriver. This is like self-surgery! However, such a technique would only work if non-critical components needed repair.

Other research groups are exploring how soft robots can self-heal when damaged. A group in Belgium showed how a robot they developed recovered after being stabbed six times in one of its legs. It stopped for a few minutes until its skin healed itself, and then walked off.

Another unusual concept for repair is to use other things a robot might find in the environment to replace its broken part.

Last year, scientists reported how dead spiders can be used as robot grippers. This form of robotics is known as “necrobotics”. The idea is to use dead animals as ready-made mechanical devices and attach them to robots to become part of the robot.

The proof-of-concept in necrobotics involved taking a dead spider and ‘reanimating’ its hydraulic legs with air, creating a surprisingly strong gripper. Preston Innovation Laboratory/Rice University

A robot colony?

From all these recent developments, it’s quite clear that in principle, a single robot may be able to live forever. But there is a very long way to go.

Most of the proposed solutions to the energy, repair and replication problems have only been demonstrated in the lab, in very controlled conditions and generally at tiny scales.

The ultimate solution may be one of large colonies or swarms of tiny robots that share a common brain, or mind. After all, this is exactly how many species of insects have evolved.

The concept of the “mind” of an ant colony has been pondered for decades. Research published in 2019 showed ant colonies themselves have a form of memory that is not contained within any of the ants.

This idea aligns very well with one day having massive clusters of robots that could use this trick to replace individual robots when needed, but keep the cluster “alive” indefinitely.

Ant colonies can contain ‘memories’ that are distributed between many individual insects. frank60/Shutterstock

Ultimately, the scary robot scenarios outlined in countless science fiction books and movies are unlikely to suddenly develop without anyone noticing.

Engineering ultra-reliable hardware is extremely difficult, especially with complex systems. There are currently no engineered products that can last forever, or even for hundreds of years. If we do ever invent an undying robot, we'll also have the chance to build in some safeguards.

The Conversation

Jonathan Roberts is Director of the Australian Cobotics Centre, the Technical Director of the Advanced Robotics for Manufacturing (ARM) Hub, and is a Chief Investigator at the QUT Centre for Robotics. He receives funding from the Australian Research Council. He was the co-founder of the UAV Challenge – an international drone competition.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Five ways drones will change the way buildings are designed

By Paul Cureton (Senior Lecturer in Design (People, Places, Products), Lancaster University) and Ole B. Jensen (Professor of Urban Theory and Urban Design, Aalborg University)

Drones are already shaping the face of our cities – used for building planning, heritage, construction and safety enhancement. But, as studies by the UK’s Department of Transport have found, swathes of the public have a limited understanding of how drones might be practically applied.

It’s crucial that the ways drones are affecting our future are understood by the majority of people. As experts in design futures and mobility, we hope this short overview of five ways drones will affect building design offers some knowledge of how things are likely to change.

Infographic showcasing other ways drones will influence future building design. Nuri Kwon, Drone Near-Futures, Imagination Lancaster, Author provided

1. Creating digital models of buildings

Drones can take photographs of buildings, which are then used to build 3D models of buildings in computer-aided design software.

These models have accuracy to within a centimetre, and can be combined with other data, such as 3D scans of interiors using drones or laser scanners, in order to provide a completely accurate picture of the structure for surveyors, architects and clients.

Using these digital models saves time and money in the construction process by providing a single source that architects and planners can view.

2. Heritage simulations

Studio Drift are a multidisciplinary team of Dutch artists who have used drones to construct images through theatrical outdoor drone performances at damaged national heritage sites such as the Notre Dame in Paris, Colosseum in Rome and Gaudí’s Sagrada Familia in Barcelona.

Drones could be used in the near-future in a similar way to help planners to visualise the final impact of restoration or construction work on a damaged or partially finished building.

3. Drone delivery

The arrival of drone delivery services will see significant changes to buildings in our communities, which will need to provide for docking stations at community hubs, shops and pick-up points.

Wingcopter are one of many companies trialling delivery drones. Akash 1997, CC BY-SA

There are likely to be landing pads installed on the roofs of residential homes and dedicated drone-delivery hubs. Research has shown that drones can help with the last mile of any delivery in the UK, Germany, France and Italy.

Architects of the future will need to add these facilities into their building designs.

4. Drones mounted with 3D printers

Two research projects have been experimenting with drone-mounted 3D printers: one from architecture, design, planning and consulting firm Gensler, and another from a consortium led by Imperial College London and the Swiss materials science institute Empa (also comprising University College London, the University of Bath, the University of Pennsylvania, Queen Mary University of London and the Technical University of Munich). These drones would work at speed to construct emergency shelters or repair buildings at significant heights or in difficult-to-reach locations, without the need for scaffolding, providing safety benefits.

Gensler have already used drones for wind turbine repair and researchers at Imperial College are exploring bee-like drone swarms that work together to construct blueprints. The drones coordinate with each other to follow a pre-defined path in a project called Aerial Additive Manufacturing. For now, the work is merely a demonstration of the technology, and not working on a specific building.

In the future, drones with mounted 3D printers could help create highly customised buildings at speed, but how this could change the workforce and the potential consequences for manual labour jobs is yet to be understood.

5. Agile surveillance

Drones offer new possibilities for surveillance away from the static, fixed nature of current systems such as closed circuit television.

Drones equipped with cameras and sensors, backed by complex software for biometric analysis and "face recognition", will probably be the next level of surveillance applied by governments and police forces, as well as providing security monitoring for homeowners. Such drones would likely carry monitoring devices that communicate with security or police forces.

Drones used in this way could help our buildings become more responsive to intrusions, and adaptable to changing climates. Drones may move parts of the building such as shade-creating devices, following the path of the sun to stop buildings overheating, for example.

The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How shoring up drones with artificial intelligence helps surf lifesavers spot sharks at the beach

A close encounter between a white shark and a surfer. Author provided.

By Cormac Purcell (Adjunct Senior Lecturer, UNSW Sydney) and Paul Butcher (Adjunct Professor, Southern Cross University)

Australian surf lifesavers are increasingly using drones to spot sharks at the beach before they get too close to swimmers. But just how reliable are they?

Discerning whether that dark splodge in the water is a shark or just, say, seaweed isn’t always straightforward and, in reasonable conditions, drone pilots generally make the right call only 60% of the time. While this has implications for public safety, it can also lead to unnecessary beach closures and public alarm.

Engineers are trying to boost the accuracy of these shark-spotting drones with artificial intelligence (AI). While such systems show great promise in the lab, AI is notoriously difficult to get right in the real world, so it has so far remained out of reach for surf lifesavers. And importantly, overconfidence in such software can have serious consequences.

With these challenges in mind, our team set out to build the most robust shark detector possible and test it in real-world conditions. By using masses of data, we created a highly reliable mobile app for surf lifesavers that could not only improve beach safety, but help monitor the health of Australian coastlines.

A white shark being tracked by a drone. Author provided.

Detecting dangerous sharks with drones

The New South Wales government is investing more than A$85 million in shark mitigation measures over the next four years. Of all the approaches on offer, a 2020 survey showed drone-based shark surveillance is the public's preferred method to protect beach-goers.

The state government has been trialling drones as shark-spotting tools since 2016, and with Surf Life Saving NSW since 2018. Trained surf lifesaving pilots fly the drone over the ocean at a height of 60 metres, watching the live video feed on portable screens for the shape of sharks swimming under the surface.

Identifying sharks by carefully analysing the video footage in good conditions seems easy. But water clarity, sea glitter (sea-surface reflection), animal depth, pilot experience and fatigue all reduce the reliability of real-time detection to a predicted average of 60%. This reliability falls further when conditions are turbid.

Pilots also need to confidently identify the species of shark and tell the difference between dangerous and non-dangerous animals, such as rays, which are often misidentified.

Identifying shark species from the air.

AI-driven computer vision has been touted as an ideal tool to virtually “tag” sharks and other animals in the video footage streamed from the drones, and to help identify whether a species nearing the beach is cause for concern.

AI to the rescue?

Early results from previous AI-enhanced shark-spotting systems have suggested the problem has been solved, as these systems report detection accuracies of over 90%.

But scaling these systems to make a real-world difference across NSW beaches has been challenging.

AI systems are trained to locate and identify species using large collections of example images and perform remarkably well when processing familiar scenes in the real world.

However, problems quickly arise when they encounter conditions not well represented in the training data. As any regular ocean swimmer can tell you, every beach is different – the lighting, weather and water conditions can change dramatically across days and seasons.

Animals can also frequently change their position in the water column, which means their visible characteristics (such as their outline) change, too.

All this variation makes it crucial for training data to cover the full gamut of conditions, or that AI systems be flexible enough to track the changes over time. Such challenges have been recognised for years, giving rise to the new discipline of “machine learning operations”.

Essentially, machine learning operations explicitly recognises that AI-driven software requires regular updates to maintain its effectiveness.
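
As a rough illustration of that idea, a deployed detector can track its own recent accuracy (against pilot corrections, say) and flag when performance has drifted enough to need retraining. The window size and accuracy threshold below are illustrative choices, not values from the study.

```python
# Sketch of the "machine learning operations" idea: monitor a deployed
# model's recent accuracy and flag when it needs retraining.

from collections import deque

class DriftMonitor:
    def __init__(self, window=100, min_accuracy=0.75):
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = wrong
        self.min_accuracy = min_accuracy

    def record(self, correct):
        self.outcomes.append(1 if correct else 0)

    def needs_retraining(self):
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough evidence yet
        return sum(self.outcomes) / len(self.outcomes) < self.min_accuracy

monitor = DriftMonitor(window=10, min_accuracy=0.75)
for correct in [True] * 9 + [False]:   # 90% accurate: fine
    monitor.record(correct)
print(monitor.needs_retraining())      # -> False
for correct in [False] * 5:            # conditions change, accuracy drops
    monitor.record(correct)
print(monitor.needs_retraining())      # -> True
```

In practice the flag would trigger collecting fresh footage from the drifting beach and fine-tuning the model on it.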

Examples of the drone footage used in our huge dataset.

Building a better shark spotter

We aimed to overcome these challenges with a new shark detector mobile app. We gathered a huge dataset of drone footage, and shark experts then spent weeks inspecting the videos, carefully tracking and labelling sharks and other marine fauna in the hours of footage.

Using this new dataset, we trained a machine learning model to recognise ten types of marine life, including different species of dangerous sharks such as great white and whaler sharks.

And then we embedded this model into a new mobile app that can highlight sharks in live drone footage and predict the species. We worked closely with the NSW government and Surf Lifesaving NSW to trial this app on five beaches during summer 2020.

A drone in Surf Life Saving NSW livery preparing to go on patrol. Author provided.

Our AI shark detector did quite well. It identified dangerous sharks on a frame-by-frame basis 80% of the time, in realistic conditions.

We deliberately went out of our way to make our tests difficult by challenging the AI to run on unseen data taken at different times of year, or from different-looking beaches. These critical tests on “external data” are often omitted in AI research.

A more detailed analysis turned up common-sense limitations: white, whaler and bull sharks are difficult to tell apart because they look similar, while small animals (such as turtles and rays) are harder to detect in general.

Spurious detections (like mistaking seaweed as a shark) are a real concern for beach managers, but we found the AI could easily be “tuned” to eliminate these by showing it empty ocean scenes of each beach.
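
One simple way to implement that kind of per-beach tuning is to calibrate the alert threshold against the detector's confidence scores on frames known to contain only empty ocean. The scores and margin below are invented for illustration; the team's actual method is not specified beyond showing the AI empty scenes of each beach, and may instead have retrained on them as hard negatives.

```python
# Sketch of per-beach tuning: calibrate the detector's alert threshold
# using confidence scores from frames known to contain only empty ocean.
# All numbers are synthetic, for illustration only.

def calibrate_threshold(empty_scene_scores, margin=0.05):
    """Set the alert threshold just above the noisiest empty frame."""
    return max(empty_scene_scores) + margin

def is_shark_alert(score, threshold):
    return score > threshold

# Seaweed and glare at this beach push "empty" scores as high as 0.62.
empty_scores = [0.10, 0.35, 0.62, 0.48]
threshold = calibrate_threshold(empty_scores)  # about 0.67

print(is_shark_alert(0.60, threshold))  # seaweed-like blob -> False
print(is_shark_alert(0.91, threshold))  # confident shark   -> True
```

The trade-off is the usual one: raising the threshold suppresses false alarms at the cost of missing faint, deep, or distant animals.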

Example of where the AI gets it wrong: seaweed identified as sharks. Author provided.

The future of AI for shark spotting

In the short term, AI is now mature enough to be deployed in drone-based shark-spotting operations across Australian beaches. But, unlike regular software, it will need to be monitored and updated frequently to maintain its high reliability of detecting dangerous sharks.

An added bonus is that such a machine learning system for spotting sharks would also continually collect valuable ecological data on the health of our coastline and marine fauna.

In the longer term, getting the AI to look at how sharks swim and using new AI technology that learns on-the-fly will make AI shark detection even more reliable and easy to deploy.

The NSW government has new drone trials for the coming summer, testing the usefulness of efficient long-range flights that can cover more beaches.

AI can play a key role in making these flights more effective, enabling greater reliability in drone surveillance, and may eventually lead to fully-automated shark-spotting operations and trusted automatic alerts.

The authors acknowledge the substantial contributions from Dr Andrew Colefax and Dr Andrew Walsh at Sci-eye.

The Conversation

This article appeared in The Conversation.

A new type of material called a mechanical neural network can learn and change its physical properties to create adaptable, strong structures

This connection of springs is a new type of material that can change shape and learn new properties. Jonathan Hopkins, CC BY-ND

By Ryan H. Lee (PhD Student in Mechanical and Aerospace Engineering, University of California, Los Angeles)

A new type of material can learn and improve its ability to deal with unexpected forces thanks to a unique lattice structure with connections of variable stiffness, as described in a new paper by my colleagues and me.

Architected materials – like this 3D lattice – get their properties not from what they are made out of, but from their structure. Ryan Lee, CC BY-ND

The new material is a type of architected material, which gets its properties mainly from the geometry and specific traits of its design rather than what it is made out of. Take hook-and-loop fabric closures like Velcro, for example. It doesn’t matter whether it is made from cotton, plastic or any other substance. As long as one side is a fabric with stiff hooks and the other side has fluffy loops, the material will have the sticky properties of Velcro.

My colleagues and I based our new material’s architecture on that of an artificial neural network – layers of interconnected nodes that can learn to do tasks by changing how much importance, or weight, they place on each connection. We hypothesized that a mechanical lattice with physical nodes could be trained to take on certain mechanical properties by adjusting each connection’s rigidity.

To find out if a mechanical lattice would be able to adopt and maintain new properties – like taking on a new shape or changing directional strength – we started off by building a computer model. We then selected a desired shape for the material as well as input forces and had a computer algorithm tune the tensions of the connections so that the input forces would produce the desired shape. We did this training on 200 different lattice structures and found that a triangular lattice was best at achieving all of the shapes we tested.
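
The training loop described above can be illustrated with a toy one-dimensional version: adjust spring stiffnesses until a given input force produces a target displacement. The real work optimised hundreds of connections in a 2D lattice with far more sophisticated methods; this sketch uses just two springs in series and naive coordinate descent.

```python
# Toy version of training a mechanical lattice: tune connection
# stiffnesses so an input force produces a desired displacement.
# Two springs in series stand in for the full 2D triangular lattice.

def tip_displacement(force, stiffnesses):
    """Springs in series: total compliance is the sum of 1/k terms."""
    return force * sum(1.0 / k for k in stiffnesses)

def train(force, target, stiffnesses, step=0.01, iters=5000):
    """Coordinate descent: nudge each stiffness if it reduces the error."""
    k = list(stiffnesses)
    for _ in range(iters):
        for i in range(len(k)):
            err = abs(tip_displacement(force, k) - target)
            for delta in (step, -step):
                trial = list(k)
                trial[i] = max(0.1, trial[i] + delta)  # keep springs physical
                if abs(tip_displacement(force, trial) - target) < err:
                    k = trial
                    break
    return k

force, target = 2.0, 1.5            # want 1.5 units of displacement
k = train(force, target, [1.0, 1.0])
print(round(tip_displacement(force, k), 2))  # close to 1.5
```

Once tuned, the stiffness values themselves encode the trained behaviour, which is the sense in which the material "remembers" its training.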

Once the many connections are tuned to achieve a set of tasks, the material will continue to react in the desired way. The training is – in a sense – remembered in the structure of the material itself.

We then built a physical prototype lattice with adjustable electromechanical springs arranged in a triangular lattice. The prototype is made of 6-inch connections and is about 2 feet long by 1½ feet wide. And it worked. When the lattice and algorithm worked together, the material was able to learn and change shape in particular ways when subjected to different forces. We call this new material a mechanical neural network.

The prototype is 2D, but a 3D version of this material could have many uses. Jonathan Hopkins, CC BY-ND

Why it matters

Besides some living tissues, very few materials can learn to be better at dealing with unanticipated loads. Imagine a plane wing that suddenly catches a gust of wind and is forced in an unanticipated direction. The wing can’t change its design to be stronger in that direction.

The prototype lattice material we designed can adapt to changing or unknown conditions. In a wing, for example, these changes could be the accumulation of internal damage, changes in how the wing is attached to a craft or fluctuating external loads. Every time a wing made out of a mechanical neural network experienced one of these scenarios, it could strengthen and soften its connections to maintain desired attributes like directional strength. Over time, through successive adjustments made by the algorithm, the wing adopts and maintains new properties, adding each behavior to the rest as a sort of muscle memory.

This type of material could have far reaching applications for the longevity and efficiency of built structures. Not only could a wing made of a mechanical neural network material be stronger, it could also be trained to morph into shapes that maximize fuel efficiency in response to changing conditions around it.

What’s still not known

So far, our team has worked only with 2D lattices. But using computer modeling, we predict that 3D lattices would have a much larger capacity for learning and adaptation. This increase is due to the fact that a 3D structure could have tens of times more connections, or springs, that don’t intersect with one another. However, the mechanisms we used in our first model are far too complex to support in a large 3D structure.

What’s next

The material my colleagues and I created is a proof of concept and shows the potential of mechanical neural networks. But to bring this idea into the real world will require figuring out how to make the individual pieces smaller and with precise properties of flex and tension.

We hope new research in the manufacturing of materials at the micron scale, as well as work on new materials with adjustable stiffness, will lead to advances that make powerful smart mechanical neural networks with micron-scale elements and dense 3D connections a ubiquitous reality in the near future.

The Conversation

Ryan Lee has received funding from the Air Force Office of Scientific Research.

This article appeared in The Conversation.

‘Killer robots’ will be nothing like the movies show – here’s where the real threats lie

By Toby Walsh (Professor of AI at UNSW, Research Group Leader, UNSW Sydney)

You might suppose Hollywood is good at predicting the future. Indeed, Robert Wallace, head of the CIA’s Office of Technical Service and the US equivalent of MI6’s fictional Q, has recounted how Russian spies would watch the latest Bond movie to see what technologies might be coming their way.

Hollywood’s continuing obsession with killer robots might therefore be of significant concern. The newest such movie is Apple TV’s forthcoming sex robot courtroom drama Dolly.

I never thought I’d write the phrase “sex robot courtroom drama”, but there you go. Based on a 2011 short story by Elizabeth Bear, the plot concerns a billionaire killed by a sex robot that then asks for a lawyer to defend its murderous actions.

The real killer robots

Dolly is the latest in a long line of movies featuring killer robots – including HAL in Kubrick’s 2001: A Space Odyssey, and Arnold Schwarzenegger’s T-800 robot in the Terminator series. Indeed, conflict between robots and humans was at the centre of the very first feature-length science fiction film, Fritz Lang’s 1927 classic Metropolis.

But almost all these movies get it wrong. Killer robots won’t be sentient humanoid robots with evil intent. This might make for a dramatic storyline and a box office success, but such technologies are many decades, if not centuries, away.

Indeed, contrary to recent fears, robots may never be sentient.

It’s much simpler technologies we should be worrying about. And these technologies are starting to turn up on the battlefield today in places like Ukraine and Nagorno-Karabakh.

A war transformed

Movies that feature much simpler armed drones, like Angel Has Fallen (2019) and Eye in the Sky (2015), paint perhaps the most accurate picture of the real future of killer robots.

On the nightly TV news, we see how modern warfare is being transformed by ever-more autonomous drones, tanks, ships and submarines. These robots are only a little more sophisticated than those you can buy in your local hobby store.

And increasingly, the decisions to identify, track and destroy targets are being handed over to their algorithms.

This is taking the world to a dangerous place, with a host of moral, legal and technical problems. Such weapons will, for example, further upset our troubled geopolitical situation. We already see Turkey emerging as a major drone power.

And such weapons cross a moral red line into a terrible and terrifying world where unaccountable machines decide who lives and who dies.

Robot manufacturers are, however, starting to push back against this future.

A pledge not to weaponise

Last week, six leading robotics companies pledged they would never weaponise their robot platforms. The companies include Boston Dynamics, which makes the Atlas humanoid robot, which can perform an impressive backflip, and the Spot robot dog, which looks like it’s straight out of the Black Mirror TV series.

This isn’t the first time robotics companies have spoken out about this worrying future. Five years ago, I organised an open letter signed by Elon Musk and more than 100 founders of other AI and robot companies calling for the United Nations to regulate the use of killer robots. The letter even knocked the Pope into third place for a global disarmament award.

However, the fact that leading robotics companies are pledging not to weaponise their robot platforms is more virtue signalling than anything else.

We have, for example, already seen third parties mount guns on clones of Boston Dynamics’ Spot robot dog. And such modified robots have proven effective in action. Iran’s top nuclear scientist was assassinated by Israeli agents using a robot machine gun in 2020.

Collective action to safeguard our future

The only way we can safeguard against this terrifying future is if nations collectively take action, as they have with chemical weapons, biological weapons and even nuclear weapons.

Such regulation won’t be perfect, just as the regulation of chemical weapons isn’t perfect. But it will prevent arms companies from openly selling such weapons and thus their proliferation.

More important than a pledge from robotics companies, then, is the fact that the UN Human Rights Council has recently and unanimously decided to explore the human rights implications of new and emerging technologies like autonomous weapons.

Several dozen nations have already called for the UN to regulate killer robots. The European Parliament, the African Union, the UN Secretary General, Nobel peace laureates, church leaders, politicians and thousands of AI and robotics researchers like myself have all called for regulation.

Australia has not, so far, supported these calls. But if you want to avoid this Hollywood future, you may want to take it up with your political representative next time you see them.

The Conversation

Toby Walsh does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

This article appeared in The Conversation.

Tesla’s Optimus robot isn’t very impressive – but it may be a sign of better things to come

By Wafa Johal (Senior Lecturer, Computing & Information Systems, The University of Melbourne)

In August 2021, Tesla CEO Elon Musk announced the electric car manufacturer was planning to get into the robot business. In a presentation accompanied by a human dressed as a robot, Musk said work was beginning on a “friendly” humanoid robot to “navigate through a world built for humans and eliminate dangerous, repetitive and boring tasks”.

Musk has now unveiled a prototype of the robot, called Optimus, which he hopes to mass-produce and sell for less than US$20,000 (A$31,000).

At the unveiling, the robot walked on a flat surface and waved to the crowd, and was shown doing simple manual tasks such as carrying and lifting in a video. As a robotics researcher, I didn’t find the demonstration very impressive – but I am hopeful it will lead to bigger and better things.

Why would we want humanoid robots?

Most of the robots used today don’t look anything like people. Instead, they are machines designed to carry out a specific purpose, like the industrial robots used in factories or the robot vacuum cleaner you might have in your house.

So why would you want one shaped like a human? The basic answer is they would be able to operate in environments designed for humans.

Unlike industrial robots, humanoid robots might be able to move around and interact with humans. Unlike robot vacuum cleaners, they might be able to go up stairs or traverse uneven terrain.

And as well as practical considerations, the idea of “artificial humans” has long had an appeal for inventors and science-fiction writers!

Room for improvement

Based on what we saw in the Tesla presentation, Optimus is a long way from being able to operate with humans or in human environments. The capabilities of the robot showcased fall far short of the state of the art in humanoid robotics.

The Atlas robot made by Boston Dynamics, for example, can walk outdoors and carry out flips and other acrobatic manoeuvres.

And while Atlas is an experimental system, even the commercially available Digit from Agility Robotics is much more capable than what we have seen from Optimus. Digit can walk on various terrains, avoid obstacles, rebalance itself when bumped, and pick up and put down objects.

Bipedal walking (on two feet) alone is no longer a great achievement for a robot. Indeed, with a bit of knowledge and determination you can build such a robot yourself using open source software.

There was also no sign in the Optimus presentation of how it will interact with humans. This will be essential for any robot that works in human environments: not only for collaborating with humans, but also for basic safety.

It can be very tricky for a robot to accomplish seemingly simple tasks such as handing an object to a human, but this is something we would want a domestic humanoid robot to be able to do.

Sceptical consumers

Others have tried to build and sell humanoid robots in the past, such as Honda’s ASIMO and SoftBank’s Pepper. But so far they have never really taken off.

Amazon’s recently released Astro robot may make inroads here, but it may also go the way of its predecessors.

Consumers seem to be sceptical of robots. To date, the only widely adopted household robots are the Roomba-like vacuum cleaners, which have been available since 2002.

To succeed, a humanoid robot will need to be able to do something humans can't, to justify the price tag. At this stage the use case for Optimus is still not very clear.

Hope for the future

Despite these criticisms, I am hopeful about the Optimus project. It is still in the very early stages, and the presentation seemed to be aimed at recruiting new staff as much as anything else.

Tesla certainly has plenty of resources to throw at the problem. We know it has the capacity to mass produce the robots if development gets that far.

Musk’s knack for gaining attention may also be helpful – not only for attracting talent to the project, but also to drum up interest among consumers.

Robotics is a challenging field, and it’s difficult to move fast. I hope Optimus succeeds, both to make something cool we can use – and to push the field of robotics forward.

The Conversation

Wafa Johal receives funding from the Australian Research Council.

This article appeared in The Conversation.

Why household robot servants are a lot harder to build than robotic vacuums and automated warehouse workers

Who wouldn’t want a robot to handle all the household drudgery? Skathi/iStock via Getty Images

By Ayonga Hereid (Assistant Professor of Mechanical and Aerospace Engineering, The Ohio State University)

With recent advances in artificial intelligence and robotics technology, there is growing interest in developing and marketing household robots capable of handling a variety of domestic chores.

Tesla is building a humanoid robot, which, according to CEO Elon Musk, could be used for cooking meals and helping elderly people. Amazon recently acquired iRobot, a prominent robotic vacuum manufacturer, and has been investing heavily in the technology through the Amazon Robotics program to expand robotics technology to the consumer market. In May 2022, Dyson, a company renowned for its power vacuum cleaners, announced that it plans to build the U.K.’s largest robotics center devoted to developing household robots that carry out daily domestic tasks in residential spaces.

Despite the growing interest, would-be customers may have to wait awhile for those robots to come on the market. While devices such as smart thermostats and security systems are widely used in homes today, the commercial use of household robots is still in its infancy.

As a robotics researcher, I know firsthand how household robots are considerably more difficult to build than smart digital devices or industrial robots.

Robots that can handle a variety of domestic chores are an age-old staple of science fiction.

Handling objects

One major difference between digital and robotic devices is that household robots need to manipulate objects through physical contact to carry out their tasks. They have to carry the plates, move the chairs and pick up dirty laundry and place it in the washer. These operations require the robot to be able to handle fragile, soft and sometimes heavy objects with irregular shapes.

State-of-the-art AI and machine learning algorithms perform well in simulated environments. But contact with objects in the real world often trips them up. This happens because physical contact is often difficult to model and even harder to control. While a human can easily perform these tasks, significant technical hurdles stand between household robots and human-level ability to handle objects.

Robots have difficulty in two aspects of manipulating objects: control and sensing. Many pick-and-place robot manipulators like those on assembly lines are equipped with a simple gripper or specialized tools dedicated only to certain tasks like grasping and carrying a particular part. They often struggle to manipulate objects with irregular shapes or elastic materials, especially because they lack the efficient force, or haptic, feedback humans are naturally endowed with. Building a general-purpose robot hand with flexible fingers is still technically challenging and expensive.

It is also worth mentioning that traditional robot manipulators require a stable platform to operate accurately, but the accuracy drops considerably when using them with platforms that move around, particularly on a variety of surfaces. Coordinating locomotion and manipulation in a mobile robot is an open problem in the robotics community that needs to be addressed before broadly capable household robots can make it onto the market.

A sophisticated robotic kitchen is already on the market, but it operates in a highly structured environment, meaning all of the objects it interacts with – cookware, food containers, appliances – are where it expects them to be, and there are no pesky humans to get in the way.

They like structure

In an assembly line or a warehouse, the environment and sequence of tasks are strictly organized. This allows engineers to preprogram the robot’s movements or use simple methods like QR codes to locate objects or target locations. However, household items are often disorganized and placed randomly.

Home robots must deal with many uncertainties in their workspaces. The robot must first locate and identify the target item among many others. Quite often it must also clear or avoid other obstacles in the workspace to reach the item and perform the given task. This requires the robot to have an excellent perception system, efficient navigation skills, and powerful and accurate manipulation capability.

For example, users of robot vacuums know they must remove all small furniture and other obstacles such as cables from the floor, because even the best robot vacuum cannot clear them by itself. Even more challenging, the robot has to operate in the presence of moving obstacles when people and pets walk within close range.

Keeping it simple

While they appear straightforward for humans, many household tasks are too complex for robots. Industrial robots are excellent for repetitive operations in which the robot motion can be preprogrammed. But household tasks are often unique to the situation and could be full of surprises that require the robot to constantly make decisions and change its route in order to perform the tasks.

The vision for household humanoid robots like the proposed Tesla Bot is of an artificial servant capable of handling any mundane task. Courtesy Tesla

Think about cooking or cleaning dishes. In the course of a few minutes of cooking, you might grasp a sauté pan, a spatula, a stove knob, a refrigerator door handle, an egg and a bottle of cooking oil. To wash a pan, you typically hold and move it with one hand while scrubbing with the other, and ensure that all cooked-on food residue is removed and then all soap is rinsed off.

There has been significant development in recent years using machine learning to train robots to make intelligent decisions when picking and placing different objects, meaning grasping and moving objects from one spot to another. However, to be able to train robots to master all different types of kitchen tools and household appliances would be another level of difficulty even for the best learning algorithms.

Not to mention that people’s homes often have stairs, narrow passageways and high shelves. Those hard-to-reach spaces limit the use of today’s mobile robots, which tend to use wheels or four legs. Humanoid robots, which would more closely match the environments humans build and organize for themselves, have yet to be reliably used outside of lab settings.

A solution to task complexity is to build special-purpose robots, such as robot vacuum cleaners or kitchen robots. Many different types of such devices are likely to be developed in the near future. However, I believe that general-purpose home robots are still a long way off.

The Conversation

Ayonga Hereid does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

This article appeared in The Conversation.

UN fails to agree on ‘killer robot’ ban as nations pour billions into autonomous weapons research

Humanitarian groups have been calling for a ban on autonomous weapons. Wolfgang Kumm/picture alliance via Getty Images

By James Dawes

Autonomous weapon systems – commonly known as killer robots – may have killed human beings for the first time ever last year, according to a recent United Nations Security Council report on the Libyan civil war. History could well identify this as the starting point of the next major arms race, one that has the potential to be humanity’s final one.

The United Nations Convention on Certain Conventional Weapons debated the question of banning autonomous weapons at its once-every-five-years review meeting in Geneva Dec. 13-17, 2021, but didn’t reach consensus on a ban. Established in 1983, the convention has been updated regularly to restrict some of the world’s cruelest conventional weapons, including land mines, booby traps and incendiary weapons.

Autonomous weapon systems are robots with lethal weapons that can operate independently, selecting and attacking targets without a human weighing in on those decisions. Militaries around the world are investing heavily in autonomous weapons research and development. The U.S. alone budgeted US$18 billion for autonomous weapons between 2016 and 2020.

Meanwhile, human rights and humanitarian organizations are racing to establish regulations and prohibitions on such weapons development. Without such checks, foreign policy experts warn that disruptive autonomous weapons technologies will dangerously destabilize current nuclear strategies, both because they could radically change perceptions of strategic dominance, increasing the risk of preemptive attacks, and because they could be combined with chemical, biological, radiological and nuclear weapons themselves.

As a specialist in human rights with a focus on the weaponization of artificial intelligence, I find that autonomous weapons make the unsteady balances and fragmented safeguards of the nuclear world – for example, the U.S. president’s minimally constrained authority to launch a strike – more unsteady and more fragmented. Given the pace of research and development in autonomous weapons, the U.N. meeting might have been the last chance to head off an arms race.

Lethal errors and black boxes

I see four primary dangers with autonomous weapons. The first is the problem of misidentification. When selecting a target, will autonomous weapons be able to distinguish between hostile soldiers and 12-year-olds playing with toy guns? Between civilians fleeing a conflict site and insurgents making a tactical retreat?

Killer robots, like the drones in the 2017 short film ‘Slaughterbots,’ have long been a major subgenre of science fiction. (Warning: graphic depictions of violence.)

The problem here is not that machines will make such errors and humans won’t. It’s that the difference between human error and algorithmic error is like the difference between mailing a letter and tweeting. The scale, scope and speed of killer robot systems – ruled by one targeting algorithm, deployed across an entire continent – could make misidentifications by individual humans like a recent U.S. drone strike in Afghanistan seem like mere rounding errors by comparison.

Autonomous weapons expert Paul Scharre uses the metaphor of the runaway gun to explain the difference. A runaway gun is a defective machine gun that continues to fire after a trigger is released. The gun continues to fire until ammunition is depleted because, so to speak, the gun does not know it is making an error. Runaway guns are extremely dangerous, but fortunately they have human operators who can break the ammunition link or try to point the weapon in a safe direction. Autonomous weapons, by definition, have no such safeguard.

Importantly, weaponized AI need not even be defective to produce the runaway gun effect. As multiple studies on algorithmic errors across industries have shown, the very best algorithms – operating as designed – can generate internally correct outcomes that nonetheless spread terrible errors rapidly across populations.

For example, a neural net designed for use in Pittsburgh hospitals identified asthma as a risk-reducer in pneumonia cases; image recognition software used by Google identified Black people as gorillas; and a machine-learning tool used by Amazon to rank job candidates systematically assigned negative scores to women.

The problem is not just that when AI systems err, they err in bulk. It is that when they err, their makers often don’t know why they did and, therefore, how to correct them. The black box problem of AI makes it almost impossible to imagine morally responsible development of autonomous weapons systems.

The proliferation problems

The next two dangers are the problems of low-end and high-end proliferation. Let’s start with the low end. The militaries developing autonomous weapons now are proceeding on the assumption that they will be able to contain and control the use of autonomous weapons. But if the history of weapons technology has taught the world anything, it’s this: Weapons spread.

Market pressures could result in the creation and widespread sale of what can be thought of as the autonomous weapon equivalent of the Kalashnikov assault rifle: killer robots that are cheap, effective and almost impossible to contain as they circulate around the globe. “Kalashnikov” autonomous weapons could get into the hands of people outside of government control, including international and domestic terrorists.

The Kargu-2, made by a Turkish defense contractor, is a cross between a quadcopter drone and a bomb. It has artificial intelligence for finding and tracking targets, and might have been used autonomously in the Libyan civil war to attack people. Ministry of Defense of Ukraine, CC BY

High-end proliferation is just as bad, however. Nations could compete to develop increasingly devastating versions of autonomous weapons, including ones capable of mounting chemical, biological, radiological and nuclear arms. The moral dangers of escalating weapon lethality would be amplified by escalating weapon use.

High-end autonomous weapons are likely to lead to more frequent wars because they will decrease two of the primary forces that have historically prevented and shortened wars: concern for civilians abroad and concern for one’s own soldiers. The weapons are likely to be equipped with expensive ethical governors designed to minimize collateral damage, using what U.N. Special Rapporteur Agnes Callamard has called the “myth of a surgical strike” to quell moral protests. Autonomous weapons will also reduce both the need for and risk to one’s own soldiers, dramatically altering the cost-benefit analysis that nations undergo while launching and maintaining wars.

Asymmetric wars – that is, wars waged on the soil of nations that lack competing technology – are likely to become more common. Think about the global instability caused by Soviet and U.S. military interventions during the Cold War, from the first proxy war to the blowback experienced around the world today. Multiply that by every country currently aiming for high-end autonomous weapons.

Undermining the laws of war

Finally, autonomous weapons will undermine humanity’s final stopgap against war crimes and atrocities: the international laws of war. These laws, codified in treaties reaching as far back as the 1864 Geneva Convention, are the international thin blue line separating war with honor from massacre. They are premised on the idea that people can be held accountable for their actions even during wartime, that the right to kill other soldiers during combat does not give the right to murder civilians. A prominent example of someone held to account is Slobodan Milosevic, former president of the Federal Republic of Yugoslavia, who was indicted on charges of crimes against humanity and war crimes by the U.N.’s International Criminal Tribunal for the Former Yugoslavia.

But how can autonomous weapons be held accountable? Who is to blame for a robot that commits war crimes? Who would be put on trial? The weapon? The soldier? The soldier’s commanders? The corporation that made the weapon? Nongovernmental organizations and experts in international law worry that autonomous weapons will lead to a serious accountability gap.

To hold a soldier criminally responsible for deploying an autonomous weapon that commits war crimes, prosecutors would need to prove both actus reus and mens rea, Latin terms describing a guilty act and a guilty mind. This would be difficult as a matter of law, and possibly unjust as a matter of morality, given that autonomous weapons are inherently unpredictable. I believe the distance separating the soldier from the independent decisions made by autonomous weapons in rapidly evolving environments is simply too great.

The legal and moral challenge is not made easier by shifting the blame up the chain of command or back to the site of production. In a world without regulations that mandate meaningful human control of autonomous weapons, there will be war crimes with no war criminals to hold accountable. The structure of the laws of war, along with their deterrent value, will be significantly weakened.

A new global arms race

Imagine a world in which militaries, insurgent groups and international and domestic terrorists can deploy theoretically unlimited lethal force at theoretically zero risk at times and places of their choosing, with no resulting legal accountability. It is a world where the sort of unavoidable algorithmic errors that plague even tech giants like Amazon and Google can now lead to the elimination of whole cities.

In my view, the world should not repeat the catastrophic mistakes of the nuclear arms race. It should not sleepwalk into dystopia.

The Conversation

This is an updated version of an article originally published on September 29, 2021.

James Dawes does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Robots can be companions, caregivers, collaborators — and social influencers

Robots and artificial intelligence are poised to increase their influence in our everyday lives. (Shutterstock)

By Shane Saunderson

In the mid-1990s, there was research going on at Stanford University that would change the way we think about computers. The Media Equation experiments were simple: participants were asked to interact with a computer that acted socially for a few minutes after which, they were asked to give feedback about the interaction.

Participants would provide this feedback either on the same computer (No. 1) they had just been working on or on another computer (No. 2) across the room. The study found that participants responding on computer No. 2 were far more critical of computer No. 1 than those responding on the same machine they’d worked on.

People responding on the first computer seemed to not want to hurt the computer’s feelings to its face, but had no problem talking about it behind its back. This phenomenon became known as the computers as social actors (CASA) paradigm because it showed that people are hardwired to respond socially to technology that presents itself as even vaguely social.

The CASA phenomenon continues to be explored, particularly as our technologies have become more social. As a researcher, lecturer and all-around lover of robotics, I observe this phenomenon in my work every time someone thanks a robot, assigns it a gender or tries to justify its behaviour using human, or anthropomorphic, rationales.

What I’ve witnessed during my research is that while few are under any delusions that robots are people, we tend to defer to them just like we would another person.

Social tendencies

While this may sound like the beginnings of a Black Mirror episode, this tendency is precisely what allows us to enjoy social interactions with robots and place them in caregiver, collaborator or companion roles.

These positive aspects of treating a robot like a person are precisely why roboticists design them as such — we like interacting with people. As these technologies become more human-like, they become more capable of influencing us. However, if we continue to follow the current path of robot and AI deployment, these technologies could emerge as far more dystopian than utopian.

The Sophia robot, manufactured by Hanson Robotics, has been on 60 Minutes, received honorary citizenship from Saudi Arabia, holds a title from the United Nations and has gone on a date with actor Will Smith. While Sophia undoubtedly highlights many technological advancements, few surpass Hanson’s achievements in marketing. If Sophia truly were a person, we would acknowledge its role as an influencer.

However, worse than robots or AI being sociopathic agents — goal-oriented without morality or human judgment — these technologies become tools of mass influence for whichever organization or individual controls them.

If you thought the Cambridge Analytica scandal was bad, imagine what Facebook’s algorithms of influence could do if they had an accompanying, human-like face. Or a thousand faces. Or a million. The true value of a persuasive technology is not in its cold, calculated efficiency, but its scale.

Seeing through intent

Recent scandals and exposures in the tech world have left many of us feeling helpless against these corporate giants. Fortunately, many of these issues can be solved through transparency.

There are fundamental questions that are important for social technologies to answer because we would expect the same answers when interacting with another person, albeit often implicitly. Who owns or sets the mandate of this technology? What are its objectives? What approaches can it use? What data can it access?

Since robots could soon leverage superhuman capabilities, enacting the will of an unseen owner without showing the verbal or non-verbal cues that shed light on their intent, we must demand that these types of questions be answered explicitly.

As a roboticist, I get asked the question, “When will robots take over the world?” so often that I’ve developed a stock answer: “As soon as I tell them to.” However, my joke is underpinned by an important lesson: don’t scapegoat machines for decisions made by humans.

I consider myself a robot sympathizer because I think robots get unfairly blamed for many human decisions and errors. It is important that we periodically remind ourselves that a robot is not your friend, your enemy or anything in between. A robot is a tool, wielded by a person (however far removed), and increasingly used to influence us.

The Conversation

Shane receives funding from the Natural Sciences and Engineering Research Council of Canada (NSERC). He is affiliated with the Human Futures Institute, a Toronto-based think tank.

This article appeared in The Conversation.

To swim like a tuna, robotic fish need to change how stiff their tails are in real time

Researchers have been building robotic fish for years, but the performance has never approached the efficiency of real fish. Daniel Quinn, CC BY-NC

By Daniel Quinn

Underwater vehicles haven’t changed much since the submarines of World War II. They’re rigid, fairly boxy and use propellers to move. And whether they are large manned vessels or small robots, most underwater vehicles have one cruising speed where they are most energy efficient.

Fish take a very different approach to moving through water: Their bodies and fins are very flexible, and this flexibility allows them to interact with water more efficiently than rigid machines. Researchers have been designing and building flexible fishlike robots for years, but they still trail far behind real fish in terms of efficiency.

What’s missing?

I am an engineer and study fluid dynamics. My labmates and I wondered if something in particular about the flexibility of fish tails allows fish to be so fast and efficient in the water. So, we created a model and built a robot to study the effect of stiffness on swimming efficiency. We found fish swim so efficiently over a wide range of speeds because they can change how rigid or flexible their tails are in real time.

A sketch of a human–powered helicopter with a large spiral propeller on top.
Leonardo Da Vinci designed a propeller-driven helicopter in 1481.
Leonardo Da Vinci/Wikimedia Commons

Why are people still using propellers?

Fluid dynamics applies to both liquids and gasses. Humans have been using rotating rigid objects to move vehicles for hundreds of years – Leonardo Da Vinci incorporated the concept into his helicopter designs, and the first propeller-driven boats were built in the 1830s. Propellers are easy to make, and they work just fine at their designed cruise speed.

It has only been in the past couple of decades that advances in soft robotics have made actively controlled flexible components a reality. Now, marine roboticists are turning to flexible fish and their amazing swimming abilities for inspiration.

When engineers like me talk about flexibility in a swimming robot, we are usually referring to how stiff the tail of the fish is. The tail is the entire rear half of a fish’s body that moves back and forth when it swims.

Consider tuna, which can swim up to 50 mph and are extremely energy efficient over a wide range of speeds.

Tuna are some of the fastest fish in the ocean.

The tricky part about copying the biomechanics of fish is that biologists don’t know how flexible they are in the real world. If you want to know how flexible a rubber band is, you simply pull on it. If you pull on a fish’s tail, the stiffness depends on how much the fish is tensing its various muscles.

The best that researchers can do to estimate flexibility is film a swimming fish and measure how its body shape changes.

Visualization of a fish swimming with colorful representations of water flow.
Visualizing how water flows around the fish tail showed that tail stiffness had to increase as the square of swimming speed for a fish to be most efficient.
Qiang Zhong and Daniel Quinn, CC BY-ND

Searching for answers in the math

Researchers have built dozens of robots in an attempt to mimic the flexibility and swimming patterns of tuna and other fish, but none have matched the performance of the real things.

In my lab at the University of Virginia, my colleagues and I ran into the same questions as others: How flexible should our robot be? And if there’s no one best flexibility, how should our robot change its stiffness as it swims?

We looked for the answer in an old NASA paper about vibrating airplane wings. The report explains how when a plane’s wings vibrate, the vibrations change the amount of lift the wings produce. Since fish fins and airplane wings have similar shapes, the same math works well to model how much thrust fish tails produce as they flap back and forth.

Using the old wing theory, postdoctoral researcher Qiang Zhong and I created a mathematical model of a swimming fish and added a spring and pulley to the tail to represent the effects of a tensing muscle. We discovered a surprisingly simple hypothesis hiding in the equations. To maximize efficiency, muscle tension needs to increase as the square of swimming speed. So, if swimming speed doubles, stiffness needs to increase by a factor of four. To swim three times faster while maintaining high efficiency, a fish or fish-like robot needs to pull on its tendon about nine times harder.
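The square-law rule above is simple enough to sketch numerically. The snippet below is an illustration of the scaling relationship only, not the authors' actual model; the function name and the unit-free baseline values are assumptions made for the example.

```python
def required_tension(speed, base_speed=1.0, base_tension=1.0):
    """Tendon tension needed to stay efficient at `speed`, given the
    tension that was efficient at `base_speed`. Per the square law,
    tension scales with the square of the speed ratio. Names and
    units here are illustrative, not from the study."""
    return base_tension * (speed / base_speed) ** 2

# Doubling speed demands 4x the tension; tripling demands 9x.
print(required_tension(2.0))  # 4.0
print(required_tension(3.0))  # 9.0
```

This matches the article's worked numbers: twice the speed requires four times the stiffness, and three times the speed requires about nine times the tendon pull.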

To confirm our theory, we simply added an artificial tendon to one of our tunalike robots and then programmed the robot to vary its tail stiffness based on speed. We then put our new robot into our test tank and ran it through various “missions” – like a 200-meter sprint where it had to dodge simulated obstacles. With the ability to vary its tail’s flexibility, the robot used about half as much energy on average across a wide range of speeds compared to robots with a single stiffness.

Two people standing with a fish robot over a tank of water.
Qiang Zhong (left) and Daniel Quinn designed their robot to vary its stiffness as it swam at different speeds.
Yicong Fu, CC BY-ND

Why it matters

While it is great to build one excellent robot, the thing my colleagues and I are most excited about is that our model is adaptable. We can tweak it based on body size, swimming style or even fluid type. It can be applied to animals and machines whether they are big or small, swimmers or flyers.

For example, our model suggests that dolphins have a lot to gain from the ability to vary their tails’ stiffness, whereas goldfish don’t get much benefit due to their body size, body shape and swimming style.

The model also has applications for robotic design. Higher energy efficiency when swimming or flying – which also means quieter robots – would enable radically new missions for vehicles and robots that currently have only one efficient cruising speed. In the short term, this could help biologists study river beds and coral reefs more easily, enable researchers to track wind and ocean currents at unprecedented scales or allow search and rescue teams to operate farther and longer.

In the long term, I hope our research could inspire new designs for submarines and airplanes. Humans have only been working on swimming and flying machines for a couple centuries, while animals have been perfecting their skills for millions of years. There’s no doubt there is still a lot to learn from them.

The Conversation

Daniel Quinn receives funding from The National Science Foundation and The Office of Naval Research.

This article appeared in The Conversation.
