
Do humans get lazier when robots help with tasks?

Image: Shutterstock.com

By Angharad Brewer Gillham, Frontiers science writer

‘Social loafing’ is a phenomenon in which members of a team put in less effort because they know others will cover for them. Scientists investigating whether this happens in teams that combine robots and humans found that people carrying out quality assurance tasks spotted fewer errors when they had been told that robots had already checked a piece, suggesting they relied on the robots and paid less attention to the work.

Now that improvements in technology mean that some robots work alongside humans, there is evidence that those humans have learned to see them as team-mates. Teamwork can have negative as well as positive effects on people’s performance: people sometimes relax, letting their colleagues do the work instead. This is called ‘social loafing’, and it is common where people know their contribution won’t be noticed, or where they have acclimatized to another team member’s high performance. Scientists at the Technical University of Berlin investigated whether humans engage in social loafing when they work with robots.

“Teamwork is a mixed blessing,” said Dietlind Helene Cymek, first author of the study in Frontiers in Robotics and AI. “Working together can motivate people to perform well but it can also lead to a loss of motivation because the individual contribution is not as visible. We were interested in whether we could also find such motivational effects when the team partner is a robot.”

A helping hand

The scientists tested their hypothesis using a simulated industrial defect-inspection task: examining circuit boards for errors. They showed images of circuit boards to 42 participants. The images were blurred, and the sharpened view could only be revealed by holding a mouse tool over them, which allowed the scientists to track each participant’s inspection of the board.
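
As a minimal sketch of how this kind of mouse-tracking can quantify inspection behaviour (not the authors’ actual software; the board size, reveal radius, and function name below are assumptions for illustration), the fraction of a board a participant actually revealed can be estimated from the logged positions of the reveal tool:

```python
import numpy as np

def inspected_fraction(mouse_positions, board_shape=(600, 800), reveal_radius=40):
    """Estimate the fraction of a blurred board image a participant revealed.

    mouse_positions: iterable of (x, y) pixel coordinates logged while the
        participant held the reveal tool over the board.
    board_shape: (height, width) of the board image in pixels (assumed).
    reveal_radius: radius in pixels of the sharpened region (assumed).
    """
    seen = np.zeros(board_shape, dtype=bool)
    ys, xs = np.ogrid[:board_shape[0], :board_shape[1]]
    for x, y in mouse_positions:
        # Mark every pixel within the reveal radius of this tool position.
        seen |= (xs - x) ** 2 + (ys - y) ** 2 <= reveal_radius ** 2
    return seen.mean()

# Example: a participant who swept the tool along one horizontal line
trace = [(x, 300) for x in range(0, 800, 10)]
print(f"Area inspected: {inspected_fraction(trace):.1%}")
```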

Half of the participants were told that they were working on circuit boards that had been inspected by a robot called Panda. Although these participants did not work directly with Panda, they had seen the robot and could hear it while they worked. After examining the boards for errors and marking them, all participants were asked to rate their own effort, how responsible for the task they felt, and how they performed.

Looking but not seeing

At first sight, it looked as if the presence of Panda had made no difference — there was no statistically significant difference between the groups in terms of time spent inspecting the circuit boards and the area searched. Participants in both groups rated their feelings of responsibility for the task, effort expended, and performance similarly.

But when the scientists looked more closely at participants’ error rates, they realized that the participants working with Panda were catching fewer defects later in the task, when they’d already seen that Panda had successfully flagged many errors. This could reflect a ‘looking but not seeing’ effect, where people get used to relying on something and engage with it less mentally. Although the participants thought they were paying an equivalent amount of attention, subconsciously they assumed that Panda hadn’t missed any defects.

“It is easy to track where a person is looking, but much harder to tell whether that visual information is being sufficiently processed at a mental level,” said Dr Linda Onnasch, senior author of the study.

The experimental set-up with the human-robot team. Image supplied by the authors.

Safety at risk?

The authors warned that this could have safety implications. “In our experiment, the subjects worked on the task for about 90 minutes, and we already found that fewer quality errors were detected when they worked in a team,” said Onnasch. “In longer shifts, when tasks are routine and the working environment offers little performance monitoring and feedback, the loss of motivation tends to be much greater. In manufacturing in general, but especially in safety-related areas where double checking is common, this can have a negative impact on work outcomes.”

The scientists pointed out that their test has some limitations. While participants were told they were in a team with the robot and shown its work, they did not work directly with Panda. Additionally, social loafing is hard to simulate in the laboratory because participants know they are being watched.

“The main limitation is the laboratory setting,” Cymek explained. “To find out how big the problem of loss of motivation is in human-robot interaction, we need to go into the field and test our assumptions in real work environments, with skilled workers who routinely do their work in teams with robots.”

Can charismatic robots help teams be more creative?

Image: Shutterstock.com

By Angharad Brewer Gillham, Frontiers science writer

Increasingly, social robots are being used for support in educational contexts. But does the sound of a social robot’s voice affect how well it performs, especially when working with teams of humans? Teamwork is a key factor in human creativity, boosting collaboration and new ideas. Danish scientists set out to understand whether robots using a voice designed to sound charismatic would be more successful as team creativity facilitators.

“We had a robot instruct teams of students in a creativity task. The robot either used a confident, passionate — ie charismatic — tone of voice or a normal, matter-of-fact tone of voice,” said Dr Kerstin Fischer of the University of Southern Denmark, corresponding author of the study in Frontiers in Communication. “We found that when the robot spoke in a charismatic speaking style, students’ ideas were more original and more elaborate.”

Can a robot be charismatic?

We know that social robots acting as facilitators can boost creativity, and that the success of facilitators is at least partly dependent on charisma: people respond to charismatic speech by becoming more confident and engaged. Fischer and her colleagues aimed to see if this effect could be reproduced with the voices of social robots by using a text-to-speech function engineered for characteristics associated with charismatic speaking, such as a specific pitch range and way of stressing words. Two voices were developed, one charismatic and one less expressive, based on a range of parameters which correlate with perceived speaker charisma.
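
The study engineered its own synthetic voices, so the following is only a rough sketch of the general idea, assuming a text-to-speech engine that accepts SSML prosody markup. The pitch, range, rate, and emphasis values are illustrative assumptions, not the parameters used in the paper:

```python
# Illustrative only: SSML prosody markup for a 'charismatic' versus a
# 'neutral' rendering of the same instruction. The values are assumptions
# for demonstration, not the acoustic parameters used in the study.

def charismatic(text: str) -> str:
    # Wider pitch range, slightly faster rate, stressed key words.
    return (
        '<speak>'
        '<prosody pitch="+15%" range="x-high" rate="105%">'
        f'<emphasis level="strong">{text}</emphasis>'
        '</prosody>'
        '</speak>'
    )

def neutral(text: str) -> str:
    # Flat, matter-of-fact delivery with a narrow pitch range.
    return (
        '<speak>'
        '<prosody pitch="default" range="low" rate="95%">'
        f'{text}'
        '</prosody>'
        '</speak>'
    )

print(charismatic("Remember, there are no bad ideas!"))
print(neutral("Remember, there are no bad ideas."))
```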

The scientists recruited five classes of university students, all taking courses that included an element of team creativity. The students were told that they were testing a creativity workshop, which involved brainstorming ideas based on images and then using those ideas to come up with a new chocolate product. The workshop was led by videos of a robot speaking: introducing the task, reassuring the teams of students that there were no bad ideas, and then congratulating them for completing the task and asking them to fill out a self-evaluation questionnaire. The questionnaire evaluated the robot’s performance, the students’ own views on how their teamwork went, and the success of the session. The researchers also assessed the creativity of each session, measured by the number of original ideas produced and how elaborate they were.

Powering creativity with charisma

The group that heard the charismatic voice rated the robot more positively, finding it more charismatic and interactive. They rated their teamwork more highly and produced more original and elaborate ideas. The group that heard the non-charismatic voice, however, perceived themselves as more resilient and efficient, even though they produced fewer ideas, possibly because a less charismatic leader prompted the team members to organize themselves better.

“I had suspected that charismatic speech has very important effects, but our study provides clear evidence for the effect of charismatic speech on listener creativity,” said Dr Oliver Niebuhr of the University of Southern Denmark, co-author of the study. “This is the first time that such a link between charismatic voices, artificial speakers, and creativity outputs has been found.”

The scientists pointed out that although the sessions with the charismatic voice were generally more successful, not all the teams responded identically to the different voices: previous experiences in their different classes may have affected their response. Larger studies will be needed to understand how these external factors affected team performance.

“The robot was present only in videos, but one could suspect that more exposure or repeated exposure to the charismatic speaking style would have even stronger effects,” said Fischer. “Moreover, we have only varied a few features between the two robot conditions. We don’t know how the effect size would change if other or more features were varied. Finally, since charismatic speaking patterns differ between cultures, we would expect that the same stimuli will not yield the same results in all languages and cultures.”

Why diversity and inclusion needs to be at the forefront of future AI

Image: shutterstock.com

By Inês Hipólito/Deborah Pirchner, Frontiers science writer

Inês Hipólito is a highly accomplished researcher, recognized for her work in esteemed journals and contributions as a co-editor. She has received research awards including the prestigious Talent Grant from the University of Amsterdam in 2021. After her PhD, she held positions at the Berlin School of Mind and Brain and Humboldt-Universität zu Berlin. Currently, she is a permanent lecturer in the philosophy of AI at Macquarie University, focusing on cognitive development and the interplay between augmented cognition (AI) and the sociocultural environment.

Inês co-leads a consortium project on ‘Exploring and Designing Urban Density. Neurourbanism as a Novel Approach in Global Health,’ funded by the Berlin University Alliance. She also serves as an ethicist of AI at Verses.

Beyond her research, she co-founded and serves as vice-president of the International Society for the Philosophy of the Sciences of the Mind. Inês is the host of the thought-provoking podcast ‘The PhilospHER’s Way’ and has actively contributed to the Women in Philosophy Committee and the Committee in Diversity and Inclusivity at the Australasian Association of Philosophy from 2017 to 2020.

As part of our Frontier Scientist series, Hipólito caught up with Frontiers to tell us about her career and research.

Image: Inês Hipólito

What inspired you to become a researcher?
Throughout my personal journey, my innate curiosity and passion for understanding our experience of the world have been the driving forces in my life. Interacting with inspiring teachers and mentors during my education further fueled my motivation to explore the possibilities of objective understanding. This led me to pursue a multidisciplinary path in philosophy and neuroscience, embracing the original intent of cognitive science for interdisciplinary collaboration. I believe that by bridging disciplinary gaps, we can gain an understanding of the human mind and its interaction with the world. This integrative approach enables me to contribute to both scientific knowledge and real-world applications benefitting individuals and society as a whole.

Can you tell us about the research you’re currently working on?
My research centers around cognitive development and its implications in the cognitive science of AI. Sociocultural contexts play a pivotal role in shaping cognitive development, ranging from fundamental cognitive processes to more advanced, semantically sophisticated cognitive activities that we acquire and engage with.

As our world becomes increasingly permeated by AI, my research focuses on two main aspects. Firstly, I investigate how smart environments such as online spaces, virtual reality, and digitalized citizenship influence context-dependent cognitive development. By exploring the impact of these environments, I aim to gain insights into how cognition is shaped and adapted within these technologically mediated contexts.

Secondly, I examine how AI design emerges from specific sociocultural settings. Rather than merely reflecting society, AI design embodies societal values and aspirations. I explore the intricate relationship between AI and its sociocultural origins to understand how technology can both shape and be influenced by the context in which it is developed.

In your opinion, why is your research important?
The aim of my work is to contribute to the understanding of the complex relationship between cognition and AI, focusing on the sociocultural dynamics that influence both cognitive development and the design of artificial intelligence systems. I am particularly interested in the paradoxical nature of AI development and its societal impact: while technology has historically improved lives, AI has also brought attention to problematic biases and segregation highlighted in feminist technoscience literature.

As AI progresses, it is crucial to ensure that advancements benefit everyone and do not perpetuate historical inequalities. Inclusivity and equality should be prioritized, challenging dominant narratives that favor certain groups, particularly white males. Recognizing that AI technologies embody our implicit biases and reflect our attitudes towards diversity and our relationship with the natural world enables us to navigate the ethical and societal implications of AI more effectively.

Are there any common misconceptions about this area of research? How would you address them?
The common misconception of viewing the mind as a computer has significant implications for AI design and our understanding of cognition. When cognition is seen as a simple input-output process in the brain, it overlooks the embodied complexities of human experience and the biases embedded in AI design. This reductionist view fails to account for the importance of embodied interaction, cognitive development, mental health, well-being, and societal equity.

The subjective experience of the world cannot be reduced to mere information processing, as it is context-dependent and imbued with meanings partly constructed in societal power dynamics.

Because the environment is ever more permeated by AI, understanding how it shapes, and is shaped by, human experience requires looking beyond the conception of cognition as (meaningless) information processing. By recognizing the distributed and embodied nature of cognition, we can ensure that AI technologies are designed and integrated in a way that respects the complexities of human experience, embraces ambiguity, and promotes meaningful and equitable societal interactions.

What are some of the areas of research you’d like to see tackled in the years ahead?
In the years ahead, it is crucial to tackle several AI-related areas to shape a more inclusive and sustainable future:

Design AI to reduce bias and discrimination, ensuring equal opportunities for individuals from diverse backgrounds.

Make AI systems transparent and explainable, enabling people to understand how decisions are made and to hold these systems accountable for unintended consequences.

Collaborate with diverse stakeholders to address biases, cultural sensitivities, and challenges faced by marginalized communities in AI development.

Consider the ecological impact, resource consumption, waste generation, and carbon footprint throughout the entire lifecycle of AI technologies.

How has open science benefited the reach and impact of your research?
Scientific knowledge that is publicly funded should be made freely available to align with the principles of open science. Open science emphasizes transparency, collaboration, and accessibility in scientific research and knowledge dissemination. By openly sharing AI-related knowledge, including code, data, and algorithms, we encourage diverse stakeholders to contribute their expertise, identify potential biases, and address ethical concerns within technoscience.

Furthermore, philosophical reasoning grounded in the philosophy of mind can inform ethical deliberation and decision-making in AI design and implementation by researchers and policymakers. This transparent and collaborative approach enables critical assessment and improvement of AI technologies to ensure fairness, reduced bias, and overall equity.


This article is republished from the Frontiers in Robotics and AI blog. You can read the original article here.

Scientists unveil plan to create biocomputers powered by human brain cells + interview with Prof Thomas Hartung (senior author of the paper)

Credit: Thomas Hartung, Johns Hopkins University

By Liad Hollender, Frontiers science writer

Despite AI’s impressive track record, its computational power pales in comparison with that of the human brain. Scientists unveil a revolutionary path to drive computing forward: organoid intelligence (OI), where lab-grown brain organoids serve as biological hardware. “This new field of biocomputing promises unprecedented advances in computing speed, processing power, data efficiency, and storage capabilities – all with lower energy needs,” say the authors in an article published in Frontiers in Science.

Artificial intelligence (AI) has long been inspired by the human brain. This approach proved highly successful: AI boasts impressive achievements – from diagnosing medical conditions to composing poetry. Still, the original model continues to outperform machines in many ways. This is why, for example, we can ‘prove our humanity’ with trivial image tests online. What if instead of trying to make AI more brain-like, we went straight to the source?

Scientists across multiple disciplines are working to create revolutionary biocomputers where three-dimensional cultures of brain cells, called brain organoids, serve as biological hardware. They describe their roadmap for realizing this vision in the journal Frontiers in Science.

“We call this new interdisciplinary field ‘organoid intelligence’ (OI),” said Prof Thomas Hartung of Johns Hopkins University. “A community of top scientists has gathered to develop this technology, which we believe will launch a new era of fast, powerful, and efficient biocomputing.”

What are brain organoids, and why would they make good computers?

Brain organoids are a type of lab-grown cell culture. Even though brain organoids aren’t ‘mini brains’, they share key aspects of brain function and structure, such as neurons and other brain cells that are essential for cognitive functions like learning and memory. Also, whereas most cell cultures are flat, organoids have a three-dimensional structure. This increases the culture’s cell density 1,000-fold, meaning that neurons can form many more connections.

But even if brain organoids are a good imitation of brains, why would they make good computers? After all, aren’t computers smarter and faster than brains?

“While silicon-based computers are certainly better with numbers, brains are better at learning,” Hartung explained. “For example, AlphaGo [the AI that beat the world’s number one Go player in 2017] was trained on data from 160,000 games. A person would have to play five hours a day for more than 175 years to experience that many games.”
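
The arithmetic behind that comparison is straightforward, assuming a typical game of Go takes on the order of two hours (the game length is an assumption, not a figure from the article):

```latex
% Back-of-the-envelope check of the 175-year figure (assuming ~2 h per game)
\[
5\ \tfrac{\text{h}}{\text{day}} \times 365\ \tfrac{\text{days}}{\text{year}} \times 175\ \text{years}
\approx 3.2 \times 10^{5}\ \text{h}
\quad\Rightarrow\quad
\frac{3.2 \times 10^{5}\ \text{h}}{2\ \text{h/game}} \approx 160{,}000\ \text{games}.
\]
```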

Brains are not only superior learners, they are also more energy efficient. For instance, the amount of energy spent training AlphaGo is more than is needed to sustain an active adult for a decade.

“Brains also have an amazing capacity to store information, estimated at 2,500TB,” Hartung added. “We’re reaching the physical limits of silicon computers because we cannot pack more transistors into a tiny chip. But the brain is wired completely differently. It has about 100bn neurons linked through over 10^15 connection points. It’s an enormous power difference compared to our current technology.”
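
The two figures Hartung cites are mutually consistent: with roughly 10,000 connections per neuron (a number he gives later in the interview), the total connection count follows directly:

```latex
% Consistency check: ~100bn neurons, each with up to ~10,000 connections
\[
10^{11}\ \text{neurons} \times 10^{4}\ \tfrac{\text{connections}}{\text{neuron}}
= 10^{15}\ \text{connection points}.
\]
```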

What would organoid intelligence biocomputers look like?

According to Hartung, current brain organoids need to be scaled up for OI. “They are too small, each containing about 50,000 cells. For OI, we would need to increase this number to 10 million,” he explained.

In parallel, the authors are also developing technologies to communicate with the organoids: in other words, to send them information and read out what they’re ‘thinking’. The authors plan to adapt tools from various scientific disciplines, such as bioengineering and machine learning, as well as engineer new stimulation and recording devices.

“We developed a brain-computer interface device that is a kind of an EEG cap for organoids, which we presented in an article published last August. It is a flexible shell that is densely covered with tiny electrodes that can both pick up signals from the organoid, and transmit signals to it,” said Hartung.

The authors envision that eventually OI would integrate a wide range of stimulation and recording tools. These will orchestrate interactions across networks of interconnected organoids that implement more complex computations.

Organoid intelligence could help prevent and treat neurological conditions

OI’s promise goes beyond computing and into medicine. Thanks to a groundbreaking technique developed by Nobel laureates John Gurdon and Shinya Yamanaka, brain organoids can be produced from adult tissues. This means that scientists can develop personalized brain organoids from skin samples of patients suffering from neural disorders, such as Alzheimer’s disease. They can then run multiple tests to investigate how genetic factors, medicines, and toxins influence these conditions.

“With OI, we could study the cognitive aspects of neurological conditions as well,” Hartung said. “For example, we could compare memory formation in organoids derived from healthy people and from Alzheimer’s patients, and try to repair relative deficits. We could also use OI to test whether certain substances, such as pesticides, cause memory or learning problems.”

Taking ethical considerations into account

Creating human brain organoids that can learn, remember, and interact with their environment raises complex ethical questions. For example, could they develop consciousness, even in a rudimentary form? Could they experience pain or suffering? And what rights would people have concerning brain organoids made from their cells?

The authors are acutely aware of these issues. “A key part of our vision is to develop OI in an ethical and socially responsible manner,” Hartung said. “For this reason, we have partnered with ethicists from the very beginning to establish an ‘embedded ethics’ approach. All ethical issues will be continuously assessed by teams made up of scientists, ethicists, and the public, as the research evolves.”

How far are we from the first organoid intelligence?

Even though OI is still in its infancy, a recently published study by one of the article’s co-authors – Dr Brett Kagan, Chief Scientific Officer at Cortical Labs – provides proof of concept. His team showed that a normal, flat brain cell culture can learn to play the video game Pong.

“Their team is already testing this with brain organoids,” Hartung added. “And I would say that replicating this experiment with organoids already fulfills the basic definition of OI. From here on, it’s just a matter of building the community, the tools, and the technologies to realize OI’s full potential,” he concluded.

Interview with Prof Thomas Hartung

Image: Prof Thomas Hartung

To learn more about this exciting new field, we interviewed the senior author of the article, Prof Thomas Hartung. He is the director of the Center for Alternatives to Animal Testing in Europe (CAAT-Europe), and a professor at Johns Hopkins University’s Bloomberg School of Public Health.

How do you define organoid intelligence?

Reproducing cognitive functions – such as learning and sensory processing – in a lab-grown human-brain model.

How did this idea emerge?

I’m a pharmacologist and toxicologist, so I’m interested in developing medicines and identifying substances that are dangerous to our health, specifically those that affect brain development and function. This requires testing – ideally in conditions that mimic a living brain. For that reason, producing cultures of human brain cells has been a longstanding aim in the field.

This goal was finally realized in 2006 thanks to a groundbreaking technique developed by John B. Gurdon and Shinya Yamanaka, who received a Nobel prize for this achievement in 2012. This method allowed us to generate brain cells from fully developed tissues, such as the skin. Soon after, we began mass producing three-dimensional cultures of brain cells called brain organoids.

People asked if the organoids were thinking, if they were conscious even. I said: “no, they are too tiny. And more importantly, they don’t have any input nor output, so what would they be thinking about?” But later I began wondering: what if we changed this? What if we gave the organoids information about their environment and the means to interact with it? That was the birth of organoid intelligence.

How would you know what an organoid is ‘thinking’ about?

We’re building tools that will enable us to communicate with the organoids – send input and receive output. For example, we developed a recording/stimulation device that looks like a mini EEG-cap that surrounds the organoid. We’ve also been working on feeding biological inputs to brain organoids, for instance, by connecting them to retinal organoids, which respond to light. Our partner and co-author Alysson Muotri at the University of California San Diego is already testing this approach by producing systems that combine several organoids.

My dream is to form a channel of communication between an artificial intelligence program and an OI system that would allow the two to explore each other’s capabilities. I imagine that form will follow function – that the organoid will change and develop towards creating meaningful inputs. This is a bit of philosophy, but my expectation is that we’ll see a lot of surprises.

What uses do you envision for organoid intelligence?

In my opinion, there are three main areas. The first is fundamental neuroscience – to understand how the brain generates cognitive functions, such as learning and memory. Even though current brain organoids are still far from being what one might call intelligent, they could still have the machinery to support basic cognitive operations.

The second area is toxicology and pharmacology. Since we can now produce brain organoids from skin samples, we can study individual disease characteristics of patients. We already have brain-organoid lines from Alzheimer’s patients, for example. And even though these organoids were made from skin cells, we still see hallmarks of the disease in them.

Next, we would like to test if there are also differences in their memory function, and if so, if we could repair it. We can also test whether substances, such as pesticides, worsen cognitive deficits, or cause them in brain organoids produced from healthy subjects. This is a very exciting line of research, which I believe is nearly within reach.

The third area is computing. As we laid out in our article, considering the brain’s size, its computational power is simply unmatched. Just for comparison, a supercomputer finally surpassed the computational power of a single human brain in 2022. But it cost $600m and occupies 680 square meters [about twice the area of a tennis court].

We’re also reaching the limits of computing. Moore’s Law, which states that the number of transistors in a microchip doubles every two years, has held for 60 years. But soon we won’t be able to physically fit more transistors into a chip. A single neuron, on the other hand, can connect to up to 10,000 other neurons – this is a very different way of processing and storing information. Through OI, we hope that we’ll be able to leverage the brain’s computational principles to build computers differently.
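
For scale, if the two-year doubling has indeed held for roughly 60 years, that corresponds to about 30 doublings (a rough calculation based on the figures quoted above):

```latex
% Rough scale of Moore's Law over ~60 years at a two-year doubling period
\[
\frac{60\ \text{years}}{2\ \text{years/doubling}} = 30\ \text{doublings}
\quad\Rightarrow\quad
2^{30} \approx 1.1 \times 10^{9}\text{-fold increase in transistor count.}
\]
```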

How do you intend to tackle ethical issues that might arise from organoid intelligence?

There are many questions that we face now, ranging from the rights of people over organoids developed from their cells, to understanding whether OI is conscious. I find this aspect of the work fascinating, and I believe it’s a fantastic opportunity to investigate the physical manifestation of concepts like sentience and consciousness.

We teamed up with Jeffrey Kahn of the Bloomberg School of Public Health at Johns Hopkins University at the very beginning, asking him to lead the discussion around the ethics of neural systems. We have come up with two main strategies. One is called embedded ethics: we want ethicists to closely observe the work, take part in the planning, and raise points early on. The second part focuses on the public – we intend to share our work broadly and clearly as it advances. We want to know how people feel about this technology and define our research plan accordingly.

How far are we from the first organoid intelligence?

Even though OI is still in its infancy, past work shows that it’s possible. A study by one of our partners and co-authors – Brett Kagan of Cortical Labs – is a recent example. His team showed that a standard brain cell culture can learn to play the video game Pong. They are already experimenting with brain organoids, and I would say that replicating this with organoids already fulfills what we call OI.

Still, we are a long way from realizing OI’s full potential. When it becomes a real tool, it will look very different from these first baby steps we are taking now. The important thing is that it’s a starting point. I see this like sequencing the first genes of the human genome project: the enabling technology is in our hands, and we’re bound to learn a lot on the way.


This post is a combination of the original articles published on the Frontiers in Robotics and AI blog. You can read the originals here and here.

Smart ‘Joey’ bots could soon swarm underground to clean and inspect our pipes

Joey’s design. Image credit: TL Nguyen, A Blight, A Pickering, A Barber, GH Jackson-Mills, JH Boyle, R Richardson, M Dogar, N Cohen

By Mischa Dijkstra, Frontiers science writer

Researchers from the University of Leeds have developed the first mini-robot, called Joey, that can find its own way independently through networks of narrow pipes underground, to inspect any damage or leaks. Joeys are cheap to produce, smart, small, and light, and can move through pipes inclined at a slope or over slippery or muddy sediment at the bottom of the pipes. Future versions of Joey will operate in swarms, deployed from a larger ‘mother’ robot, Kanga, which will be equipped with arms and tools for repairs to the pipes.

Beneath our streets lies a maze of pipes, conduits for water, sewage, and gas. Regular inspection of these pipes for leaks, or their repair, normally requires them to be dug up. This is not only onerous and expensive – with an estimated annual cost of £5.5bn in the UK alone – but also causes disruption to traffic, nuisance to people living nearby, and damage to the environment.

Now imagine a robot that can find its way through the narrowest of pipe networks and relay images of damage or obstructions to human operators. A study in Frontiers in Robotics and AI by a team of researchers from the University of Leeds shows this is no longer a pipedream.

“Here we present Joey – a new miniature robot – and show that Joeys can explore real pipe networks completely on their own, without even needing a camera to navigate,” said Dr Netta Cohen, a professor at the University of Leeds and the final author on the study.

Joey is the first to be able to navigate all by itself through mazes of pipes as narrow as 7.5 cm across. Weighing just 70 g, it’s small enough to fit in the palm of your hand.

Pipebots project

The present work forms part of the ‘Pipebots’ project of the universities of Sheffield, Bristol, Birmingham, and Leeds, in collaboration with UK utility companies and other international academic and industrial partners.

First author Dr Thanh Luan Nguyen, a postdoctoral scientist at the University of Leeds who developed Joey’s control algorithms (or ‘brain’), said: “Underground water and sewer networks are some of the least hospitable environments, not only for humans, but also for robots. Sat Nav is not accessible underground. And Joeys are tiny, so they have to function with very simple motors, sensors, and computers that take up little space, while the small batteries must be able to operate for long enough.”

Joey moves on 3D-printed ‘wheel-legs’ that roll through straight sections and walk over small obstacles. It is equipped with a range of energy-efficient sensors that measure its distance to walls, junctions, and corners; navigational tools; a microphone; and a camera with ‘spot lights’ to film faults in the pipe network and save the images. The prototype cost only £300 to produce.
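
Purely as an illustration of how distance sensors alone can drive exploration of a pipe network, a minimal controller might look like the sketch below. This is not the authors’ algorithm: the sensor layout, threshold, and action names are assumptions.

```python
# Minimal sketch of sensor-only pipe exploration (illustrative, not the
# authors' algorithm). Assumes three range sensors (left, front, right)
# returning distances in centimetres, and a small set of motor commands.

WALL_THRESHOLD_CM = 10   # assumed clearance below which a wall counts as 'close'

def choose_action(left_cm, front_cm, right_cm, visited_junctions, position):
    """Pick the next move from range readings alone (no camera needed)."""
    openings = {
        "left": left_cm > WALL_THRESHOLD_CM,
        "forward": front_cm > WALL_THRESHOLD_CM,
        "right": right_cm > WALL_THRESHOLD_CM,
    }
    open_dirs = [d for d, is_open in openings.items() if is_open]

    if len(open_dirs) == 0:
        return "turn_around"            # dead end: go back the way we came
    if len(open_dirs) == 1:
        return "go_" + open_dirs[0]     # corner or straight section
    # Junction: prefer a branch not yet explored from this position.
    explored = visited_junctions.setdefault(position, set())
    for direction in open_dirs:
        if direction not in explored:
            explored.add(direction)
            return "go_" + direction
    return "turn_around"                # every branch already explored

# Example: at a T-junction with openings ahead and to the right
print(choose_action(left_cm=5, front_cm=80, right_cm=60,
                    visited_junctions={}, position=(1, 0)))
```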

Mud and slippery slopes

The team showed that Joey is able to find its way, without any instructions from human operators, through an experimental network of pipes including a T-junction, a left and right corner, a dead-end, an obstacle, and three straight sections. On average, Joey managed to explore about one meter of pipe network in just over 45 seconds.

To make life more difficult for the robot, the researchers verified that the robot easily moves up and down inclined pipes with realistic slopes. And to test Joey’s ability to navigate through muddy or slippery tubes, they also added sand and gooey gel (actually dishwashing liquid) to the pipes – again with success.

Importantly, the sensors are enough to allow Joey to navigate without the need to turn on the camera or use power-hungry computer vision. This saves energy and extends Joey’s current battery life. Whenever the battery runs low, Joey will return to its point of origin, to ‘feed’ on power.

Currently, Joeys have one weakness: they can’t right themselves if they inadvertently turn on their back, like an upside-down tortoise. The authors suggest that the next prototype will be able to overcome this challenge. Future generations of Joey should also be waterproof, to operate underwater in pipes entirely filled with liquid.

Joey’s future is collaborative

The Pipebots scientists aim to develop a swarm of Joeys that communicate and work together, operating from a larger ‘mother’ robot named Kanga. Kanga, currently under development and testing by some of the same authors at the Leeds School of Computing, will be equipped with more sophisticated sensors and repair tools, such as robot arms, and will carry multiple Joeys.

“Ultimately we hope to design a system that can inspect and map the condition of extensive pipe networks, monitor the pipes over time, and even execute some maintenance and repair tasks,” said Cohen.

“We envision the technology to scale up and diversify, creating an ecology of multi-species of robots that collaborate underground. In this scenario, groups of Joeys would be deployed by larger robots that have more power and capabilities but are restricted to the larger pipes. Meeting this challenge will require more research, development, and testing over 10 to 20 years. It may start to come into play around 2040 or 2050.” 

Top half: navigating through a T-junction in the pipe network. Bottom half: encountering an obstruction and turning back. Image credit: TL Nguyen, A Blight, A Pickering, A Barber, GH Jackson-Mills, JH Boyle, R Richardson, M Dogar, N Cohen

Top half: moving through sand, slippery goo, or mud. Bottom half: moving through pipe sloped at an angle. Image credit: TL Nguyen, A Blight, A Pickering, A Barber, GH Jackson-Mills, JH Boyle, R Richardson, M Dogar, N Cohen

New walking robot design could revolutionize how we build things in space

Image: Shutterstock.com

By Suzanna Burgelman, Frontiers science writer

Researchers have designed a state-of-the-art walking robot that could revolutionize large construction projects in space. They tested the feasibility of the robot for the in-space assembly of a 25m Large Aperture Space Telescope. They present their findings in Frontiers in Robotics and AI. A scaled-down prototype of the robot also showed promise for large construction applications on Earth.

Maintenance and servicing of large constructions are nowhere more needed than in space, where conditions are extreme and human technology has a short lifespan. Extravehicular activities (activities done by an astronaut outside a spacecraft), robotics, and autonomous systems have been useful for servicing and maintenance missions and have helped the space community conduct ground-breaking research on various space missions. Advancements in robotics and autonomous systems facilitate a multitude of in-space services, including, but not limited to, manufacturing, assembly, maintenance, astronomy, Earth observation, and debris removal.

With the countless risks involved, relying only on human builders is not enough, and current technologies are becoming outdated.

“We need to introduce sustainable, futuristic technology to support the current and growing orbital ecosystem,” explained corresponding author Manu Nair, PhD candidate at the University of Lincoln.

“As the scale of space missions grows, there is a need for more extensive infrastructures in orbit. Assembly missions in space would hold one of the key responsibilities in meeting the increasing demand.”

In their paper, Nair and his colleagues introduced an innovative, dexterous walking robotic system that can be used for in-orbit assembly missions. As a use case, the researchers tested the robot for the assembly of a 25m Large Aperture Space Telescope (LAST).

Assembling telescopes in orbit

Ever since the launch of the Hubble Space Telescope and its successor, the James Webb Space Telescope, the space community has been continuously moving towards deploying newer and larger telescopes with larger apertures (the diameter of the light collecting region).

Assembling telescopes of this magnitude, such as a 25m LAST, on Earth is not possible with our current launch vehicles due to their limited size. That is why larger telescopes ideally need to be assembled in space (in orbit).

“The prospect of in-orbit commissioning of a LAST has fueled scientific and commercial interests in deep-space astronomy and Earth observation,” said Nair.

To assemble a telescope of that magnitude in space, we need the right tools: “Although conventional space walking robotic candidates are dexterous, they are constrained in maneuverability. Therefore, it is significant for future in-orbit walking robot designs to incorporate mobility features to offer access to a much larger workspace without compromising the dexterity.”

E-Walker robot

The researchers proposed a fully dexterous, seven degrees-of-freedom, end-over-end walking robot (a limbed robotic system that can move along a surface to different locations to perform tasks, with seven degrees of motion capability), or, in short, an E-Walker.

They conducted an in-depth design engineering exercise to test the robot’s capabilities to efficiently assemble a 25m LAST in orbit. The robot was compared to the existing Canadarm2 and the European Robotic Arm on the International Space Station. Additionally, a scaled-down prototype for Earth-analog testing was developed and another design engineering exercise performed.

“Our analysis shows that the proposed innovative E-Walker design proves to be versatile and an ideal candidate for future in-orbit missions. The E-Walker would be able to extend the life cycle of a mission by carrying out routine maintenance and servicing missions post-assembly, in space,” explained Nair.

“The analysis of the scaled-down prototype identifies it to also be an ideal candidate for servicing, maintenance, and assembly operations on Earth, such as carrying out regular maintenance checks on wind turbines.”

Yet a lot remains to be explored. The research was limited to the design engineering analysis of a full-scale and prototype model of the E-Walker. Nair explained: “The E-Walker prototyping work is now in progress at the University of Lincoln; therefore, the experimental verification and validation will be published separately.”


This article was originally published on the Frontiers blog.

Bees’ ‘waggle dance’ may revolutionize how robots talk to each other in disaster zones

Image credit: rtbilder / Shutterstock.com

By Conn Hastings, science writer

Honeybees use a sophisticated dance to tell their sisters about the location of nearby flowers. This phenomenon forms the inspiration for a form of robot-robot communication that does not rely on digital networks. A recent study presents a simple technique whereby robots view and interpret each other’s movements or a gesture from a human to communicate a geographical location. This approach could prove invaluable when network coverage is unreliable or absent, such as in disaster zones.

Where are those flowers and how far away are they? This is the crux of the ‘waggle dance’ performed by honeybees to alert others to the location of nectar-rich flowers. A new study in Frontiers in Robotics and AI has taken inspiration from this technique to devise a way for robots to communicate. The first robot traces a shape on the floor, and the shape’s orientation and the time it takes to trace it tell the second robot the required direction and distance of travel. The technique could prove invaluable in situations where robot labor is required but network communications are unreliable, such as in a disaster zone or in space.

Honeybees excel at non-verbal communication

If you have ever found yourself in a noisy environment, such as a factory floor, you may have noticed that humans are adept at communicating using gestures. Well, we aren’t the only ones. In fact, honeybees take non-verbal communication to a whole new level.

By wiggling their backside while parading through the hive, they can let other honeybees know about the location of food. The direction of this ‘waggle dance’ lets other bees know the direction of the food with respect to the hive and the sun, and the duration of the dance lets them know how far away it is. It is a simple but effective way to convey complex geographical coordinates.

Applying the dance to robots

This ingenious method of communication inspired the researchers behind this latest study to apply it to the world of robotics. Robot cooperation allows multiple robots to coordinate and complete complex tasks. Typically, robots communicate using digital networks, but what happens when these are unreliable, such as during an emergency or in remote locations? Moreover, how can humans communicate with robots in such a scenario?

To address this, the researchers designed a visual communication system for robots with on-board cameras, using algorithms that allow the robots to interpret what they see. They tested the system using a simple task, where a package in a warehouse needs to be moved. The system allows a human to communicate with a ‘messenger robot’, which supervises and instructs a ‘handling robot’ that performs the task.

Robot dancing in practice

In this situation, the human can communicate with the messenger robot using gestures, such as a raised hand with a closed fist. The robot can recognize the gesture using its on-board camera and skeletal tracking algorithms. Once the human has shown the messenger robot where the package is, it conveys this information to the handling robot.

This involves positioning itself in front of the handling robot and tracing a specific shape on the ground. The orientation of the shape indicates the required direction of travel, while the length of time it takes to trace it indicates the distance. This robot dance would make a worker bee proud, but did it work?
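
As a rough sketch of this encoding scheme (the study’s own trajectory geometry and calibration will differ), a target location can be packed into, and recovered from, the traced shape’s orientation and duration. The scaling constant and function names below are assumptions for illustration:

```python
import math

# Illustrative encoding of a target location into a traced shape, in the
# spirit of the waggle dance: orientation encodes direction, tracing time
# encodes distance. SECONDS_PER_METRE is an assumed calibration constant,
# not a value from the study.

SECONDS_PER_METRE = 2.0   # assumed: 2 s of tracing per metre of travel

def encode(target_x, target_y, robot_x, robot_y):
    """Return (orientation_deg, trace_duration_s) for the messenger robot."""
    dx, dy = target_x - robot_x, target_y - robot_y
    orientation = math.degrees(math.atan2(dy, dx)) % 360.0
    duration = math.hypot(dx, dy) * SECONDS_PER_METRE
    return orientation, duration

def decode(orientation_deg, trace_duration_s, robot_x, robot_y):
    """Recover the target location the handling robot should drive to."""
    distance = trace_duration_s / SECONDS_PER_METRE
    heading = math.radians(orientation_deg)
    return (robot_x + distance * math.cos(heading),
            robot_y + distance * math.sin(heading))

# Example: a package 4 m east and 3 m north of the robots
orientation, duration = encode(4.0, 3.0, 0.0, 0.0)
print(round(orientation, 1), round(duration, 1))   # 36.9 degrees, 10.0 s
print(decode(orientation, duration, 0.0, 0.0))     # approximately (4.0, 3.0)
```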

The researchers put it to the test using a computer simulation, and with real robots and human volunteers. The robots interpreted the gestures correctly 90% and 93.3% of the time, respectively, highlighting the potential of the technique.

“This technique could be useful in places where communication network coverage is insufficient and intermittent, such as robot search-and-rescue operations in disaster zones or in robots that undertake space walks,” said Prof Abhra Roy Chowdhury of the Indian Institute of Science, senior author on the study. “This method depends on robot vision through a simple camera, and therefore it is compatible with robots of various sizes and configurations and is scalable,” added Kaustubh Joshi of the University of Maryland, first author on the study.


Video credit: K Joshi and AR Chowdhury


This article was originally published on the Frontiers blog.