Archive 31.10.2023


Assessing permanent damage to self-healing polymers in soft robots

A new study assesses the maximum number of damage and healing cycles a self-healing actuator can endure. The study, which presents a method to automatically and autonomously assess the repeatable healability of a soft self-healing actuator, is published in the journal Robotics Reports.

Training underwater robots to find charging stations on the seabed

NTNU's largest laboratory—the Trondheim fjord—is something of an El Dorado for researchers developing underwater robots. A charging station has been installed on the seabed, and to ensure the robots can find the shortest route to the charging station, they train in the fjord.
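The "shortest route to the charging station" task is, at its core, a classic graph-search problem. As a minimal sketch, assuming the seabed is discretised into a graph of waypoints with traversal costs (all waypoint names and costs here are invented for illustration, and this is not NTNU's actual training method):

```python
import heapq

def dijkstra(graph, start, goal):
    """Return (cost, path) for the cheapest route from start to goal."""
    queue = [(0.0, start, [start])]  # (cost so far, current node, path taken)
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, edge_cost in graph.get(node, {}).items():
            if neighbour not in visited:
                heapq.heappush(queue, (cost + edge_cost, neighbour, path + [neighbour]))
    return float("inf"), []  # goal unreachable

# Hypothetical seabed waypoint graph: edge weights are traversal costs.
seabed = {
    "robot":  {"ridge": 4.0, "trench": 2.0},
    "ridge":  {"charger": 3.0},
    "trench": {"ridge": 1.0, "charger": 6.0},
}

cost, path = dijkstra(seabed, "robot", "charger")
print(cost, path)  # 6.0 ['robot', 'trench', 'ridge', 'charger']
```

In practice the robots learn such routes through training in the fjord rather than from a pre-surveyed graph, but the sketch shows what an optimal route looks like once the environment is known.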

AAAI Fall Symposium: Patrícia Alves-Oliveira on human-robot interaction design

An illustration containing electronic devices connected by arm-like structures. Anton Grabolle / Better Images of AI / Human-AI collaboration / Licensed by CC-BY 4.0

The AAAI Fall Symposium Series took place in Arlington, USA, and comprised seven different symposia. One of these, the tenth Artificial Intelligence for Human-Robot Interaction (AI-HRI) symposium, was run as a hybrid in-person/online event, and we tuned in to the opening keynote, which was given by Patrícia Alves-Oliveira.

As a psychology student, Patrícia’s dream was to become a therapist. However, an internship, where she encountered a robot for the first time, inspired her to change her plans, and she decided to go into the field of human-robot interaction. Following a PhD in the field, she worked as a postdoc, before heading to industry as a designer in the Amazon Astro robot team.

Patrícia has worked on a number of interesting projects during her time in academia and in industry. Thinking about how to design robots for specific user needs, and keeping the user at the forefront during the design process, has been core to her work. She began by summarising three very different academic projects.

Creativity and robotics

The objective of this project was to design, fabricate, and evaluate robots as creativity-provoking tools for kids. Patrícia created a social robot named YOLO (or Your Own Living Object) that she designed to be child-proof (in other words, it could withstand being dropped and knocked over), with the aim of helping children explore their creativity during play. A machine learning algorithm learns the child's pattern of play and adapts the robot's behaviour accordingly. You can see the robot in action in the demo below:

FLEXI robot

As a postdoc project, Patrícia worked on building FLEXI, a social robot embodiment kit. This kit consists of a robot (with a face, and a torso with a screen on the front), which can be customised, and an open-source end-user programming interface designed to be user-friendly. The customisation element means that it can be used for many applications. The team has deployed FLEXI across three application scenarios: community-support, mental health, and education, with the aim of assessing the flexibility of the system. You can see the robot in action, in different scenarios, here.

Social dining

This project centred on a robotic arm for people with impaired mobility. Such systems already exist for assisting people with tasks such as eating. However, in a social context they can often form a barrier between the user and the rest of the group. The idea behind this project was to consider how such a robot could be adapted to work well in a social context, for example, during a meal with family or friends. The team interviewed people with impaired mobility to assess their needs, and came up with a set of design principles for creating robot-assisted feeding systems and an implementation guide for future research in this area. You can read the research paper on this project here.

You can find out more about these three projects, and the other projects that Patrícia has been involved in, here.

Astro robot

Patrícia has long been interested in robots for the real world, and how this real-world experience is aligned with the study of robots in academia and industry. She decided to leave academia and join the Astro robot programme, which she felt was a great opportunity to work on a large-scale real-world robot project.

The Astro robot is a home robot designed to assist with tasks such as monitoring your house, delivering small objects within the home, recognising your pet, telling a story, or playing games.

Patrícia took us through a typical day in the life of a designer, where she always keeps in mind the bigger picture of what the team is aiming for, in other words, what the ideal robot, and its interactions with humans, would look like. Coupled with that, the process is governed by core design tenets, such as the customer needs, and non-negotiable core elements that the robot should include. When considering a particular element of the robot design, for example, the delivery of an item in the robot tray, Patrícia uses storyboards to map out details of potential human-robot interactions. An important aspect of design concerns edge cases, which occur regularly in the real world. Good design will consider potential edge cases and incorporate ways to deal with them.

Patrícia closed by emphasising the importance of teamwork in the design process, in particular, the need for interdisciplinary teams; by considering design from many different points of view, the chance of innovation is higher.

You can find out more about the Artificial Intelligence for Human-Robot Interaction (AI-HRI) symposium here.

Robot space maintenance based on human arm dynamics

On-orbit assembly has become a crucial aspect of space operations, where the manipulator frequently and directly interacts with objects in a complex assembly process. The traditional manipulator control has limitations in adapting to diverse assembly tasks and is vulnerable to vibration, leading to assembly failure.

Robots for deep-sea recovery missions in sci-fi and reality

A science fiction/science fact review of Three Miles Down by Harry Turtledove, the fictionalized version of the Hughes Glomar Explorer expedition 50 years before the OceanGate Titan tragedy.

My new science fiction/science fact article for Science Robotics is out on why deep-ocean robotics is hard, especially when trying to bring up a sunken submarine three miles underwater, which the CIA actually did in 1974. It's even harder if you're trying to bring up an alien spaceship, which is the plot of Harry Turtledove's new sci-fi novel Three Miles Down. It's a delightful Forrest Gump version of that 1974 Hughes Glomar Explorer expedition. Though the expedition was 50 years before the OceanGate Titan tragedy, the same challenges exist for today's robots. The robotics science in the book is very real; the aliens, not so much.

In 1974, the CIA deployed a 3-mile-long, 6-million-pound robot manipulator to recover a Russian submarine. The cover story was that Howard Hughes was deep-sea mining for manganese nodules, which accidentally started everyone else investing in deep-sea mining.

The Glomar Explorer was also a breakthrough in computer control, as the ship had to stay on station and move the arm to the sub in the presence of wind, waves, and currents. All with an array of Honeywell computers, each with a 16-bit microprocessor, a 5 MHz clock, and 32K words of core memory. Consider that a late-model iPhone uses a 64-bit microprocessor, a 3 GHz clock, 6 GB of RAM, and a GPU.
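The gap between those machines and a modern phone can be made concrete with a quick back-of-the-envelope calculation. The conversions are my assumptions, not figures from the article: I take "32K words" to mean 32,768 sixteen-bit words, and compare a single Honeywell machine against the phone specs quoted above.

```python
# Back-of-the-envelope comparison of the quoted specs.
# Assumption: "32K words" = 32,768 sixteen-bit words (64 KiB).

honeywell_clock_hz = 5e6            # 5 MHz clock
honeywell_mem_bytes = 32_768 * 2    # 32K 16-bit words = 64 KiB

iphone_clock_hz = 3e9               # 3 GHz clock
iphone_mem_bytes = 6 * 1024**3      # 6 GB of RAM

clock_ratio = iphone_clock_hz / honeywell_clock_hz
mem_ratio = iphone_mem_bytes / honeywell_mem_bytes

print(f"Clock: ~{clock_ratio:.0f}x faster")   # ~600x
print(f"Memory: ~{mem_ratio:.0f}x larger")    # ~98304x
```

Even ignoring the GPU and the vastly higher instructions-per-clock of modern processors, that is a raw clock roughly 600 times faster and nearly 100,000 times the memory.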

Turtledove takes one major liberty with the otherwise hard-science retrospective: the CIA recovering the Soviet sub was in turn a cover story masking the real mission to salvage the alien spaceship that apparently collided with the sub!

The dry humor and attention to scientific detail make for an entertaining sci-fi compare-and-contrast between deep-sea robotics and computers in the 1970s and the present day. It's a fun read, and not just for roboticists and computer scientists.


Do humans get lazier when robots help with tasks?

Image: Shutterstock.com

By Angharad Brewer Gillham, Frontiers science writer

‘Social loafing’ is a phenomenon which happens when members of a team start to put less effort in because they know others will cover for them. Scientists investigating whether this happens in teams which combine work by robots and humans found that humans carrying out quality assurance tasks spotted fewer errors when they had been told that robots had already checked a piece, suggesting they relied on the robots and paid less attention to the work.

Now that improvements in technology mean that some robots work alongside humans, there is evidence that those humans have learned to see them as team-mates, and teamwork can have negative as well as positive effects on people's performance. People sometimes relax, letting their colleagues do the work instead. This is called 'social loafing', and it's common where people know their contribution won't be noticed or they've acclimatized to another team member's high performance. Scientists at the Technical University of Berlin investigated whether humans engage in social loafing when they work with robots.

“Teamwork is a mixed blessing,” said Dietlind Helene Cymek, first author of the study in Frontiers in Robotics and AI. “Working together can motivate people to perform well but it can also lead to a loss of motivation because the individual contribution is not as visible. We were interested in whether we could also find such motivational effects when the team partner is a robot.”

A helping hand

The scientists tested their hypothesis using a simulated industrial defect-inspection task: looking at circuit boards for errors. The scientists provided images of circuit boards to 42 participants. The circuit boards were blurred, and the sharpened images could only be viewed by holding a mouse tool over them. This allowed the scientists to track participants’ inspection of the board.

Half of the participants were told that they were working on circuit boards that had been inspected by a robot called Panda. Although these participants did not work directly with Panda, they had seen the robot and could hear it while they worked. After examining the boards for errors and marking them, all participants were asked to rate their own effort, how responsible for the task they felt, and how they performed.

Looking but not seeing

At first sight, it looked as if the presence of Panda had made no difference — there was no statistically significant difference between the groups in terms of time spent inspecting the circuit boards and the area searched. Participants in both groups rated their feelings of responsibility for the task, effort expended, and performance similarly.

But when the scientists looked more closely at participants’ error rates, they realized that the participants working with Panda were catching fewer defects later in the task, when they’d already seen that Panda had successfully flagged many errors. This could reflect a ‘looking but not seeing’ effect, where people get used to relying on something and engage with it less mentally. Although the participants thought they were paying an equivalent amount of attention, subconsciously they assumed that Panda hadn’t missed any defects.
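The pattern described, roughly equal effort overall but fewer catches later in the task, can be illustrated with a toy calculation that splits each session into an early and a late half and compares detection rates. All the numbers below are invented for illustration; they are not the study's data.

```python
# Toy illustration of a late-task drop in defect detection.
# caught_flags: one 0/1 entry per presented defect, in task order
# (1 = participant caught the defect, 0 = missed it).

def detection_rates(caught_flags):
    """Return (first_half_rate, second_half_rate) of defects caught."""
    mid = len(caught_flags) // 2
    first, second = caught_flags[:mid], caught_flags[mid:]
    return sum(first) / len(first), sum(second) / len(second)

# Invented example data, not the study's numbers:
solo = [1, 1, 0, 1, 1, 1, 0, 1,  1, 0, 1, 1, 1, 0, 1, 1]  # stable rate
team = [1, 1, 1, 0, 1, 1, 1, 1,  0, 1, 0, 0, 1, 0, 1, 0]  # drops late on

print(detection_rates(solo))  # (0.75, 0.75)
print(detection_rates(team))  # (0.875, 0.375)
```

In this toy data, the total effort (defects presented, time on task) is identical across groups, yet the "team" condition shows a sharp late-half drop, which is the kind of signal that aggregate measures like total inspection time would miss.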

“It is easy to track where a person is looking, but much harder to tell whether that visual information is being sufficiently processed at a mental level,” said Dr Linda Onnasch, senior author of the study.

The experimental set-up with the human-robot team. Image supplied by the authors.

Safety at risk?

The authors warned that this could have safety implications. “In our experiment, the subjects worked on the task for about 90 minutes, and we already found that fewer quality errors were detected when they worked in a team,” said Onnasch. “In longer shifts, when tasks are routine and the working environment offers little performance monitoring and feedback, the loss of motivation tends to be much greater. In manufacturing in general, but especially in safety-related areas where double checking is common, this can have a negative impact on work outcomes.”

The scientists pointed out that their test has some limitations. While participants were told they were in a team with the robot and shown its work, they did not work directly with Panda. Additionally, social loafing is hard to simulate in the laboratory because participants know they are being watched.

“The main limitation is the laboratory setting,” Cymek explained. “To find out how big the problem of loss of motivation is in human-robot interaction, we need to go into the field and test our assumptions in real work environments, with skilled workers who routinely do their work in teams with robots.”

Bidirectional reflectivity measurements for ground-based objects

Measuring bidirectional reflectivity of ground-based objects has long posed a challenging task, hampered by limitations in both ground-based and satellite-based observations from multiple angles. However, in recent years, unmanned aerial vehicles (UAVs) have emerged as a valuable remote sensing solution, providing convenience and cost-effectiveness while enabling multi-view observations.

Using large language models to enable open-world, interactive and personalized robot navigation

Robots should ideally interact with users and objects in their surroundings in flexible ways, rather than always sticking to the same sets of responses and actions. One approach aimed at this goal that has recently gained significant research attention is zero-shot object navigation (ZSON).