Robo-Insight #1

Source: OpenAI’s DALL·E 2 with prompt “a hyperrealistic picture of a robot reading the news on a laptop at a coffee shop”
Welcome to the inaugural edition of Robo-Insight, a biweekly robotics news update! In this post, we are thrilled to present a range of remarkable advancements in the field, highlighting robotics progress in terrain traversability, shape morphing, object avoidance, mechanical memory, physics-based AI techniques, and new home robotics kits. These developments exemplify the continuous evolution and potential of robotics technology.
Four-legged robot traverses tricky terrains thanks to improved 3D vision
Researchers at the University of California San Diego have recently equipped four-legged robots with forward-facing depth cameras that let them clearly analyze the environment around and below them. Their model extracts 3D information from short videos of 2D frames, and this data can be compared with past images to estimate the 3D transformation between them. The system also fuses this information over time, giving the robot a sort of short-term memory it can use to check its own estimates. Although the model does not guide the robot to a specific location, it enables the robot to traverse challenging terrain. The full paper, more videos, and the code (coming soon) can be found here.
Neural Volumetric Memory for Visual Locomotion Control
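For readers who like to see the idea in code, here is a minimal, hypothetical sketch of the short-term memory concept: buffer a few recent depth frames with their estimated poses, bring them into a common frame, and fuse them into one occupancy grid. This illustrates the general pattern only, not the authors' Neural Volumetric Memory implementation; all names and parameters below are assumptions.

```python
# Illustrative sketch only -- not the paper's Neural Volumetric Memory.
# Idea: keep the last few depth frames (as 3D points plus camera poses),
# bring them into the robot's current frame, and fuse them into one grid.
from collections import deque
import numpy as np

class ShortTermMemory:
    def __init__(self, window=5, grid_shape=(64, 64, 32), voxel_size=0.05):
        self.frames = deque(maxlen=window)   # rolling buffer of (points, pose)
        self.grid_shape = np.array(grid_shape)
        self.voxel_size = voxel_size

    def add_frame(self, points_cam, cam_to_body):
        """points_cam: (N, 3) points from one depth frame; cam_to_body: 4x4 pose."""
        self.frames.append((points_cam, cam_to_body))

    def fuse(self):
        """Average the buffered frames into a single occupancy grid."""
        grid = np.zeros(self.grid_shape, dtype=np.float32)
        for points, pose in self.frames:
            homo = np.hstack([points, np.ones((len(points), 1))])
            pts = (pose @ homo.T).T[:, :3]                     # into the body frame
            idx = np.floor(pts / self.voxel_size).astype(int)  # voxel coordinates
            ok = np.all((idx >= 0) & (idx < self.grid_shape), axis=1)
            np.add.at(grid, tuple(idx[ok].T), 1.0)             # accumulate hits
        return grid / max(len(self.frames), 1)

memory = ShortTermMemory()
memory.add_frame(np.random.rand(1000, 3), np.eye(4))  # fake frame at identity pose
occupancy = memory.fuse()
```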
Mori3: A polygon shape-shifting robot for space travel
Staying with robots built for difficult settings, researchers at EPFL's School of Engineering have created Mori3, a modular robot that can change shape and interact with objects and people. By combining digital polygon meshing with swarm behavior, Mori3 units can morph from 2D triangles into numerous 3D shapes. The study highlights how modular robotics could be used for tasks like space exploration: the robots' adaptability and their ability to assemble and disassemble make them remarkably versatile, and the researchers envision crews using Mori3 robots aboard spacecraft for communication and exterior repairs.
Mori3, the shape-shifter and modular origami robot
A step toward safe and reliable autopilots for flying
And speaking of off-the-ground work, MIT researchers have devised a machine-learning method for challenging stabilize-avoid problems in autonomous aircraft. The approach offers a tenfold increase in stability and outperforms previous techniques on safety. By reformulating the problem as a constrained optimization and solving it with deep reinforcement learning, the researchers attained stable trajectories while avoiding obstacles; in one test, the method flew a simulated jet aircraft through a tight space without crashing. The technique could be used to design dynamic robot controllers and to maintain stability and safety in mission-critical systems. Future work will improve how the method accounts for uncertainty and test it on hardware.

This video shows how the researchers used their technique to effectively fly a simulated jet aircraft in a scenario where it had to stabilize to a target near the ground while maintaining a very low altitude and staying within a narrow flight corridor. Courtesy of the researchers.
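As rough intuition for how a stabilize-avoid task can be posed as a constrained optimization, here is a hedged sketch using a generic Lagrangian (primal-dual) penalty, a standard constrained-RL pattern. It is not the specific algorithm from the MIT paper, and the rollout statistics are placeholder values.

```python
# Generic constrained-RL intuition, not the MIT method itself: maximize a
# stability objective while a Lagrange multiplier prices safety violations.
import random

lam = 0.0          # multiplier on the avoid (safety) constraint
lr_lambda = 0.05   # dual ascent step size

for step in range(1000):
    # Placeholder rollout statistics from a simulator (assumed values):
    stability_reward = random.uniform(0.0, 1.0)  # e.g. closeness to a setpoint
    violation = random.uniform(-0.1, 0.2)        # > 0: obstacle margin breached
    # The policy would ascend this penalized objective (policy update omitted):
    objective = stability_reward - lam * violation
    # Dual ascent: the multiplier grows while the constraint is being violated.
    lam = max(0.0, lam + lr_lambda * violation)

print(f"final multiplier on the safety constraint: {lam:.2f}")
```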
Metamaterials with built-in frustration have mechanical memory
Researchers from the University of Amsterdam and ENS de Lyon have reached a breakthrough in the development of materials with mechanical memory. They created metamaterials that remember how they were previously bent or stretched, and that contain a special region or line that keeps its shape when the material is pushed or pulled. To create this mechanical memory effect, the researchers exploited non-orientable order, the kind of structure found in objects like Möbius strips. The development could find use in mechanical and quantum computers, as well as in robotics and photonics.
Metamaterials with built-in frustration have mechanical memory
Hybrid AI-powered computer vision combines physics and Big Data
On the topic of enhancing computer vision technology, researchers from UCLA and the United States Army Research Laboratory have developed a hybrid strategy that builds physics-based awareness into data-driven algorithms. Their article presents several ways to integrate physics and data in AI, including physics-informed datasets, network architectures, and loss functions. The hybrid technique has shown promising results in image enhancement, motion prediction, and object tracking, and the researchers suggest that deep learning-based AI systems may eventually learn to master the rules of physics on their own.

Achuta Kadambi/UCLA
Graphic showing two techniques to incorporate physics into machine learning pipelines — residual physics (top) and physical fusion (bottom). Source.
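To make the "residual physics" pattern named in the graphic concrete, here is a small hypothetical sketch: an analytic physics model produces a first prediction, and a learned component corrects only the residual error. The ballistic model and the linear stand-in for a trained network are illustrative assumptions, not code from the paper.

```python
# Residual-physics pattern, sketched: prediction = physics(state) + learned residual.
import numpy as np

def physics_model(state, dt=0.01, g=9.81):
    """Analytic first guess, e.g. ballistic motion; state = [x, y, vx, vy]."""
    x, y, vx, vy = state
    return np.array([x + vx * dt, y + vy * dt, vx, vy - g * dt])

def learned_residual(state, weights):
    """Stand-in for a trained network correcting unmodeled effects (e.g. drag)."""
    return weights @ state  # a linear map keeps the sketch self-contained

def hybrid_predict(state, weights):
    # The data-driven term corrects the physics estimate instead of replacing it.
    return physics_model(state) + learned_residual(state, weights)

state = np.array([0.0, 10.0, 5.0, 0.0])
weights = np.zeros((4, 4))  # untrained residual: falls back to pure physics
print(hybrid_predict(state, weights))
```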
myCobot 320 AI Kit 2023
On the industry side, Elephant Robotics has just released the myCobot 320 AI Kit 2023, a robotic arm kit built for user-programmable development. Its increased working radius, higher payload capacity, and intelligent grasping abilities make it flexible enough for business, research, and creative projects. The kit brings considerable advances over earlier designs, supports five sophisticated vision-recognition algorithms, includes grippers, and ships with user-friendly visualization software.

Source: Elephant Robotics
Bowl Bot
Finally, Nala Robotics has recently unveiled the Bowl Bot, an autonomous, self-cleaning robot that can prepare a wide range of individualized food bowls. Within a small footprint, it offers a choice of 28 ingredients across bases, proteins, garnishes, and sauces. Equipped with cutting-edge AI and vision technologies, the Bowl Bot runs at rapid speeds while upholding cleanliness and eliminating cross-contamination with its self-cleaning system.

Source: Nala Robotics
These remarkable breakthroughs are merely a glimpse into the vibrant and dynamic world of robotics. The field continues to inspire and push boundaries, propelling us toward a future where robotics technology plays an increasingly pivotal role. Stay tuned for more exciting updates in our next edition!
Sources:
- “Four-Legged Robot Traverses Tricky Terrains Thanks to Improved 3D Vision.” Accessed 1 July 2023.
- Belke, C. H., Holdcroft, K., Sigrist, A., & Paik, J. “Morphological Flexibility in Robotic Systems through Physical Polygon Meshing.” Nature Machine Intelligence, 2023. DOI: 10.1038/s42256-023-00676-8.
- “A Step toward Safe and Reliable Autopilots for Flying.” MIT News, Massachusetts Institute of Technology. Accessed 12 June 2023.
- “Metamaterials with Built-in Frustration Have Mechanical Memory.” University of Amsterdam. Accessed 1 July 2023.
- “Hybrid AI-Powered Computer Vision Combines Physics and Big Data.” Accessed 1 July 2023.
- “Empowering Research and Development: Introducing the myCobot 320 AI Kit 2023 by Elephant Robotics.” Accessed 1 July 2023.
- “The Bowls, a Fully Automated Robotic Salad Bowl Maker.” Nala Robotics. Accessed 1 July 2023.
What’s coming up at #RoboCup2023?
This year, RoboCup will be held in Bordeaux from 4-10 July. The event will see around 2,500 participants from 45 different countries take part in competitions, training sessions, and a symposium. You can see the schedule for the week here.
The leagues and their competitions
The league competitions will take place on 6-9 July. You can find out more about the different leagues at these links:
- RoboCupSoccer
- RoboCupRescue
- RoboCup@Home
  - This league comprises: Open Platform, Domestic Standard Platform, and Social Standard Platform
- RoboCupIndustrial
- RoboCupJunior
Symposium
The RoboCup symposium will take place on 10 July. The programme can be found here.
There will be three keynote talks:
- Cynthia Breazeal, Social Robots: Reflections and Predictions of Our Future Relationship with Personal Robots
- Ben Moran and Guy Lever, Learning Agile Soccer Skills for a Bipedal Robot with Deep Reinforcement Learning
- Laurence Devillers, Socio-affective robots: ethical issues
Find out more at the event website.
Why diversity and inclusion needs to be at the forefront of future AI

Image: shutterstock.com
By Inês Hipólito/Deborah Pirchner, Frontiers science writer
Inês Hipólito is a highly accomplished researcher, recognized for her work in esteemed journals and contributions as a co-editor. She has received research awards including the prestigious Talent Grant from the University of Amsterdam in 2021. After her PhD, she held positions at the Berlin School of Mind and Brain and Humboldt-Universität zu Berlin. Currently, she is a permanent lecturer of the philosophy of AI at Macquarie University, focusing on cognitive development and the interplay between augmented cognition (AI) and the sociocultural environment.
Inês co-leads a consortium project on ‘Exploring and Designing Urban Density. Neurourbanism as a Novel Approach in Global Health,’ funded by the Berlin University Alliance. She also serves as an ethicist of AI at Verses.
Beyond her research, she co-founded and serves as vice-president of the International Society for the Philosophy of the Sciences of the Mind. Inês is the host of the thought-provoking podcast ‘The PhilospHER’s Way’ and actively contributed to the Women in Philosophy Committee and the Committee in Diversity and Inclusivity at the Australasian Association of Philosophy from 2017 to 2020.
As part of our Frontier Scientist series, Hipólito caught up with Frontiers to tell us about her career and research.

Image: Inês Hipólito
What inspired you to become a researcher?
Throughout my personal journey, my innate curiosity and passion for understanding our experience of the world have been the driving forces in my life. Interacting with inspiring teachers and mentors during my education further fueled my motivation to explore the possibilities of objective understanding. This led me to pursue a multidisciplinary path in philosophy and neuroscience, embracing the original intent of cognitive science for interdisciplinary collaboration. I believe that by bridging disciplinary gaps, we can gain an understanding of the human mind and its interaction with the world. This integrative approach enables me to contribute to both scientific knowledge and real-world applications benefitting individuals and society as a whole.
Can you tell us about the research you’re currently working on?
My research centers around cognitive development and its implications in the cognitive science of AI. Sociocultural contexts play a pivotal role in shaping cognitive development, ranging from fundamental cognitive processes to more advanced, semantically sophisticated cognitive activities that we acquire and engage with.
As our world becomes increasingly permeated by AI, my research focuses on two main aspects. Firstly, I investigate how smart environments such as online spaces, virtual reality, and digitalized citizenship influence context-dependent cognitive development. By exploring the impact of these environments, I aim to gain insights into how cognition is shaped and adapted within these technologically mediated contexts.
Secondly, I examine how AI design emerges from specific sociocultural settings. Rather than merely reflecting society, AI design embodies societal values and aspirations. I explore the intricate relationship between AI and its sociocultural origins to understand how technology can both shape and be influenced by the context in which it is developed.
In your opinion, why is your research important?
The aim of my work is to contribute to the understanding of the complex relationship between cognition and AI, focusing on the sociocultural dynamics that influence both cognitive development and the design of artificial intelligence systems. I am particularly interested in the paradoxical nature of AI development and its societal impact: while technology has historically improved lives, AI has also brought attention to problematic biases and segregation highlighted in feminist technoscience literature.
As AI progresses, it is crucial to ensure that advancements benefit everyone and do not perpetuate historical inequalities. Inclusivity and equality should be prioritized, challenging dominant narratives that favor certain groups, particularly white males. Recognizing that AI technologies embody our implicit biases and reflect our attitudes towards diversity and our relationship with the natural world enables us to navigate the ethical and societal implications of AI more effectively.
Are there any common misconceptions about this area of research? How would you address them?
The common misconception of viewing the mind as a computer has significant implications for AI design and our understanding of cognition. When cognition is seen as a simple input-output process in the brain, it overlooks the embodied complexities of human experience and the biases embedded in AI design. This reductionist view fails to account for the importance of embodied interaction, cognitive development, mental health, well-being, and societal equity.
Our subjective experience of the world cannot be reduced to mere information processing, as it is context-dependent and imbued with meanings partly constructed within societal power dynamics.
Because the environment is ever more permeated by AI, understanding how it shapes, and is shaped by, human experience requires looking beyond a conception of cognition as (meaningless) information processing. By recognizing the distributed and embodied nature of cognition, we can ensure that AI technologies are designed and integrated in a way that respects the complexities of human experience, embraces ambiguity, and promotes meaningful and equitable societal interactions.
What are some of the areas of research you’d like to see tackled in the years ahead?
In the years ahead, it is crucial to tackle several AI-related areas to shape a more inclusive and sustainable future:
- Design AI to reduce bias and discrimination, ensuring equal opportunities for individuals from diverse backgrounds.
- Make AI systems transparent and explainable, enabling people to understand how decisions are made and how to hold them accountable for unintended consequences.
- Collaborate with diverse stakeholders to address biases, cultural sensitivities, and challenges faced by marginalized communities in AI development.
- Consider the ecological impact, resource consumption, waste generation, and carbon footprint throughout the entire lifecycle of AI technologies.
How has open science benefited the reach and impact of your research?
Scientific knowledge that is publicly funded should be made freely available to align with the principles of open science. Open science emphasizes transparency, collaboration, and accessibility in scientific research and knowledge dissemination. By openly sharing AI-related knowledge, including code, data, and algorithms, we encourage diverse stakeholders to contribute their expertise, identify potential biases, and address ethical concerns within technoscience.
Furthermore, incorporating philosophical reasoning into the development of the philosophy of mind theory can inform ethical deliberation and decision-making in AI design and implementation by researchers and policymakers. This transparent and collaborative approach enables critical assessment and improvement of AI technologies to ensure fairness, diminishing of bias, and overall equity.
This article is republished from Frontiers in Robotics and AI blog. You can read the original article here.
Fusion hybrid linear actuator: Concept and disturbance resistance evaluation
Orchestra-conducting robot wows audience in S. Korean capital
Robot swarms neutralize harmful Byzantine robots using a blockchain-based token economy
Geek+ drives automation of advanced BMW-producing plant in China
Challenges in Drone Technology
Joanne Pransky: Rest in Peace (1959-2023)

It is with great sadness that I share that Joanne Pransky, the World’s First Robotic Psychiatrist, whom Isaac Asimov called the real Susan Calvin, passed away recently. I had several delightful conversations with her, including an interview and a moderated panel.
Joanne was a tireless advocate for robotics AND for women in robotics. She didn’t have advanced degrees in robotics, but she had worked in the robotics industry and then in robot trade journals, and she had quite the eye for distinguishing really useful technology from hype. Her enthusiasm and passion for constantly learning were an inspiration to me, and I was privileged to know her as a friend. I interviewed her a few years ago about Asimov, which you can see here. She points out how amazing Asimov was: at 19 years old he was writing about robots and imagining them in a positive way, as helpers, companions, and tools to enable us to do more of the “human” stuff, not the shoot-em-up, take-over-the-world Frankenstein monster motif. Joanne was one of the first to really push what is now called human-centered robotics: the idea that there is always a human involved in any robot system.
Since she knew Asimov, she was in a good position to discuss Dr. Susan Calvin, possibly the worst stereotype of a woman roboticist ever: no family, no friends, totally obsessed with work. You definitely should hear her discussion; I don’t want to spoil it by trying to paraphrase it. I don’t know whether Alec Nevala-Lee, the author of Astounding: John W. Campbell, Isaac Asimov, Robert A. Heinlein, L. Ron Hubbard, and the Golden Age of Science Fiction (a terrific book, you should read it), would agree, but it definitely adds a new dimension to understanding, and enjoying, Asimov’s robot stories.
I also moderated the 100 Years of R.U.R. panel with Joanne and Jeremy Brett for the 2021 We Robot conference. Her talk and comments were brilliant. While I had always heard that “robot” came from the Czech word “robota,” she pointed out that “robota” stems from the Greek word “orphanos,” which denotes a change in status (like being orphaned), with the “o” and “r” switched in Slavic languages. So the roots in R.U.R. aren’t just drudgery; being a robot also means having a lower status. Both words convey exactly what Capek was trying to express about the dehumanization of workers. What an interesting detail!
That sums up Joanne: smart, seeing things that others missed, warm, positive, enthusiastic, engaging, wanting everyone to know more, do more, have a better life through robotics.
Joanne, I miss you.
And if you never met her, please check out her interview:
Soft robo-glove can help stroke patients relearn to play music
Researchers develop first-ever wooden robotic gripper that is driven by moisture, temperature and lighting
How should a robot explore the moon? A simple question shows the limits of current AI systems
Titan submersible disaster underscores dangers of deep-sea exploration – an engineer explains why most ocean science is conducted with crewless submarines

Researchers are increasingly using small, autonomous underwater robots to collect data in the world’s oceans. NOAA Teacher at Sea Program, NOAA Ship PISCES, CC BY-SA
By Nina Mahmoudian (Associate Professor of Mechanical Engineering, Purdue University)
Rescuers spotted debris from the tourist submarine Titan on the ocean floor near the wreck of the Titanic on June 22, 2023, indicating that the vessel suffered a catastrophic failure and the five people aboard were killed.
Bringing people to the bottom of the deep ocean is inherently dangerous. At the same time, climate change means collecting data from the world’s oceans is more vital than ever. Purdue University mechanical engineer Nina Mahmoudian explains how researchers reduce the risks and costs associated with deep-sea exploration: Send down subs, but keep people on the surface.
Why is most underwater research conducted with remotely operated and autonomous underwater vehicles?
When we talk about water studies, we’re talking about vast areas. And covering vast areas requires tools that can work for extended periods of time, sometimes months. Having people aboard underwater vehicles, especially for such long periods of time, is expensive and dangerous.
One of the tools researchers use is remotely operated vehicles, or ROVs. Basically, there is a cable between the vehicle and the operator that allows the operator to command and move the vehicle, and the vehicle can relay data in real time. ROV technology has progressed a lot and can now reach the deep ocean – down to a depth of 6,000 meters (19,685 feet). It is also better able to provide the mobility necessary for observing the seabed and gathering data.
Autonomous underwater vehicles provide another opportunity for underwater exploration. They are usually not tethered to a ship. They are typically programmed ahead of time to do a specific mission. And while they are underwater they usually don’t have constant communication. At some interval, they surface, relay the whole amount of data that they have gathered, change the battery or recharge and receive renewed instructions before again submerging and continuing their mission.
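The submerge-surface-relay cycle described above can be summarized in code. The sketch below is purely schematic: every function is a stub with assumed names and fake sensor values, and real mission software is far more involved.

```python
# Schematic AUV mission cycle, with stubbed sensors and comms (assumed names).
import random

def sample_sensors():
    # Stand-ins for pressure (dbar), temperature (deg C), salinity (PSU).
    return (random.uniform(0, 600), random.uniform(2, 20), random.uniform(33, 37))

def run_mission(legs, samples_per_leg=100):
    for leg in legs:
        # Submerged: no communication, just follow the programmed leg and log data.
        log = [sample_sensors() for _ in range(samples_per_leg)]
        # Surfaced: GPS fix, relay everything gathered, recharge, get new orders.
        print(f"leg {leg}: surfacing, uploading {len(log)} records, awaiting instructions")

run_mission(["A->B", "B->C", "C->A"])
```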
What can remotely operated and autonomous underwater vehicles do that crewed submersibles can’t, and vice versa?
Crewed submersibles are exciting for the public and for those involved, and they are helpful for the increased capabilities humans bring in operating instruments and making decisions, similar to crewed space exploration. However, they are much more expensive than uncrewed exploration because of the required size of the platforms and the need for life-support and safety systems. Crewed submersibles today cost tens of thousands of dollars a day to operate.
Uncrewed systems provide better opportunities for exploration at lower cost and risk when operating over vast areas and in inhospitable locations. Using remotely operated and autonomous underwater vehicles gives operators the opportunity to perform tasks that are dangerous for humans, like observing under ice and detecting underwater mines.
Remotely operated vehicles can operate under Antarctic ice and in other dangerous places.
How has the technology for deep ocean research evolved?
The technology has advanced dramatically in recent years due to progress in sensors and computation. There has been great progress in miniaturization of acoustic sensors and sonars for use underwater. Computers have also become more miniaturized, capable and power efficient. There has been a lot of work on battery technology and connectors that are watertight. Additive manufacturing and 3D printing also help build hulls and components that can withstand the high pressures at depth at much lower costs.
There has also been great progress toward increasing autonomy using more advanced algorithms, in addition to traditional methods for navigation, localization and detection. For example, machine learning algorithms can help a vehicle detect and classify objects, whether stationary like a pipeline or mobile like schools of fish.
What kinds of discoveries have been made using remotely operated and autonomous underwater vehicles?
One example is underwater gliders. These are buoyancy-driven autonomous underwater vehicles. They can stay in water for months. They can collect data on pressure, temperature and salinity as they go up and down in water. All of these are very helpful for researchers to have an understanding of changes that are happening in oceans.
One of these platforms traveled across the North Atlantic Ocean from the coast of Massachusetts to Ireland for nearly a year in 2016 and 2017. The amount of data captured in that time was unprecedented. To put it in perspective, a vehicle like that costs about $200,000. The operators were remote. Every eight hours the glider came to the surface, got a GPS fix and said, “Hey, I am here,” and the crew gave it the plan for the next leg of the mission. If a crewed ship had been sent to gather that amount of data for that long, it would have cost millions.
In 2019, researchers used an autonomous underwater vehicle to collect invaluable data about the seabed beneath the Thwaites glacier in Antarctica.
Energy companies are also using remotely operated and autonomous underwater vehicles for inspecting and monitoring offshore renewable energy and oil and gas infrastructure on the seabed.
Where is the technology headed?
Underwater systems are slow-moving platforms, and if researchers can deploy them in large numbers that would give them an advantage for covering large areas of ocean. A great deal of effort is being put into coordination and fleet-oriented autonomy of these platforms, as well as into advancing data gathering using onboard sensors such as cameras, sonars and dissolved oxygen sensors. Another aspect of advancing vehicle autonomy is real-time underwater decision-making and data analysis.
What is the focus of your research on these submersibles?
My team and I focus on developing navigational and mission-planning algorithms for persistent operations, meaning long-term missions with minimal human oversight. The goal is to respond to two of the main constraints in the deployment of autonomous systems. One is battery life. The other is unknown situations.
The author’s research includes a project to allow autonomous underwater vehicles to recharge their batteries without human intervention.
For battery life, we work on at-sea recharging, both underwater and at the surface. We are developing tools for autonomous deployment, recovery, recharging, and data transfer for longer missions at sea. For unknown situations, we are working on recognizing and avoiding obstacles and adapting to different ocean currents – basically allowing a vehicle to navigate in rough conditions on its own.
To adapt to changing dynamics and component failures, we are working on methodologies to help the vehicle detect the change and compensate to be able to continue and finish the mission.
These efforts will enable long-term ocean studies including observing environmental conditions and mapping uncharted areas.

Nina Mahmoudian receives funding from National Science Foundation and Office of Naval Research.
This article is republished from The Conversation under a Creative Commons license. Read the original article.