Robo-Insight #1
Welcome to the inaugural edition of Robo-Insight, a biweekly robotics news update! In this post, we are thrilled to present a range of remarkable advancements in the field, highlighting robotics progress in terrain traversability, shape morphing, object avoidance, mechanical memory, physics-based AI techniques, and new home robotics kits. These developments exemplify the continuous evolution and potential of robotics technology.
Four-legged robot traverses tricky terrains thanks to improved 3D vision
Recently, researchers from the University of California San Diego equipped four-legged robots with forward-facing depth cameras so they can clearly analyze the environment around and below them. The researchers built a model that extracts 3D information from short videos of 2D frames; this data can then be compared with past images to estimate the 3D transformation between them. The system is also self-checking: it fuses this information over time, giving the robot a kind of short-term memory. Although the model does not guide the robot to a specific location, it enables the robot to traverse challenging terrain. The full paper, more videos, and the code (coming soon) can be found here.
Neural Volumetric Memory for Visual Locomotion Control
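To make the fusion-into-short-term-memory idea a bit more concrete, here is a minimal Python sketch of a rolling buffer of recent depth frames aggregated into a single estimate. This is an illustrative toy, not the authors' Neural Volumetric Memory: the `DepthMemory` class, buffer size, and per-pixel averaging rule are invented for the example, and a real system would re-project frames using estimated camera motion before fusing them.

```python
# A minimal, illustrative sketch of a rolling "short-term memory" of depth
# frames, loosely inspired by the idea described above. The class name, buffer
# size, and simple averaging fusion rule are assumptions for illustration,
# not the method from the paper.
from collections import deque
import numpy as np


class DepthMemory:
    def __init__(self, capacity: int = 8):
        # Keep only the most recent `capacity` frames (short-term memory).
        self.frames = deque(maxlen=capacity)

    def add(self, depth_frame: np.ndarray) -> None:
        """Store a new depth frame (H x W array of metric depths)."""
        self.frames.append(depth_frame.astype(np.float32))

    def fuse(self) -> np.ndarray:
        """Fuse stored frames into one estimate by per-pixel averaging.

        A real system would first re-project each frame into a common
        coordinate frame using the estimated camera motion; here we simply
        average already-aligned frames to keep the sketch short.
        """
        if not self.frames:
            raise ValueError("memory is empty")
        return np.mean(np.stack(list(self.frames)), axis=0)


# Usage: push simulated frames and query the fused estimate.
memory = DepthMemory(capacity=4)
for _ in range(6):
    memory.add(np.random.rand(64, 64))  # stand-in for real depth images
fused = memory.fuse()
print(fused.shape)  # (64, 64)
```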
Mori3: A polygon shape-shifting robot for space travel
Along the lines of performing in difficult settings, researchers at the Engineering School of EPFL have created Mori3, a robot that can change shape and interact with objects and people. The modular Mori3 can morph from 2D triangles into numerous 3D shapes by fusing digital polygon meshing with swarm behavior. The study highlights how modular robotics could be used for tasks like space exploration: thanks to its adaptability and its ability to assemble and disassemble, Mori3 shows a great deal of versatility, and the researchers envision crews using the robots aboard spacecraft for communication purposes and exterior repairs.
Mori3, the shape-shifter and modular origami robot
A step toward safe and reliable autopilots for flying
And speaking of leaving the ground, MIT researchers have recently devised a machine-learning method to address challenging stabilize-avoid problems in autonomous aircraft. The approach offers a tenfold increase in stability and outperforms previous techniques in terms of safety. By reframing the problem as a constrained optimization and solving it with a deep reinforcement learning technique, the researchers were able to attain stable trajectories while avoiding obstacles; in simulation, the method flew a jet aircraft through a tight space without crashing. It could be used to design dynamic robot controllers and to maintain stability and safety in mission-critical systems. Future work will improve how the method accounts for uncertainty and test it on hardware.
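To illustrate what reframing a stabilize-avoid task as a constrained optimization can look like, here is a hedged Python sketch that combines a stabilization reward with a Lagrangian-style penalty for violating an avoidance constraint. The reward terms, safe-distance threshold, and multiplier update are hypothetical stand-ins, not the MIT controller.

```python
# Hedged sketch: shaping an objective for "stabilize while avoiding obstacles"
# via a Lagrangian-style penalty. All terms, gains, and thresholds here are
# illustrative assumptions, not the controller from the MIT work.
import numpy as np


def stabilize_reward(state_error: np.ndarray) -> float:
    # Higher reward the closer the aircraft is to the desired trim state.
    return -float(np.linalg.norm(state_error))


def constraint_violation(obstacle_distance: float, safe_distance: float = 5.0) -> float:
    # Positive when the aircraft is closer to an obstacle than allowed.
    return max(0.0, safe_distance - obstacle_distance)


def lagrangian_objective(state_error, obstacle_distance, lam: float) -> float:
    """Scalar objective an RL agent could maximize: task reward minus a
    multiplier-weighted penalty for violating the avoidance constraint."""
    return stabilize_reward(state_error) - lam * constraint_violation(obstacle_distance)


# The multiplier itself can be adapted: increase it whenever violations occur,
# which gradually pushes the learned policy toward satisfying the constraint.
lam = 1.0
for step in range(3):
    err = np.array([0.2, -0.1, 0.05])       # stand-in state error
    dist = 4.0 + step                        # stand-in obstacle distance
    obj = lagrangian_objective(err, dist, lam)
    lam += 0.5 * constraint_violation(dist)  # dual ascent on the multiplier
    print(f"step={step} objective={obj:.3f} lambda={lam:.2f}")
```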
Metamaterials with built-in frustration have mechanical memory
Researchers from the University of Amsterdam and ENS de Lyon have reached a breakthrough in the development of materials with mechanical memory. They created materials that remember how they were previously bent or stretched, and that contain a special point or line that keeps its shape when the material is pushed or pulled. This development in metamaterials may find use in mechanical and quantum computing, as well as in robotics and photonics. To create the mechanical memory effect, the researchers drew on the idea of non-orientable order, which is present in objects like Möbius strips.
Hybrid AI-powered computer vision combines physics and Big Data
On the topic of enhancing computer vision, researchers from UCLA and the United States Army Research Laboratory have developed a hybrid strategy that integrates physics-based awareness into data-driven algorithms. Their article presents several ways to combine physics and data in AI, such as physics-based datasets, physics-informed network architectures, and physics-based loss functions. The hybrid technique has shown promising results in image enhancement, motion prediction, and object tracking, and the researchers suggest that deep learning-based AI systems may eventually learn to master the rules of physics on their own.
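As a toy illustration of one of those integration routes, a physics-based loss function, the sketch below adds a physics-residual term (consistency with constant-gravity free fall) to an ordinary data-fitting loss. The model, weighting, and synthetic data are assumptions for illustration only, not the UCLA/ARL method.

```python
# Hedged sketch of a physics-informed loss: a standard data term plus a term
# penalizing disagreement with a known physical law (constant-gravity free
# fall). The toy model and weighting are illustrative assumptions only.
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2


def data_loss(pred_positions: np.ndarray, measured_positions: np.ndarray) -> float:
    # Ordinary mean-squared error against (possibly noisy) measurements.
    return float(np.mean((pred_positions - measured_positions) ** 2))


def physics_loss(pred_positions: np.ndarray, dt: float) -> float:
    # The second finite difference of position approximates acceleration; a
    # physically consistent trajectory should have acceleration close to -G.
    accel = np.diff(pred_positions, n=2) / dt**2
    return float(np.mean((accel + G) ** 2))


def hybrid_loss(pred, measured, dt, weight: float = 0.1) -> float:
    """Total loss = data fit + weighted physics residual."""
    return data_loss(pred, measured) + weight * physics_loss(pred, dt)


# Usage with a synthetic falling-object trajectory.
dt = 0.1
t = np.arange(0, 1, dt)
true_pos = 10.0 - 0.5 * G * t**2
noisy_meas = true_pos + np.random.normal(0, 0.05, size=t.shape)
pred = true_pos + 0.02  # a slightly biased "network prediction" stand-in
print(f"hybrid loss: {hybrid_loss(pred, noisy_meas, dt):.4f}")
```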
myCobot 320 AI Kit 2023
On the industry side, the myCobot 320 AI Kit 2023, a ground-breaking robotic arm built for user-programmable development, was just released by Elephant Robotics. It offers flexibility for business, research, and creative endeavors because of its increased working radius, higher payload capacity, and intelligent grasping abilities. The kit features considerable advancements over earlier designs, supports five sophisticated vision recognition algorithms, includes grippers, and comes with user-friendly visualization software.
Bowl Bot
Finally, the Bowl Bot is an autonomous, self-cleaning robot recently created by Nala Robotics that can prepare a wide range of individualized food bowls. Within a small footprint it offers a selection of 28 ingredients spanning bases, proteins, garnishes, and sauces. Outfitted with cutting-edge AI and vision technologies, the Bowl Bot works at high speed while maintaining hygiene, and its self-cleaning system eliminates cross-contamination.
These remarkable breakthroughs are merely a glimpse into the vibrant and dynamic world of robotics. The field continues to inspire and push boundaries, propelling us toward a future where robotics technology plays an increasingly pivotal role. Stay tuned for more exciting updates in our next edition!
Sources:
- “Four-Legged Robot Traverses Tricky Terrains Thanks to Improved 3D Vision.” Accessed 1 July 2023.
- Belke, C. H., Holdcroft, K., Sigrist, A., & Paik, J. “Morphological flexibility in robotic systems through physical polygon meshing.” Nature Machine Intelligence, 2023. DOI: 10.1038/s42256-023-00676-8.
- “A Step Toward Safe and Reliable Autopilots for Flying.” MIT News, Massachusetts Institute of Technology. Accessed 12 June 2023.
- “Metamaterials with Built-in Frustration Have Mechanical Memory.” University of Amsterdam. Accessed 1 July 2023.
- “Hybrid AI-Powered Computer Vision Combines Physics and Big Data.” Accessed 1 July 2023.
- “Empowering Research and Development: Introducing the myCobot 320 AI Kit 2023 by Elephant Robotics.” Accessed 1 July 2023.
- “The Bowls, a Fully Automated Robotic Salad Bowl Maker.” Nala Robotics. Accessed 1 July 2023.
What’s coming up at #RoboCup2023?
This year, RoboCup will be held in Bordeaux from 4-10 July. The event will see around 2,500 participants from 45 different countries take part in competitions, training sessions, and a symposium. You can see the schedule for the week here.
The leagues and their competitions
The league competitions will take place on 6-9 July. You can find out more about the different leagues at these links:
- RoboCupSoccer
- RoboCupRescue
- RoboCup@Home
  - This league comprises: Open platform, Domestic standard platform, and Social standard platform
- RoboCupIndustrial
- RoboCupJunior
Symposium
The RoboCup symposium will take place on 10 July. The programme can be found here.
There will be three keynote talks:
- Cynthia Breazeal, Social Robots: Reflections and Predictions of Our Future Relationship with Personal Robots
- Ben Moran and Guy Lever, Learning Agile Soccer Skills for a Bipedal Robot with Deep Reinforcement Learning
- Laurence Devillers, Socio-affective robots: ethical issues
Find out more at the event website.
Why diversity and inclusion needs to be at the forefront of future AI
By Inês Hipólito/Deborah Pirchner, Frontiers science writer
Inês Hipólito is a highly accomplished researcher, recognized for her work in esteemed journals and her contributions as a co-editor. She has received research awards, including the prestigious Talent Grant from the University of Amsterdam in 2021. After her PhD, she held positions at the Berlin School of Mind and Brain and Humboldt-Universität zu Berlin. Currently, she is a permanent lecturer in the philosophy of AI at Macquarie University, focusing on cognitive development and the interplay between augmented cognition (AI) and the sociocultural environment.
Inês co-leads a consortium project on ‘Exploring and Designing Urban Density. Neurourbanism as a Novel Approach in Global Health,’ funded by the Berlin University Alliance. She also serves as an ethicist of AI at Verses.
Beyond her research, she co-founded and serves as vice-president of the International Society for the Philosophy of the Sciences of the Mind. Inês is the host of the thought-provoking podcast ‘The PhilosopHER’s Way’ and actively contributed to the Women in Philosophy Committee and the Committee in Diversity and Inclusivity at the Australasian Association of Philosophy from 2017 to 2020.
As part of our Frontier Scientist series, Hipólito caught up with Frontiers to tell us about her career and research.
What inspired you to become a researcher?
Throughout my personal journey, my innate curiosity and passion for understanding our experience of the world have been the driving forces in my life. Interacting with inspiring teachers and mentors during my education further fueled my motivation to explore the possibilities of objective understanding. This led me to pursue a multidisciplinary path in philosophy and neuroscience, embracing the original intent of cognitive science for interdisciplinary collaboration. I believe that by bridging disciplinary gaps, we can gain an understanding of the human mind and its interaction with the world. This integrative approach enables me to contribute to both scientific knowledge and real-world applications benefitting individuals and society as a whole.
Can you tell us about the research you’re currently working on?
My research centers around cognitive development and its implications in the cognitive science of AI. Sociocultural contexts play a pivotal role in shaping cognitive development, ranging from fundamental cognitive processes to more advanced, semantically sophisticated cognitive activities that we acquire and engage with.
As our world becomes increasingly permeated by AI, my research focuses on two main aspects. Firstly, I investigate how smart environments such as online spaces, virtual reality, and digitalized citizenship influence context-dependent cognitive development. By exploring the impact of these environments, I aim to gain insights into how cognition is shaped and adapted within these technologically mediated contexts.
Secondly, I examine how AI design emerges from specific sociocultural settings. Rather than merely reflecting society, AI design embodies societal values and aspirations. I explore the intricate relationship between AI and its sociocultural origins to understand how technology can both shape and be influenced by the context in which it is developed.
In your opinion, why is your research important?
The aim of my work is to contribute to the understanding of the complex relationship between cognition and AI, focusing on the sociocultural dynamics that influence both cognitive development and the design of artificial intelligence systems. I am particularly interested in understanding the paradoxical nature of AI development and its societal impact: while technology has historically improved lives, AI has also brought attention to problematic biases and segregation highlighted in feminist technoscience literature.
As AI progresses, it is crucial to ensure that advancements benefit everyone and do not perpetuate historical inequalities. Inclusivity and equality should be prioritized, challenging dominant narratives that favor certain groups, particularly white males. Recognizing that AI technologies embody our implicit biases and reflect our attitudes towards diversity and our relationship with the natural world enables us to navigate the ethical and societal implications of AI more effectively.
Are there any common misconceptions about this area of research? How would you address them?
The common misconception of viewing the mind as a computer has significant implications for AI design and our understanding of cognition. Seeing cognition as a simple input-output process in the brain overlooks the embodied complexities of human experience and the biases embedded in AI design. This reductionist view fails to account for the importance of embodied interaction, cognitive development, mental health, well-being, and societal equity.
Subjective experience of the world cannot be reduced to mere information processing, as it is context-dependent and imbued with meanings partly constructed within societal power dynamics.
Because our environment is ever more permeated by AI, understanding how it shapes and is shaped by human experience requires looking beyond a conception of cognition as (meaningless) information processing. By recognizing the distributed and embodied nature of cognition, we can ensure that AI technologies are designed and integrated in a way that respects the complexities of human experience, embraces ambiguity, and promotes meaningful and equitable societal interactions.
What are some of the areas of research you’d like to see tackled in the years ahead?
In the years ahead, it is crucial to tackle several AI-related areas to shape a more inclusive and sustainable future:
- Design AI to reduce bias and discrimination, ensuring equal opportunities for individuals from diverse backgrounds.
- Make AI systems transparent and explainable, enabling people to understand how decisions are made and how to hold them accountable for unintended consequences.
- Collaborate with diverse stakeholders to address biases, cultural sensitivities, and challenges faced by marginalized communities in AI development.
- Consider the ecological impact, resource consumption, waste generation, and carbon footprint throughout the entire lifecycle of AI technologies.
How has open science benefited the reach and impact of your research?
Scientific knowledge that is publicly funded should be made freely available to align with the principles of open science. Open science emphasizes transparency, collaboration, and accessibility in scientific research and knowledge dissemination. By openly sharing AI-related knowledge, including code, data, and algorithms, we encourage diverse stakeholders to contribute their expertise, identify potential biases, and address ethical concerns within technoscience.
Furthermore, incorporating reasoning from the philosophy of mind can inform ethical deliberation and decision-making by researchers and policymakers in AI design and implementation. This transparent and collaborative approach enables critical assessment and improvement of AI technologies to ensure fairness, reduce bias, and promote overall equity.
This article is republished from Frontiers in Robotics and AI blog. You can read the original article here.
Joanne Pransky: Rest in Peace (1959-2023)
It is with great sadness that I share that Joanne Pransky, the World’s First Robotic Psychiatrist, whom Isaac Asimov called the real Susan Calvin, passed away recently. I had several delightful conversations with her, including an interview and a moderated panel.
Joanne was a tireless advocate for robotics AND for women in robotics. She didn’t have advanced degrees in robotics, but she had worked in the robotics industry and then in robot trade journals, and she had quite the eye for separating really useful technology from hype. Her enthusiasm and passion for constantly learning were an inspiration to me, and I was privileged to know her as a friend. I interviewed her a few years ago about Asimov, which you can see here. She points out how amazing Asimov was: at 19 years old he was writing about robots and imagining them in a positive way, as helpers, companions, and tools to enable us to do more of the “human” stuff, not the shoot-em-up, take-over-the-world Frankenstein monster motif. Joanne was one of the first to really push what is now called human-centered robotics: the idea that there is always a human involved in any robot system.
Since she knew Asimov, she was in a good position to discuss Dr. Susan Calvin, possibly the worst stereotype of a woman roboticist ever: no family, no friends, totally obsessed by work. You definitely should hear her discussion; I don’t want to spoil it by trying to paraphrase it. I don’t know if Alec Nevala-Lee, the author of Astounding: John W. Campbell, Isaac Asimov, Robert A. Heinlein, L. Ron Hubbard, and the Golden Age of Science Fiction (a terrific book, you should read it), would agree, but it definitely adds a new dimension to understanding, and enjoying, Asimov’s robot stories.
I also moderated the 100 Years of R.U.R. panel with Joanne and Jeremy Brett for the 2021 We Robot conference. Her talk and comments were brilliant. While I had always heard that “robot” came from the Czech word “robota”, she pointed out that “robota” stems from the Greek word “orphanos”, which denotes a change in status (like being orphaned), with the “o” and “r” switched in Slavic languages. So the roots in R.U.R. aren’t just drudgery; being a robot also implies a lower status. Both words convey exactly what Čapek was trying to express about the dehumanization of workers. What an interesting detail!
That sums up Joanne: smart, seeing things that others missed, warm, positive, enthusiastic, engaging, wanting everyone to know more, do more, have a better life through robotics.
Joanne, I miss you.
And if you never met her, please check out her interview: