Archive 24.08.2022


How an Autonomous Machine Vision System Increased Accuracy and Cut Waste

QA is a crucial step in every manufacturing process, regardless of industry size or sector. However, manual QA is not fit for the strict standards of Industry 4.0, since human inspectors may miss defects, especially when inspecting highly complex electrical items.

Robot helps reveal how ants pass on knowledge

Ant leading other ant to new nest, known as tandem running. Image credit: Norasmah Basari and Nigel R Franks

The team built the robot to mimic the behaviour of rock ants that use one-to-one tuition, in which an ant that has discovered a much better new nest can teach the route there to another individual.

The findings, published in the Journal of Experimental Biology, confirm that most of the important elements of teaching in these ants are now understood because the teaching ant can be replaced by a machine.

Key to this process of teaching is tandem running where one ant literally leads another ant quite slowly along a route to the new nest. The pupil ant learns the route sufficiently well that it can find its own way back home and then lead a tandem-run with another ant to the new nest, and so on.

Prof Nigel Franks of Bristol’s School of Biological Sciences said: “Teaching is so important in our own lives that we spend a great deal of time either instructing others or being taught ourselves. This should cause us to wonder whether teaching actually occurs among non-human animals. And, in fact, the first case in which teaching was demonstrated rigorously in any other animal was in an ant.” The team wanted to determine what was necessary and sufficient in such teaching. If they could build a robot that successfully replaced the teacher, this should show that they largely understood all the essential elements in this process.

Prof Nigel Franks showing Sir David Attenborough the gantry during the opening of the new Life Sciences Building in 2014. Image credit: University of Bristol

The researchers built a large arena so there was an appreciable distance between the ants’ old nest, which was deliberately made to be of low quality, and a new much better one that ants could be led to by a robot. A gantry was placed atop the arena to move back and forth with a small sliding robot attached to it, so that the scientists could direct the robot to move along either straight or wavy routes. Attractive scent glands, from a worker ant, were attached to the robot to give it the pheromones of an ant teacher.

Prof Franks explained: “We waited for an ant to leave the old nest and put the robot pin, adorned with attractive pheromones, directly ahead of it. The pinhead was programmed to move towards the new nest either on a straight path or on a beautifully sinuous one. We had to allow for the robot to be interrupted in its journey, by us, so that we could wait for the following ant to catch up after it had looked around to learn landmarks.”

Diagram of ant pheromone glands. Image credit: Norasmah Basari

“When the follower ant had been led by the robot to the new nest, we allowed it to examine the new nest and then, in its own time, begin its homeward journey. We then used the gantry automatically to track the path of the returning ant.”
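The leading procedure can be pictured as a simple control loop: advance the scented pin a little at a time, and pause whenever the follower falls too far behind. The Python sketch below is a toy simulation of that loop, not the team's gantry software; the step size, gap threshold, and follower model are illustrative assumptions.

```python
import math
import random

# Toy sketch (not the authors' code) of the tandem-running protocol described above:
# a "teacher" pin advances along a route and pauses whenever the simulated follower
# ant falls too far behind, as a real tandem leader would.

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def lead_tandem_run(route, max_gap=10.0, step=2.0):
    pin = route[0]
    follower = route[0]
    pin_path = []
    for waypoint in route[1:]:
        while dist(pin, waypoint) > step:
            if dist(pin, follower) <= max_gap:
                # Follower is close enough: advance the pin one step towards the waypoint.
                dx, dy = waypoint[0] - pin[0], waypoint[1] - pin[1]
                d = dist(pin, waypoint)
                pin = (pin[0] + step * dx / d, pin[1] + step * dy / d)
            # The follower pauses to learn landmarks, then catches up a variable amount.
            gap = dist(pin, follower)
            if gap > 0:
                move = min(random.uniform(0.0, 2 * step), gap)
                fx, fy = pin[0] - follower[0], pin[1] - follower[1]
                follower = (follower[0] + move * fx / gap, follower[1] + move * fy / gap)
            pin_path.append(pin)
    return pin_path

# Example: lead along a straight route from the old nest to the new nest.
straight_route = [(0.0, 0.0), (100.0, 0.0)]
print(len(lead_tandem_run(straight_route)), "pin positions recorded")
```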

The team found that the robot had indeed taught the route successfully to the apprentice ant. The ants knew their way back to the old nest whether they had taken a winding path or a straight one.

Prof Franks explained: “A straight path might be quicker but a winding path would provide more time in which the following ant could better learn landmarks so that it could find its way home as efficiently as if it had been on a straight path.

“Crucially, we could compare the performance of the ants that the robot had taught with ones that we carried to the site of the new nest and that had not had an opportunity to learn the route. The taught ants found their way home much more quickly and successfully.”

The experiments were conducted by undergraduates Jacob Podesta, who is now a PhD student at York, and Edward Jarvis, who was also a Masters student at Professor Nigel Franks’s Lab. The gantry programming was accomplished by Dr. Alan Worley and all the statistical analyses were driven by Dr. Ana Sendova-Franks.

Their approach should make it possible to interrogate further exactly what is involved in successful teaching.

Bionic underwater vehicle inspired by fish with enlarged pectoral fins

Underwater robots are being widely used as tools in a variety of marine tasks. The RobDact is one such bionic underwater vehicle, inspired by fish of the family Dactylopteridae, which are known for their enlarged pectoral fins. A research team has combined computational fluid dynamics and a force measurement experiment to study the RobDact, creating an accurate hydrodynamic model that allows them to better control the vehicle.
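As a rough illustration of what building such a hydrodynamic model can involve, the sketch below fits simple drag coefficients to force-versus-speed data, which in practice could come from CFD runs or from a force-measurement rig. The quadratic drag form and all numbers are assumptions for illustration and are not taken from the RobDact work.

```python
import numpy as np

# Combined dataset: flow speed (m/s) and drag force (N) from simulation and experiment.
# Values are invented for illustration.
speeds = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
forces = np.array([0.05, 0.13, 0.26, 0.43, 0.64, 0.90])

# Assumed model: F = c1 * v + c2 * v * |v|  (linear plus quadratic drag terms).
A = np.column_stack([speeds, speeds * np.abs(speeds)])
(c1, c2), *_ = np.linalg.lstsq(A, forces, rcond=None)

print(f"fitted drag coefficients: c1={c1:.3f}, c2={c2:.3f}")
# The fitted model can then predict hydrodynamic forces at commanded speeds,
# which is what makes model-based control of the vehicle possible.
```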

A reinforcement learning framework to improve the soccer shooting skills of quadruped robots

Researchers at the University of California, Berkeley (UC Berkeley), the Université de Montréal and Mila have recently developed a hierarchical reinforcement learning framework to improve the precision of quadrupedal robots in soccer shooting. The framework, introduced in a paper pre-published on arXiv, was deployed on a Unitree A1, a quadruped robot developed by Unitree Robotics.
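In a hierarchical framework of this kind, a high-level policy typically chooses task-level parameters (such as how to swing the leg at the ball) while a low-level controller turns them into joint commands. The sketch below illustrates only that division of labour; the interfaces, network shapes and numbers are assumptions and do not reproduce the paper's framework.

```python
import numpy as np

def high_level_policy(ball_pos, target_pos, weights):
    """Map task state to kick parameters (e.g. swing direction and strength)."""
    state = np.concatenate([ball_pos, target_pos])
    return np.tanh(weights @ state)  # e.g. [swing_angle, swing_speed]

def low_level_controller(kick_params, joint_state):
    """Track a desired pose derived from the kick parameters (placeholder first-order tracking)."""
    desired = np.concatenate([kick_params, np.zeros(len(joint_state) - len(kick_params))])
    return joint_state + 0.1 * (desired - joint_state)

# Toy rollout: 12 joints, random high-level weights standing in for a trained policy.
rng = np.random.default_rng(0)
weights = rng.normal(size=(2, 4))
joints = np.zeros(12)
kick = high_level_policy(np.array([0.3, 0.0]), np.array([2.0, 0.5]), weights)
for _ in range(50):
    joints = low_level_controller(kick, joints)
print("kick parameters:", kick)
```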

ep.359: Perception and Decision-Making for Underwater Robots, with Brendan Englot

Prof Brendan Englot, from Stevens Institute of Technology, discusses the challenges in perception and decision-making for underwater robots – especially in the field. He discusses ongoing research using the BlueROV platform and autonomous driving simulators.

Brendan Englot

Brendan Englot received his S.B., S.M., and Ph.D. degrees in mechanical engineering from the Massachusetts Institute of Technology in 2007, 2009, and 2012, respectively. He is currently an Associate Professor with the Department of Mechanical Engineering at Stevens Institute of Technology in Hoboken, New Jersey. At Stevens, he also serves as interim director of the Stevens Institute for Artificial Intelligence. He is interested in perception, planning, optimization, and control that enable mobile robots to achieve robust autonomy in complex physical environments, and his recent work has considered sensing tasks motivated by underwater surveillance and inspection applications, and path planning with multiple objectives, unreliable sensors, and imprecise maps.


Using reinforcement learning for control of direct ink writing

Closed-loop printing enhanced by machine learning. © Michal Piovarči/ISTA

Using fluids for 3D printing may seem paradoxical at first glance, but not all fluids are watery. Many useful materials are more viscous, from inks to hydrogels, and thus qualify for printing. Yet their potential has been relatively unexplored due to the limited control over their behaviour. Now, researchers of the Bickel group at the Institute of Science and Technology Austria (ISTA) are employing machine learning in virtual environments to achieve better results in real-world experiments.

3D printing is on the rise. Many people are familiar with the characteristic plastic structures. However, attention has also turned to different printing materials, such as inks, viscous pastes and hydrogels, which could potentially be used to 3D-print biomaterials and even food. But printing such fluids is challenging. Exact control over them requires painstaking trial-and-error experiments, because they tend to deform and spread after application.

A team of researchers, including Michal Piovarči and Bernd Bickel, are tackling these challenges. In their laboratories at the Institute of Science and Technology Austria (ISTA), they are using reinforcement learning – a type of machine learning – to improve the printing technique of viscous materials. The results were presented at the SIGGRAPH conference, the annual meeting of simulation and visual computing researchers.

A critical part of manufacturing is identifying the parameters that consistently produce high-quality structures. An assumption is implicit here: that the relationship between parameters and outcome is predictable. In reality, processes always exhibit some variability due to the nature of the materials used, and in printing with viscous materials this variability is even more pronounced, because the materials take significant time to settle after deposition. The question is: how can we understand, and deal with, these complex dynamics?

“Instead of printing thousands of samples, which is not only expensive but also rather tedious, we put our expertise in computer simulations into action,” responds Piovarči, lead author of the study. While computer graphics often trades physical accuracy for faster simulation, here the team developed a simulated environment that mirrors the physical process accurately. “We modelled the ink’s current and short-horizon future states based on fluid physics. The efficiency of our model allowed us to simulate hundreds of prints simultaneously, far more than we could ever have done in real experiments. We used the resulting data for reinforcement learning and gained the knowledge of how to control the ink and other materials.”
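As a toy illustration of that workflow, the sketch below simulates many prints of a single line in a crude deposition model and uses a simple policy-search method (the cross-entropy method, standing in here for the reinforcement learning used in the paper) to find a flow-rate policy. The spreading model, reward and policy form are all illustrative assumptions.

```python
import numpy as np

TARGET_HEIGHT = 1.0
N_STEPS = 40          # nozzle positions along one printed line
N_PRINTS = 200        # "hundreds of prints" simulated per iteration

def simulate_prints(flow_gains):
    """Simulate one print per candidate policy parameter; return per-print rewards."""
    heights = np.zeros((len(flow_gains), N_STEPS))
    for t in range(N_STEPS):
        error = TARGET_HEIGHT - heights[:, t]
        deposit = np.clip(flow_gains * error, 0.0, 0.5)   # policy: proportional flow rate
        heights[:, t] += deposit
        if t + 1 < N_STEPS:                               # viscous ink spreads forward
            heights[:, t + 1] += 0.3 * deposit
    return -np.mean((heights - TARGET_HEIGHT) ** 2, axis=1)

# Cross-entropy search over the single policy parameter (the flow gain).
mean, std = 0.5, 0.3
for _ in range(20):
    gains = np.random.normal(mean, std, size=N_PRINTS)
    rewards = simulate_prints(gains)
    elite = gains[np.argsort(rewards)[-20:]]              # keep the best 10% of prints
    mean, std = elite.mean(), elite.std() + 1e-3
print(f"learned flow gain: {mean:.3f}")
```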

Learning in virtual environments how to control the ink. © Michal Piovarči/ISTA

The machine learning algorithm established various policies, including one to control the movement of the ink-dispensing nozzle at a corner such that no unwanted blobs occur. The printing apparatus would not follow the baseline of the desired shape anymore, but rather take a slightly altered path which eventually yields better results. To verify that these rules can handle various materials, they trained three models using liquids of different viscosity. They tested their method with experiments using inks of various thicknesses.
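One way to picture such a corner policy is as a small adjustment to the nominal toolpath: instead of tracing the corner exactly, the nozzle is pulled slightly towards the inside of the corner so that the spreading ink settles onto the intended shape. The sketch below is only an illustration of that idea; the geometry and the pull-in factor are assumptions, not the learned policy.

```python
import numpy as np

def soften_corner(prev_pt, corner_pt, next_pt, pull=0.85):
    """Return an adjusted corner waypoint pulled towards the corner's interior."""
    prev_pt, corner_pt, next_pt = map(np.asarray, (prev_pt, corner_pt, next_pt))
    interior = 0.5 * (prev_pt + next_pt)          # point on the inside of the corner
    return interior + pull * (corner_pt - interior)

# Example: a right-angle corner of a square layer.
adjusted = soften_corner((0.0, 10.0), (10.0, 10.0), (10.0, 0.0))
print("nominal corner (10, 10) replaced by", adjusted)
```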

The team opted for closed-loop forms instead of simple lines or writing, because “closed loops represent the standard case for 3D printing and that is our target application,” explains Piovarči. Although the single-layer printing in this project is sufficient for the use cases in printed electronics, he wants to add another dimension. “Naturally, three dimensional objects are our goal, such that one day we can print optical designs, food or functional mechanisms. I find it fascinating that we as computer graphics community can be the major driving force in machine learning for 3D printing.”

Read the research in full

Closed-Loop Control of Direct Ink Writing via Reinforcement Learning
Michal Piovarči, Michael Foshey, Jie Xu, Timmothy Erps, Vahid Babaei, Piotr Didyk, Szymon Rusinkiewicz, Wojciech Matusik, Bernd Bickel

Why Should Businesses Outsource Payroll?

Payroll processing is something that every business owner or manager has to deal with. No matter the kind of sector you work in or the size of the team you manage, payroll cannot be avoided, and this is an important aspect of your business finances.  This process relates to salary information and payments, which can...


Aquabots: Ultrasoft liquid robots for biomedical and environmental applications

In recent years, roboticists have developed a wide variety of robotic systems with different body structures and capabilities. Most of these robots are made either of hard materials, such as metals, or of soft materials, such as silicone and rubber.

Debrief: The Reddit Robotics Showcase 2022

Once again the global robotics community rallied to provide a unique opportunity for amateurs and hobbyists to share their robotics projects alongside academics and industry professionals. Below are the recorded sessions from this year's event.

Industrial / Automation

Keynote “Matt Whelan (Ocado Technology) – The Ocado 600 Series Robot”

  • Nye Mech Works (HAPPA) – Real Power Armor
  • 3D Printed 6-Axis Robot Arm
  • Vasily Morzhakov (Rembrain) – Cloud Platform for Smart Robotics


Mobile Robots

Keynote “Prof. Marc Hanheide (Lincoln Centre for Autonomous Systems) – Mobile Robots in the Wild”

  • Julius Sustarevas – Armstone: Autonomous Mobile 3D Printer
  • Camera Controller Hexapod & Screw/Augur All-Terrain Robot
  • Keegan Neave – NE-Five
  • Dimitar – Gravis and Ricardo
  • Kamal Carter – Aim-Hack Robot
  • Calvin – BeBOT Real Time


Bio-Inspired Robots

Keynote “Dr. Matteo Russo (Rolls-Royce UTC in Manufacturing and On-Wing Technology) – Entering the Maze: Snake-Like Robots from Aerospace to Industry”

  • Colin MacKenzie – Humanoid, Hexapod, and Legged Robot Control
  • Halid Yildirim – Design of a Modular Quadruped Robot Dog
  • Jakub Bartoszek – Honey Badger Quadruped
  • Lutz Freitag – 01. RFC Berlin
  • Hamburg Bit-Bots
  • William Kerber – Human Mode Robotics – Lynx Quadruped and AI Training
  • Sanjeev Hegde – Juggernaut


Human Robot Interaction

Keynote “Dr. Ruth Aylett (The National Robotarium) – Social Agents and Human Robot Interaction”

  • Nathan Boyd – Developing humanoids for general purpose applications
  • Hand Controlled Artificial Hand
  • Ethan Fowler & Rich Walker – The Shadow Robot Company
  • Maël Abril – 6 Axis Dynamixel Robot Arm
  • Laura Smith (Tentacular) – Interactive Robotic Art

Sincere thanks on behalf of the RRS22 committee to every applicant, participant, and audience member who took the time to share their passion for robotics. We wish you all the best in your robotics endeavours.

As a volunteer-run endeavour in its second year, the showcase still has plenty of room for improvement. On reflection, this year's event saw greater enthusiasm and participation from the community, despite a smaller audience during the livestream. The RRS committee is aware of this and will be making strategy changes to ensure that RRS2023 justifies the effort put in by everyone. I will note that the positive feedback this year has been wonderful: a few people went out of their way to express how much they enjoyed this year's event, the variety of speakers and the passion of the community. We're confident that at next year's event we will be able to iron out the kinks and run a brilliant event for an audience worthy of the talent on display.

Discovering when an agent is present in a system

We want to build safe, aligned artificial general intelligence (AGI) systems that pursue the intended goals of their designers. Causal influence diagrams (CIDs) are a way to model decision-making situations that allow us to reason about agent incentives. By relating training setups to the incentives that shape agent behaviour, CIDs help illuminate potential risks before training an agent and can inspire better agent designs. But how do we know when a CID is an accurate model of a training setup?
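To make the idea concrete, a CID can be thought of as a small labelled directed graph with chance, decision and utility nodes. The sketch below encodes a one-decision toy diagram; the node names and the networkx representation are illustrative assumptions, not DeepMind's tooling.

```python
import networkx as nx

cid = nx.DiGraph()
cid.add_node("State", kind="chance")       # environment state the agent observes
cid.add_node("Action", kind="decision")    # the agent's decision node
cid.add_node("Reward", kind="utility")     # the utility the agent optimises

cid.add_edge("State", "Action", kind="information")  # the decision observes the state
cid.add_edge("State", "Reward", kind="causal")
cid.add_edge("Action", "Reward", kind="causal")

# Incentive analysis starts from questions like: which nodes can the decision
# influence on a path to a utility node?
influenced = {n for n in cid.nodes if n != "Action" and nx.has_path(cid, "Action", n)}
print("nodes downstream of the decision:", influenced)
```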