
A robot made of sticks

In late summer, just as the leaves were starting to crisp and curl in the heat, Devin Carroll walked out of his apartment, looked on the ground, and picked up three sticks that he thought might work for his robot. He stripped the sticks, each about half an inch thick and the length of an adult hand, of their bark and lashed them with string to StickBot, a modular robot composed of circuitry, actuators, a microcontroller, and a motor driver.

Women in Tech leadership resources from IMTS 2022

There have been quite a few events recently focusing on Women in Robotics, Women in Manufacturing, Women in 3D Printing, Women in Engineering, and Women in Tech Leadership. One of the largest trade shows in the US, IMTS 2022, kicked off with a Women Make Manufacturing Move Reception, with Allison Grealis, President of Women in Manufacturing; Andra Keay, President of Women in Robotics; and Kristin Mulherin, President of Women in 3D Printing, ahead of a program packed with curated technical content and leadership sessions (see the resource list below).

On Tuesday, I also moderated the A3 Webinar “Robots and Beyond Roundtable: How Women in Robotics and Automation are Changing Manufacturing” with Joanne Moretti, CRO of Fictiv, Jackie Ram, VP of Operations, IAM Robotics, Jessica Moran, SVP and General Manager, Berkshire Grey, and Mikell Taylor, Principal Technical Program Manager, Amazon Robotics.

And on Thursday, Sept. 15, I moderated a panel on “Reaching New Heights: Women in Tech Leadership” with Meaghan Ziemba, Owner, Z-Ink Solutions and Host, Mavens of Manufacturing Podcast; Nicole Wolter, President & CEO, HM Manufacturing; and Laura Elan, Senior Director of Cybersecurity, MxD. We discussed so many great resources during our panel precall that we really wanted to put together a list that could be shared more widely after the events were over.

Resources for Women in Tech Leadership:

Suggestions from Nicole Wolter:

  • Sporting experience is a great pathway into leadership for women who often miss out on formal leadership training or experiences.
  • Find mentors and champions (male or female) with an eye towards helping you develop your career pathway and deal with your next challenges.
  • Build on your strengths; while we can always improve, we should also embrace the strengths we already have.
  • The Goal by Eliyahu M. Goldratt is a great book about running a company. (now also available as a Business Graphic Novel)
  • Crucial Conversations: Tools for Talking When Stakes Are High by Joseph Grenny, Kerry Patterson, and Ron McMillan

From Meaghan Ziemba:

From Laura Elan:

Other Resources:

Hardworking women deserve footwear that is both beautiful and strong. Xena Workwear’s women’s steel toe boots & safety shoes are handcrafted with high-quality materials, look stunning, and are not bulky or masculine. Each style is ASTM certified and handcrafted to help you feel safe and confident. (not just footwear!)

But wait, there’s more!

Additionally, you can find interviews with some of these kickass women in the IMTS+ coverage. 

IMTS+ In Conversation With: Host Tim Shinbara, Chief Technology Officer, AMT – The Association for Manufacturing Technology, interviews Barbara Humpton, President & CEO, Siemens Corporation.

https://www.imts.com/watch/video-details/In-Conversation-With-Barbara-Humpton/199

Join IMTS+ Host Marley Kayden for IMTS Unwind from Wednesday at the show. Featuring live interviews with: Nicole Wolter, President & CEO, HM Manufacturing; Aneesa Muthana, President, CEO, and Owner of Pioneer Service, Inc.; Yvonne Wiedemann, President & Owner of CAM Logic; Austin Schmidt, President of Additive Engineering Solutions; Max Egan; and Travis Egan, Chief Revenue Officer, AMT – The Association for Manufacturing Technology.

https://www.imts.com/watch/video-details/IMTS-Unwind-Wednesday-September-14-2022/198

Join IMTS+ Host Marley Kayden for IMTS Today (Wednesday, September 14). Featuring: Andra Keay, Vice President of Global Robotics; Robby Komljenovic, Chairman & CEO, Acieta; Richard Browning, Technologist and Founder of Gravity Industries; and Barbara Humpton, President & CEO of Siemens.

Join IMTS+ Host Marley Kayden for IMTS Today, Friday, September 16. Featuring: Dr. Jeffrey Ahrstrom, CEO, Ingersoll; Meaghan Ziemba, Owner, Z-Ink Solutions, Founder & Host, Mavens of Manufacturing; Mitch Free, Founder & CEO, ZYCI and Trusted Source; and John Dyck, CEO, CESMII: The Smart Manufacturing Institute.

https://www.imts.com/watch/video-details/IMTS-Today-Friday-September-16-2022/204

Join IMTS+ Host, Marley Kayden for IMTS Unwind. Featuring live interviews with: Tim Shinbara, Chief Technology Officer, AMT – The Association for Manufacturing Technology; Jeremy Nyenhuis, Owner of J3 Machine & Engineering, LLC.; and Courtney Tate, Owner of Ontime Quality Machining. Additional live interviews with social media influencers James Soto, Founder & CEO, INDUSTRIAL and Partnership Advocate; and Charli K. Matthews, Founder & CEO, Empowering Pumps & Equipment, and Champion of Women.

https://www.imts.com/watch/video-details/IMTS-Unwind-Monday-September-12-2022/188

Robotic capsule developed to deliver drugs to the gut

One reason that it's so difficult to deliver large protein drugs orally is that these drugs can't pass through the mucus barrier that lines the digestive tract. This means that insulin and most other "biologic drugs"—drugs consisting of proteins or nucleic acids—have to be injected or administered in a hospital.

Active matter, curved spaces: Mini robots learn to ‘swim’ on stretchy surfaces

When self-propelling objects interact with each other, interesting phenomena can occur. Birds align with each other when they flock together. People at a concert spontaneously create vortices when they nudge and bump into each other. Fire ants work together to create rafts that float on the water's surface.

MIT engineers build a battery-free, wireless underwater camera

A battery-free, wireless underwater camera developed at MIT could have many uses, including climate modeling. “We are missing data from over 95 percent of the ocean. This technology could help us build more accurate climate models and better understand how climate change impacts the underwater world,” says Associate Professor Fadel Adib. Image: Adam Glanzman

By Adam Zewe | MIT News Office

Scientists estimate that more than 95 percent of Earth’s oceans have never been observed, which means we have seen less of our planet’s ocean than we have the far side of the moon or the surface of Mars.

The high cost of powering an underwater camera for a long time, by tethering it to a research vessel or sending a ship to recharge its batteries, is a steep challenge preventing widespread undersea exploration.

MIT researchers have taken a major step to overcome this problem by developing a battery-free, wireless underwater camera that is about 100,000 times more energy-efficient than other undersea cameras. The device takes color photos, even in dark underwater environments, and transmits image data wirelessly through the water.

The autonomous camera is powered by sound. It converts mechanical energy from sound waves traveling through water into electrical energy that powers its imaging and communications equipment. After capturing and encoding image data, the camera also uses sound waves to transmit data to a receiver that reconstructs the image. 

Because it doesn’t need a battery, the camera could run for weeks on end before retrieval, enabling scientists to search remote parts of the ocean for new species. It could also be used to capture images of ocean pollution or monitor the health and growth of fish raised in aquaculture farms.

“One of the most exciting applications of this camera for me personally is in the context of climate monitoring. We are building climate models, but we are missing data from over 95 percent of the ocean. This technology could help us build more accurate climate models and better understand how climate change impacts the underwater world,” says Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science and director of the Signal Kinetics group in the MIT Media Lab, and senior author of a new paper on the system.

Joining Adib on the paper are co-lead authors and Signal Kinetics group research assistants Sayed Saad Afzal, Waleed Akbar, and Osvy Rodriguez, as well as research scientist Unsoo Ha, and former group researchers Mario Doumet and Reza Ghaffarivardavagh. The paper is published in Nature Communications.

Going battery-free

To build a camera that could operate autonomously for long periods, the researchers needed a device that could harvest energy underwater on its own while consuming very little power.

The camera acquires energy using transducers made from piezoelectric materials that are placed around its exterior. Piezoelectric materials produce an electric signal when a mechanical force is applied to them. When a sound wave traveling through the water hits the transducers, they vibrate and convert that mechanical energy into electrical energy.

Those sound waves could come from any source, like a passing ship or marine life. The camera stores harvested energy until it has built up enough to power the electronics that take photos and communicate data.
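This harvest-then-burst duty cycle can be sketched in a few lines. The energy figures below are hypothetical placeholders, not the actual numbers from the MIT device; the point is only the pattern of banking many tiny deposits of harvested energy until one capture cycle is affordable.

```python
# Illustrative sketch of a harvest-then-burst duty cycle (assumed values,
# not the published specs of the MIT camera).
HARVEST_PER_WAVE_J = 2e-6   # hypothetical energy harvested per sound-wave hit
CAPTURE_COST_J = 0.05       # hypothetical cost of one photo + transmission

def waves_until_capture(stored_j=0.0):
    """Count how many sound-wave hits are needed before the stored
    energy is enough to power one capture-and-transmit cycle."""
    waves = 0
    while stored_j < CAPTURE_COST_J:
        stored_j += HARVEST_PER_WAVE_J
        waves += 1
    return waves
```

With these made-up numbers, tens of thousands of acoustic hits must be banked before a single photo, which is why the camera spends most of its time quietly accumulating energy.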

To keep power consumption as low as possible, the researchers used off-the-shelf, ultra-low-power imaging sensors. But these sensors only capture grayscale images. And since most underwater environments lack a light source, they needed to develop a low-power flash, too.

“We were trying to minimize the hardware as much as possible, and that creates new constraints on how to build the system, send information, and perform image reconstruction. It took a fair amount of creativity to figure out how to do this,” Adib says.

They solved both problems simultaneously using red, green, and blue LEDs. When the camera captures an image, it shines a red LED and then uses image sensors to take the photo. It repeats the same process with green and blue LEDs.

Even though the image looks black and white, the red, green, and blue colored light is reflected in the white part of each photo, Akbar explains. When the image data are combined in post-processing, the color image can be reconstructed.

“When we were kids in art class, we were taught that we could make all colors using three basic colors. The same rules follow for color images we see on our computers. We just need red, green, and blue — these three channels — to construct color images,” he says.
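The post-processing step Akbar describes can be sketched as a simple channel merge: three grayscale frames, each taken under red, green, or blue LED illumination, become the three channels of one color image. The function below is a minimal illustration, not the team's actual pipeline.

```python
# Minimal sketch of three-channel color reconstruction (illustrative only):
# each grayscale frame, captured under a single LED color, supplies one
# channel of the final color image.

def merge_channels(red_frame, green_frame, blue_frame):
    """Combine three same-sized grayscale frames (2D lists of 0-255
    intensities) into a 2D image of (R, G, B) tuples."""
    return [
        [(r, g, b) for r, g, b in zip(row_r, row_g, row_b)]
        for row_r, row_g, row_b in zip(red_frame, green_frame, blue_frame)
    ]

# Example: a 1x2 "image" built from three single-LED exposures.
color = merge_channels([[200, 10]], [[40, 120]], [[0, 255]])
```

Each pixel in the result simply stacks the intensity recorded under each LED, which is exactly the "three basic colors" idea from the quote above.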

Fadel Adib (left) associate professor in the Department of Electrical Engineering and Computer Science and director of the Signal Kinetics group in the MIT Media Lab, and Research Assistant Waleed Akbar display the battery-free wireless underwater camera that their group developed. Image: Adam Glanzman

Sending data with sound

Once image data are captured, they are encoded as bits (1s and 0s) and sent to a receiver one bit at a time using a process called underwater backscatter. The receiver transmits sound waves through the water to the camera, which acts as a mirror to reflect those waves. The camera either reflects a wave back to the receiver or changes its mirror to an absorber so that it does not reflect back.

A hydrophone next to the transmitter senses if a signal is reflected back from the camera. If it receives a signal, that is a bit-1, and if there is no signal, that is a bit-0. The system uses this binary information to reconstruct and post-process the image.
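The decoding described above amounts to on-off keying: a reflection in a bit slot reads as 1, silence as 0, and the resulting bit stream is packed back into bytes for image reconstruction. The sketch below illustrates that logic under assumed signal levels and a hypothetical noise threshold; it is not the researchers' actual receiver code.

```python
# Hedged sketch of on-off-keyed backscatter decoding (assumed threshold
# and signal levels): reflection present => bit 1, absent => bit 0.
NOISE_THRESHOLD = 0.1  # hypothetical normalized hydrophone level

def decode_backscatter(slot_levels, threshold=NOISE_THRESHOLD):
    """Map per-bit-slot reflected signal strengths to a list of bits."""
    return [1 if level > threshold else 0 for level in slot_levels]

def bits_to_bytes(bits):
    """Pack decoded bits (MSB first) into bytes for image reconstruction."""
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[k:k + 8]))
        for k in range(0, len(bits), 8)
    )

# Eight slots of measured reflection strength => one byte of image data.
bits = decode_backscatter([0.9, 0.02, 0.8, 0.7, 0.01, 0.03, 0.85, 0.9])
```

Because the camera only toggles between reflecting and absorbing, all the energy-hungry signal generation stays on the receiver side, which is what makes the scheme so cheap for the camera.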

“This whole process, since it just requires a single switch to convert the device from a nonreflective state to a reflective state, consumes five orders of magnitude less power than typical underwater communications systems,” Afzal says.

The researchers tested the camera in several underwater environments. In one, they captured color images of plastic bottles floating in a New Hampshire pond. They were also able to take such high-quality photos of an African starfish that tiny tubercles along its arms were clearly visible. The device was also effective at repeatedly imaging the underwater plant Aponogeton ulvaceus in a dark environment over the course of a week to monitor its growth.

Now that they have demonstrated a working prototype, the researchers plan to enhance the device so it is practical for deployment in real-world settings. They want to increase the camera’s memory so it could capture photos in real-time, stream images, or even shoot underwater video.

They also want to extend the camera’s range. They successfully transmitted data 40 meters from the receiver, but pushing that range wider would enable the camera to be used in more underwater settings.

“This will open up great opportunities for research both in low-power IoT devices as well as underwater monitoring and research,” says Haitham Al-Hassanieh, an assistant professor of electrical and computer engineering at the University of Illinois Urbana-Champaign, who was not involved with this research.

This research is supported, in part, by the Office of Naval Research, the Sloan Research Fellowship, the National Science Foundation, the MIT Media Lab, and the Doherty Chair in Ocean Utilization.

How do we control robots on the moon?

In the future, we imagine that teams of robots will explore and develop the surface of nearby planets, moons and asteroids – taking samples, building structures, deploying instruments. Hundreds of bright research minds are busy designing such robots. We are interested in another question: how do we provide astronauts with the tools to efficiently operate their robot teams on a planetary surface, in a way that doesn’t frustrate or exhaust them?

Received wisdom says that more automation is always better. After all, with automation, the job usually gets done faster, and the more tasks (or sub-tasks) robots can do on their own, the less the workload on the operator. Imagine a robot building a structure or setting up a telescope array, planning and executing tasks by itself, similar to a “factory of the future”, with only sporadic input from an astronaut supervisor orbiting in a spaceship. This is something we tested in the ISS experiment SUPVIS Justin in 2017-18, with astronauts on board the ISS commanding DLR Robotic and Mechatronic Center’s humanoid robot, Rollin’ Justin, in Supervised Autonomy.

However, the unstructured environment and harsh lighting on planetary surfaces make things difficult for even the best object-detection algorithms. And what happens when things go wrong, or a task needs to be done that was not foreseen by the robot programmers? In a factory on Earth, the supervisor might go down to the shop floor to set things right – an expensive and dangerous trip if you are an astronaut!

The next best thing is to operate the robot as an avatar of yourself on the planet surface – seeing what it sees, feeling what it feels. Immersing yourself in the robot’s environment, you can command the robot to do exactly what you want – subject to its physical capabilities.

Space Experiments

In 2019, we tested this in our next ISS experiment, ANALOG-1, with the Interact Rover from ESA’s Human Robot Interaction Lab. This is an all-wheel-drive platform with two robot arms, both equipped with cameras and one fitted with a gripper and force-torque sensor, as well as numerous other sensors.

On a laptop screen on the ISS, the astronaut – Luca Parmitano – saw the views from the robot’s cameras, and could move one camera and drive the platform with a custom-built joystick. The manipulator arm was controlled with the sigma.7 force-feedback device: the astronaut strapped his hand to it, and could move the robot arm and open its gripper by moving and opening his own hand. He could also feel the forces from touching the ground or the rock samples – crucial to help him understand the situation, since the low bandwidth to the ISS limited the quality of the video feed.

There were other challenges. Over such large distances, delays of up to a second are typical, which means that traditional teleoperation with force-feedback might have become unstable. Furthermore, the time delay between the robot making contact with the environment and the astronaut feeling it can lead to dangerous motions which can damage the robot.

To help with this we developed a control method: the Time Domain Passivity Approach for High Delays (TDPA-HD). It monitors the amount of energy that the operator puts in (i.e. force multiplied by velocity integrated over time), and sends that value along with the velocity command. On the robot side, it measures the force that the robot is exerting, and reduces the velocity so that it doesn’t transfer more energy to the environment than the operator put in.

On the human’s side, it reduces the force-feedback to the operator so that no more energy is transferred to the operator than is measured from the environment. This means that the system stays stable, but also that the operator never accidentally commands the robot to exert more force on the environment than they intend to – keeping both operator and robot safe.
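The robot-side bookkeeping described above can be sketched as a per-timestep energy budget check. This is an illustrative simplification under assumed one-dimensional forces and velocities, not the published TDPA-HD controller: the commanded velocity is scaled down whenever executing it would transfer more energy to the environment than the operator has put in.

```python
# Illustrative 1-D sketch of passivity-based velocity limiting (assumed
# interface, not the published TDPA-HD implementation).

def limit_velocity(cmd_velocity, measured_force, dt, energy_budget):
    """Return (safe_velocity, remaining_budget).

    cmd_velocity   : velocity commanded by the operator [m/s]
    measured_force : contact force the robot senses [N]
    dt             : control timestep [s]
    energy_budget  : energy the operator has put in so far [J]
    """
    # Energy this step would transfer to the environment.
    energy_out = measured_force * cmd_velocity * dt
    if energy_out <= energy_budget:
        return cmd_velocity, energy_budget - energy_out
    if measured_force <= 0.0:  # no contact: free motion, nothing dissipated
        return cmd_velocity, energy_budget
    # Scale the velocity so the transferred energy exactly spends the budget.
    safe_velocity = energy_budget / (measured_force * dt)
    return safe_velocity, 0.0
```

The mirror-image check runs on the operator side for the force-feedback channel, so neither end of the link can ever inject more energy than the other has supplied, which is what keeps the loop stable despite the delay.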

This was the first time that an astronaut had teleoperated a robot from space while feeling force-feedback in all six degrees of freedom (three rotational, three translational). The astronaut did all the sampling tasks assigned to him – while we could gather valuable data to validate our method, and publish it in Science Robotics. We also reported our findings on the astronaut’s experience.

Some things were still lacking. The experiment was conducted in a hangar on an old Dutch air base – not really representative of a planet surface.

Also, the astronaut asked if the robot could do more on its own – in contrast to SUPVIS Justin, where the astronauts sometimes found the Supervised Autonomy interface limiting and wished for more immersion. What if the operator could choose the level of robot autonomy appropriate to the task?

Scalable Autonomy

In June and July 2022, we joined the DLR’s ARCHES experiment campaign on Mt. Etna. The robot – on a lava field 2,700 metres above sea level – was controlled by former astronaut Thomas Reiter from the control room in the nearby town of Catania. Looking through the robot’s cameras, it took no great leap of the imagination to picture yourself on another planet – save for the occasional bumblebee or group of tourists.

This was our first venture into “Scalable Autonomy” – allowing the astronaut to scale the robot’s autonomy up or down according to the task. In 2019, Luca could only see through the robot’s cameras and drive with a joystick; this time, Thomas Reiter had an interactive map on which he could place markers for the robot to automatically drive to. In 2019, the astronaut could control the robot arm with force feedback; now he could also automatically detect and collect rocks with help from a Mask R-CNN (region-based convolutional neural network).

We learned a lot from testing our system in a realistic environment. Not least, that the assumption that more automation means a lower astronaut workload is not always true. While the astronaut used the automated rock-picking a lot, he warmed less to the automated navigation – indicating that it was more effort than driving with the joystick. We suspect that a lot more factors come into play, including how much the astronaut trusts the automated system, how well it works, and the feedback that the astronaut gets from it on screen – not to mention the delay. The longer the delay, the more difficult it is to create an immersive experience (think of online video games with lots of lag) and therefore the more attractive autonomy becomes.

What are the next steps? We want to test a truly scalable-autonomy, multi-robot scenario. We are working towards this in the project Surface Avatar: in a large-scale Mars-analog environment, astronauts on the ISS will command a team of four robots on the ground. After two preliminary tests with astronauts Samantha Cristoforetti and Jessica Watkins in 2022, the first big experiment is planned for 2023.

Here the technical challenges are different. Beyond the formidable engineering challenge of getting four robots to work together with a shared understanding of their world, we also have to try and predict which tasks would be easier for the astronaut with which level of autonomy, when and how she could scale the autonomy up or down, and how to integrate this all into one, intuitive user interface.

The insights we hope to gain from this would be useful not only for space exploration, but for any operator commanding a team of robots at a distance – for maintenance of solar or wind energy parks, for example, or search and rescue missions. A space experiment of this sort and scale will be our most complex ISS telerobotic mission yet – but we are looking forward to this exciting challenge ahead.

Have a say on these robotics solutions before they enter the market!

Robotics solutions and technologies are rapidly changing society and transforming the way we live and work – with both positive improvements and unforeseen consequences.

In Robotics4EU we wish to ensure that citizens have a say when it comes to these new technologies and how they affect everyday life. Therefore, we have gathered robots which are being developed right now or have just entered the market, and set them up in a survey-style consultation.

TAKE THE CITIZENS’ AUDIT HERE

By answering the survey, you have the opportunity to influence these robotics solutions: your answers will go directly to the companies behind the robots, who will use your feedback in the further development of their products.

The solutions up for feedback are varied: a robot that takes throat swabs, a humanoid that assists medical personnel, and even a solution that aims to protect farmers’ crops.
